Polarization stability of single-mode laser diode radiation applied in radiation-scattering study complexes

The key features of semiconductor lasers, together with the modernization of their serial manufacturing technology, have greatly expanded their use in applied studies over the last 20 years. However, a set of factors still restricts the application of such lasers in a number of optoelectronic measuring complexes. In particular, particle image velocimetry (PIV) and laser Doppler velocimetry (LDV) complexes commonly use gas and solid-state lasers, whose spectral, energy, and polarization characteristics are more stable than those of semiconductor lasers. Nevertheless, the gradual introduction of serially manufactured laser diodes into such systems is picking up pace, which reflects progress toward the required stability of their output radiation parameters. In laser measurement systems where a medium is investigated by analyzing the radiation scattered in it, the polarization of the probe radiation is often important. The use of laser diodes as radiation sources in such systems therefore needs to be accompanied by monitoring of the stability of their polarization characteristics, which may be disturbed both by external factors and by natural degradation of the internal laser diode structure. This work is devoted to monitoring the radiation polarization characteristics of serially manufactured single-mode laser diodes.

Introduction
Radiation scattering phenomena are widely used in fields such as chemistry, ecology, biology, and medicine, and for studying the atmosphere and mixtures containing nanoparticles [1][2][3][4][5][6][7][8][9][10][11][12]. In modern laser measuring systems where the measured characteristics of the scattered radiation depend on the probe radiation polarization, polarization stability plays a crucial role. Particle image velocimetry (PIV) and laser Doppler velocimetry (LDV) complexes are examples of such systems. With gas, fiber, and solid-state lasers, the output parameters of the probe radiation remain stable over tens of thousands of hours of service. However, operating lasers of these types requires large power consumption, often bulky cooling systems, and an increase in the overall dimensions of the system. These disadvantages may be eliminated by using semiconductor laser diodes (LDs), which are characterized by easy operation, high radiation power at low power consumption, and small dimensions. A significant disadvantage of LDs is the pronounced dependence of their output radiation parameters on external ambient factors, such as ambient temperature and external electromagnetic fields. When using spatially single-mode LDs as sources of probing radiation, the output power and the state of polarization of the laser radiation should be monitored. Moreover, the latter property is the most sensitive to changes in the waveguide and active medium associated with degradation of the device. Thus, it is necessary to regularly monitor the state of polarization of probe radiation whose source is a commercially available single-mode LD. This paper presents examples of the transformation of the polarization characteristics of commercially available laser modules.
Degradation-related changes in the waveguide and the active medium of the LD are also analyzed on the basis of measurements of the spatial-energy and polarization characteristics of its radiation.

Measurement complex for analysis of the spatial-energy and polarization characteristics of LD radiation
The most complete information on the state of the LD waveguide and heterostructure can be obtained by analyzing the radiation pattern in free space. In this case, the spatial-polarization characteristics of the radiation can be analyzed by scanning the radiation pattern with a polarizing prism [13]. The diagram of the measuring complex is shown in figure 1. The laser module 1 is mounted on a two-axis linear positioner so that the axis of the motorized platform OX' passes approximately through the center of the LD output mirror. In this case, the LD can be rotated around the beam axis, which makes it possible to scan the radiation pattern in different planes. The polarizing prism 5 carries out polarization selection of the radiation. To analyze the polarization stability of LD radiation in free space, it is sufficient to scan the radiation pattern in the planes where the divergence angle is maximal or minimal; these planes of the radiation pattern are called the vertical and horizontal planes, respectively. It was shown in [13] that, when scanning the radiation along these planes, it is possible to set only two positions of the polarizing prism at which it transmits either the maximum or the minimum of the radiation flux at any observation angle θ. Thus, the angular dependence of the degree of linear polarization, or contrast,

K(θ) = [Pmax(θ) - Pmin(θ)] / [Pmax(θ) + Pmin(θ)],   (1)

can be determined, where Pmax(θ) and Pmin(θ) are, respectively, the maximum and minimum power of the radiation transmitted through the linear polarizer at observation angle θ.

Theoretical description of the dependence of the spatial-energy and polarization characteristics of LD radiation on waveguide parameters
The contrast angular distribution (1) depends substantially on the waveguide anisotropy, the internal stresses of the heterostructure, and the reflectivity of the resonator mirrors for the TE and TM modes [14]. Figure 2 shows the theoretical contrast angular distributions and radiation patterns in two planes for variations in the difference between the effective refractive indices for the ordinary (n1o) and extraordinary (n1e) waves inside the single-mode edge-emitting LD waveguide. The distribution of the total radiation energy between the TE and TM modes also significantly influences the contrast angular dependence of single-mode stripe lasers. As a result, analysis of the radiation pattern and the contrast angular dependence of a single-mode edge-emitting LD with a stripe, ridge, or rectangular waveguide makes it possible to: 1) estimate the parameters of the waveguide and the active medium, including the degree of anisotropy and the energy distribution between the TE and TM modes; and 2) trace the dynamics of changes in these parameters by carrying out measurements at different points of the LD operating time. Such an analysis makes it possible not only to monitor the stability of the polarization characteristics of the radiation source but also to select optimal sources from a batch of commercially available devices for scattered-radiation studies.
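As an illustration of how equation (1) is applied to a polarizer scan, the short sketch below computes K(θ) from the two angular power scans recorded at the two orthogonal polarizer positions. The array names and the synthetic scan data are assumptions for illustration only, not values from the measuring complex.

import numpy as np

# Hypothetical angular scans: power transmitted through the polarizer at its two
# orthogonal positions, recorded at each observation angle theta (degrees).
theta = np.linspace(-30.0, 30.0, 61)
p_pos1 = np.exp(-(theta / 12.0) ** 2)            # e.g. mostly TE-polarized lobe
p_pos2 = 0.03 * np.exp(-(theta / 18.0) ** 2)     # weaker orthogonal component

p_max = np.maximum(p_pos1, p_pos2)
p_min = np.minimum(p_pos1, p_pos2)

# Degree of linear polarization (contrast), equation (1)
contrast = (p_max - p_min) / (p_max + p_min)

print(f"axial contrast K(0) = {contrast[np.argmin(np.abs(theta))]:.3f}")

A real measurement would replace the synthetic arrays with the powers recorded while scanning the radiation pattern; the rest of the calculation is unchanged.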
Analysis of the free-space radiation polarization stability and waveguide parameters of the edge-emitting single-mode LD
Let us consider the dynamics in time of the radiation polarization state and the waveguide parameters of the commercially available KLM-D650-5-5 laser module by comparing calculated and measured radiation characteristics. Determining the parameters of the waveguide and active medium of an LD from the characteristics of its radiation is an inverse problem. For an unambiguous solution of this problem, information about the values of some input parameters of the waveguide and plates is required. For example, if the stoichiometric composition of the materials in the LD structure is known, then the refractive indices of the heterostructure layers can be set approximately. In this case, solving the inverse problem reduces to determining only the geometric parameters of the waveguide. Having determined the geometry of the LD, it is then possible to refine the optical parameters of the waveguide and the plates from the radiation characteristics. Thus, by comparing the found parameters of the LD heterostructure layers with those used in solving the inverse problem, the degree of their correspondence can be assessed. However, information about the material and geometry of the LD layers is rarely disclosed by the manufacturer. As a first approximation, the composition of the waveguide and active medium can be indirectly determined from the laser radiation wavelength. When determining the waveguide material, the search can be narrowed to semiconductor structures that are transparent to radiation of the known wavelength. In addition, knowing the LD radiation wavelength, one can estimate the bandgap of the active region and find a material for which a direct quantum transition between the laser levels is possible. The KLM-D650-5-5 data sheet does not contain information on the laser waveguide parameters. Therefore, all further calculations are carried out by selection according to the methodology described in the previous paragraph. For the indicated refractive indices at a wavelength of 650 nm, the following variants of semiconductor compounds are possible: AlxGa1-xAs (0.15 < x < 0.29) and GaxIn1-xP (0.49 < x < 0.5) [15,16]. The axial value of the radiation contrast of the KLM-D650-5-5, equal to 0.95, corresponds to a TM-to-TE mode amplitude ratio at the output mirror of the laser of ATM/ATE = 0.23. The measured and modeled KLM-D650-5-5 radiation characteristics with the above waveguide parameters are shown in figure 4. Confidence intervals are marked with vertical lines over the points; in the radiation pattern, these intervals are indicated at the points of greatest deviation of the experimental data from the theoretical curve. The measured radiation pattern and K(θ) agree with the simulation results within the error limits. It should be noted separately that, in the region of space adjacent to the vertical plane, the radiation contrast of the KLM-D650-5-5 at the beginning of its operation was constant and close to 1, which also coincides with the calculation results. The modeling of the radiation pattern and the angular contrast distribution is described in [17].
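As a back-of-the-envelope check on the wavelength-based selection of materials described earlier in this section, the emission wavelength fixes the approximate bandgap of the active region through the standard photon-energy relation; for the 650 nm module considered here this gives roughly

E_g ≈ hc/λ = (1240 eV·nm)/(650 nm) ≈ 1.9 eV,

which restricts the candidate active-region materials to direct-gap compounds with a bandgap near this value. This estimate is only a constraint on the search and is not a value taken from the data sheet.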
Figure 5 shows the measured and modeled normalized radiation patterns and contrast angular dependences in the vertical plane after 50 and 100 hours of operation of the KLM-D650-5-5. The figures show that after only 50 hours of operation the state of polarization of the radiation in free space had changed: the amplitude ratio ATM/ATE at the output mirror increased to 0.47, and after 100 hours of operation to 0.50. The ATM/ATE value itself indirectly characterizes the ratio of the gain coefficients for the two modes, which can be determined if the corresponding reflection coefficients of the resonator mirrors are known. Figure 5 clearly shows the effect of waveguide anisotropy on the K(θ) dependence, in particular a noticeable decrease in contrast with increasing angle θ. This indicates the formation of internal stresses in the waveguide, which disturb the polarization stability of the radiation. In the long run, such disturbances can lead to rapid degradation of the LD and a significant decrease in the radiation power. Thus, monitoring the radiation polarization stability of single-mode edge-emitting LDs makes it possible to identify the onset of degradation.

Conclusion
In this paper, we have shown the relationship between the dynamics of changes in the waveguide parameters of a commercially available single-mode edge-emitting LD and the polarization stability of its radiation. It is noted that the change in the polarization stability of the laser radiation is associated with the appearance of internal stresses in the waveguide, which is an indicator of the onset of degradation. Thus, for the most effective application of this type of LD in laser systems for scattered-radiation diagnostics, the spatial-energy and polarization characteristics of its radiation should be monitored. When the radiation is collimated, the loss of polarization stability is registered much later than for radiation propagating into free space [13]. It is therefore precisely the monitoring of the polarization state of non-collimated radiation that makes it possible to determine the moment of the onset of degradation, and hence to predict an early failure of the LD or a significant deterioration in the energy and polarization characteristics of its collimated radiation. An additional and highly undesirable factor accompanying degradation is fluctuation of the power and polarization state of the LD radiation during continuous operation. This has an extremely negative effect on the accuracy of scattering indicatrix measurements when the indicatrix is recorded at different observation angles at different times. Therefore, verifying the constancy of the radiation polarization state of commercially available single-mode edge-emitting LDs from the start of their operation is an important component of the effective use of such devices in laser measuring systems.
Ionizing Radiation Induces Resistant Glioblastoma Stem-Like Cells by Promoting Autophagy via the Wnt/β-Catenin Pathway

Therapeutic resistance in recurrent glioblastoma multiforme (GBM) after concurrent chemoradiotherapy (CCRT) is a challenging issue. Although standard fractionated radiation is essential to treat GBM, it leads to local recurrence along with therapy-resistant cells in the ionizing radiation (IR) field. Lines of evidence have shown that cancer stem cells (CSCs) play a vital role in therapy resistance in many cancer types, including GBM; however, the molecular mechanism is poorly understood. Here, we proposed that autophagy could be involved in GSC induction for radioresistance. In a clinical setting, patients who received radiation/chemotherapy had higher LC3II expression and poorer overall survival compared with those with low LC3II. In a cell model, U87MG and GBM8401 cells expressed high levels of the stemness markers CD133, CD44, and Nestin and the autophagy markers P62/LC3II after receiving standard fractionated IR. Furthermore, Wnt/β-catenin proved to be a potential pathway related to P62, as shown by using the proteasome inhibitor MG132. Moreover, pharmacological inhibition of autophagy with BAF and CQ inhibited GSC growth by impairing autophagic flux, as demonstrated by decreased Nestin, CD133, and SOX-2 levels. In conclusion, we demonstrated that fractionated IR can induce GSCs with the stemness phenotype by P62-mediated autophagy through the Wnt/β-catenin pathway for radioresistance. This study offers a new therapeutic strategy for targeting GBM in the future.

Introduction
Glioblastoma multiforme (GBM) is one of the most aggressive and recurrent malignant tumors, classified as grade IV astrocytoma by the World Health Organization [1]. Despite surgery and standard concurrent chemoradiotherapy (CCRT), a poor prognosis, with a mean survival duration of <15 months under recurrence, indicates that GBM is therapeutically resistant [2][3][4]. Recent clinical findings showed that therapeutic resistance is associated with chemo/radio-resistant cells, which are present under the recurrence condition in the IR field.

Informed consent was obtained from all subjects involved in the study; data on the patients' basic profile, including age, gender, therapy type, Karnofsky Performance scale, and tumor grading, were collected. We also examined the LC3II score using a scoring system (0-3) and compared it with our clinicopathological findings. LC3II expression was graded as follows: 0, <10% of cells show positive nuclear staining; 1, >10% of cells positive with low staining intensity; 2, >10% of cells positive with moderate intensity; 3, >10% of cells positive with strong intensity. All patients were separated into two groups: those who received radiation or chemotherapy (radiation/chemotherapy) and those who received no therapy. Temozolomide (TMZ) was used as chemotherapy. We discuss the details in the results.

Cell Viability Assay
U87MG and GBM8401 cells were seeded (2000 cells/well) in a 96-well plate. After 24 h of culture, the cells were exposed to radiation with doses of 2 Gy for 1 day, 2 Gy per day for 5 days, or 10 Gy for 1 day. After 24 h, cell viability was measured with the CCK-8 kit (CCK-8, Sigma 96992, Sigma-Aldrich; Merck KGaA, Darmstadt, Germany) at 450 nm according to the manufacturer's instructions. This experiment was repeated at least three times.
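As a compact illustration of the IHC grading scheme described above, the sketch below maps a fraction of positive cells and a staining-intensity label onto the 0-3 LC3II score. The function name and the example inputs are assumptions for illustration only, not part of the study's code.

def lc3ii_score(positive_fraction, intensity):
    """Return the 0-3 LC3II IHC score.
    positive_fraction: fraction of cells with positive nuclear staining (0-1).
    intensity: 'low', 'moderate', or 'strong' (only used when >10% of cells are positive)."""
    if positive_fraction < 0.10:
        return 0
    return {"low": 1, "moderate": 2, "strong": 3}[intensity]

print(lc3ii_score(0.05, "strong"))    # 0: fewer than 10% of cells positive
print(lc3ii_score(0.40, "moderate"))  # 2: >10% positive, moderate intensity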
Long-Term Survival and Colony Formation Assay
Cells were seeded into 6-well plates (5 × 10² cells/well). After 24 h of culture, the cells were exposed to radiation with doses of 2 Gy for 1 day, 2 Gy per day for 5 days, or 10 Gy for 1 day. After treatment, cells were cultured at 37 °C for 7-21 days. The cells were washed twice with PBS, fixed in 4% paraformaldehyde for 30 min, and stained with 0.1% crystal violet for 20 min at 25 °C. The colonies were carefully washed with tap water, and the number of colonies, defined as >50 cells/colony, was counted and analyzed. The cells were then washed with PBS, and DMSO was added to completely dissolve the crystal violet. Absorbance was recorded at 570 nm with a 96-well-plate ELISA reader. Results were expressed as the average colony count ± SE from three independent experiments.

Immunoblot Analysis
Protein expression was detected by immunoblot analysis. Cell lysates with equal protein content were prepared in SDS sample buffer, separated on NativePAGE Novex Bis-Tris 4-16% gels for BN-PAGE analysis (Invitrogen; Thermo Fisher Scientific, Inc.), and transferred to polyvinylidene fluoride membranes. Proteins on the membrane were detected with specific primary antibodies, including β-actin (1:1000; sc47778, Santa Cruz Biotechnology, Santa Cruz, CA), and HRP-conjugated secondary antibodies. The signal of each target protein was visualized by incubation with ECL reagent and exposure to X-ray film.

Immunohistochemical (IHC) Staining
IHC staining was performed on 4 µm paraffin sections. The sections were dewaxed, hydrated, and placed at 4 °C overnight. For antibodies against CD133 (AP1802a, Abgent, San Diego, CA, USA), P62 (ab56416, Abcam, Cambridge, MA, USA), and LC3II (AP1802a, Abgent, San Diego, CA, USA), standard avidin-biotin complex (ABC) procedures were employed. After the sections were returned to room temperature, biotinylated secondary antibodies and horseradish-peroxidase-labeled streptavidin were added, and the samples were incubated in an oven at 37 °C. Subsequently, DAB color development, hematoxylin counterstaining, gradient alcohol dehydration, and xylene clearing were carried out. All samples were sealed with neutral gum afterwards.

Databases
To identify significantly differentially expressed genes (DEGs) after irradiation, three whole-gene expression databases, covering a cell model, a xenograft animal model, and clinical GBM specimens, were explored from the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO). Data for gene chips GSE107040, GSE117126, and GSE82139 were obtained from the GEO database. GSE107040 was from the Department of Life Science, Research Institute for Natural Sciences, Hanyang University, with three cases of non-irradiated U87MG as the control group and three cases of U87MG irradiated with 2 Gy × 1 as the experimental group. GSE117126 was from Neurology in the Department of Medicine at Seoul National University, with one non-irradiated brain as the control and one irradiated brain from orthotopic U-87 MG xenograft mouse models. GSE82139 was from the Glioma and Neural Stem Cell Group at the Institut Català d'Oncologia, IDIBELL, Barcelona, with two normal samples as the control group and two GBM patient samples as the experimental group. To obtain adjusted p-values, multiple-testing correction was applied with the Benjamini-Hochberg method. Only genes exhibiting a log2 fold change (FC) greater than 1.5 and an adjusted p < 0.05 were considered DEGs in the Ingenuity Pathway Analysis (IPA) software.
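For illustration, a small sketch of this screening step is shown below: Benjamini-Hochberg adjustment of per-gene p-values followed by the fold-change and adjusted-p cutoffs quoted above. The gene-level numbers are invented placeholders, and the absolute-value fold-change filter is an assumption made because the screen retained both up- and downregulated genes.

import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (false discovery rate)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.clip(scaled, 0.0, 1.0)
    return adj

# Hypothetical per-gene statistics; real values would come from the GEO series.
log2_fc = np.array([2.1, 0.3, -1.8, 1.7])
pvals   = np.array([1e-4, 0.2, 3e-3, 0.04])

adj = bh_adjust(pvals)
is_deg = (np.abs(log2_fc) > 1.5) & (adj < 0.05)
print(adj, is_deg)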
The details of the significant probe sets are summarized in Table 1. The comparison resulted in a list of 17 autophagy-related DEGs. Among them, LC3II, P62, CTSF, and VPS33A were consistently upregulated in all three databases.

Statistical Analysis
Western blot protein bands were quantified via densitometry using the Multi Gauge 3.0 program (Fujifilm, Tokyo, Japan). For the in vitro and in vivo studies, statistical significance was evaluated through one-way analysis of variance (ANOVA) followed by Tukey's post hoc test to correct for multiple comparisons. A two-tailed Student's t-test was used to compare data between two groups. Statistical significance was set at p < 0.05. The associations between P62, LC3II, and CD133 expression and the clinicopathological characteristics of GBM patients were investigated using ANOVA, Fisher's exact test, and the chi-squared test. Survival analysis was performed using the Kaplan-Meier method to calculate overall and disease-free survival rates among different groups, which were then compared using the log-rank method. A two-sided p < 0.05 was considered statistically significant.

Correlation of Clinicopathological Parameters with Autophagy and GSC Markers in GBM Samples
In total, there were 56 male and 50 female patients. Sixty-four patients were over 65 years of age and 42 were under 65. Tumor grading was as follows: 10 patients with grade I (pilocytic astrocytoma), 16 with grade II (astrocytoma), 17 with grade III (anaplastic astrocytoma), and 63 with grade IV (GBM). Fifty-seven patients received radiation/chemotherapy and 49 did not receive any therapy. Retrospective analysis showed that GBM patients who received radiation/chemotherapy had significantly higher levels of LC3II expression than those who did not receive radiation or chemotherapy (p = 0.021, Table 2), indicating that autophagic markers are related to radiation/chemotherapy. Although the difference across tumor grades did not reach statistical significance, LC3II scores still tended to increase with tumor grade: grade I (pilocytic astrocytoma) tumors had much lower LC3II expression scores, whereas grade IV (GBM) tumors predominantly had higher scores. The relationship between CD133 expression, in correlation with LC3II, and the prognosis of patients with glioma was also investigated. High expression of both CD133 and LC3II was associated with significantly shortened survival times (Figure 1A). We further examined pre- and post-CCRT specimens from two representative GBM patients by immunohistochemical (IHC) staining. High expression levels of CD133/CD44/LC3II/P62 were found in post-CCRT samples, whereas low expression levels were shown in pre-CCRT ones (Figure 1B,C).
Collectively, we found that the patients who received radiation/chemotherapy had high LC3II/P62 and CD133/CD44 expression, indicating a possible correlation of radiation/chemotherapy with autophagy as well as with the GSC phenotype in GBM.

Pathway Analysis for Potential Involvement of Genes Associated with Autophagy and GSC Markers
To explore possible mechanisms and potential gene involvement, we used three publicly available microarray datasets from the National Center for Biotechnology Information Gene Expression Omnibus (GEO; Table 1) and identified differences in the expression of autophagy-related genes between non-irradiation and irradiation treatments from bench to bedside. Autophagy-related gene expression profiles were screened within the differentially expressed genes (DEGs) between the non-irradiation and irradiation groups, from in vitro and in vivo experiments to the clinic. After data processing, 17 autophagy-related DEGs were identified, comprising nine upregulated and eight downregulated genes. Among them, LC3II, P62, cathepsin F (CTSF), and VPS33A (core subunit of the CORVET and HOPS complexes) were consistently upregulated in all databases (Figure S1). Since the CD133 expression level increased in two of the databases (Table 1), these data are consistent with our hypothesis that GSC stemness is associated with autophagy. Afterwards, we mapped these DEGs onto the autophagy pathway using Ingenuity Pathway Analysis software (IPA, QIAGEN, CA, USA; www.ingenuity.com, accessed on 23 December 2019). These genes were color-coded based on a cutoff of the absolute correlation coefficient.

In consideration of the correlation of CD133 and LC3II expression in clinicopathological parameters/samples and the GEO database archives of GBM, we hypothesized that potential autophagy-related genes could play a role in radiation resistance. Therefore, we examined endogenous P62 and LC3II proteins in different GBM cell lines, including U251, T98G, U87MG, H4, GBM8401, and MO59K. Immunoblotting with anti-P62 and anti-LC3II antibodies showed that P62 and LC3II protein expression was lower in the U87MG and H4 cell lines than in the other GBM cell lines (U251, T98G, GBM8401, and MO59K; Figure S2). Thus, all subsequent experiments were performed using the GBM8401 and U87MG cell lines (Figure S2). Clinical reports have shown that the effects of IR depend on the total dose and fractionation ratio [26,27]. In the present study, we followed published clinical guidelines and used 2 Gy per day for 5 days per week [3]. We therefore designed our radiation experiments in dose- and fraction-dependent manners with 2 Gy (2 Gy for 1 day), 4 Gy (2 Gy per day for 2 days), 6 Gy (2 Gy per day for 3 days), 8 Gy (2 Gy per day for 4 days), 10 Gy (2 Gy per day for 5 days), and 10 Gy alone for 1 day (Figure S3). Cell viability results showed that IR affected cell growth in dose- and fraction-dependent manners at 24, 48, and 72 h, ensuring a reliable model for IR effects. From these preliminary results, the IR models 2 Gy × 1, 2 Gy × 5, and 10 Gy alone in GBM8401 and U87MG were chosen for subsequent experiments combining dose- and fraction-dependent manners (Figure 2A).
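For reference, the three dose groups carried through the rest of the experiments can be tabulated compactly; the sketch below only enumerates the schedules and their cumulative doses, and the group labels follow the naming used in the text.

schedules = {
    "2 Gy-fraction":  {"dose_per_fraction_Gy": 2,  "fractions": 1},
    "10 Gy-fraction": {"dose_per_fraction_Gy": 2,  "fractions": 5},
    "10 Gy-only":     {"dose_per_fraction_Gy": 10, "fractions": 1},
}
for name, s in schedules.items():
    total = s["dose_per_fraction_Gy"] * s["fractions"]
    print(f"{name}: {s['fractions']} x {s['dose_per_fraction_Gy']} Gy = {total} Gy total")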
The clonogenic assay was performed to verify the cytotoxic effects of IR on GBM8401 and U87MG cells at 7, 14, and 21 days after IR; the results revealed that the GBM cells regrew after IR and appeared similar to non-IR-treated cells (Figure 2B). These results suggest that the GBM cancer cells were recurrent and radioresistant, implying the possibility of inductive IR-resistant glioma cells.

Figure 2. (A) U87MG and GBM8401 cells were divided into the following four groups: control (0 Gy), 2 Gy × 1 time (2 Gy-fraction), 2 Gy × 5 times (10 Gy-fraction), and 10 Gy × 1 time (10 Gy-only), after which cell viability was assessed using MTT assays. (B) Clonogenic assays were performed to assess the effect of irradiation on colony formation. The image shows colonies produced by U87MG and GBM8401 cells following plating of 500 cells and 7-21 days of incubation. Cell numbers were quantified, and the error bars indicate the mean ± SEM of three independent experiments. The level of significance was determined using Student's t-test, with ns representing p > 0.05, ** p < 0.01, and * p < 0.05 compared with the 2 Gy × 1 time (2 Gy-fraction) group; # p < 0.05, ## p < 0.01, and ### p < 0.005 compared with the 2 Gy × 5 times (10 Gy-fraction) group; and † p < 0.05, †† p < 0.01, and ††† p < 0.005 compared with the 10 Gy × 1 time (10 Gy-only) group.

Co-Expression of Autophagy and GSC Markers in GBM8401 and U87MG Cells
To investigate whether the inductive IR-resistant glioma cells in our experiments were related to autophagy and GSCs, we examined the expression of P62 and LC3II as autophagy markers and CD133, CD44, and Nestin as GSC markers in two inductive IR-resistant glioma cell lines (GBM8401 and U87MG) irradiated with the 2 Gy × 1 and 2 Gy × 5 doses. The results showed that mRNA levels of the autophagic markers (P62/LC3II) and GSC markers (CD133/CD44/Nestin) increased in a dose-dependent manner, indicating that autophagy and stemness are correlated with the cumulative IR dose (Figure 3A). In addition, these autophagic and GSC markers were also evaluated in a fraction-dependent manner with 2 Gy × 5 (10 Gy total) and a 10 Gy single dose (Figure 3B,C). The data revealed that autophagy was predominant in the 2 Gy × 5 group, in which survival and regrowth of cells were observed; in contrast, cell death without regrowth was seen in the single-dose 10 Gy group under the same conditions. Taken together, although all these markers (GSC and autophagic markers) were induced in GBM8401 and U87MG, there were no obviously different expression levels between the two cell lines.
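The clonogenic quantification above reports the mean ± SEM over three independent experiments with Student's t-test between groups; a minimal sketch of that comparison is given below, using made-up colony counts rather than the study's data.

import numpy as np
from scipy import stats

# Hypothetical colony counts from three independent experiments per group
control = np.array([180, 172, 190])
gy2_x5  = np.array([95, 102, 88])     # 2 Gy x 5 (10 Gy-fraction) group

for name, counts in [("control", control), ("2 Gy x 5", gy2_x5)]:
    print(f"{name}: mean = {counts.mean():.1f}, SEM = {stats.sem(counts):.1f}")

t_stat, p_value = stats.ttest_ind(control, gy2_x5)
print(f"two-sided Student's t-test: p = {p_value:.4f}")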
Furthermore, Western blot analyses for P62/LC3II, CD133/CD44/Nestin, and SRY (sex-determining region Y)-box 2 (SOX2; a GSC transcription factor) were performed under both dose- and fraction-dependent conditions in irradiated U87MG and GBM8401 cells (Figure 3D). The results revealed that autophagy and GSC markers increased in a fraction-dependent rather than a dose-dependent manner. The results also showed that GBM8401 cells had higher expression of the autophagic markers (P62/LC3II) and GSC markers (CD133/CD44/Nestin/SOX2) than U87MG cells.

Figure 3 (caption, partial): autophagic genes (P62 and LC3II) after receiving the indicated radiation doses for 72 h. GAPDH was used as an internal control. Bar graphs represent the mean of triplicates ± SD. * p < 0.05, ** p < 0.01, *** p < 0.005 compared with the 2 Gy × 5 times (10 Gy-fraction) group; # p < 0.05, ## p < 0.01, ### p < 0.005 compared with the 10 Gy × 1 time (10 Gy-only) group.

Expression of the Wnt/β-Catenin/GSK3β Pathway in IR-Induced GSC-Like Phenotype Cells
The aforementioned results demonstrated that IR could induce GSCs with a stemness phenotype through autophagy in both dose- and fraction-dependent manners in two glioma cell lines; however, the involved pathway was still unclear. As presented in Figure 4A, IR reduced the protein levels of frizzled (indirectly), β-catenin (directly), and phospho-β-catenin (Ser33/Ser37/Thr41) through phosphorylation by GSK3β in GBM8401 and U87MG cells in dose- and, more noticeably, fraction-dependent manners. Moreover, the phosphorylation level of GSK3β (S9) decreased with IR, indicating that GSK3β activity increased and phosphorylated β-catenin. To further verify whether the β-catenin protein decreased due to protein degradation, the proteasome inhibitor MG132 was applied to IR-treated cells. As expected, MG132 application reversed IR-induced β-catenin degradation and slightly suppressed P62 expression (Figure 4B). These results clearly demonstrated that IR enhanced GSK3β activity through downregulation of Ser9 phosphorylation, which in turn enhanced β-catenin phosphorylation at Ser33/Ser37/Thr41, triggering protein degradation. We concluded that IR shuts down the Wnt/β-catenin pathway, degrading β-catenin and thereby promoting P62-mediated autophagy.

Figure 4. Expression of the Wnt/β-catenin/GSK3β pathway under both dose- and fraction-dependent conditions in irradiated U87MG and GBM8401 cells. (A) GBM8401 cells were treated with radiation for 72 h, and the protein expression patterns of the Wnt pathway were determined. Phospho-β-catenin (Ser33/Ser37/Thr41) was used as the β-catenin active form for detecting β-catenin status; GSK3β (S9) was used to assess β-catenin phosphorylation leading to β-catenin degradation. (B) The cells were irradiated with the indicated dosages and then incubated with MG-132 (10 µM) for 8 h. Cytosolic fractions were prepared and subjected to Western blotting with frizzled, GSK3β, phospho-S9 GSK3β, β-catenin, phospho-S33/37/41 β-catenin, and P62 antibodies. GAPDH was used as an internal control. Bar graphs represent the mean of triplicates ± SD. * p < 0.05, ** p < 0.01, *** p < 0.001 compared with the control group.
Modulation of Autophagy in IR-Induced GSC-Like Phenotype Cells In Vitro: Effect of CQ and BAF
To confirm that autophagy was involved in the IR-induced GSCs with the stemness phenotype, BAF and CQ were used as autophagy inhibitors. The results show decreased viability of U87MG and GBM8401 cells in the presence of BAF or CQ compared with radiation only (Figure 5A). The effect of irradiation on colony formation in U87MG and GBM8401 cells with or without 72 h of BAF or CQ treatment was assessed by the clonogenic assay (Figure 5B); BAF or CQ treatment clearly reduced colony formation for both cell types. Finally, the expression levels of autophagy- and GSC-related genes were determined by Western blotting in irradiated cells with or without BAF or CQ treatment. The results revealed that the expression levels of LC3II and P62 were reduced in irradiated GBM8401 cells treated with BAF or CQ, in IR dose- and fraction-dependent manners, compared with GBM8401 cells treated with IR only. Similarly, decreased expression levels of the GSC markers CD133, Nestin, and SOX2, but not CD44, were detected in irradiated GBM8401 cells incubated with BAF or CQ, compared with the same cells treated with IR only (Figure 5C). In all experiments, the correlation between autophagy and GSC markers was stronger in GBM8401 than in U87MG.

Figure 5 (caption, partial): The image shows colonies produced by the U87MG and GBM8401 cells following plating of 500 cells and 21 days of incubation. Cells were quantified, and error bars represent the mean ± SEM of three independent experiments. The level of significance was determined using Student's t-test, with ns representing p > 0.05, *** p < 0.005, ** p < 0.01, and * p < 0.05 compared with non-irradiated cells without BAF or CQ treatment. (C) Transcript levels of autophagic and CSC-related genes were detected by Western blotting. Cells were irradiated, with or without BAF (0.1 µM) and CQ (100 µM) treatment, for 72 h, followed by Western blot analysis of the autophagic and CSC-related genes. GAPDH was used as an internal control. Bar graphs represent the mean of triplicates ± SD. * p < 0.05, ** p < 0.01, *** p < 0.005 compared with the 2 Gy × 5 times (10 Gy-fraction) group; # p < 0.05, ## p < 0.01, ### p < 0.005 compared with the single-dose 10 Gy (10 Gy-only) group.

Discussion
GBM is one of the most aggressive tumors, with a poor prognosis.
For more than a decade, standard clinical therapy has followed the published guidelines, which combine TMZ and radiation (a total of 60 Gy in 30 daily fractions of 2 Gy), and the results have shown improved survival times from 12.1 to 14.6 months [3,4]. However, poor prognosis has persisted due to an inevitably high recurrence rate despite complete CCRT treatment. The recurrence condition is regarded as therapeutic resistance. Evidence has shown that chemo- or radio-resistant cells provide therapeutic resistance and exist in the IR-field area [5]. It has been reported that the canonical Wnt signaling (also known as Wnt/β-catenin) pathway plays an essential role in stem cell fate decisions. Frizzled is the upstream protein of the Wnt signaling pathway; it also regulates the relative stability of β-catenin through GSK3β-dependent phosphorylation. P62-mediated autophagy was found to be enhanced not only through AMPK activity but also through the Wnt/β-catenin signaling pathway to regulate glioma cells [20,28,29]. Therefore, we postulated that the Wnt/β-catenin signaling pathway could regulate P62-mediated autophagy, and we used phospho-β-catenin (Ser33/Ser37/Thr41) as the β-catenin active form for detecting β-catenin status; GSK3β (S9) was used to assess β-catenin phosphorylation leading to β-catenin degradation. In addition, increasing reports have demonstrated that IR can induce GSCs in different tumor types [30]. Thus, we hypothesized that standard fractionated IR (the fraction-dependent manner) could induce GSCs for therapy resistance. Herein, we report for the first time that fractionated IR (fraction-dependent manner, using the clinical daily dose of 2 Gy per day) induced GSCs with the stemness phenotype by P62-mediated autophagy through the Wnt/β-catenin/GSK3β/P62 axis signaling pathway in human glioma cells. CSCs, also known as tumor-initiating cells or tumor-propagating cells, are a small subpopulation of cancer cells with the capacity for self-renewal and pluripotency [31]. These capacities lead to differentiation and tumor heterogeneity, which underlie therapy resistance. CSCs were first identified in acute myeloid leukemia [32,33], and the first solid-tumor CSC was found in glioma [13,14,34]. Since then, CSCs have been identified in many cancers, such as breast [35], pancreatic [36,37], colon [38,39], and lung [40,41]. Two main models have been proposed to explain the origin of CSCs: the stochastic and hierarchical models. The stochastic model is based on the concept that all tumor cells are capable of producing new cancer cells by converting the non-CSC phenotype to the CSC phenotype under specific conditions [42,43]. By contrast, the hierarchical model considers that a small, unique subpopulation of tumor cells known as CSCs gives rise to tumor cells, without such conversion (Figure 6A) [32]. However, the issue of which of the two models applies has been debated for decades. Studies have identified CD44+CD24- cells in breast cancer [44]; CD44+CD24+EpCAM+ cells in pancreatic or ovarian cancer [45,46]; and high expression of neural stem cell markers, such as CD133, CD44, and Nestin, and transcription factors, such as SOX2 and OCT4, in GBM. In the present study, we used CD133/CD44/Nestin/SOX2 as GSC markers and attempted to distinguish between the two CSC models. Our results showed that GSC markers increased in GBM tissues, and they were highly correlated with GBM after CCRT in IHC specimens.
Furthermore, our experiments showed increased protein expression of GSC markers in both IR dose- and fraction-dependent manners in the two GBM cell lines. Although our data revealed that CD133/CD44/Nestin/SOX2 were increased by IR, further evidence is needed to determine whether the stochastic or the hierarchical model better describes IR-induced GSCs. Autophagy is considered a double-edged sword due to its dual functions in tumors: it suppresses tumor growth in the initial stage and promotes growth or survival in a stressful environment. Increasing evidence has revealed that autophagy is associated with therapeutic resistance in a stressful environment, and studies have shown that autophagy in CSCs mediates therapeutic resistance [17][18][19]. Our study revealed that autophagy increased together with GSC markers, including CD133, CD44, Nestin, and SOX2, and correlated with IR in dose- and fraction-dependent manners, especially the fraction-dependent (clinical) manner. It has been reported that multiple sequential steps are involved in autophagic flux, such as sequestration, transport to lysosomes, degradation, and utilization of degradation products. Therefore, different autophagy inhibitors act at different steps and can produce different effects on the target proteins. In our study, two different inhibitors (BAF and CQ) were used as autophagy inhibitors to determine the participation of autophagy. The data showed that the levels of GSC markers, including CD133, CD44, Nestin, and SOX2, were reduced in the presence of autophagy inhibitors. These results support that autophagy and GSCs with the stemness phenotype are induced in response to cytotoxic agents. In addition, cell viability studies showed that GBM8401 cells could survive through autophagy and present as inductive IR-resistant glioma cells with the GSC-like phenotype. Furthermore, in the colony assay, inductive IR-resistant glioma cells showed more regrowth and exhibited recurrence and radioresistance in GBM8401 cells than in U87MG cells. Overall, we believe that the inductive IR-resistant glioma cells can be regarded as IR-induced GSCs with the stemness phenotype arising through autophagy, especially in GBM8401 cells, in a fraction-dependent manner. Similar to our previous reports, our study found that the Wnt/β-catenin/P62 axis is a potential signaling pathway for glioma cells under therapy selection [20]. Therefore, we used this signaling pathway to elucidate the possible mechanism of IR-induced GSCs. We examined β-catenin, phospho-β-catenin (Ser33/Ser37/Thr41), GSK3β, and GSK3β (S9). Furthermore, we used MG132 as a proteasome inhibitor.
The results demonstrated that IR enhanced GSK3β activity through downregulation of Ser9 phosphorylation, which in turn enhanced β-catenin phosphorylation at Ser33/Ser37/Thr41, triggering protein degradation. Moreover, other studies have reported that β-catenin can translocate to the cell membrane to stabilize CD133, and that CD44 can regulate Wnt activity [23][24][25]. In addition, CD133 can be recycled through the autophagy process [47]. These concepts could offer an explanation for the crosstalk between CD133 or other GSC markers and Wnt/β-catenin/P62. Overall, we attempted to elucidate the IR-induced GSC stemness phenotype arising through autophagy, with P62-mediated β-catenin degradation through the Wnt/β-catenin signaling pathway in glioma cell lines, especially in a fraction-dependent manner. On the basis of our observations, we suggest a working model for frizzled/β-catenin/P62/GSC markers (Figure 6B). Our results imply that the stochastic model may be more suitable for describing the origin of GSCs, given the different inducible levels of GSCs and the increased expression of GSC markers reflecting GBM heterogeneity. Under normal conditions, cancer cells turn on the canonical Wnt/β-catenin pathway to disintegrate the destruction complex (Axin, APC, GSK3β, and β-catenin; dashed circle) and stabilize β-catenin. Subsequently, β-catenin binding to TCF in the nucleus upregulates target genes that confer proliferative capacity and suppress P62 activation. However, under stress conditions (such as IR, hypoxia, or chemotherapy), cancer cells survive by shutting down the Wnt/β-catenin pathway to degrade β-catenin and then induce P62-mediated autophagy. Therefore, GBM cells could shift from a proliferation state to a hibernation state through autophagy as a survival function, followed by enhancement of CD133/CD44/Nestin/SOX2 as GSC markers, supporting the stochastic model. Moreover, our data revealed different expression levels in the two glioma cell lines.
GBM8401 cells were more prone than U87MG cells to the induction/enhancement of GSCs with the stemness phenotype. Thus, P62-mediated autophagy via the Wnt/β-catenin/GSK3β/P62 axis may play a vital role in IR-induced GSCs with the stemness phenotype, especially in GBM8401 cells. It has been previously reported that GBM8401 is P53-mutant and U87 is P53 wild-type [20]. We speculate that P53 status and its effect on radiosensitivity may play a role in IR-induced GSCs with the stemness phenotype, which merits further study in the future.

Conclusions
Fractionated IR can induce the stemness phenotype in GSCs via P62-mediated autophagy through the Wnt/β-catenin signaling pathway. IR-induced GSCs confer therapeutic resistance for tumor progression after treatment and thus contribute to recurrence and aggressive behavior. Our results could aid in developing new therapeutic strategies for GBM treatment in the future.
Percolative nature of the dc paraconductivity in the cuprate superconductors

We present an investigation of the planar direct-current (dc) paraconductivity of the model cuprate material HgBa$_2$CuO$_{4+\delta}$ in the underdoped part of the phase diagram. The simple quadratic temperature-dependence of the Fermi-liquid normal-state resistivity enables us to extract the paraconductivity above the macroscopic $T_c$ with great accuracy. The paraconductivity exhibits unusual exponential temperature dependence, with a characteristic temperature scale that is distinct from $T_c$. In the entire temperature range where it is discernable, the paraconductivity is quantitatively explained by a simple superconducting percolation model, which implies that underlying gap disorder dominates the emergence of superconductivity.

well established for the dc conductivity response [19,20], whereas calculations for other observables (e.g., magnetic susceptibility) are very challenging. In this Letter, we present benchmark dc conductivity data for a pristine cuprate compound along with modeling results that support both the Fermi-liquid nature of the normal state and the percolative superconductivity emergence in a quantitative manner. The principal problem in previous investigations of the pre-pairing regime in cuprates has been the separation of the superconducting response from the normal-state response. Different experimental probes can be sensitive to distinct aspects of the normal state. Moreover, it is well established that the underdoped cuprates also exhibit other electronic ordering tendencies, including charge-density-wave order [21][22][23][24][25][26][27], which has further precluded an unequivocal extraction of superconducting contributions. Prominent examples of such problems include the analysis of the Nernst effect [8,9] and of the optical conductivity [11,13], where a charge-stripe-related signal might be mistaken for superconducting fluctuations [28][29][30], or linear magnetization and conductivity measurements [10,31], where the normal-state behavior is assumed to be linear in temperature, which is not necessarily the case. Several schemes to systematically subtract the presumed normal-state contribution have been devised, mainly based on the suppression of superconductivity with external magnetic fields [3,5,14]. However, so far only two experimental techniques can claim to be genuinely sensitive only to superconducting signals: nonlinear torque magnetization [6] and nonlinear conductivity [7]. A number of recent experimental investigations consistently point to a simple picture for both the normal state [15][16][17][18] and the superconducting emergence regime [3,4,6,7]. Measurements of transport properties, such as the dc resistivity [17], Hall angle [15], and magnetoresistivity [16], as well as optical experiments [18], clearly show that the mobile charge carriers behave as a Fermi liquid, even in strongly underdoped compounds. The dual observations that the magnetoresistivity obeys Kohler scaling with a 1/τ ∝ T² scattering rate [16] and that the optical scattering rate exhibits conventional scaling with temperature and frequency [18] are particularly clear-cut signatures of Fermi-liquid transport. Moreover, magnetization [6], high-frequency linear conductivity [3][4][5], and nonlinear response measurements [7] indicate that the superconducting emergence regime is limited to a rather narrow temperature range above T_c and, importantly, that it can be described with a simple percolation model [7].
In the present work, we start from the fact that the normal state displays robust Fermi-liquid behavior in a rather wide temperature (and doping) range. We subtract its contribution to the planar resistivity with a reliability approaching the background-free techniques. With the inherent sensitivity of the dc conductivity to superconducting contributions, this enables us to obtain highly precise insight into the emergence of superconductivity. In particular, we performed measurements of the direct-current (dc) conductivity for the cuprate HgBa2CuO4+δ (Hg1201) in the underdoped part of the phase diagram. Hg1201 may be viewed as a model compound due to its simple tetragonal structural symmetry, with one CuO2 layer per formula unit, and the largest optimal T_c (nearly 100 K) of all such single-layer compounds [32]. Further evidence for the model nature of Hg1201 comes from the observation of a tiny residual resistivity [17,33], of Shubnikov-de Haas oscillations [34,35], and of a small density of vortex pinning centers [33], which has enabled the measurement of the triangular magnetic vortex lattice [36]. Below the characteristic temperature T** (T** < T*; T* is the pseudogap temperature), the planar resistivity of Hg1201 exhibits quadratic temperature dependence, ρ ∝ T², the behavior characteristic of a Fermi liquid [17]. We studied two Hg1201 samples with T_c ≈ 80 K (the estimated hole doping level is p ≈ 0.11) that were prepared following established procedures [37,33]. This particular doping level was chosen because of a relatively wide temperature range between T** and T_c in which pure quadratic-in-temperature resistivity is seen, while being reasonably far away from the doping level (p ≈ 0.09) where weak short-range CDW correlations are most prominent in Hg1201 [26,27,38]. Figure 1 shows dc resistivity data for one of the two samples along with the three characteristic temperatures T_c, T**, and T*. The purely quadratic behavior seen below T** is in agreement with the Fermi-liquid character of the mobile holes [15,17]. The considerable difference between T** and T_c provides for an extremely simple way to assess the superconducting paraconductivity contribution. In order to subtract the normal-state signal and obtain the purely superconducting contribution above T_c, we fit ρ(T) = ρ_0 + a_2T² to the resistivity data in a temperature range from 100 K to T** ≈ 150 K, where ρ_0 is the small residual resistivity (the estimated residual resistivity ratio is approximately 120) and a_2 a constant. The resultant value of a_2 = 9.8(1) nΩ cm/K² is consistent with previous measurements on Hg1201 [17]. A narrowing of the fit range by 10-20 K does not change the result of our analysis, which demonstrates the robustness of the procedure. Furthermore, if a power law of the form ρ(T) = aT^α is fit in the same temperature range, the exponent is α = 1.98(2), and when the temperature range is varied by ±20 K, it stays within 5% of this value. The fidelity of the quadratic fit is very high (Fig. 1), which demonstrates that indeed in this temperature range the only contribution to the resistivity is the Fermi-liquid temperature dependence. We may therefore safely extrapolate the fit to T_c in order to obtain the underlying normal-state contribution. Inversion of the experimentally determined resistivity and subtraction of the extrapolated quadratic temperature dependence then gives the superconducting paraconductivity contribution, Δσ_dc, shown in Fig. 2.
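A minimal numerical sketch of this extraction procedure is given below: fit ρ_0 + a_2T² to the data in the Fermi-liquid window between 100 K and T**, extrapolate the fit down toward T_c, and subtract the inverted fit from the inverted measured resistivity to obtain Δσ_dc. The synthetic resistivity curve and all numbers are placeholders, not the measured values reported in the paper.

import numpy as np

# Hypothetical resistivity curve rho(T) in ohm*m on a temperature grid in K;
# real data would come from the measurement.
T = np.linspace(82.0, 160.0, 400)
rho0_true, a2_true = 2.0e-9, 9.8e-11        # placeholder residual resistivity and T^2 coefficient
rho_normal = rho0_true + a2_true * T**2
rho_meas = rho_normal / (1.0 + 50.0 * np.exp(-(T - 80.0) / 2.0))  # toy paraconductivity near Tc

# Fit rho = rho_0 + a_2*T^2 only in the Fermi-liquid window 100 K < T < T** ~ 150 K
window = (T > 100.0) & (T < 150.0)
a2_fit, rho0_fit = np.polyfit(T[window]**2, rho_meas[window], 1)

# Extrapolate the normal-state fit to lower temperatures and subtract conductivities
rho_fit = rho0_fit + a2_fit * T**2
delta_sigma = 1.0 / rho_meas - 1.0 / rho_fit   # dc paraconductivity above Tc
print(f"fitted a_2 = {a2_fit:.3e} ohm*m/K^2, rho_0 = {rho0_fit:.3e} ohm*m")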
We present results for two samples (A and B), which were chosen from a larger batch of samples with T_c ≈ 80 K due to their well-defined superconducting transitions; in Hg1201, the sample-contacting procedure often induces spurious doping of the sample surface [17], which can 'short out' the current path at temperatures above the nominal T_c and artificially broaden the transition. Such samples are not considered here, although even they give similar results, except in a narrow (less than 1-2 K) temperature range above T_c. For samples A and B, we also performed magnetic susceptibility measurements using vibrating sample magnetometry (VSM) that show sharp transitions, with T_c values that agree well with the resistive T_c. The zero-field-cool/field-cool susceptibility ratios approach one and are among the highest observed in the cuprates [33], demonstrating the very high quality of the samples (Fig. 1c and d). The excellent agreement of the paraconductivity results for these two distinct samples seen in Fig. 2, especially away from T_c, demonstrates the reproducibility of the experiment and the robustness of our result. The superconducting response clearly exhibits exponential-like temperature dependence away from T_c, consistent with prior magnetization [6], nonlinear response [7], and microwave conductivity [5] results. We emphasize four crucial points: (i) the observed exponential dependence is qualitatively different from the underlying normal-state power-law behavior and hence a very robust result; (ii) the agreement with other experiments, some of which require no background subtraction [6,7], provides additional justification for the validity of our approach to subtract the Fermi-liquid normal-state contribution; (iii) the signal-to-noise ratio of the present data is very high, which enables us to follow the paraconductivity over more than four orders of magnitude; (iv) both the exponential temperature dependence and the fact that the characteristic temperature is distinct from T_c are incompatible with standard models of superconducting fluctuations, such as Ginzburg-Landau theory [39]. A simple superconducting percolation model with a compound-independent (and nearly doping-independent) underlying temperature scale T_0 (or energy scale k_B T_0) was recently shown to explain nonlinear response data [7]. The present dc paraconductivity result provides an ideal testing ground for this model, since the model is naturally formulated in terms of the dc conductivity. In particular, the model assumes that, above T_c, the material consists of patches that are normal and have a resistance R_n, and of patches that are superconducting and have a resistance R_0 (where we will take the limit R_0 → 0) [7,40]. The fraction of superconducting patches, P, is temperature-dependent: at a critical fraction P_π (corresponding to the critical temperature T_π), a sample-spanning superconducting cluster is formed, and hence percolates. In the limit of vanishingly small currents, T_π equals T_c, but in any experiment, T_c is shifted slightly below T_π due to the required nonzero currents. The temperature-dependent superconducting fraction originates from an underlying distribution of superconducting gaps, and P is hence directly obtained as the temperature integral of the distribution, as shown schematically in Fig. 3. For concreteness, we use the simplest (Gaussian) distribution with a full-width-at-half-maximum equal to T_0, consistent with previous work [7].
Other distributions, such as the gamma or the logistic distributions, were also tested, but resulted in no significant differences in the outcome of the calculations; slight discrepancies between calculations with different distributions only appear in the temperature range in which the signal is close to the noise level. This insensitivity to distribution shape is simply the result of the integration over the whole distribution, rendering the exact shape unimportant. The temperature dependence of the dc conductivity is now obtained using effective medium theory (EMT) [19], in a form derived specifically for site percolation problems [20]. While it is known that EMT becomes unreliable in the critical regime close to the percolation threshold [19] (in our case, about 1 K above T c ), we use it here for simplicity, since it remains accurate in the higher-temperature regime of interest away from T c . The narrow critical regime is presumably not purely percolative anyway, with critical exponents modified by thermal effects [41]; in order to see a discrepancy between the data and the EMT calculation, a careful power-law analysis of the critical regime would need to be undertaken, with more closely-spaced measurements around T c . The investigation of criticality is thus not within the scope of the present work. In order to obtain the limit of zero R 0 , we use different small values in the numerical calculation, until no significant changes in the output are seen (typically for R 0 on the order of 10 −5 R n ). We take R n to be constant in the temperature interval of interest; this is appropriate, because its relative change (due to the T 2 dependence) is about 25% over a 10 K interval, whereas the paraconductivity changes by a factor of about 10 2 in the same interval. The calculated temperature dependence shown in Fig. 2 closely matches the experimental findings over the entire range of about four orders of magnitude in ∆σ. As demonstrated in Fig. 2, the agreement between the data and the calculation can be further improved by adding a small offset in order to account for the crossover to the noise level. Effectively, the percolation calculation of the paraconductivity only has one free parameter: the width k B T 0 of the gap distribution. Other parameters that enter the calculation are constrained: R n is simply the normal-state resistivity, T π is slightly larger than T c (in the present calculation it was taken to be T c + 1 K, but we note that our definition of T c as the lowest temperature with non-zero resistivity is somewhat arbitrary; different definitions, such as the midpoint of the transition measured by susceptibility, easily lead to a 1 K difference), and the critical concentration P π was taken to be 0.3, consistent with the prior nonlinear conductivity analysis [7]. The critical concentration P π is not arbitrary; it is determined by the details of the percolation model [42] (site or bond percolation, percolation with or without farther-neighbor corrections, etc.) and by the dimensionality of the percolation process. The model yields virtually the same temperature dependence for different values of P π , with a corresponding change in T 0 : a smaller P π implies a larger T 0 , and vice versa. We therefore cannot distinguish among specific percolation scenarios, such as two-dimensional versus three-dimensional percolation. 
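A minimal numerical sketch of this percolation-plus-EMT calculation is given below. It assumes a Gaussian distribution of local transition temperatures with FWHM T0 and uses the standard three-dimensional symmetric Bruggeman effective-medium formula in place of the specific site-percolation form of refs. [19,20]; the parameter values (T0 = 26 K, Tπ = Tc + 1 K, Pπ = 0.3, R0 on the order of 10⁻⁵ Rn) follow the text, while everything else is illustrative.

```python
import numpy as np
from scipy.special import erfc, erfcinv

def paraconductivity_model(T, T0=26.0, T_pi=81.0, P_pi=0.3, sigma_n=1.0, ratio=1e5):
    """Superconducting fraction from a Gaussian gap distribution, then a 3D
    Bruggeman effective-medium mixture of superconducting and normal patches."""
    s = T0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))           # Gaussian sigma from the FWHM
    mu = T_pi - np.sqrt(2.0) * s * erfcinv(2.0 * P_pi)    # center chosen so that P(T_pi) = P_pi
    P = 0.5 * erfc((T - mu) / (np.sqrt(2.0) * s))         # superconducting fraction above T
    sigma_sc = ratio * sigma_n                            # R0 -> 0 mimicked by a large conductivity ratio
    b = (3.0 * P - 1.0) * sigma_sc + (2.0 - 3.0 * P) * sigma_n
    sigma_m = 0.25 * (b + np.sqrt(b**2 + 8.0 * sigma_sc * sigma_n))  # 3D Bruggeman EMT solution
    return sigma_m - sigma_n                              # paraconductivity above the normal state
```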
Prior comparison between linear and nonlinear response indicated that a three-dimensional site percolation model with P π ≈ 0.3 is appropriate [7], leading us to use the same value here. Remarkably, the value T 0 = 26(1) K that yields the best agreement with the data in Fig. 2 is in excellent agreement with nonlinear conductivity and microwave linear response for a number of cuprate compounds and a range of doping levels, including Hg1201 [7]. The present work does not provide microscopic insight into the gap inhomogeneity and its origin, and in this respect the percolation model is phenomenological. Yet the model is highly consistent with experiments sensitive to real-space superconducting gap disorder, such as scanning tunneling microscopy [43,44], which have observed gap distributions with a width comparable to k B T 0 . It is furthermore consistent with NMR results that demonstrate a considerable distribution of local electric field gradients [45,46], and with X-ray experiments that find percolative structures in oxygen-doped La 2 CuO 4+δ and YBa 2 Cu 3 O 6+δ [47,48]. In conclusion, for the simple-tetragonal cuprate Hg1201 the paraconductivity is a very sensitive probe of the emergence of superconductivity, and it is accurately described by the superconducting percolation scenario, with the same universal characteristic temperature scale observed for other observables [4][5][6][7]. We demonstrate that the superconducting contribution can be simply obtained upon assuming a Fermi-liquid normal state below the characteristic temperature T**. This procedure is not possible for optimally-doped compounds, where T** becomes comparable to, or smaller than, T c and the resistivity no longer exhibits quadratic temperature dependence [17]. It is also not possible for compounds such as the bismuth-based cuprates or twinned YBa 2 Cu 3 O 6+δ , in which the underlying quadratic Fermi-liquid temperature dependence is masked due to disorder effects and/or low structural symmetry [15,16]. However, the clear confirmation of the superconducting percolation scenario in the present work implies that, fundamentally, both the normal-state carriers and the emergence of superconductivity are rather conventional in the underdoped cuprates, once the underlying gap disorder is taken into account. Our result excludes the possibility of extended fluctuations usually associated with non-Fermi-liquid models [8][9][10][49][50]. It also shows that it is difficult to observe the usual Ginzburg-Landau fluctuation regime in the conductivity, because inhomogeneity effects dominate; the percolation description holds down to temperatures very close to T c . Along with magnetometry as well as linear and nonlinear conductivity data, the basic percolation model naturally explains other seemingly unconventional features such as the 'gap filling' seen in photoemission data [51], and thus provides a unifying understanding of superconducting pre-pairing in the cuprates [7]. The dc conductivity measurements presented here have put the scenario to a stringent quantitative test, and hence constitute a crucial, independent confirmation in a model cuprate system. The robustness of our result mandates a paradigm change in the field of cuprate superconductivity, namely that the itinerant carriers are well described by Fermi-liquid concepts, whereas the emergence of superconductivity is dominated by the gap inhomogeneity inherent to these lamellar oxides. FIG 1. 
(a) Temperature dependence of the dc resistivity of a Hg1201 single crystal with characteristic temperatures T c ≈ 80 K (defined here to correspond to the lowest measurable non-zero resistivity), T** ≈ 140 K (defined as the deviation from low-temperature quadratic behavior), and T* ≈ 260 K (defined as the deviation from high-temperature linear behavior). (b) The dc resistivity, plotted versus the square of temperature and fit to ρ = ρ 0 + aT 2 between 100 K and 150 K (dashed line), demonstrating Fermi-liquid behavior below T** and a very small residual resistivity in the zero-temperature limit (see text). The inset shows the residuals obtained upon subtracting the fit result from the data for fits between 100 K and 150 K (line) and between 110 K and 130 K (symbols). The T 2 behavior prevails over a ~ 50 K range. (c) and (d) normalized VSM magnetization measurements of samples A and B, respectively, obtained with an external field of 15 Oe applied perpendicular to CuO 2 planes. Grey solid circles: zero-field-cooled (ZFC) data; orange open circles: field-cooled (FC) data. The measurements demonstrate well-defined superconducting transitions at T c ≈ 80 K and very low vortex pinning, indicative of high sample quality. FIG 2. The dc paraconductivity for two underdoped Hg1201 samples (A and B) with T c ≈ 80 K, obtained by subtracting Fermi-liquid normal-state behavior from the measured resistivity. The very good agreement between the two data sets demonstrates a high level of reproducibility and robustness of the result. The paraconductivity exhibits strong exponentiallike temperature dependence. The full line is the prediction of the superconducting percolation model obtained with effective medium theory. The dashed line includes a small heuristic constant offset and better captures the crossover to the noise level around T ≈ T c + 25 K (since, on a logarithmic scale, only the positive noise in ∆σ dc is visible). T c is defined here as the lowest temperature at which a non-zero conductivity was measurable, whereas T π is the temperature at which the calculated conductivity diverges. T π is slightly larger than T c due to the nonzero current required to perform the experiment (see text). FIG 3. Schematic representation of the superconducting site percolation model, as a twodimensional cross-section of the full three-dimensional model (upper row). Dark red patches are superconducting with vanishing resistance, whereas light grey patches have nonzero normal-state resistance. The fraction of superconducting patches is simply obtained by integrating the local gap distribution function taken to be a Gaussian for simplicity (lower row). Note that for a typical three-dimensional percolation model the critical fraction is approximately 0.3.
4,317.6
2017-10-27T00:00:00.000
[ "Physics" ]
Multidimensional and Multi-Parameter Fortran-Based Curve Fitting Tools The Levenberg-Marquardt algorithm has become a popular method in nonlinear curve fitting work. In this paper, following the steps of the Levenberg-Marquardt algorithm, we extend the framework of the algorithm to two- and three-dimensional real and complex functions. This work briefly describes the mathematics behind the algorithm, and also elaborates how to implement it using the FORTRAN 95 programming language. The advantage of this algorithm, when it is extended to surfaces and complex functions, is that it gives researchers greater confidence in the fit. It also improves the generalization and predictive performance of fits to 2D and 3D real and complex functions. Let us construct the chi-square function

χ²(P) = Σ_i [ (f_i − f(X_i ; P)) / σ_i ]² = Σ_i r_i²(P), (1)

where r_i(P) = (f_i − f(X_i ; P)) / σ_i is called the residue function. The goal of the least-square method is to determine the parameters P of the regression function f(X; P) so as to minimize the squared deviations between f_i and f(X_i ; P) for all data points. If we assume that all measured values of f_i are normally distributed with standard deviations given by σ_i , then the 'statistically-the-best' match corresponds to the minimal value of χ². Thus, the suitable model is essentially the one which gives the minimum value of the chi-square with respect to the parameters. That is why the method itself is called the 'least-square' technique. Of course, the error bars are determined not only by statistical noise, but also by systematic inaccuracies, which are very difficult to estimate and are not normally distributed. However, to move on, we assume that they are. The gradient of the chi-square at the current parameter set P_c can then be written as

β_k = −(1/2) ∂χ²/∂p_k = −Σ_i r_i(P_c) ∂r_i(P_c)/∂p_k , i.e. β = −Jᵀ r(P_c), (2)

where J is called the Jacobian matrix of the residue r_i(P_c), which is defined in Eqn. (1). The one-half coefficient is put in to simplify the formulas. To improve the fit, we can shift the parameters from their current values, p_k = p_kc + δp_k , for example along the gradient (steepest descent), δp_k ∝ β_k . The steepest descent strategy is justified when one is far from the minimum, but suffers from slow convergence in the plateau close to the minimum, especially in the multi-dimensional case. Near the minimum the chi function (which is quadratic in the residues) has an almost parabolic shape. The Hessian matrix, which is proportional to the curvature of χ², is given by

α_kl = (1/2) ∂²χ²/(∂p_k ∂p_l) = Σ_i (1/σ_i²) [ ∂f(X_i ; P)/∂p_k · ∂f(X_i ; P)/∂p_l − (f_i − f(X_i ; P)) ∂²f(X_i ; P)/(∂p_k ∂p_l) ] (7)

(the one-half here is also added for the sake of simplicity). The components α_kl of the Hessian matrix in Eqn. (7) depend both on the first derivatives and on the second derivatives of the model function. The second derivative can be ignored when it is zero, or small enough to be negligible when compared to the term involving the first derivative. In practice, this is quite often the case. If one looks at Eqn. (7) carefully, the second derivative is multiplied by the factor (f_i − f(X_i ; P)). For a successful model, this term should just be the random measurement error of each point. This error can have either sign, and should in general be uncorrelated with the model. Therefore, the second derivative terms tend to cancel out when summed over i. Inclusion of the second derivative term can in fact be destabilizing if the model fits badly or is contaminated by outlier points that are unlikely to be offset by compensating points of opposite sign. So, instead of Eqn. (7) we shall define the α-matrix simply as

α_kl = Σ_i (1/σ_i²) ∂f(X_i ; P)/∂p_k · ∂f(X_i ; P)/∂p_l , i.e. α = Jᵀ J.

After computing, numerically or analytically, the gradient and Hessian matrices for the current set of parameters, one can immediately move towards the minimum by shifting the parameters p_k = p_kc + δp_k , where the displacement vector δp_k is determined from the linear system derived in Eqn. (5), i.e.

Σ_l α_kl δp_l = β_k .

One of the problems associated with Newton's method (Levenberg, 1944; Kelley, 1999; Madsen et al., 2004; Lawson & Hanson, 1974) is that this inverse-Hessian step is reliable only in the nearly quadratic region close to the minimum.
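The machinery above (weighted residues, the β-vector, and the first-derivative α-matrix) can be condensed into a single parameter-update step. The Python sketch below builds the Jacobian by finite differences and already includes the Marquardt damping of the diagonal discussed in the next section; model, x, y and sigma are placeholders for a user-supplied regression function and data, not part of the paper's Fortran code.

```python
import numpy as np

def lm_step(model, x, y, sigma, p, lam=1e-3, h=1e-6):
    """One damped Gauss-Newton update of the parameter vector p."""
    r = (y - model(x, p)) / sigma                      # weighted residues r_i
    J = np.empty((x.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p); dp[k] = h
        J[:, k] = (model(x, p + dp) - model(x, p - dp)) / (2 * h * sigma)
    beta = J.T @ r                                     # beta_k = -(1/2) dchi2/dp_k
    alpha = J.T @ J                                    # first-derivative Hessian approximation
    A = alpha + lam * np.diag(np.diag(alpha))          # Marquardt damping of the diagonal
    delta = np.linalg.solve(A, beta)                   # shift of the parameters
    return p + delta, np.sum(r**2)
```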
The Levenberg-Marquardt Algorithm In order for the chi-square function to converge to a minimum rapidly, one needs a large step in the direction with low curvature (near the minimum) and a small step in the direction with high curvature (i.e. a steep incline). The gradient descent and Gauss-Newton (inverse-Hessian) iterations therefore offer complementary advantages. The LM algorithm is based on a self-adjustable balance between the two minimizing strategies: the vanilla gradient descent and the inverse-Hessian methods. Coming back to the steepest descent technique, χ² is dimensionless but β_k has the same dimension as 1/p_k , so the proportionality constant between β_k and the step δp_k must carry a scale of its own; the only readily available quantity with the right scale is 1/α_kk , and the damped step is therefore taken as δp_k = β_k /(λ α_kk ), with λ a dimensionless factor. The update rule is used as follows. If the error goes down following an update, it implies that our quadratic assumption on χ² is working, and we reduce λ (usually by a factor of 10) to reduce the influence of gradient descent. On the other hand, if the error goes up, we would like to follow the gradient more, and so λ is increased by the same factor. If the initial guess is good but χ² does not fall down to the required minimum value, we have to change the initial value of λ slightly. IMPLEMENTATION OF THE LM ALGORITHM In this paper Gauss elimination and Gauss-Jordan matrix inversion methods are used to determine the shift parameters. Among the several tests made on real and complex non-linear functions, only three examples are illustrated to show how effective and fast this method is compared to other methods. Test on real three dimensional wave function The first test is applied to two-dimensional data coordinates. From the results in Table 1, one can easily see that the data (surface) follow a wave-like function; we have then made a fit, using the LM approach, in order to find the values of the parameters (Fig. 3 (a) and (b)). As one can see from these results, the LM model is highly useful when it is applied to complicated-shaped surfaces. What is also important here is selecting an appropriate type of model function (such as sine, power, or decay functions) and an appropriate initial lambda. The shift parameters are not changed much by normalized random errors; only the minimum of the chi-function increases. Hence, based on the two figures (Figs. 3 (a) and (b)), one can conclude that new equations/relations and modifications to already existing formulas can be obtained from experimental data having disturbed/complicated surfaces. Test made on complex two dimensional function The second test was made on a complex two-dimensional function of the kind encountered in ellipsometry, where the measured quantity is a complex ratio. Test on complex two dimensional power function The third test was made on complex power functions (their derivatives are logarithmic functions); the corresponding experimental data are listed in Table 3 (experimental data on 2D power functions). Based on the above results we can conclude that the LM algorithm is a popular method with the following advantages: (i) The parameters converge rapidly around the minimum in multi-dimensional surfaces with complicated landscapes. (ii) Even when the initial guess is poor, LM fits part or most of the parameters, providing a fresh starting point. (iii) The convergence speed needed to reach the minimum is not significantly influenced by the number of parameters. (iv) The shift parameters are not changed much by normalized random errors; only the minimum of the chi-function increases. (v) Normalized random errors do not bring much change to the convergence speed. Like any other non-linear optimization technique, however, the LM algorithm is not guaranteed to find the global minimum.
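Below is a compact, self-contained Python sketch of the full LM loop with the factor-of-ten λ update described above, applied to a synthetic wave-like surface; the model function, noise level and starting values are illustrative stand-ins for the paper's (unspecified) test functions and Fortran implementation.

```python
import numpy as np

def model(xy, p):
    x, y = xy
    A, kx, ky, c = p
    return A * np.sin(kx * x + ky * y) + c

def lm_fit(xy, z, p, lam=1e-3, n_iter=100, h=1e-6):
    chi2 = lambda q: np.sum((z - model(xy, q)) ** 2)
    for _ in range(n_iter):
        r = z - model(xy, p)
        J = np.empty((z.size, p.size))
        for k in range(p.size):
            dp = np.zeros_like(p); dp[k] = h
            J[:, k] = (model(xy, p + dp) - model(xy, p - dp)) / (2 * h)
        alpha, beta = J.T @ J, J.T @ r
        delta = np.linalg.solve(alpha + lam * np.diag(np.diag(alpha)), beta)
        if chi2(p + delta) < chi2(p):
            p, lam = p + delta, lam / 10.0   # quadratic model works: trust it more
        else:
            lam *= 10.0                      # error went up: lean towards gradient descent
    return p

# Synthetic noisy surface and a fit from a rough initial guess.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 5, 40), np.linspace(0, 5, 40))
xy = (x.ravel(), y.ravel())
z = model(xy, np.array([2.0, 1.5, 0.7, 0.3])) + 0.05 * rng.standard_normal(x.size)
print(lm_fit(xy, z, p=np.array([1.0, 1.0, 1.0, 0.0])))
```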
1,650.6
2009-01-01T00:00:00.000
[ "Computer Science" ]
A Semantic Segmentation of Nucleus and Cytoplasm in Pap-smear Images using Modified U-Net Architecture : Pap-smear images can help in the early detection of cervical cancer, but the manual interpretation by a pathologist can be time-consuming and prone to human error. Semantic segmentation of the cell nucleus and cytoplasm plays an essential role in Pap-smear image analysis for automatically detecting cervical cancer. This research proposes a modified U-Net architecture by adding batch normalization to each convolution layer. Batch normalization aims to accelerate the convergence of the weight during training, thus over-coming the vanishing gradient problem. The application of U-Net and batch normalization to pap-smear image segmentation provides good performance results, including accuracy of 91.4 %, specificity of 87.7 %, F1-score of 81.7 %, and precision of 83.7 %. Unfortunately, the sensitivity result obtained is only 79.9 %. The results show that the proposed modification of the U-Net architecture with batch normalization improves the segmentation performance for cervical cancer cells in pap-smear images. However, improvement in architecture is still required to increase the ability to overcome overlapping areas between the nucleus, cytoplasm, and background. Introduction Cervical cancer is one of the most common cancers in women worldwide.The disease is characterized by the uncontrolled growth of malignant cells in the cervix or cervix area.According to the World Health Organization (WHO), in 2020, 604,000 women in the world are expected to develop cervical cancer, and about 342,000 women will die from the disease [1].Detecting and diagnosing cervical cancer is very important to increase the patient's chance of recovery and reduce the mortality rate.The pap-smear examination has become a commonly used method for early detection of cervical cancer.Samples of cervix cells are collected and analyzed under a microscope for indications of precancerous or cancerous changes [2].However, manual interpretation and analysis by pathologists of papsmear images is usually complex and time-consuming.In addition, there is a risk of human error in identifying and categorizing cervical cancer cells with high accuracy [3].An automatic diagnosis system is needed to analyze pap-smear images and diagnose cervical cancer quickly and accurately, one of which is image segmentation.Cervical cancer cell segmentation is very important in pap-smear image examination because cervical cancer cells can provide important information about the presence and severity of cervical cancer [4]. Research by Wijaya et al. [5] segmented the nucleus and cytoplasm of cells in pap-smear images using the Markov Random Field method.This research only obtained an accuracy value of about 75%, while other evaluation values were not calculated.Other research by Purwono et al. [6] segmented cervical cancer cells on CT-scan images using the K-Nearest Neighbors (KNN) method.This research also only obtained an accuracy value between 57-62%, while other evaluation values were not calculated.However, both researchers still used conventional methods.Conventional methods need to improve in distinguishing one object from another, especially in complex images that have many details. 
The use of deep learning techniques has grown in recent years.Convolutional Neural Network (CNN) is one such deep learning method that has made significant progress in complex image analysis.A CNN architecture commonly used in complex image analysis is the U-Net.U-Net has the advantage of segmenting and diagnosing diseases accurately [7].Research by Zhang et al. [8] segmented cervical cancer cell images using dilated CNN.This research resulted in an F1-score and precision below 83%, while other evaluation values were not calculated.Another research by Li et al. [9] segmented cell nucleus and cytoplasm images using GDLA U-Net.However, the precision and sensitivity obtained for cytoplasmic cells are still below 80%.However, both researches only performed binary segmentation.Semantic segmentation is required to detect cervical cancer cells accurately.Semantic segmentation in cervical cancer involves extracting the nucleus, cytoplasm, and background objects simultaneously rather than just one of the cells. U-Net architecture is one of the suitable architectures for semantic segmentation as it is a deep network.However, the number of layers in the U-Net architecture can increase the parameters and complexity of the network.A too complex network can hinder the convergence of the weights and cause vanishing gradients [10].Batch Normalization is a regularization method applied to accelerate convergence and enhance stability during the training process.Batch Normalization works by normalizing the input to each layer in the network [11].Research by Ju et al. [12] conducted cervical cancer CTV image segmentation using the addition of batch normalization to the encoder path on a Dense V-Net architecture.The result obtained is the F1-score value reaches 87.5 %.However, this research only https://ejournal.ittelkom-pwt.ac.id/index.php/infotelused 113 CT data and only performed binary segmentation.Another research by Rhee et al. [13] also segmented CT scan images of cervical cancer using the addition of batch normalization at the end of each convolution layer.The average F1-score value obtained is 86 %.The data used is quite large, namely 2254 CT data, but only performs binary segmentation. This research proposes a modification to the U-Net architecture with batch normalization.Batch normalization is added to each convolution layer on the encoder and decoder paths of the U-Net architecture.The addition of batch normalization can reduce the variation of input distribution to the network layers during the training process, thus accelerating weight convergence.The addition of batch normalization to the U-Net architecture is expected to improve the model's performance in performing semantic segmentation with 3 labels (nucleus, cytoplasm, and background) on pap-smear images. Research Method The workflow in this research is divided into several steps.These steps are data description, pre-processing, training data, testing data, and performance evaluation.The workflow in this research is represented in Figure 1. Data Description This research uses the dataset Herlev pap-smear comprising 917 BGR images in .BMP (Bitmap image file) format.This dataset was obtained from Herlev University Hospital at the Department of Pathology and can be accessed through the website [14].Images of pap-smears have different dimensions and resolutions.The structure of the nucleus and cytoplasm within the pap-smear image is shown in Figure 2. 
Figure 2 shows that the structural part of the pap-smear image consists of the nucleus (cell nucleus), labeled by the red circle, and the cytoplasm (the cell body surrounding the nucleus), labeled by the blue circle. The structure of the nucleus and cytoplasm is what the pathologist uses to diagnose cervical cancer. Preprocessing Preprocessing is the initial image processing step that aims to improve the image quality. Data augmentation Data augmentation is a technique used to increase the amount of training data. Data augmentation aims to help the resulting model recognize objects well [15]. The data augmentation technique employed in this research is flipping, which involves duplicating the data by flipping the image horizontally or vertically [16]. Image enhancement Image enhancement aims to remove noise, increase contrast, and preserve all details in the image to prevent any loss of information. Several image quality enhancement techniques used in this research include sharpening filters and image resizing. A sharpening filter is a technique that enhances contrast by sharpening object boundaries and details in the image. This technique is accomplished by increasing the intensity differences between adjacent pixels [17]. Mathematically, the sharpening filter is computed using the Laplacian filter approach in (1) [18],

∇²S(x, y) = ∂²S/∂x² + ∂²S/∂y², (1)

where ∇² is the Laplace operator and S(x, y) is a two-dimensional image function of the x-axis and y-axis. After applying the sharpening filter, the next step is to resize the images to the same dimensions. Image resizing is a method used in the field of image processing that involves changing the pixel size of an image without altering the essential information contained within the image [19]. Semantic Segmentation Semantic segmentation is a method within digital image processing that focuses on recognizing and separating image objects at the pixel level. This involves labeling each pixel based on existing categories or classes of objects [20]. Semantic segmentation in cervical cancer involves extracting the nucleus, cytoplasm, and background objects. Some of the operations performed in semantic segmentation include: Convolutional layer The convolutional layer is the base layer in a CNN, performing convolution operations on the input images. This layer consists of several filters or kernels that are shifted gradually over the input image to generate feature maps. The convolutional layer learns the visual feature representation of the input image through a convolution process with customized filters or kernels. The convolution calculation in the convolutional layer is obtained using (2) [21],

a ij = Σ_u Σ_v d u+i,v+j · k u+1,v+1 + b q , (2)

for i = 1, 2, . . ., n and j = 1, 2, . . ., n, where a ij represents the entry of the output matrix resulting from the convolution process at the i-th row and the j-th column, d u+i,v+j represents the entry of the input matrix at the (u + i)-th row and (v + j)-th column, k u+1,v+1 represents the entry of the kernel matrix at the (u + 1)-th row and (v + 1)-th column, and b q is the bias for the q-th kernel. Batch normalization Batch normalization is a normalization process performed on each layer within a CNN network, aiming to improve accuracy and time efficiency during the training process. The batch normalization process is carried out by calculating the mean value (µ j ) and variance (σ 2 j ) for each mini-batch using (3) and (4) [21]. 
where, j represents the count of columns within the mini-batch, m represents the quantity of data present in one mini-batch, and a ij represents the entry within the input matrix at the i-th row and j-th column.Furthermore, the entry of the input matrix (a ij ) is normalized using (5). where, âij is the entry of the normalized matrix, and is the smallest constant value. Activation function The activation function serves as a non-linear function utilized for the purpose of introducing non-linearity and complex mapping capabilities in a CNN network.The activation function does not change the dimensions of the feature maps but only alters the values of the input feature maps [22].The activation functions used in this research are rectified linear unit (ReLU) and softmax.The ReLU activation function is a non-linear function that assigns a value of 0 to all negative pixel values within an image.The calculation of the ReLU activation function is obtained using (6) [22]. where, âij is the input value of the image and r(â ij ) is the output result of the ReLU.The softmax activation function is a mathematical function utilized to compute the probabilities for each predicted label, where the probabilities are exponential probabilities normalized from the class observations.The softmax activation function is obtained using (7) [23]. for k = 1, . . ., K where K represents the quantity of classes and t j represents the entry of the input matrix. Max pooling layer The max pooling layer is one of the types of pooling layers that diminishes the dimensionality of the feature maps produced by the preceding layer.It achieves this by extracting the patch from the convolutional feature maps and selecting the highest value in each segment to undergo shifting [24]. Transposed convolution A Transposed convolution is a convolutional layer used to increase the dimensionality of the input by inserting zeros between adjacent elements.This layer performs the inverse operation of a regular convolutional layer [25]. Concatenate layer The concatenate layer is a layer in a CNN network used to combine the outputs from multiple preceding layers into one.In this layer, the concatenation is done horizontally by https://ejournal.ittelkom-pwt.ac.id/index.php/infotelcombining information from different layers and features obtained from different levels of hierarchy in the network [26]. Loss function The loss function is a metric utilized during the process of training a model to assess the discrepancy or gap between the expected (ground truth) values and the model's predicted values.In semantic segmentation, the loss function commonly used for multiclass labels or labels with more than two object classes is categorical cross-entropy.The categorical cross-entropy value is obtained using (8) [27]. where, m represents the number of rows within the resultant output matrix, s i is the entry of the predicted segmentation output matrix at the i-th row, y i represents the entry of the ground truth matrix at the i-th row, and L is the value of the resulting categorical cross-entropy. 
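For concreteness, the per-pixel operations defined above can be written in a few lines of NumPy; the array shapes (pixels × classes) are an assumption made for this sketch, not something fixed by the text.

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def softmax(t):
    e = np.exp(t - t.max(axis=-1, keepdims=True))   # subtract the max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(y_true, s_pred, eps=1e-12):
    # Mean over pixels of -sum_k y_k log(s_k), i.e. the loss described in (8).
    return -np.mean(np.sum(y_true * np.log(s_pred + eps), axis=-1))
```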
Modified Architecture The semantic segmentation process of the nucleus and cytoplasm is performed by applying the U-Net architecture with the addition of batch normalization into every convolutional operation.The addition of batch normalization aims to enhance stability and training speed, as well as help in overcoming the vanishing gradient problem.The modification of the architecture proposed in this research for performing semantic segmentation is shown in Figure 3.It shows the modified U-Net architecture consisting of two paths: the left side containing the encoder path and the right side containing the decoder path.The encoder path includes a convolution block, batch normalization, ReLU activation, and max pooling.Meanwhile, the decoder path consists of a convolution block, transposed convolution, and softmax.The encoder path begins with a convolution operation using a 3×3 kernel and filters.This convolution process is performed concurrently alongside the ReLU activation function.Next, the resulting feature maps from the convolution process will undergo a batch normalization process to be normalized.Then, a max pooling operation with a size of 2×2 is performed to reduce the dimension of the feature maps.In the encoder path, there are four convolution blocks, where each block doubles the number of feature maps using filters of sizes 64, 128, and 256 respectively.This is followed by a fifth block that serves as a bridge between the path of the encoder and the decoder.It involves identical procedures to those employed in the initial block but without the need for subsequent max pooling.Next, the decoder path begins with a transposed convolution operation of size 2×2, performed simultaneously with the concatenate operation between the feature maps from the encoder path and the feature maps from the decoder path.This step aims to restore the dimensionality of the feature maps to their original size.Then, the decoder path continues with the same process as the first block in the encoder path, without using max pooling.In the decoder path, there are four convolution blocks, where the count of feature maps in each block is divided by two until it returns to the original count of feature maps.The final step in the decoder path is a convolution process with a 1×1 kernel, performed simulta-neously with the softmax activation function.This process aims to generate an image that has undergone segmentation by obtaining probabilities for each object class. Evaluation In this research, a performance evaluation is carried out on the results of image enhancement that has been improved using the sharpening filter method.This performance evaluation uses the Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Metrics (SSIM) metrics.Furthermore, in the semantic segmentation of the nucleus and cytoplasm, each pixel is grouped into three classes: nucleus cells, cytoplasm cells, and background.Evaluation of the model's performance in the semantic segmentation process is done using the confusion matrix.These results of the methods used in segmentation provide insight into the U-Net Batch Normalization architecture's performance in accurately segmenting the nucleus and cytoplasm.In this research, the performance evaluation metrics used include accuracy, sensitivity, specificity, F1-score, and precision. 
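A minimal PyTorch sketch of one encoder block of the modified architecture is shown below; the Conv-BatchNorm-ReLU ordering and the 3×3 kernels follow the description above, while the use of two convolutions per block and other details are assumptions rather than the authors' exact implementation.

```python
import torch.nn as nn

def conv_bn_block(in_ch, out_ch):
    # One convolution block of the modified U-Net: 3x3 convolution followed by
    # batch normalization and ReLU (repeated twice here, which is an assumption).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# Encoder: conv_bn_block(3, 64) -> MaxPool2d(2) -> conv_bn_block(64, 128) -> ... ;
# the decoder mirrors this with ConvTranspose2d(..., kernel_size=2, stride=2),
# concatenation of the skip connection, and a final 1x1 convolution with softmax.
```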
Preprocessing Data augmentation can improve the amount of training data without losing semantic information and help reduce bias in the data.In this research, horizontal and vertical flipping techniques were used in the data augmentation process.The data augmentation process on pap-smear images is shown in Figure 4.It shows that in the horizontal reversal technique, the image is rotated horizontally, while in the vertical reversal technique, the image is rotated vertically.This creates a new variation in the dataset by changing the direction or orientation of the images.The original Herlev dataset consists of 917 images.Through the data augmentation process, the total amount of data increased to 2,751 with each addition https://ejournal.ittelkom-pwt.ac.id/index.php/infotel of data from vertical flipping.Furthermore, the augmented images undergo a process of image quality enhancement, where the flow of the image quality enhancement process is shown in Figure 5.It shows that the result of data augmentation is used as an input image of type BGR.Then, the BGR image is converted to an RGB image.Images of RGB are then subjected to contrast enhancement using the sharpening filter method.The goal is to make the nucleus and cytoplasm structures appear clearer and sharper.Furthermore, the image undergoes an image resize process, changing its size to 256×256 pixels. Image resize is a technique used to change the pixel size in an image without altering the important information contained in it.In this research, quantitative image quality measurements are performed by comparing the PSNR and SSIM values between the original image and the preprocessed image.The measurement results are presented in a comparison graph as shown in Figure 6.It, shows that the performance evaluation results using the sharpening filter method show the average PSNR and SSIM values that have approached or reached a number that is considered good.The PSNR value graph in Figure 6(a) shows the average PSNR value is 42.887.The PSNR value is used to measure the level of noise or distortion in the image after the preprocessing process.If the value of PSNR is higher, then the noise level in the enhanced image is lower.Meanwhile, Figure 6(b) shows a graph of the SSIM value with an average value is 0.908.A high SSIM value indicates good structural similarity between the enhanced image and the ground truth.Thus, it can be said that the image quality after enhancement is good. Training Data The training data process was performed using the preprocessed results, totaling 2,751 data, then split into 80 % training data and 20 % testing data.This resulted in approximately 2,200 training data randomly split.Furthermore, this training data was further divided The results of the graphs indicate that the used model does not experience overfitting and is capable of recognizing and learning patterns in the trained data.Based on Figure 7(a) and Figure 7(b), the performance of the modified U-Net architecture model is good in nucleus and cytoplasm segmentation, as indicated by an accuracy above 90 % and a loss value approaching 0 %. Testing The testing process is a step to test the model from the results of the training process using new data that has never been learned by the model before.The testing data consists of 551 data obtained from split data.At this stage, semantic segmentation predictions are performed for the nucleus and cytoplasm are performed, and evaluate the accuracy of the model. 
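The augmentation and quality-check steps described above can be sketched as follows; the Laplacian-based sharpening, the scikit-image metric calls and the 256×256 target size are reasonable stand-ins for the authors' pipeline, not a reproduction of it. Note that one original image plus a horizontal and a vertical flip gives the threefold increase from 917 to 2,751 images.

```python
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def augment_flips(image):
    # One original plus a horizontal and a vertical flip (917 x 3 = 2751 images).
    return [image, np.fliplr(image), np.flipud(image)]

def sharpen_and_resize(bgr_image, size=(256, 256)):
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    lap = cv2.Laplacian(rgb, ddepth=cv2.CV_16S, ksize=3)       # Laplacian edge response
    sharp = cv2.convertScaleAbs(rgb.astype(np.int16) - lap)    # subtract it to sharpen boundaries
    return cv2.resize(sharp, size, interpolation=cv2.INTER_AREA)

def quality(original, enhanced):
    # PSNR and SSIM between the original and the enhanced image.
    psnr = peak_signal_noise_ratio(original, enhanced)
    ssim = structural_similarity(original, enhanced, channel_axis=-1)
    return psnr, ssim
```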
Several comparisons between the original image, segmentation result, and ground truth are shown in Table 1.It shows the comparison between the segmented image and the ground truth.The structure of the pap-smear image consists of the nucleus (cell nucleus) labeled in light blue, the cytoplasm (cells surrounding the nucleus) labeled in dark blue, and the background labeled in red.Seen in Table 1, the segmentation results performed using the modified U-Net architecture with Batch Normalization have shown similarity with the ground truth.However, the segmentation results of the nucleus area are still not fully predictable.In addition, in some results, there are still points in the background that are incorrectly predicted as cytoplasm. The performance evaluation metrics used for semantic segmentation of nucleus and cytoplasm cells include accuracy, sensitivity, specificity, F1-score, and precision.Accuracy is used to measure the extent to which the segmentation model can correctly identify between nucleus and cytoplasm cells in pap-smear images.Sensitivity is used to measure the ability of the model to correctly identify cancer cells, including both nucleus and cytoplasm cells.Specificity is used to measure the ability of the model to correctly identify the background.Precision is used to measure how precise the model is in identifying nucleus and cytoplasm cells.F1-score is used for the harmonic mean between precision and sensitivity.A comparison of the obtained semantic segmentation performance evaluation results with other studies is shown in Table 2. Table 2 shows the comparison of research results using the same dataset for pap-smear image segmentation.It is observed that the semantic segmentation method proposed in this research achieved the highest values in terms of accuracy, F1-score, and specificity compared to previous studies.Specifically, the accuracy is 91.48 %, F1-score is 81.7 %, and specificity is 87.7 %.However, another study by [30] obtained the highest precision, and a study by [31] obtained the highest sensitivity, although these two studies only calculated three evaluation performance values.Compared to other studies, it can be seen that these studies only measured 2 to 4 evaluation performance metrics.According to the comparison, it is concluded that the proposed method has provided optimal performance in semantic segmentation. 
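The evaluation metrics compared above can be computed per class from a confusion matrix; the sketch below assumes rows index the ground truth and columns the prediction, which is a convention choice rather than something specified in the text.

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy, sensitivity, specificity, precision and F1 per class from a
    confusion matrix (rows = ground truth, columns = prediction)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - (tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / cm.sum()
    return accuracy, sensitivity, specificity, precision, f1
```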
Discussion In the process of nucleus and cytoplasm segmentation using the U-Net architecture, each pixel is grouped into three different classes.Class 0 is used for the cytoplasm label, class 1 for the nucleus label, and class 2 for the background label.A comparison of the performance evaluation of each label is shown in Figure 8.This figure shows the performance https://ejournal.ittelkom-pwt.ac.id/index.php/infotelevaluation results of each label, where it can be seen that class 1 has the highest accuracy and specificity values compared to other classes.Meanwhile, class 0 obtains higher F1score, sensitivity, and precision values compared to the other classes.The performance evaluation per label indicates that the accuracy obtained for all classes is very good, with results that are close to the ground truth.The F1-score shows the model's excellent performance in segmenting the nucleus and cytoplasm.Sensitivity shows that the models have a high ability to detect nucleus and cytoplasm objects, with higher values than the background.Specificity shows that the models have a high ability to detect objects other than the nucleus and background, although the specificity for the cytoplasm is slightly lower.Precision shows that the model can accurately identify boundaries for the cytoplasm, nucleus, and background, thereby reducing errors in classifying adjacent pixels for each object. Conclusion Based on previous research, the use of modified U-Net architecture for nucleus and cytoplasm segmentation has been proven effective in predicting the pixels representing the nucleus and cytoplasm from the given image data.This research modified the U-Net architecture by adding batch normalization in the semantic segmentation process but did not involve the classification process.Therefore, the future research of this research will focus on classification for cervical cancer detection based on the segmentation results obtained. Performance evaluation shows that the modified U-Net architecture has provided good segmentation results.The problem of network complexity and vanishing gradient during the training process was successfully overcome by the addition of batch normalization to the basic U-Net architecture.This led to accurate segmentation predictions based on the evaluation values obtained. Figure 1 : Figure 1: Workflow of the research in cervical cancer cell segmentation. Figure 2 : Figure 2: Nucleus and cytoplasm structure in pap-smear image. Figure 3 : Figure 3: Modified U-Net architecture with batch normalization. Figure 4 : Figure 4: Data augmentation using horizontal flipping and vertical flipping. Figure 5 : Figure 5: Flow of the image quality enhancement process. Figure 6 : Figure 6: Comparison graph of values between the original image and the preprocessed image: (a) PSNR and (b) SSIM. Figure 7 : Figure 7: Graphs obtained during the training process: (a) accuracy and (b) loss. Figure 7 ( Figure 7(a) represents the graph of accuracy for training data and validation data using the modified U-Net architecture.In the training data, the accuracy graph shows an increase in each epoch.Starting from 14 % in the first epoch, this number continues to increase until reaching 92 %.The same thing can be observed in the accuracy graph for the validation Figure 8 : Figure 8: Comparison of performance evaluation results per label. Table 1 : Comparisons of Original Image, Segmentation Result, and Ground Truth Table 2 : Evaluation result comparison
5,423.6
2024-05-08T00:00:00.000
[ "Computer Science", "Medicine" ]
Magnon interactions in a moderately correlated Mott insulator Quantum fluctuations in low-dimensional systems and near quantum phase transitions have significant influences on material properties. Yet, it is difficult to experimentally gauge the strength and importance of quantum fluctuations. Here we provide a resonant inelastic x-ray scattering study of magnon excitations in Mott insulating cuprates. From the thin film of SrCuO2, single- and bi-magnon dispersions are derived. Using an effective Heisenberg Hamiltonian generated from the Hubbard model, we show that the single-magnon dispersion is only described satisfactorily when including significant quantum corrections stemming from magnon-magnon interactions. Comparative results on La2CuO4 indicate that quantum fluctuations are much stronger in SrCuO2 suggesting closer proximity to a magnetic quantum critical point. Monte Carlo calculations reveal that other magnetic orders may compete with the antiferromagnetic Néel order as the ground state. Our results indicate that SrCuO2—due to strong quantum fluctuations—is a unique starting point for the exploration of novel magnetic ground states. One of the most studied electronic models is that of a two-dimensional square lattice [1]. The physics of these systems is well captured by the Hubbard model, from which an effective Heisenberg model can be generated to describe the AF ground state.In this strong-coupling limit, the spin exchange interactions are realized through virtual hopping processes.Upon down-tuning the interaction strength, the AF Mott state remains a theoretical ground state.However, in this limit, small perturbations (for example doping) can trigger new magnetic or metallic ground states [2,5].Pushing the Mott state into this soft regime is therefore of great interest.When potential and kinetic energy scales are comparable, quantum fluctuations enter the problem.Many experimental and theoretical studies have addressed this intermediate region of U/t.Upon doping, high-temperature superconductivity has been found [5] and spin-liquid states are predicted in certain models [6]. In spin-wave theory [7], quantum fluctuations describe the occupations of the bosonic modes as perturbations to the Néel state [8].The ground state can be viewed as a Néel state with finite boson density.The corresponding elementary excitation should therefore be considered as a magnon renormalized by its higher-order expansions, i.e., the magnon-magnon interactions.On an experimental level, it has however been difficult to measure or gauge the strength of quantum fluctuations.As mentioned above, magnon excitations [9] of the Néel state should be influenced by quantum fluctuations.The magnon dispersion is described by ℏω = Z c (k)ϵ k , where ϵ k is the "bare" magnon dispersion set by potential and kinetic energy scales, and Z c (k) = Z 0 c (1 + f k ) is the renormalization factor stemming from quantum fluctuations with f k being a momentum dependent function.In the strong coupling limit (U/t → ∞) quantum fluctuations are suppressed implying f k → 0 and Z c (k) ≈ 1.18 is essentially momentum independent [10,11].As quantum fluctuations grow stronger with gradually moderate values of U/t, the renormalization factor Z c (k) increases and ac-quires momentum dependence through a non-negligible f k .This limit governed by quantum fluctuations is interesting as it may provide physics beyond the antiferromagnetic Néel state. 
Conceptually, this moderate U/t limit is complicated due to a multitude of comparable magnetic exchange interactions.Nearest and next-nearest neighbour exchange interactions are given by J 1 = 4t 2 /U and J 2 = 4t 4 /U 3 , in the projection of the Hubbard onto the Heisenberg model.In the moderate or weak interaction limit, higherorder exchange interaction terms are gaining prominence.The ring-exchange interaction J □ = 80t 4 /U 3 becomes a significant fraction of J 1 (J □ /J 1 = 20t 2 /U 2 ) and manifests by a magnon zone-boundary dispersion [12][13][14][15][16][17].In this limit, higher-order hopping integral t ′ , can introduce new magnetic interaction term J ′ = 4t ′2 /U that further adds to enhance the zone boundary (ZB) dispersion.Within the Hubbard-Heisenberg model, the zone boundary dispersion, quantum fluctuations, and Z c correlate in the U/t and t ′ /t parameter space.In fact, the renormalization factor Z c gains its momentum dependence from the higher-order exchange interactions.Enhanced quantum fluctuations may thus introduce new magnetic ground states and with that exotic magnonic quasiparticles [18,19].It is thus interesting to study materials with significant higher-order exchange couplings.In the cuprates, ACuO 2 with A = Sr, Ca has been studied with electron spectroscopy and resonant inelastic xray scattering (RIXS) [17,18,20] due to its large ring exchange interaction.Yet, no experiments have demonstrated the importance of magnon-magnon interactions and quantum fluctuations through direct measurements of a momentum dependent magnon renormalization factor. Here we provide a RIXS study of SrCuO 2 (SCO) realized in thin-film format, which demonstrates a Mott-insulating nature by electron spectroscopy measurements [3,4].Analysis of the RIXS spectra led us to derive the single-and bi-magnon dispersions.Starting from an effective Heisenberg representation of the Hubbard model, we show that the observed single-magnon dispersion is inconsistent with a constant Z c for reasonable values of kinetic energy scales.We thus conclude that, in SrCuO 2 , quantum fluctuations are significantly influencing the magnon dispersion.Further, our analysis shows that the observed magnon dispersion is well described when introducing significant momentum dependence to Z c by evaluating the magnon-magnon interactions. This finding is further supported by comparing cuprate compounds with different correlation strengths.Our results thus provide a gauge for quantum fluctuations which are getting increasingly important as U/t is reduced.Possible exotic magnetic ground states, emerging from quantum fluctuations are here explored by classical Monte Carlo calculations. Results The crystal field environment around the copper site in SrCuO 2 is shown schematically in Fig. 1a.In contrast to, for example, La 2 CuO 4 (LCO), no apical oxygen is present in SrCuO 2 .Examples of Cu L-edge RIXS spectra, covering magnetic and dd excitations, are shown in Fig. 1b.The absence of the apical oxygen in SrCuO 2 pushes the d z 2 excitation well below the t 2g excitationsas previously established in CaCuO 2 [17,22].The two excitations at 1.43 and 1.73 eV are assigned to the d xy and degenerated d xz /d yz states, respectively.The origin of the additional peak at ∼2.06 eV, which has also been observed in CaCuO 2 [17,22], has been attributed to the incoherent component of the d xz /d yz orbital excitations due to the coupling to magnetic excitations [23]. 
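For orientation, the projected exchange couplings quoted at the beginning of this section can be evaluated directly from the Hubbard parameters; in the example below only U = 2.66 eV is taken from the text (it appears later in the Monte Carlo discussion), while U/t = 5 and t'/t = -0.4 are purely illustrative values, not the fitted parameters of Table I.

```python
# Leading-order exchange couplings from the Hubbard parameters (energies in eV).
def exchange_couplings(U, t, t_prime):
    J1 = 4 * t**2 / U              # nearest-neighbour exchange
    J2 = 4 * t**4 / U**3           # next-nearest-neighbour exchange
    J_ring = 80 * t**4 / U**3      # four-spin ring exchange; J_ring / J1 = 20 (t/U)^2
    J_prime = 4 * t_prime**2 / U   # exchange generated by the t' hopping
    return J1, J2, J_ring, J_prime

U, t = 2.66, 2.66 / 5
print(exchange_couplings(U, t, t_prime=-0.4 * t))
```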
Magnetic excitations have been recorded systematically along the (h, 0), (h, h), and zone boundary (azimuthal ϕ rotation with a constant in-plane momentum amplitude Q // ) directions with both linear vertical (σ) and linear horizontal (π) incident light polarisations.A single-magnon excitation manifests clearly in the π channel (see Fig. 1).When switching to σ polarisation, the single-magnon is suppressed as expected [24], and an excitation at higher energy appears.We interpret this as a magnetic continuum that in SrCuO 2 (and CaCuO 2 ) has a structure-sometimes referred to as a bi-magnon excitation [25][26][27].In what follows, we extract the singlemagnon and bi-magnon dispersions along the high symmetry directions. We analyze the low-energy part of the RIXS spectra by considering four components that include elastic scattering (grey shaded area-enhanced in the σ channel [24]), single-and bi-magnon (orange shaded area) excitations, and a smoothly varying background (grey dashed line).Elastic scattering is mimicked by a Gaussian function centred at zero energy loss.The energy width is slightly larger than the instrumental resolution due to unresolved phonon modes [28,29].The single-and bi-magnon excitations are described respectively by a damped harmonic oscillator convoluted with the instrumental resolution and a Gaussian function.Background is modeled by a second-order polynomial.In grazing-exit geometry, RIXS cross section from the magnon (bi-magnon) is generally enhanced when using the π (σ) polarized incident lights-as shown in Fig. 1c,d and Supplementary Fig. 1.We fit globally across the two light polarisations to extract the two magnetic contributions.The resulting single-and bi-magnon dispersions are plotted on top of the inelastic RIXS spectral weight (π polarisation) in Fig. 2a-c.Consistent with previous reports on CaCuO 2 [17,18], a large zone boundary dispersion ) of the single magnon excitation is observed with an essentially non-dispersive section along the (h, h) direction.Away from the Brilliouin zone centre, the bi-magnon dispersion ℏω bm roughly mimics the single magnon dispersion ℏω sm .At the zone boundary position ( 14 , 1 4 ), ω bm /ω sm ≈ 2. This ratio however varies significantly along the high symmetry directions. The fitting of the single-and bi-magnon excitations also provides information about spectral weight and quasiparticle lifetime.For most of the Brillouin zone, the energy width of the single-magnon is resolution-limited.However, around (0.5, 0) spectral weight suppression and shorter single-magnon lifetimes are observed consistently with what has previously been reported in La 2 CuO 4 and CaCuO 2 [14,17,18]. 
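A schematic version of this four-component fit is sketched below; the damped-harmonic-oscillator line shape, the Gaussian elastic and bi-magnon components and the quadratic background follow the description above, while the resolution convolution and the global fit over both polarisation channels are omitted, so this is not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(w, a, w0, s):
    return a * np.exp(-0.5 * ((w - w0) / s) ** 2)

def dho(w, a, w0, gamma):
    # Damped-harmonic-oscillator line shape for the single magnon (energy loss w > 0).
    return a * 4 * gamma * w0 * w / ((w**2 - w0**2) ** 2 + 4 * gamma**2 * w**2)

def spectrum(w, a_el, s_el, a_m, w_m, g_m, a_bm, w_bm, s_bm, c0, c1, c2):
    return (gauss(w, a_el, 0.0, s_el)        # elastic line
            + dho(w, a_m, w_m, g_m)          # single magnon
            + gauss(w, a_bm, w_bm, s_bm)     # bi-magnon / magnetic continuum
            + c0 + c1 * w + c2 * w**2)       # smooth background

# With measured arrays energy_loss (eV) and counts, and rough starting values p0:
# popt, pcov = curve_fit(spectrum, energy_loss, counts, p0=p0)
```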
Discussion The single-magnon dispersion of SrCuO 2 features two peculiar characteristics.A steep zone boundary dispersion is followed by a non-dispersive section along the (h, h) direction.Magnon excitations of layered copperoxides have been discussed via a Heisenberg Hamiltonian derived from the Hubbard model [2].In the simplest form, the nearest-neighbour exchange interaction J 1 = 4t 2 /U is described through the Coulomb interaction U and nearest-neighbour hopping integral t.The magnon dispersion is, in this limit, isotropic-given by ℏω [12], however, revealed a zone boundary dispersion indicating the importance of higher-order exchange interaction terms.To account for this zone boundary term, a ring exchange interaction J □ ∼ t 4 /U 3 was included to satisfactorily describe the observed magnon dispersion [12,14].Later more detailed studies [21,30] included higher-order hop-ping terms, i.e., next, and next-next nearest-neighbour hopping intergrals t ′ and t ′′ .This extended model, yields a magnon dispersion ℏω = Z c (U, t, t ′ , t ′′ )ϵ k (U, t, t ′ , t ′′ ), where Z c is a momentum dependent quantum renormalization factor and ϵ k (U, t, t ′ , t ′′ ) = A 2 k − B 2 k is the bare magnon dispersion with A k and B k determined by U , t, t ′ , and t ′′ , as described in refs.21 and 30.In the U/t → ∞ limit, the magnon-magnon renormalization factor Z c is momentum independent, and ϵ k (U, t, t ′ , t ′′ ) has an analytic expression [16,31].This expression has been used to fit the magnon dispersion of La 2 CuO 4 , with realistic values of U , t, t ′ , and t ′′ .In particular, ratios of t ′ /t ∼ −0.4 and t ′′ /t ∼ 0.2 are found consistent with density functional theory calculations [32] and angle-resolved photoemission spectroscopy experiments [16,33]. For SrCuO 2 , however, the constant Z c solution does not provide a satisfactory description of the observed single-magnon dispersion (see blue dashed lines in Fig. 3g-l).As La 2 CuO 4 and SrCuO 2 share similar square lattice structures, similar values of t ′ /t are expected.However, an unbiased fit yields unphysical values for the hopping parameters and physical sensible values provide poor fits.We are thus led to reject the initial ansatz in a numerical self-consistent fashion.For La 2 CuO 4 , this methodology confirms that Z c is roughly constant (see Fig. 3a-c and Table I) with marginal changes to U, t, t ′ , t ′′ (compared to the constant Z c model [16]).However, for SrCuO 2 , an entirely new solution emerges.Values of t ′ /t and t ′′ /t comparable to La 2 CuO 4 and a smaller U/t now describes the magnon dispersion-see solid lines in Fig. 3 and Table I.We stress that this new solution describes the observed dispersion using fewer fitting parameters as Z c is now given by U, t, t ′ , t ′′ .The moderate value of U/t implies a magnon-magnon renormalization factor that is strongly momentum dependent (see Fig. 3g-i Our results thus indicate that quantum fluctuations have a significant impact on the magnon dispersion in SrCuO 2 . In Fig. 
4, we plot the constant χ 2 (goodness-of-fit) contour lines that encircle solutions that are within 10% of the minimum of χ 2 .With the currently available data, a rather broad set of parameters describe the magnon dispersion of La 2 CuO 4 .For SrCuO 2 , however, χ 2 has a unique well-defined minimum confined to a narrow region of the parameter space.In Table I, the fitting values to describe the magnon dispersions for La 2 CuO 4 and SrCuO 2 are listed.For SrCuO 2 , these values represent a minimum in the χ 2 function.For La 2 CuO 4 , we further constrain the solutions by fixing t ′ /t = −0.4.In Fig. 4, the modeled zone boundary dispersion is plotted as a function of J □ /J 1 and J ′ /J 1 .Within the same parameter space, the Brillouin zone average Zc is shown.Generally, the model displays a correlation between magnon ZB dispersion and Zc .The Heisenberg-Hamiltonian projection from Hubbard model is breaking down in the limit where Zc ≫ Z 0 c -that is when (J □ + J ′ )/J 1 is large.Compared to La 2 CuO 4 , SrCuO 2 displays a larger ring exchange coupling J □ .In fact, the fitting parameters obtained for SrCuO 2 are close to the limit where the Heisenberg representation of the Hubbard model breaks down.This limit is characterized by a complete suppression of the staggered magnetization and the imaginary solution of the magnon dispersion (Fig. 4).It has been reported that the absence of apical oxygen in cuprates leads to a decrease in the electronic correlation strength [34,35], which agrees with the observations here.To corroborate our findings, we analysed the magnon dispersion of CaCuO 2 (CCO), which is another infinitelayer cuprate compound with a large ring-exchange coupling [17].As observed in ref. [17], the one-band Hubbard model with underestimated quantum renormalization fails to describe the magnon dispersion and yields an unrealistically small U (U/t = 4.9).As shown in Fig. 4 and Supplementary Figs. 2 and 3, the magnon dispersions in both compounds are only well described when in- and SCO/GSO (blue).The solid lines are corresponding fits using a Hubbard model including higher-order terms (see text).The parameters extracted from the fits are listed in Table I.Blue dashed lines mark the magnon dispersion obtained assuming a constant Zc = 1.219 [16,21], with U = 2.15 eV, U/t = 6.25, t ′ /t = −0.4,and t ′′ /t ′ = −0.5.Error bars indicate one standard deviation.The in-plane momentum amplitude Q // takes 0.461, 0.444, and 0.463 for LCO/STO, LCO/LSAO, and SCO/GSO, respectively.Data on LCO/STO and LCO/LSAO are taken from ref. [16].Top panels (a-c, g-i) display the momentum dependence of the quantum fluctuation factor Zc obtained from fitting with the Hubbard model. cluding substantial quantum corrections generated from magnon-magnon interactions.We thus demonstrate that SrCuO 2 -a moderately correlated Mott insulator-hosts strong quantum fluctuations that can potentially stabilize ground states beyond the AF ordered Néel state.Enhancing further J □ /J 1 or J ′ /J 1 would be of great interest to explore new quantum matter ground states. 
To gain insight into the possible magnetic ground states when the Néel order breaks down with increasing t and t′, we perform Monte Carlo calculations using a Heisenberg Hamiltonian including the first-, second- and third-nearest-neighbour couplings, as well as a four-spin ring exchange coupling (see Methods). To compare with the experimental results on SrCuO2, we fixed U = 2.66 eV and t″/t′ = −0.5. Incommensurate magnetic orders, characterized by a quartet of magnetic Bragg peaks around (0.5, 0.5), i.e., with wave vectors QM = (0.5 ± δ, 0.5) and (0.5, 0.5 ± δ), or QM = (0.5 ± δ/√2, 0.5 ± δ/√2), are found in the parameter space between the antiferromagnetic Néel and columnar orders (see Supplementary Fig. 4 for examples of the calculated spin structure factor). We plot in Fig. 4d the distance δ between QM and the Néel wave vector (0.5, 0.5) as a function of t and t′/t. While SrCuO2 is in the Néel AF ordered state, it is located not far from incommensurate magnetically ordered phases, which can be reached by increasing t. A further increase of t′/t enhances the next-nearest-neighbour coupling and stabilizes the columnar antiferromagnetic order. This tuning could potentially be realized by applying strain with different substrates [16,36]. Explorations of possible strain- or pressure-induced quantum critical behaviour would provide more insight into the nature of the magnetic ground states. Note that recent theoretical and numerical works on the two-dimensional Hubbard model have shown that, under small doping, the Néel order becomes unstable and is replaced by other magnetic orders [37][38][39][40][41]. A spin-charge stripe state with an incommensurate ordering wave vector has also been experimentally established in underdoped cuprates [42][43][44]. We point out that the classical Monte Carlo simulations do not capture the quantum nature of the problem, but they show the parameter space where the Néel state is expected to break down. When antiferromagnetic order is suppressed by the strong quantum fluctuations stemming from magnon-magnon interactions near the phase boundary, superconductivity could potentially be enhanced. It would therefore be of great interest to study how the different magnetic ground states influence superconductivity upon doping.
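The classical simulations referred to above can be sketched in a few lines. The example below runs a Metropolis Monte Carlo simulation of a classical Heisenberg model on a square lattice with first-, second- and third-neighbour couplings and then locates the dominant magnetic Bragg peak from the spin structure factor. The four-spin ring exchange term and the mapping from (U, t, t′, t″) to the couplings are omitted for brevity, and the coupling values, lattice size and temperature are placeholders, not the parameters used in this work.

```python
# Metropolis Monte Carlo for a classical J1-J2-J3 Heisenberg model on a square
# lattice, followed by the spin structure factor S(q) to locate the ordering
# wave vector. Couplings, size and temperature are placeholder values.
import numpy as np

rng = np.random.default_rng(0)
L = 20                        # linear lattice size (L x L sites)
J1, J2, J3 = 1.0, 0.2, 0.1    # antiferromagnetic couplings (arbitrary units)
T = 0.1                       # temperature in units of J1
sweeps = 300                  # Metropolis sweeps over the lattice

spins = rng.normal(size=(L, L, 3))
spins /= np.linalg.norm(spins, axis=2, keepdims=True)

NEIGHBOURS = [((1, 0), J1), ((-1, 0), J1), ((0, 1), J1), ((0, -1), J1),
              ((1, 1), J2), ((1, -1), J2), ((-1, 1), J2), ((-1, -1), J2),
              ((2, 0), J3), ((-2, 0), J3), ((0, 2), J3), ((0, -2), J3)]

def local_field(s, x, y):
    """Sum_j J_ij S_j over the J1, J2, J3 neighbours of site (x, y), periodic."""
    return sum(J * s[(x + dx) % L, (y + dy) % L] for (dx, dy), J in NEIGHBOURS)

for _ in range(sweeps * L * L):
    x, y = rng.integers(L, size=2)
    new = rng.normal(size=3)
    new /= np.linalg.norm(new)
    # Energy convention H = sum_ij J_ij S_i.S_j, so dE = (S_new - S_old) . h_local
    dE = np.dot(new - spins[x, y], local_field(spins, x, y))
    if dE < 0 or rng.random() < np.exp(-dE / T):
        spins[x, y] = new

ft = np.fft.fft2(spins, axes=(0, 1))               # Fourier transform per spin component
Sq = np.sum(np.abs(ft) ** 2, axis=2) / (L * L)     # spin structure factor S(q)
qx, qy = np.unravel_index(np.argmax(Sq), Sq.shape)
print(f"dominant ordering wave vector: ({qx / L:.2f}, {qy / L:.2f}) r.l.u.")
```

With these placeholder couplings the peak sits at the Néel wave vector (0.5, 0.5); moving the couplings toward the regime discussed above shifts the peak to incommensurate positions, which is how a phase map such as Fig. 4d can be traced out.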
Future theoretical studies, including extended dynamical mean-field theory calculations of the spin excitations [45][46][47][48][49] (beyond the scope of the current work), could offer more insight into the relationship between the quantum fluctuations and the degenerate ground states near the critical region. On the experimental front, such quantum effects could also be addressed by comparative RIXS measurements at a higher temperature (T ∼ 0.1J), which call for future exploration.

RIXS experiments Cu L3-edge RIXS experiments were carried out at the ADRESS beamline [53,54] of the Swiss Light Source (SLS) synchrotron at the Paul Scherrer Institut. All data were collected at base temperature (∼20 K) under ultrahigh vacuum (UHV) conditions, 10^-9 mbar or better. RIXS spectra were acquired in grazing-exit geometry with both linear horizontal (π) and linear vertical (σ) incident light polarisations, at a fixed scattering angle 2θ = 130°. The two-dimensional nature of the system ensures that the out-of-plane dependence of the magnon dispersion is negligible, as confirmed in a recent RIXS study on CaCuO2 [18]. The energy resolution, estimated by the full width at half maximum of the elastic scattering from an amorphous carbon sample, is 118.5 meV at the Cu L3 edge (∼931.5 eV). Momentum transfer is expressed in reciprocal lattice units (r.l.u.) using a pseudo-tetragonal unit cell with a = b = 3.97 Å and c = 3.4 Å for SrCuO2, and a = b = 3.8 Å and c = 13.2 Å for La2CuO4. RIXS intensities are normalised to the weight of dd excitations [55]. The staggered magnetization is calculated for a square lattice of N spins. Our code is available upon request.

Monte Carlo simulations. Monte Carlo simulations were carried out using the Heisenberg Hamiltonian projected from the Hubbard model, taking into consideration the leading contributions. For each set of input values (U, t, t′, t″), the simulation ran for 10^7 Monte Carlo steps with random starting configurations. The Monte Carlo calculation code is available upon request.

FIG. 1. RIXS on SrCuO2 with different incident light polarisations. (a) Crystal structure of SrCuO2 and schematic illustration of the RIXS scattering geometry. Incident light, either linear horizontal (π) or vertical (σ), is directed onto the film with variable angle θ. (b) RIXS spectra at Q = (0.34, 0) measured with π (blue line) and σ (purple line) incident x-rays. Spectra are vertically shifted for clarity. Dotted lines mark the peak positions of the dd excitations, determined from fitting with Gaussian components denoted by the shaded areas. (c,d) Zooms of the low-energy part (within the grey dashed boxes) of the spectra in (b). Solid lines are the sum of a four-component fit. Each component is indicated by dashed lines and shaded areas (see text and Supplementary Information for more details). Vertical dashed lines indicate the zero energy loss.

FIG. 2.
Magnon and bi-magnon dispersions and spectral weights observed in SrCuO2. (a-c) RIXS intensity as a function of momentum and energy loss measured with π polarized incident light, along three different directions in reciprocal space as indicated by the solid color lines in the inset. Elastic and background scattering has been subtracted. Open circles and diamond points indicate respectively the magnon and bi-magnon pole positions (see text for a detailed description of the analysis). In (b) the in-plane momentum amplitude is Q// = 0.463. (d,e) Single-magnon inverse lifetime Γ ∼ ℏ/τ along the (h, 0) and (h, h) directions. Horizontal dashed line indicates the applied energy resolution. (f-i) Normalized single-magnon (f,g) and bi-magnon (h,i) spectral weight along the (h, 0) and (h, h) high-symmetry directions. Error bars are determined from the fitting uncertainty.

FIG. 3. Magnon dispersion and momentum dependence of the quantum renormalization factor Zc. Bottom panels (d-f, j-l) display the magnon dispersion along the indicated momentum trajectories for LCO/STO (green), LCO/LSAO (pink), and SCO/GSO (blue). The solid lines are corresponding fits using a Hubbard model including higher-order terms (see text). The parameters extracted from the fits are listed in Table I. Blue dashed lines mark the magnon dispersion obtained assuming a constant Zc = 1.219 [16,21], with U = 2.15 eV, U/t = 6.25, t′/t = −0.4, and t″/t′ = −0.5. Error bars indicate one standard deviation. The in-plane momentum amplitude Q// takes 0.461, 0.444, and 0.463 for LCO/STO, LCO/LSAO, and SCO/GSO, respectively. Data on LCO/STO and LCO/LSAO are taken from ref. [16]. Top panels (a-c, g-i) display the momentum dependence of the quantum fluctuation factor Zc obtained from fitting with the Hubbard model.

FIG. 4. Quantum renormalization effect within the Hubbard model. (a,b) Zone boundary dispersion ratio EZB/E(1/2, 0) and renormalized staggered magnetization as a function of the exchange interactions J□/J1 and J′/J1. (c) Average renormalization factor Z̄c across the Brillouin zone for the same parameter space as in (a,b). Green, purple, yellow, and blue dashed circles indicate constant χ² contour lines with solutions that are within 10% of the minimum of χ² for LCO/STO, LCO/LSAO, CCO, and SCO, respectively. Grey, green, and purple filled symbols denote parameters determined from the Hubbard model with constant Zc on bulk LCO [12], LCO/STO, and LCO/LSAO [16], respectively. Magnon dispersion of CCO is adapted from ref. [17]. Details of the analysis are described in the Supplementary Information. Empty areas in (a,c) and (b) indicate where the magnon dispersion becomes imaginary and the magnetization negative, respectively. (d) Magnetic ground state structure as a function of t and t′/t obtained from Monte Carlo calculations with U = 2.66 eV and t″/t′ = −0.5 fixed. White pentagram marks the position of SCO.

The first three sums count respectively nearest, next-nearest, and next-next-nearest neighbours, using indices ⟨i, j⟩, ⟨i, i′⟩ and ⟨i, i″⟩. The last sum counts around the squares following the clockwise direction. Simulations were performed at a temperature T = 0.1 K on a sheet of 50 × 50 unit cells in the (a, b)-plane, i.e., 2500 magnetic sites with classical spin S = 1/2.
5,573.4
2023-11-28T00:00:00.000
[ "Physics" ]
Research on map-matching algorithm based on priority rule for low sampling frequency vehicle navigation data Purpose – There is a certain error in the satellite positioning of a vehicle. This error causes the positioning points to drift, which makes the vehicle trajectory shift away from the real road. This paper aims to solve this problem. Design/methodology/approach – The key technology to solve the problem is map matching (MM). At a low sampling frequency, the distance between adjacent points is large, which weakens the correlation between the points and makes MM more difficult. In this paper, an MM algorithm based on priority rules is designed for the characteristics of vehicle trajectories at low sampling frequencies. Findings – The experimental results show that the priority-rule-based MM algorithm can effectively match trajectory data of low sampling frequency to the actual road, and the matching accuracy is better than that of other similar algorithms; the processing speed reaches 73 points per second. Research limitations/implications – Although the algorithm design and experimental verification take the diversity of GPS data sampling frequencies into account, the experimental data used still come from a single source. Originality/value – Based on the GPS trajectory data of the Ministry of Transport, the experimental results show that the accuracy of the priority-rule-based algorithm is higher than that of weight-based algorithms. The accuracy of this algorithm is over 98.1 per cent, which is better than other similar algorithms. Introduction With the increase in car ownership, the application of car networking technology has developed rapidly. Based on the driving behavior data acquired by vehicle network terminals, services such as vehicle insurance, traffic supervision, route recommendation, travel time estimation and prediction, and urban planning can be carried out. These data are mainly collected by GPS vehicle terminals and on-board diagnostics. However, positioning errors make it impossible to use the data directly, especially in specific vehicle networking applications such as transportation. The GPS data of the "two passengers and one danger" national vehicle supervision platform are generally sent every 30 s. It is difficult to accurately match these GPS data to the GIS road network by using existing map matching algorithms (MMA). An MMA is an algorithm that precisely matches the position with the digital map in a GIS. At present, a large number of scholars at home and abroad have studied and improved GPS MMAs (Ollero, 2007; Boucher et al., 2013; Paefgen et al., 2014; Perrine et al., 2016). Through analysis and induction, it is found that when existing MMAs are applied to vehicle track data with a low sampling frequency (sampling interval of 15 s and above between adjacent points), their MM precision is low. Marchal et al. (2005) proposed a scoring model for large-scale data. Considering the point-to-segment distance and a connectivity principle, a weight formula was set up and the parameter settings were discussed. The results show that the algorithm has good timeliness. Blazquez and Vonderohe (2009) considered the differences between positioning data sources, proposed a parameter adjustment approach for rule-decision MM algorithms and found that different parameters influence the matching accuracy for positioning data of different sampling frequencies. Mokhtari et al.
(2014) proposed an integrated-weight MMA based on particle filtering. The algorithm considers two factors, heading angle and speed, and has a good matching effect when the GPS signal is shielded in a complex road network. Quddus and Washington (2015) considered the acquisition frequency of GPS data in their experiments and proposed an MM weight model based on distance and driving direction. Hashemi and Karimi (2016) proposed an MMA with dynamic weights considering the heading angle, the point-to-road distance and the distance between adjacent points. All of the above algorithms have much lower matching precision when applied to GPS points with a low sampling frequency. Ming and Karimi (2009) proposed a global map matching method based on a Markov model for wheelchair navigation (low speed and low sampling frequency). Goh et al. (2012) improved the HMM algorithm with a state transition probability determined by an SVM and conducted an experimental analysis on the same data; the algorithm has high precision but a high time cost. Based on this, Raymond et al. (2013) did not consider the information between observation points and directly used the distance between the two points as the parameter of the state transition probability, and the experimental results performed better. These three algorithms determine the state transition probability in an overly complex way and cannot guarantee the matching precision of low-frequency sampling points.

Therefore, this paper draws on the ideas of existing MMAs, considering the angle between the speed direction and the road traffic direction, and the shortest distance from the point to the candidate road segment. Based on a high-precision GIS electronic map and the GPS data of the Ministry of Transport, a map matching algorithm based on a priority rule (MMPR) is designed, and its effectiveness is compared with that of similar algorithms. The innovation of the algorithm lies in the following: MMPR finds the candidate road segment by setting the priority of the factors, which is different from existing weight algorithms, so that the importance of the two factors, speed direction and distance, can be effectively measured. In addition, MMPR accurately calculates the angle between the speed and the road traffic direction: first, the point on the candidate road segment closest to the observation point is found; the tangential direction at that point and the road traffic type are then considered; and finally, the angle between the road traffic direction and the speed direction is determined through angle conversion.

Research and design of map matching algorithm The specific process of the algorithm is as follows. First, the GPS points and the electronic map are input and the candidate road segments are determined within the candidate radius. Then the angle between the speed direction and the road traffic direction is calculated to find the candidate segment with the smallest angle. Next, the best candidate point is found based on the shortest distance from the observation point to that segment. Finally, the coordinates of the observation point are corrected using the coordinates of the candidate point and the trajectory is drawn. The process essentially repeats this iteration forward according to the timestamps of the observation points, and the algorithm ends when all the points have been matched.
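To make the matching step concrete, the following minimal Python sketch applies the priority rule described above (smallest heading-to-road angle first, then shortest distance) using plain planar geometry. The segment list, the 30 m candidate radius and the 5° tolerance used to break near-ties between angles are assumptions for this example; segments are treated here as two-way roads, so the one-way cases and the angle conversion of Table I are omitted.

```python
# Minimal sketch of the priority-rule matching step: angle first, then distance.
# Coordinates are planar (metres); road segments and thresholds are placeholders.
import numpy as np

def project_on_segment(p, a, b):
    """Foot point of p on segment a-b and the distance from p to it."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    foot = a + t * ab
    return foot, float(np.linalg.norm(p - foot))

def bearing(vec):
    """Direction of a 2-D vector measured clockwise from north, in degrees."""
    return float(np.degrees(np.arctan2(vec[0], vec[1])) % 360.0)

def heading_road_angle(heading_deg, a, b):
    """Smallest angle between the GPS heading and an (undirected) segment."""
    diff = abs(heading_deg - bearing(b - a)) % 360.0
    diff = min(diff, 360.0 - diff)        # fold to [0, 180]
    return min(diff, 180.0 - diff)        # two-way road: ignore travel sense

def match_point(p, heading_deg, segments, radius=30.0, angle_tol=5.0):
    """Return the corrected position of p, or None if no road is nearby."""
    candidates = []
    for a, b in segments:
        foot, dist = project_on_segment(p, a, b)
        if dist <= radius:
            candidates.append((heading_road_angle(heading_deg, a, b), dist, foot))
    if not candidates:
        return None
    best_angle = min(c[0] for c in candidates)                   # priority 1: angle
    shortlist = [c for c in candidates if c[0] <= best_angle + angle_tol]
    return min(shortlist, key=lambda c: c[1])[2]                 # priority 2: distance

# Toy example: a point heading east (90 deg) near an east-west and a north-south road.
segments = [(np.array([0.0, 0.0]), np.array([100.0, 0.0])),
            (np.array([50.0, -50.0]), np.array([50.0, 50.0]))]
print(match_point(np.array([40.0, 8.0]), heading_deg=90.0, segments=segments))
```

The corrected coordinate returned here is simply the foot point on the selected segment, mirroring the final correction step of the process described above.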
Factor selection Based on the research status of MMAs, it is found that current map matching mainly considers three factors: the speed direction, the distance from the observation point to the candidate point and road accessibility. In this paper, the MMA based on the priority rule first considers the speed direction, then considers the distance factor, and ignores the road accessibility factor. Next, the factors of the MMA are selected and analysed from the perspectives of the existing algorithms and factor analysis. The limitations of the existing algorithms are analysed as follows. Rule-based comprehensive-weight MMAs have different advantages and disadvantages; the main reason is the complexity of GPS data sets and the diversity of road network structures, which make it difficult to dynamically adapt the weight coefficients of distance and speed direction. Therefore, it is necessary to find an efficient method to measure the importance of the two factors, speed direction and distance. It can be seen from GPS data observation that the distance between an observation point and the real road segment is not always the smallest among all the candidate road segments. The fundamental reason is that the error region of GPS positioning is an elliptical domain, and the candidate segment closest to the centre of this domain is not necessarily the most probable one. Therefore, the distance factor is actually less important than the speed angle factor. HMM-based models solve the map matching problem and essentially take road accessibility into account. Figure 1 shows the frequency statistics of the lengths of the polyline segments in the electronic map; after statistical analysis of the road network structure, the length of a single polyline segment ranges from 0 to 1000 m. Comparing the lengths of the polyline segments of the electronic map with the distances between adjacent GPS points, it can be found that the reference value of the road accessibility factor is unstable. As can be seen from the figure, 95 per cent of single road segment lengths are distributed in (0 m, 200 m). Assuming an average speed of v = 50 km/h ≈ 13.8 m/s, the threshold of the acquisition frequency for adjacent GPS points to fall on the same road segment is f = 200/13.8 ≈ 15 s. Therefore, it can be initially considered that: when f < 15 s, the GPS sampling frequency is high, the adjacent points are close, and road accessibility is a good reference; when f > 15 s, the GPS sampling frequency is lower, the adjacent points are far apart, and the road accessibility factor has little reference significance.
If different matching algorithms were to be used for different sampling frequencies, the sampling frequency of the GPS points would first have to be identified from the timestamps before selecting an algorithm. However, real GPS data may be noisy: according to data observation, the sampling frequency of a GPS point sequence changes, and the time difference between adjacent points fluctuates between 0 and 30 s. For such a noisy sequence, matching by frequency would mean switching back and forth between algorithms, which is obviously less efficient. At the same time, when candidate road reachability is considered, if the last point is matched to the wrong candidate road segment, the next point will tend to be matched to the same segment, so that multiple consecutive points are matched incorrectly. Based on the above analysis, the MMPR algorithm proposed in this section first considers the speed direction and then the distance factor; the road accessibility factor is not considered in the MMPR algorithm. Candidate segment selection The selection of candidate road segments is the first step of map matching. Selecting suitable candidate road segments improves the computational efficiency of the algorithm on the one hand and the matching accuracy on the other. The real GPS position generally falls within an elliptical area, with certain major and minor axes, centred on the observation point. To simplify the calculation, the elliptical area can be abstracted into a candidate circle: using the observation point as the centre, a buffer is constructed with a certain radius, and a spatial cross-query between the buffer and the road network yields the candidate road segments. The candidate circle radius can be calculated according to equation (1), where a represents the positioning error of the road network, generally 5 m; v represents the width of the road, taking an average of 20 m; b is the GPS error (for the pseudo-random C/A code used in the civil signal system, the ranging accuracy is between 2.93 and 29.3 metres, so an average error of b = 16 m can be taken first); and m represents the width of the vehicle, generally 2 m; this finally gives r = 30 m. After the initial setting of the candidate radius, adjustments can be made through experimental analysis. Map matching algorithm The angle between the speed direction and the road traffic direction In the angle design, the point on the candidate road segment closest to the observation point is found first, the tangential direction of the curve at that point is then drawn, and this tangential direction is finally used as the direction of the road segment. The range of the velocity direction in the GPS data is (0, 360°); the direction of passage in the electronic map is represented by an ordered variable. By combining the start and end points of the polyline, the angle between the tangent at a point on the curve and the true north direction can be calculated, and the absolute difference from the speed direction angle then gives the angle between the speed direction and the road passing direction (denoted l). The determination of the road traffic direction angle (denoted u) can be divided into the cases of two-way streets and one-way streets. In the electronic map, a one-way street can be either a forward road (the entrance of the road in the
GIS electronic map is marked as the starting point of the line segment) or a retrograde road (the entrance of the road is marked as the end point of the line segment in the GIS electronic map); therefore, the determination of the angle can be discussed in three cases. 2.3.1 Two-way street. As shown in Figure 2 (left), the road network in the electronic map is represented by polyline segments, and the starting point and the ending point are generally marked. On a two-way road segment, the vehicle can travel from the start point to the end point, or from the end point to the start point. When it is not known from which intersection the vehicle entered, the difference between the start point and the end point can be ignored: the observation point is projected vertically onto the line segment to obtain the foot point, and the tangent of the line segment at the foot point is drawn. The angle of the tangent is generally measured from the positive horizontal direction and is denoted u; the speed angle a is generally calculated by the GPS receiver with the true north direction as the baseline. 2.3.2 Forward road. As shown in Figure 2 (middle), a vehicle on the forward road travels from the starting point to the end point of the polyline segment. Therefore, the difference between the starting point and the ending point cannot be ignored. After the foot point of the observation point on the candidate road segment is found, the tangent at the foot point is drawn; this tangent is directional, pointing from the beginning of the polyline towards the foot point. 2.3.3 Retrograde road. As shown in Figure 2 (right), a vehicle on a retrograde road travels from the end of the polyline to its starting point. Therefore, the difference between the starting point and the ending point cannot be ignored. After the foot point of the observation point on the candidate road section is found, the tangent at the foot point is drawn; the direction of the tangent is from the end of the line segment towards the foot point. The GPS speed direction angle is measured from the true north direction. The tangential angle of the road direction obtained above, measured from the horizontal direction, needs to be converted into an angle with respect to the true north direction so that it can be compared with the speed direction angle; the conversion rules, derived from elementary geometry, are shown in Table I. The angle l determined at this time lies in the range (0, 2π) and needs to be converted to (0, π) to participate in the calculation; the corresponding conversion rules are also given in Table I. Data preprocessing 3.1 GPS track data processing The GPS data in this paper come from the "two passengers and one danger" data of the Ministry of Communications. The data format is the offline DMP file format, which needs to be imported and reused through an Oracle database. Through data observation, the original GPS data contain 16 fields, such as the time stamp, latitude and longitude, speed and heading angle. There are data redundancy and data anomalies; therefore, the data need to be preprocessed as follows.
3.1.1 Remove data redundancy. There are serious data redundancy problems in the GPS data, of two main types. The first type refers to duplicate records in the original GPS data table, probably caused by the backup mechanism of the database. This article uses SQL statements to remove duplicates in Oracle. While removing duplicates, the massive GPS data are also filtered and the GPS data to be used are selected. The second type of redundancy refers to GPS points located where there is no road network structure. This is probably because the vehicle has arrived in an area without a road network structure after work, typically because the GPS receiver was not turned off after the vehicle was parked. Data are still collected at this time, but this part of the data does not help map matching. Therefore, to improve the matching efficiency of the algorithm, a spatial query is adopted, and once a point is found to have no candidate segments, it is deleted. 3.1.2 Abnormal data rejection. Anomalies in GPS track data are mainly caused by terminal recording errors or data transmission and are classified into three types. The first is abnormal extreme speed values, caused by recording errors; in the actual matching process, an upper speed limit can be set in an SQL statement to filter them out. The second is extremely abnormal latitude and longitude values; such data may have a positioning error greater than 100 m due to obstructions surrounding the vehicle and should be eliminated from the matching process. The third concerns unnecessary fields: the main fields used in the map matching process are the time, latitude and longitude, and speed direction, and the remaining fields should be filtered out to improve the timeliness of the algorithm. The final GPS attributes are shown in Table II. Electronic map processing The electronic map data used in this article are an industrial-grade vector electronic map in shp format, which contains national road, provincial highway, expressway and county-level road network information and takes up about 7 GB of disk space. It is imported into the spatial database ArcSDE through ArcGIS software, and the road network contains a total of 35 attribute fields. To improve the matching timeliness of the algorithm, the electronic map needs to be processed as follows. 3.2.1 Classification and screening of electronic map attribute fields. The electronic map is spatial data, including spatio-temporal information and attribute information. The spatio-temporal information contains the spatial attributes and geometric topological relations of the map data objects; it is mainly used to satisfy computational requirements and is generated automatically in the spatial database. The attribute information contains the feature attributes of the features in the map and is mainly used for display and description. After analysis and filtering, the key road network attribute fields are selected as shown in Table III.
3.2.2 Space clipping of electronic maps. To improve the speed at which the front-end program loads the map, the map needs to be clipped. First, the road network information of Zhejiang is selected, including national roads, provincial highways and expressway networks. Then, according to the administrative divisions and the latitude and longitude range of Zhejiang Province (118°-123°E, 27°-32°N), the national electronic map is cut and a rectangular road network including Zhejiang Province is obtained, as shown in Figure 3. Spatial coordinate transformation. The position description of any object requires a reference coordinate system. Map matching involves unifying the spatial coordinate systems of the GPS coordinates and the electronic map. The map used in this paper is an electronic map of China; domestic electronic maps mainly use the geographic coordinate systems of Beijing 1954 or Xian 1980. GPS uses WGS1984 (the 1984 geodetic coordinate system), whose results are expressed in latitude, longitude and altitude (B, L, H). Therefore, the WGS1984 coordinates need to be transformed into the geographic coordinate system. The conversion is carried out according to equation (2), in which (X, Y, Z) represents the coordinates in the geographic coordinate system and (λ, φ, H) represents the WGS coordinates. After this transformation, the GPS coordinates can be converted into the Beijing 1954 or Xian 1980 coordinate system by referring to the reference plane parameters in the earth ellipsoid data table. Once the coordinate conversion is completed and the reference coordinate system is unified, map matching can be performed; otherwise, the GPS display error in the road network will exceed 100 m, so that map matching cannot be performed. Experiment analysis 4.1 Analysis of results 4.1.1 Analysis of overall matching results. Figure 4 shows the overall result of matching GPS traces at low sampling frequencies. Take the trajectory of a car on a certain day as an example. The original data of that day contain 2,833 points; after data preprocessing (removing data redundancy), 130 points remain. The trajectory passes through the G15 Shenhai Expressway, G104, the S10 Wenzhou Ring Expressway and the G1513 Wenli Expressway. The MMPR algorithm matches 128 points correctly and 2 points incorrectly; the errors are due to incorrectly recorded heading angles of the GPS points. The weighting algorithm considers the angle between the velocity and the road direction and the distance factor; the two factors are first normalized. Through iterative experiments, it is found that an angle weighting coefficient of 0.6 and a distance weighting coefficient of 0.4 give the highest matching precision. The results show that the weight algorithm matches 109 points correctly and 21 points incorrectly, with a matching precision of 83.3 per cent, a matching duration of 1.23 s and a matching speed of 106 points per second, tested under the same environment. 4.1.2 Analysis of local matching results. In Figure 5, the X shapes represent the original points, the dots represent the corrected points, the arrows indicate the traveling direction, the green bold curve indicates the traveling trajectory and the other curves indicate the road network.
Figure 5 (left) is the result of using MMPR to match on a road network with roundabouts and parallel roads. There are four points in the figure, all matched to the correct road. Figure 5 (right) shows the weight algorithm at the same geographical location; two points are matched incorrectly: the point in the lower left corner is matched to the roundabout, because it is closer to the inner section of the roundabout, and the uppermost point is matched to the other side of the parallel road. Figure 6 (left) is the result of matching a complex intersection using the algorithm of this paper, and Figure 6 (right) is the result of matching the same position with the weight method. It is found through observation that the points matched by the algorithm of this paper are all matched correctly, whereas the weighting method has a point at the lower left corner of the island that is mismatched to the adjacent road segment; the reason is that the point is closer to the inner side of the roundabout. Algorithm performance evaluation To eliminate contingency, the 10-day GPS trajectory data of ten vehicles with low-frequency sampling points were selected for experimental analysis. The results are shown in Table IV, where the value on the left of the separator is the MMPR result and the value on the right is the result of the weighting method. The final matching accuracy of the algorithm is 98.10 per cent with a standard deviation of 0.012, and the matching speed is 73 points per second with a standard deviation of 1.748. To represent the effectiveness of the MMPR algorithm intuitively, in addition to comparing the matching results with the weighting method, results reported in two publications are also cited (Ming and Karimi, 2009; Liu et al., 2007). The former uses GPS data for wheelchair navigation with a low sampling frequency, and the latter (Liu et al., 2007) collects bus data every 30 seconds, so both are comparable. It can be seen from the table that the average accuracy of the algorithm is 98.10 per cent, which is 15.82 per cent higher than the weight algorithm, 2.10 per cent higher than the algorithm in Ming and Karimi (2009) and 0.30 per cent higher than the algorithm in Liu et al. (2007). The average processing speed of the algorithm reaches 73 points per second; the running speed is lower than that of the weight method because the weighting method's calculation rule is simple and sacrifices accuracy while improving running speed. Conclusions and discussion Based on the existing MMA theory, this paper does the following work. A priority-based MMA is designed. After demonstrating that the angle between the speed direction and the road traffic direction is a more important factor than the distance from the point to the candidate road segment, a method for calculating this angle is designed. Based on the "two passengers and one danger" GPS trajectory data of the Ministry of Transport, the experimental results verify that the accuracy of the priority-rule-based algorithm is higher than that of the weight-based algorithm; the accuracy of the algorithm in this paper exceeds 98.1 per cent, which is better than other similar algorithms. On the physical machine used in the experiment, the map matching speed reached 73 points per second.
In the algorithm verification of this paper, although the algorithm design and experimental verification take the diversity of GPS data sampling frequencies into account, the experimental data used still come from a single source. The road network structures that the experimental vehicle trajectory data can match on the electronic map are mainly of three kinds: national highways, provincial highways and expressways. When the algorithm is applied to inter-city road matching or other more complex environments, the accuracy and timeliness of the proposed algorithm may be reduced. Therefore, the next research direction is to collect vehicle trajectory data for more complex road network structures for algorithm testing, find the problems in actual matching, identify their causes and further improve the algorithm of this paper.
Figure 1. Frequency distribution of the lengths of the line segments
Figure 2. Two-way tangential angle and velocity direction angle
Figure 3. Rectangular road network (including Zhejiang Province)
Figure 4. Overall matching results
Figure 6. Complex intersection matching results
6,034
2019-04-16T00:00:00.000
[ "Engineering", "Computer Science" ]
Collective Langevin Dynamics of Flexible Cytoskeletal Fibers We develop a numerical method to simulate mechanical objects in a viscous medium at a scale where inertia is negligible. Fibers, spheres and other voluminous objects are represented with points. Different types of connections are used to link the points together and in this way create composite mechanical structures. The motion of such structures in a Brownian environment is described by a first-order multivariate Langevin equation. We propose a computationally efficient method to integrate the equation, and illustrate the applicability of the method to cytoskeletal modeling with several examples. Introduction The internal architecture of living cells relies largely on microscopic fibers, which form the cytoskeleton with their associated proteins. These fibers have remarkable mechanical properties. Microtubules and actin filaments for instance have persistence lengths of ∼5 mm and 20 µm, respectively, and can sustain pico-Newtons of force without breaking [1]. Yet these fibers can also be broken down quickly, because they are formed by the non-covalent assembly of protein monomers. Filament ends can grow or shrink, or even alternate between those two states in a remarkable process called dynamic instability [2,3]. Structurally, the monomers in microtubules and actin filaments assemble head to tail in a regular manner. On the resulting polar lattices, mechano-enzymes called molecular motors (for example kinesin on microtubules or myosin on actin-filaments) use chemical energy to move directionally [1] or to organize the filaments in space [4]. Furthermore, specific enzymes control the filaments by regulating nucleation, assembly/disassembly or even by severing the filaments. The cytoskeleton is involved in multiple cellular processes such as cytokinesis, motility, polarization and mitosis. These functions are accomplished by many filaments working together. In this way, a set of dynamic or short-lived filaments may form a stable larger assembly, as exemplified by the mitotic spindle [4]. Many of the enzymes involved in the assembly of these structures are part of multi-functional entities [5,6,7]. For example, motors form oligomers that can actively connect filaments together [4]; motors may be able to disassemble filaments [6]; nucleation can be controlled such that it occurs on existing filaments [8,9]; crosslinkers may be polarity-specific [10] and motors are sometimes linked to proteins that track the tips of growing microtubules [7,11,12,13]. Generally speaking, modularity allows the cytoskeleton to be reprogrammed, for example at different stages of the cell cycle. It allows cells to reuse the same functional elements to achieve different tasks and multiplies the number of ways in which the organization of fibers can be regulated. This modularity is certainly a consequence of the combinatorial exploration operating during natural selection [14]. In any case, the cytoskeleton contains, in addition to fibers, a kit of activities which can be combined in many ways. Biological systems are hard to understand, and theory is necessary to approach the non-intuitive aspects [15]. It is notable that many models in the cytoskeleton field often include the same basic elements (for a recent review on this subject, see [16]). This reflects the inherent modularity of the biological design illustrated briefly in the previous paragraph, and also affects the modeling approach.
It implies that it is worthwhile to build a computer simulation to model a few basic elements, if these elements can be combined freely to rapidly model diverse situations. In practice, the elements of the simulation (e.g. a model of kinesin, or a model of a severing enzyme) can even be implemented, tested and benchmarked by different teams of experts for each aspect of the system. Sharing computer code in this way can in fact be a practical means to combine the efforts of the community. Writing a cytoskeletal simulation is likely to be a collective task also because it is a demanding project, involving multiple aspects: (a) chemical reactions that occur inside cells, (b) transport along fibers, for example the motion of molecular motors, (c) assembly dynamics of cytoskeletal fibers and (d) motion and deformation of fibers. Fortunately, numerous algorithms are available for certain of these aspects, in particular for reaction-diffusion (see [17,18]). Transport along fibers can be modeled with advection equations, or with more details of the motion of the motors [19]. The assembly dynamics of fibers has been the subject of much research and cannot be reviewed here (see [16]). The deformation of the fibers is a classical mechanical problem (see for example [20,21]). However, the scale of living cells is associated with many specific features. In particular, Brownian motion plays a fundamental role, inertia is negligible [22] and the fibers are dynamic: they can lengthen or shorten by self-assembly. As a consequence, the physics of biological fibers is fundamentally distinct from other mechanical systems. In brief, public or commercial codes are not adapted to simulate the cytoskeleton. The purpose of this paper is to describe a method to calculate the mechanics of an ensemble of connected fibers and other objects, which is the basis of a cytoskeletal simulation such as cytosim. The physics of such a system is described by a Langevin equation (for an introduction, see [23]) that recreates the Brownian motion of the fibers and includes bending elasticity, fiber-fiber interactions and external force-fields. Following earlier work [24,25], we use constraints in order to maintain the length of the fibers. This is an alternative to methods in which potentials are used to represent the longitudinal stiffness of fibers. We extend this approach by introducing an implicit integration scheme. Our method was first used to simulate the effects of motor complexes on two radial arrays of microtubules (asters) [26], and more recently the assembly of anti-parallel microtubule arrays in S. pombe [7] and the positioning of the spindle in the C. elegans embryo [27]. A major aim of these simulations was to reconstitute the system's operation in silico, from established physical principles. This offers two major advantages: i) the assumptions of the model are well defined and can always be modified; ii) any property of the system can be measured easily. This facilitates further investigations. For example we could systematically simplify the model in order to identify a minimal set of working properties [7]. In addition, we could identify the parameter range under which the system can operate [27]. However, for these results to be valid, the system's operation needs to be reproduced correctly in the first place! To maximize the chances of success, it is desirable to reconstitute the mechanics in a physically sensible and accurate way. One may otherwise derive conclusions which do not apply to the real system.
In this paper, we focus on the mechanical aspects of the fibers, and explore the numerical resolution of the associated equations. We first describe objects that in addition to fibers are useful for simulating different cellular skeletons. We then present the equation of motion and discuss its numerical integration. We examine the numerical stability of the resulting method and discuss how it affects the simulation speed. Finally, we discuss how other aspects of the cytoskeleton can be added to extend the mechanical calculation. Objects More accurate mechanics can be achieved if we introduce two new objects in addition to fibers: spherical sets of points (spheres) and non-deformable sets of points (solids). These objects are also described with points but have different morphologies (see fig. 1). The mechanical properties are also distinct. While fibers may bend, the solids do not deform. The spheres can represent spherical viscous membranes such as vesicles. Any number of objects can be combined in various ways to build complex cytoskeletons. For example, to simulate interacting microtubule asters [26], fibers were positioned around a solid using static links (see fig. 2A). The solid represented in this case the organelle (called the centrosome) which in the cell generates microtubules in a radial fashion. In vivo as well as in the simulation, the resulting structure is radially symmetric, and the fibers have their ends mechanically joined together. Two such asters were further connected by another solid, to model the positioning of the mitotic spindle in C. elegans [27]. In this case, the additional solid represented the pole-to-pole mechanical connection achieved by the mitotic spindle. To simulate nuclear positioning in S. pombe, fibers (microtubules) were attached to a sphere, and the ensemble was confined in a cylindrical volume (see fig. 2B). The fibers and the sphere represented microtubules and the cell nucleus, which are attached also in the real cell. To model the formation of anti-parallel microtubule arrays in S. pombe [7], fibers were connected by motors and other crosslinkers (see fig. 2C). Using fibers and solids, it is also possible to model the segregation of parM plasmids in E. coli (see fig. 2D), a process which depends on actin-like filaments [28]. The objects can naturally be combined in many more ways than illustrated here. This enables diverse cellular mechanics to be reproduced, and consequently widens the application scope of the method. This freedom is intimately linked to the structure of the master equation that will be examined below, and to the way it is integrated numerically. Constrained Langevin Dynamics In the simulation, fibers and other objects are described by points. The coordinates of the points are collected in a vector x of size Nd, for a system of N points in dimension d. Following Langevin (for a simple introduction, see [23]), the equation of motion reads dx = µ F(x, t) dt + dB(t) (1). F(x, t), of size Nd, contains the forces acting on the points at time t. It includes object-specific forces such as bending elasticity, and all the links between different objects. dB(t), of size Nd, summarizes the random molecular collisions leading to Brownian motions; it is a stochastic non-differentiable function of time. The matrix µ contains the mobility coefficients of the object-points, which will be defined later for each object. In addition, certain distances between points inside the objects (|a_i − a_j| = λ_ij) must be conserved during the motion.
To satisfy these constraints, we perform a step of the dynamics in a subspace tangent to the manifold defined by the constraints, and project the result on the manifold. The procedure can be explained simply for a point n constrained to move at a distance r from a fixed position n_0 (see fig. 3). To calculate the motion of n, we first write its dynamics in the plane tangent to the sphere at the current position (this is the plane allowed by the constraint |n − n_0| = r). The restricted dynamics is integrated implicitly, and the result projected on the sphere to restore the constraint exactly. This approach can be generalized as described next. Numerical integration From an initial configuration, the system is calculated by discrete time steps τ (see [29] for a general discussion of numerical integration). To calculate x_{t+τ} from x_t, equation (1) is integrated implicitly. We will discuss the advantages of using an implicit rather than an explicit integration in section 9, and concentrate here on the practical issues. For an implicit integration, we need to express F(x, t) linearly as A_t x + G_t, where the square matrix A_t contains the stiffness coefficients associated with the interactions, and the vector G_t contains the constant forces. This linearization is obtained by summing over all the interactions present at time t (see fig. 6). In our simulations, many of the interactions were modeled as harmonic potentials for simplicity, and are therefore already linear. Non-linear interactions simply need to be linearized at this point. In particular, the linearization of the constraints leads to an orthogonal projection P(x), which will be defined later for each object. To obtain a finite difference scheme for the interval [t, t + τ], P and A are used at time t, but x is used at t + τ (using x_{t+τ} instead of x_t is the basis of implicit integration): x_{t+τ} = x_t + τ P_t µ (A_t x_{t+τ} + G_t) + P_t δB_t, leading to a system of linear equations: [I − τ P_t µ A_t] x_{t+τ} = x_t + τ P_t µ G_t + P_t δB_t (2), where the components of δB_t are β_i θ_{t,i}, and θ_{t,i} ∼ N(0,1) are Nd independent normally distributed numbers (derived from uniformly distributed pseudo-random numbers [29]). The factors β_i ∼ τ^{1/2} represent the magnitude of the Brownian motion during a lapse of time τ. We will see later how they are obtained by calibrating the diffusive motion of the objects. The equation can be solved to obtain x_{t+τ}, since both the right-hand side and the matrix [I − τ P_t µ A_t] are known. It would be inefficient to invert the matrix, because the system is sparse (it has only few non-zero coefficients). This is true of the matrix A_t, as long as objects are only connected to few others. It is also true of P_t, which is block-diagonal: it has one block for each object on the diagonal, and the rest of the coefficients are null. This is because the constraints never involve points from different objects, and the projection can thus be done independently for each object. In this situation, it is advantageous to solve the linear system using an iterative method [29]. Different iterative solvers are adapted to different matrices. Because P_t A_t is non-symmetric, we have used the biconjugate gradient stabilized method (BiCGSTAB, http://www.netlib.org). This method iteratively converges toward the solution of the linear system, and can be stopped when the difference from the exact solution is below a certain threshold. We set this threshold to ψ min(β_i), with ψ = 1/10. In this way, the numerical error on x remains below 10% of the Brownian motion, and the approximate solution of (2) is practically indistinguishable from the real one.
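The following minimal Python sketch performs one such implicit step for a toy system: two points in one dimension joined by a Hookean link, with no length constraint, so the projection reduces to the identity. The linear system of equation (2) is solved with BiCGSTAB through a matrix-free operator, so the matrix is never inverted. All parameter values are placeholders, and the convergence threshold ψ min(β_i) described above is not reproduced (the solver default is used instead).

```python
# One implicit step of eq. (2): [I - tau*P*mu*A] x(t+tau) = x(t) + tau*P*mu*G + P*dB,
# solved iteratively with BiCGSTAB. Toy system: two beads linked by a spring, P = I.
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import LinearOperator, bicgstab

kT = 4.2e-3    # thermal energy (pN um), placeholder
mu = 1.0       # mobility (um / pN s), identical for both points
tau = 1e-3     # time step (s)
k = 10.0       # link stiffness (pN / um)

# Force F = A x + G for a Hookean link of zero resting length between the beads.
A = csr_matrix(k * np.array([[-1.0, 1.0], [1.0, -1.0]]))
G = np.zeros(2)
P = identity(2, format="csr")    # no constraints in this toy example

def step(x, rng):
    beta = np.sqrt(2.0 * mu * kT * tau)          # Brownian amplitude per coordinate
    dB = beta * rng.standard_normal(x.size)
    rhs = x + tau * (P @ (mu * G)) + P @ dB
    lhs = LinearOperator(A.shape, matvec=lambda v: v - tau * (P @ (mu * (A @ v))))
    x_new, info = bicgstab(lhs, rhs, x0=x)
    assert info == 0, "BiCGSTAB did not converge"
    return x_new

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0])
for _ in range(1000):
    x = step(x, rng)
print("bead separation after 1000 steps:", x[1] - x[0])
```

Because the step is implicit, it remains well behaved even when τµk is not small, which is the kind of advantage of implicit integration discussed in section 9.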
In practice, it is wise to systematically vary ψ and τ for each application to check the convergence of the method. It is easy to verify, for example, that more stringent values of ψ produce the same results. Finally, since equation (2) is obtained by linearization, an additional correction is necessary to re-establish the constraints. The result of equation (2) is projected back on the manifold associated with the constraints [26]. This introduces corrections which are second-order in τ. In the following sections, we will call this procedure 'reshaping' the objects. We now survey how fibers, spheres and solids are represented in space, their mobility coefficients, projection operators and 'reshaping' procedures. The interactions between objects (which contribute to A_t and G_t) will be described subsequently. Linear set of points (fiber) Fibers are modeled as infinitely thin linear objects behaving like elastic, non-extensible rods [26]. Each fiber is represented by p + 1 equidistant model-points m_i, for i ∈ [0, p], separated by a distance L/p. A fiber is polar: m_0 is the minus-end and m_p the plus-end. The number of segments p is adjusted as a function of the total length L of the fiber. Points are added or removed in order to always minimize |ρ − L/p| for each fiber as it grows or shrinks (see fig. 4). The desired segment length ρ is a parameter affecting the precision of the simulation. To set ρ, one may run a representative case with various values (for microtubules, ρ < 0.5 µm is usually appropriate). It is often necessary to interpolate between the model-points, for example when calculating the position x of a molecule attached to the fiber. If m_k and m_{k+1} are the model-points on each side of x, we use x = (1−α)m_k + αm_{k+1}. The interpolation coefficient α ∈ [0, 1] is calculated from the known relative positions of the three points along the fiber: α = |m_k x|/|m_k m_{k+1}|. The model-points are themselves updated using this interpolation procedure at every time-step if the length of the fiber has changed (see fig. 4). Bending elasticity Fibers can bend under external forces and resist these forces elastically. The standard formula for bending elasticity [20] can be applied to strings of points. For any set of three consecutive points m_k, k ∈ {i − 1; i; i + 1}, we approximate it linearly as a triplet of forces {−F; 2F; −F}. Each triplet corresponds to the torque generated between two consecutive segments (see fig. 5). Furthermore, we have F = κ (m_{i−1} − 2m_i + m_{i+1}) / (L/p)^3, where κ is the bending modulus of the fiber, and L/p the length of each segment. The result was verified by comparing the buckling threshold in the simulation with Euler's formula π²κ/L². The procedure is appropriate if ρ is such that the angles between consecutive segments remain small during the simulation (not shown). Physically, the forces are isotropic, i.e. they can be written as a reduced matrix of size p × p (and not pd × pd), obtained by adding several copies of the 3×3 matrix E = −(1, −2, 1)⊗(1, −2, 1) (⊗ is the tensor product). The final result is simple because points are distributed regularly over the length of the fiber (see fig. 5). Mobility The motion of an object at low Reynolds number is characterized by a mobility. This is defined by factors which link speed and force (speed = mobility × force). These factors depend on the size and shape of the object, and on the viscosity η of the surrounding fluid.
For instance a straight cylinder has two mobility factors, because it is twofold easier to move it in the longitudinal direction than in a transverse direction. This anisotropy could not be implemented simply, because fibers in the simulation may bend and adopt arbitrary shapes. An exact calculation would require finding the hydrodynamic interactions between all the points in the system. This can be done in the future, but for simplicity, we have so far used the averaged mobility of a straight rod of length L and diameter δ: µ = log(L_h/δ)/(3πηL) [30]. The logarithmic term is an effective hydrodynamic correction on the scale L_h, which is either the length of the fiber or a hydrodynamic cut-off, whichever is smallest. We derive a single mobility factor for the p + 1 points representing a fiber: µ_p = (p + 1) µ. Projector associated with the constraints In this section, we calculate the projection P derived from the constraint that the length of the fiber should remain constant during the resolution of equation (1). For each fiber, the coordinates of the p + 1 model-points m_k are stored in a vector of dimension (p + 1)d (for d = 3, {x_0, x_1, x_2} correspond to m_0, and {x_3, x_4, x_5} to m_1, etc). The motions of these points are determined by external forces f = {f_k}, and additionally by internal forces f̂ = {f̂_k}. The speeds resulting from f + f̂ should be compatible with the constraints. Because the mobility coefficients are the same for all the points (µ_p, see sec. 5.2), the speed of the points is v = µ_p(f + f̂). This motion maintains the constraints if J v = 0, where J denotes the Jacobian matrix of the constraints. Therefore f̂ must be such that J(f + f̂) = 0. Furthermore, internal forces should not contribute to global motion or rotation of the object. This imposes that their work should be null for any motion compatible with the constraints: f̂ · u = 0 for any u such that J u = 0. This implies that f̂ = Jᵀλ, where λ is a vector of size p (the Lagrange multipliers). We derive J(f + Jᵀλ) = 0, and since JJᵀ of size p × p is non-singular, λ = −(JJᵀ)⁻¹ J f, and finally f̂ = −Jᵀ(JJᵀ)⁻¹ J f. This shows that the total force can be obtained linearly as f + f̂ = P f, with P = I − Jᵀ(JJᵀ)⁻¹ J. From this result, it is clear that P is an orthogonal projection (P is symmetric and idempotent, P P = P). Notice that JJᵀ is banded symmetric, and therefore easy to invert, which means that P can be computed fast. P (which depends solely on x) is one block of the operator P_t used in equation (2). Fibers are 'reshaped' to restore the constraints exactly after the model-points have been moved. This is done sequentially for k ∈ [0, p], by moving the points m_0 ... m_k in the direction of m_{k+1} − m_k and m_{k+1} ... m_p in the opposite direction, to restore |m_{k+1} − m_k| = L/p while conserving the center of gravity of the fiber. Brownian motion To simulate Brownian motion, a term δB_t is attributed to each fiber coordinate x_t (equation 2). This term is most simply calibrated by considering diffusion in the absence of bending or external forces (A = 0, G = 0). If we first assume P_t = I in equation (2), we get x_{t+τ} − x_t = δB_t. To produce a pure diffusion with a coefficient D, one needs ⟨δB_t⟩ = 0 and ⟨δB_t²⟩ = 2Dτ. This holds true if δB_t is normally distributed, of mean zero and variance 2Dτ. We can use δB_t = βθ, where θ ∼ N(0,1) is a random number generated for each time step, and β = √(2Dτ), as mentioned in section 4. From Einstein's relation, we set D = µ_p k_B T, where µ_p is the mobility, k_B the Boltzmann constant, and T the absolute temperature.
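To make the construction of P concrete, here is a minimal numpy sketch for a single fiber. It assumes that row k of the Jacobian J carries the segment vector m_{k+1} − m_k with opposite signs on the two points it connects (any row scaling leaves P unchanged), and it uses a dense solve of the small system JJᵀ for clarity rather than exploiting its banded structure; the fiber coordinates are placeholders.

```python
# Constraint projector P = I - J^T (J J^T)^{-1} J for the segment-length
# constraints of one fiber; coordinates below are placeholders.
import numpy as np

def constraint_jacobian(points):
    """J of shape (p, (p+1)*d): row k couples points k and k+1 along segment k."""
    n, d = points.shape                      # n = p + 1 model points
    J = np.zeros((n - 1, n * d))
    for k in range(n - 1):
        seg = points[k + 1] - points[k]
        J[k, k * d:(k + 1) * d] = -seg
        J[k, (k + 1) * d:(k + 2) * d] = seg
    return J

def projector(points):
    J = constraint_jacobian(points)
    JJt = J @ J.T                            # banded, symmetric, size p x p
    return np.eye(J.shape[1]) - J.T @ np.linalg.solve(JJt, J)

# Slightly bent fiber with 4 model points in 2D, segment length close to 1.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.3]])
P = projector(pts)

# A pull along +x on the last point is redistributed so that, to first order,
# no segment length changes; likewise J @ (P @ dB) vanishes for random dB.
f = np.zeros(pts.size)
f[-2] = 1.0                                  # x-component of the last point
dB = np.random.default_rng(2).standard_normal(pts.size)
print("projected force:", np.round(P @ f, 3))
print("residual constraint violation:", np.round(constraint_jacobian(pts) @ (P @ dB), 12))
```

Projecting the vector of (p + 1)d random numbers in this way is exactly what produces the calibrated diffusion and the thermal deformations of the fiber, as described next.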
For a fiber with p+1 points, we use (p+1)d random numbers, independent and all normally distributed of variance β 2 . Projecting these numbers with P produces the appropriate diffusion for the fiber, as well as thermally-driven deformations. For example, the translation x of the center of gravity depends on the sum of all the terms in δB corresponding to the fiber, leading to a diffusion D = µk B T (with µ and not µ p ). Spherical set of points (sphere) To simulate the nucleus of S. pombe and attach microtubules on its surface (see fig. 2B), we implemented a 'spherical set of points' of radius r. Such object is composed of a point n 0 in the center, and q additional points n i on the periphery. If we define r k = n k − n 0 , the constraints are |r k | = r. A sphere moves as a rigid body, and the peripheral points behave as if they were embedded in a viscous surface (see fig. 1). If f k is the force applied at point k, the motion of the set reads: is the total torque calculated from the center, and where is the projection on the plane tangent to the sphere in r k . dB R , dB T and dB S k are the Brownian terms. Note that these equations would not describe a set of peripheral points articulated around a central node. For example, the motion of the center n 0 depends on the sum of all the forces applied to the object, and not only on the force applied in n 0 . This in fact corresponds to a sphere with points on its surface. To keep track of the orientation of the sphere, we also included three reference pointsñ k on the surface, which form with n 0 a reference frame associated to the sphere. The motion of these reference points is entirely determined by the total torque on the sphere: dr k = µ R M dt + dB R ×r k , where as beforer k =ñ k − n 0 . When the object needs to be 'reshaped', the peripheral points are simply projected on the surface (n 0 is not moved). Mobility and Brownian Motion The equations involve three mobility factors: the translation and rotational mobility of the sphere µ T and µ R , and the mobility of the points in the surface µ S . Stokes' law can be used to set µ T and µ R , if the sphere is surrounded by a large volume of fluid. The mobility coefficients for the points in the surface can also be calculated [31]. As described above, points undergo three different types of motion, and a random number δB t in equation (2) is associated with each of these motions. The parameters are calculated by considering diffusion in the absence of other forces (A = 0 and G = 0). For the translational diffusion of the sphere, the result from equation (3) is obtained as previously for the fiber: β T = 2µ T τ k B T . Rotational diffusion is calibrated using equation (3). If r t is fixed on the surface, we get r t+τ − r t = δB R t × r t . This should be a rotational diffusion of a point on a sphere: Since |r t | = r, we can use for δB R t a random vector with d independent components of mean zero and variance 2τ µ R k B T /r 2 . A peripheral point r t also diffuses on the surface, which in equation (3) is described by r t+τ − r t = P k δB S k,t . The projection p t of r t should diffuse in 2D: Since P k is the identity in the tangent plane, we used for δB S k,t a vector with d independent components of mean zero, and variance 2 τ µ S k B T . Non-deformable set of points (solid) We also implemented non-deformable objects called solids (see fig. 1) in which the points move together in such a way that the shape and size of the set is conserved. 
The number of points p in a solid, and their positions s i can be chosen arbitrarily, and each point is associated with a radius a i ≥ 0. The mobility of the solid is derived from Stokes's result for the spheres of center s i and radius a i , neglecting for simplicity the hydrodynamic interactions between the spheres. It is possible to include points with a i = 0 provided that i a i > 0. In our previous work, we have actually used solids where only one a i was non-zero. These solids moved like isolated spheres, and the points a i where positions to which forces could be applied. Mobility and Constrained Motion Because the set of points should not deform, its elementary motion during a time-step can be written as (s t+τ where v and ω are instantaneous translation and rotation speeds. The spheres of radius a i in a medium with viscosity η have a translational drag coefficient ξ i = 6πηa i , and a rotational drag coefficient ξ ω i = 8πηa 3 i [30]. The forces and torques resulting from the friction of the fluid on the sphere thus read: and should match the externally applied forces f i : This set of four equations can be solved algebraically in both 2D and 3D, to express v and ω as a function of the external forces f i . The result always fits in the format of equation (1). It is actually not necessary to calculate the matrix P to run a simulation. It is more efficient to calculate v and ω when the product P µf is needed. To 'reshape' a solid, one may restore a reference configuration in the current position and orientation. For this, the best translation and rotation which brings the reference points onto the current points is calculated [32]. The current points are then replaced by the transformed reference configuration. The Brownian components are calibrated as described before. Interactions between objects The three objects defined previously can be linked together using elementary interactions. By adding the contributions of all these interactions in the system, we obtain the linearized force F (x, t) = A t x + G t , which enters equation (2). In practice, each elementary interaction leads to a small matrix, which needs to be added to the matrix A t and vector G t , at the right rows and columns to correspond to the appropriate points (see example on figure 6). It is necessary to repeat the procedure at every time step, because the position of the interactions may change with respect to the model-points. We define four interactions in the case where they connect model-points of the objects. We later explain the procedure to connect intermediate positions between the model-points. This approach can be generalized to more complicated interactions if necessary. For example, it is possible to implement a ring able to slide along a fiber with viscous resistance [33]. 8.1. Connecting an object to a fixed position. The simplest way to immobilize an object is to attach a point a within the object to a fixed position g. If the stiffness of the link is k, the resulting force is f a = k (g − a). In practice, this means adding −k at one diagonal position in matrix A t , and kg to the vector G t (see fig. 6). Such interactions are used to model gliding assays (see fig. 8) in which motors immobilized on a surface propel fibers in solution. Each attached molecular motor leads to an elementary interaction where g corresponds to the place of immobilization, and a corresponds to the position on the fiber at which the motor domain is attached. Connecting two objects. 
Points from two different objects can be connected by a link of stiffness k. The forces between the points are f a = −f b = k (b − a). These elementary interactions are effective to model oligomeric motors [26] and more generally any entity able to connect two fibers together (see fig. 2C). In the case of an oligomeric motor, a and b are the positions to which the two motor domains are attached on the fibers. Confinement in a convex shape. To confine the objects inside a convex shape, we use a harmonic potential that is flat inside the allowed region, and rises quadratically away from its edge. Hence, a point a outside the cell volume is subject to a force f (a) = k(p(a) − a), where p(a) is the closest point to a on the edge of the allowed volume. Because p is also the orthogonal projection of a, the force corresponds to a friction-less edge. We linearized f as x → k ( e a · (p(a) − x) ) e a , where e a is a unit vector in the direction of p(a) − a. This linearization corresponds to the tangent plane in p(a), and usually gives a good approximation of f (a) as long as the curvature is small. To confine a fiber, it is sufficient to follow the procedure for its model-points, if the volume is convex, which is the case for example of the cylindrical yeast S. pombe (see fig. 2B). To confine the nucleus of radius r in the same volume, we used a cell volume reduced by r. In this way only the center of the sphere needs to be tested. Connecting two objects at a given distance. A Hookean spring of stiffness k with a non-zero resting length r between two points a and b corresponds to: This force should be linearized for |δ| ≈ r, leading for a to a term krδ/|δ| in G t and a contribution in A t which is: and the opposite contributions for b. This interaction can be useful to introduce a repulsion between the points. It can for example represent the physical interaction between the nuclear membrane and the microtubules in S. pombe (see fig. 2B). Interpolation of forces We have discussed connections which were attached to model-points. However, in the case of a fiber, a molecule may bind at any position x, which is likely to be between two model-points m k and m k+1 . When this happens, a is interpolated from the flanking model-points using a coefficient α = |m k x|/|m k m k+1 | in [0, 1]. In the same way, a force f applied in x can be distributed to the model-points as f k = (1 − α)f and f k+1 = αf . Since this procedure preserves any linearity in the relationship between force and coordinates, the different matrix elements mentioned previously can be used with interpolated points, provided they are multiplied left and right by an appropriate weight matrix. We can illustrate the procedure for the simplest connection f a = −f b = k (b − a) of stiffness k between two points a and b (section 8.2), which reads: When a and b are model-points, this 2 × 2 matrix is a reduction of A, corresponding to the x, y or z-subspaces. This is sufficient in this case because a Hookean spring of null resting length is isotropic, that is to say it does not mix x, y and z coordinates, and applies similarly to each subspace. This is not the case for all interactions discussed in this section, and it is often necessary to calculate a full matrix. Moreover, when a and b are intermediate positions between the model points, we have two indices k, l and two interpolation coefficients α, β such that a = (1−α) m k +α m k+1 and b = (1−β) m l +β m l+1 . 
If we define α = 1 − α and β = 1 − β, and w = α α 0 0 0 0 β β , we get: The resulting 4 × 4 matrix is w (−k) w t , with w t = (α, α, −β, −β). We derive that a matrix made by adding multiple such interactions is symmetric negative-semidefinite (x t Ax ≤ 0, for any x). The fact that this is true for any configuration of the connections guarantees the numerical stability of the method, as explained next. Numerical Stability and Performance We have described all the components of equation (2) which describes the collective mechanics of cellular fibers and other objects. The necessary steps of the calculation are summarized in figure 7. It is useful at this stage to examine the method mathematically. This is usually done by looking at two properties: precision and numerical stability [29]. The precision is a measure of how the typical error behaves when the time-step τ becomes small. The numerical stability is a measure of how large τ can be, before the calculation fails. Numerical precision is important for deterministic equations, for example to predict the trajectories of celestial bodies. However, this is not so critical at the cellular scale. In fact, to simulate the Brownian motion present in the cell, a random term δB ∼ √ τ was included in equation (2). The presence of this 'noise' indicates that the physics itself limits the precision at which the position of an object can be predicted. This fact undermines the usefulness of high precision schemes. The implicit method that we have described is of order one: the step's error scales like O(τ 2 ), which is better than the physical 'noise' in √ τ . We found that it was not practically useful to use higher order numerical schemes. In contrast, the numerical stability of the method is most important. Indeed, explicit schemes usually converge only if the time-step is small. In general, a condition like τ µk < 1 must be fulfilled, where µ is the mobility of a point in the system, and k the stiffness of the interaction potential. For example, we looked at a test-case in which a microtubule is pushed by immobilized motors (see [25] and Fig. 8). It can be simulated explicitly only if τ < 1 µs, but the implicit method can use larger time-steps. To achieve this stability, we treated the repulsive and attractive interactions in the system differently. Compressive forces in the fibers (which are repulsive in nature) were replaced by constraints. All the other forces were attractive. This ensured that A t would be negative-semidefinite (this result was proven in section 8.2 for Hookean interactions of null resting length). Mathematically, because P t is an orthogonal projection, we can show that the eigenvalues of I −τ µP t A t are always greater than 1, for any value of τ . This implies that our integration scheme is unconditionally stable. For the other elementary interactions, some instabilities may appear, but only for very high values of the time step (not shown). Beyond stability, other considerations naturally limit the choice of τ . In particular the iterative solver might not converge when τ is large. The optimal time-step generally depends on the problem studied, and it is best to perform systematic trials to find it. For the test-case (see fig. 8), the results are consistent for τ < 20 ms. This means that a value of 5 or 10 ms would be appropriate. The computational requirements depend on the total number of steps (total time/time-step), but also on the cost of individual steps. 
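To make the stability argument above concrete, the sketch below assembles A and G for a toy 1D system of three points (one link to a fixed anchor and one link between two points, as in sections 8.1 and 8.2) and performs one implicit step. The implicit form used here, (I − τµPA) x_{t+τ} = x_t + τµPG + δB, is an assumption consistent with the text but not quoted from it, and P is the identity since no length constraints are imposed on isolated points. All numerical values are illustrative.

```python
import numpy as np

tau, mu = 0.01, 1.0            # time-step and point mobility (illustrative)
k1, k2, g = 100.0, 50.0, 0.0   # link stiffnesses and fixed anchor position

# Assemble F(x) = A x + G for three points in 1D (sec. 8.1 and 8.2).
A = np.zeros((3, 3))
G = np.zeros(3)
A[0, 0] += -k2                 # point 0 linked to the fixed position g
G[0] += k2 * g
A[1, 1] += -k1                 # points 1 and 2 linked, zero resting length
A[2, 2] += -k1
A[1, 2] += k1
A[2, 1] += k1

P = np.eye(3)                  # no constraints here

def implicit_step(x, dB):
    """One step of the assumed implicit form of equation (2)."""
    M = np.eye(3) - tau * mu * P @ A
    return np.linalg.solve(M, x + tau * mu * P @ G + dB)

# A is negative-semidefinite, so the eigenvalues of M are >= 1 for any tau:
print(np.linalg.eigvalsh(np.eye(3) - tau * mu * A))

x = np.array([1.0, 0.5, -0.5])
dB = np.sqrt(2.0 * mu * tau) * np.random.standard_normal(3)  # k_B T folded into units
x = implicit_step(x, dB)
```

The printed eigenvalues illustrate why the scheme remains stable for arbitrarily large τ when all attractive interactions contribute negative-semidefinite blocks to A.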
An implicit step of integration is always more costly than an explicit step, because a linear system must be solved. However, the use of sparse matrix techniques reduces the additional work. In practice the considerable reduction in the number of steps makes implicit simulations faster (in the test-case, this gain is 10 4 , using τ = 10 ms instead of 1 µs). Increasing the execution speed is essential if many simulations need to be performed. Implicit methods require increased numerical labor, of which we have illustrated the main difficulties. Using the method described here, we can simulate the examples shown in figure 2 B, C & D much faster than real time using one processor (www.cytosim.org). Other Elements of a Cytoskeletal Simulation In addition to mechanics, a cytoskeletal simulation such as cytosim must include additional aspects such as the motion of molecular motors, their binding/unbinding dynamics, as well as the transitions between growth and shrinkage of dynamic fibers. These processes can be modeled most simply by executing small sub-routines after the Brownian mechanics has been calculated, because they correspond to independent operations (see fig. 7). However, two particularly important aspects of cytoskeletal physics need to be mentioned. Firstly, only in very particular cases can we approximate the system as a well-mixed reactor. At least some of the molecules should be spatially resolved. Secondly, the mechanics commonly affects the chemistry. For instance the rates of certain key reactions are forcedependent. This is the case for the unbinding rates of molecular motors and for their stepping rate (see below). Because these elements are essential for modeling the system accurately, it will rarely be possible to apply algorithms developed for purely chemical systems (eg. the Gillespie algorithms [34] or even spatially resolved methods [35]) without extensive modifications. We can however use simple and robust simulation strategies, as illustrated below in the case of molecular motors. Modeling Molecular Motors In cytosim, a motor is characterized by a position, when it is not attached, and by a pointer to a fiber and a curvilinear abscissa, when it is attached (see fig. 9). The abscissa is the distance, measured along the fiber, between a reference and the attachment position. It is necessary to use a reference which is fixed with respect to the physical lattice, because the model-points of a fiber are themselves updated as the fiber grows (see fig. 4). This description neatly separates the details of how the mechanics is implemented from the routines simulating the motors per se. This means that the interface with the rest of the program can be very simple, with only two procedures: step(f ) and attach(m). 10.1.1. Active Motion. The first procedure step(f ) simulates the possible actions of a bound motor. The argument f is the load of the motor calculated during the collective mechanics. The procedure should decide to detach the motor, or to update the abscissa a according to a microscopic model for the interval τ . For a well characterized motor like kinesin, a classical model is based on the measured characteristics of the motion: the abscissa is increased by δa = τ v max (1 − f /f stall ). In addition, a force-dependent unbinding rate p off = p 0 exp(|f |/f 0 ) is used to model the dissociation from the fiber. v max , p 0 , f 0 and f stall are characteristics of the motor that have been measured for kinesin [1]. 
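The kinesin-like motor model given above (speed reduced linearly by load, exponential force-dependent detachment) is simple enough to state in a few lines. The sketch below is only illustrative of a step(f) routine; the default parameter values are the kinesin-like numbers quoted in the figure 8 caption later in the text, and the sign convention for the load is an assumption.

```python
import numpy as np

rng = np.random.default_rng()

def motor_step(abscissa, load, tau,
               v_max=0.4,      # unloaded speed (um/s), kinesin-like value from fig. 8
               f_stall=5.0,    # stall force (pN)
               p0=0.5,         # unloaded unbinding rate (1/s)
               f0=2.5):        # force scale of unbinding (pN)
    """One call of step(f): returns (still_bound, new_abscissa).

    load > 0 is assumed to oppose the motion of the motor."""
    p_off = p0 * np.exp(abs(load) / f0)            # force-dependent unbinding rate
    if rng.random() < 1.0 - np.exp(-p_off * tau):  # detachment trial over interval tau
        return False, abscissa
    return True, abscissa + tau * v_max * (1.0 - load / f_stall)
```

A discrete-stepping variant would instead draw one of the four events (stay, detach, step forward, step backward) from force-dependent probabilities, as described next.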
With this model, the fibers are continuous tracks along which motors may be located anywhere. Alternatively, we may model the motion of a motor as a succession of discrete stochastic steps. In this case, the motor does one of four things: stay immobile, detach, take a step toward the minus-end or take a step toward the plus-end. This means that if the motor does not detach, the abscissa is either unchanged, or it is increased or decreased by the step size (8 nm). The procedure step(f ) calculates the probabilities of these events as a function of the force f for the interval τ, and selects one of them. This model is quite attractive, because these probabilities are actually available for kinesin [36]. Most models describing the movement of motors [19] can be summarized similarly with a function step(f ).

Attachment to Fibers. The second procedure necessary to model motors, attach(m), simply decides whether an unbound motor binds or not to a site m. Usually the model would specify ε, a maximum distance at which a motor may bind from its current position (see fig. 9). In addition, the molecule would bind at the closest site on the fiber (the orthogonal projection) with a certain molecular binding rate k on (s −1 ). To simulate attachments, one therefore needs to first find the fiber-segments which are closer than ε, typically from all the positions x at which molecular motors are located. For each point x, the list of candidates should then be shuffled, to ensure a random ordering of the segments. The molecular binding rate can finally be tested sequentially for each segment in the list, for example by comparing τ k on with a random number θ in [0,1]. The first successful trial is followed by attachment. If done naively, the first step of the operation may require calculating the distance of all points to all fiber-segments, and thus a great deal of computation for many motors. To avoid this bottleneck in cytosim, a divide-and-conquer algorithm was developed (see fig. 10). Its goal is to limit the number of segments that need to be tested to find those which are close to x. The geometrical distance between x and these segments is calculated using the vector cross-product to exactly determine which ones are closer than ε. Reducing the number of tested segments is sufficient to accelerate the simulation.

Conclusion

The method described here is efficient for simulating sparsely connected networks of filaments. It applies to many in vivo situations, because the connections between fibers are usually mediated by proteins that are small compared to the fibers, and consequently the fibers are only locally connected. We have modeled fibers as oriented lines, which is sufficient to calculate the extent of bending. It may be necessary in the future to include more details such as writhe, since cytoskeletal fibers also have a torsional rigidity. The method can be extended in several other ways. One could for instance easily model discrete binding sites on the fibers. This may be important if the fibers are highly covered and molecules compete or interact while bound to the lattice. It is also possible to extend the overdamped mechanics by adding hydrodynamic effects. It will be very exciting to integrate fiber mechanics with membrane dynamics, since membranes and cytoskeleton contribute synergistically to cellular architecture, but this might take some time. Cellular chemistry, reaction-diffusion of components in the cell, and gene expression networks can be added more simply.
This can be done by interfacing our software with other tools (e.g. the Virtual Cell project), which already cover some of these aspects of physiology. We did not discuss here implementation issues, but the scale of the task should remind us of their importance. Software modularity is essential to divide the development effort into separate projects of manageable size. Submodels or algorithms should be developed and tested separately, in such a way that they can be added to or removed from the integrative software easily. Dividing the work among different groups is the best way to produce the high-quality cellular simulations that biology needs.

Figure 2 caption (fragment) [27]: (B) Microtubules in interphase fission yeast and the nucleus, represented by a sphere (blue/green). This can be used to study the role of mechanics in regulating the dynamics and organization of microtubules. (C) Self-assembly of interphase microtubule arrays in fission yeast. The simulation contains no steric interaction between the fibers, and they overlap freely. In the display, however, the fibers are shifted in order to visualize the bridging complexes (bottom and right). Using this simulation, we could identify a minimal 'recipe' to make stable bundles from dynamic microtubules. This recipe describes how cross-linking, nucleating and motor activities can be associated to obtain the result observed in vivo. (D) Self-segregation of plasmids in prokaryotes. Actin-like filaments are simulated, together with two solids, representing the plasmids [28]. The efficiency of the segregation is recapitulated in the simulation, and can therefore be analyzed.

Figure 5 caption (bending-elasticity matrix): The sum of all rows and columns is zero, since the matrix should only generate an internal torque. The forces associated with the first triplet (points x 1 , x 2 and x 3 ) are depicted. The resulting matrix for 5 points is also shown, and the generalization is straightforward. For any fiber, the result is a symmetric banded matrix multiplied by a scalar α that depends on the bending elasticity modulus and on the distance between the points.

Figure 6 caption (assembling A and G): For each interaction, the appropriate formula (sec. 8) is first expanded algebraically. The factors associated with the coordinates of the points are added to A, and the coefficients which are independent of the coordinates are added to G. At the end of the procedure, one obtains a (sparse) symmetric matrix A and a vector G that provide the forces on the points F = A x + G. Here we illustrate how a connection of stiffness k 1 (sec. 8.2) contributes factors k 1 and −k 1 at the rows and columns of A corresponding to the points connected. For a connection to a fixed position g (sec. 8.1), a stiffness coefficient −k 2 is added in A, while k 2 g is added in G. In this example, the connections are attached exactly to points of the system, but this is not always the case. Section 8 explains the general procedure. In addition, the matrix represented here corresponds to a 1D system. It needs to be duplicated for a 2D simulation, and triplicated in 3D (sec. 8.5).

Figure 7 (flowchart items): Pool coordinates of object-points. Calculate projection P for each object. Project solution to 'reshape' the objects. For mobile attachments such as molecular motors: calculate tensions in the interaction link, and use this information to move attachment positions according to the characteristics of the motors. Calculate forces on fiber tips. Elongate fibers according to their force-growth curve. Recalculate the model-points of fibers by interpolation.
Calculate right-hand side of equation (2), Solve system of linear equations using iterative method, with a precision exceeding βΨ, with Ψ=0.1. Set Brownian components from random numbers, record Brownian magnitude in β. Loop over all interactions to set matrix A and vector G. Individual Procedures Attachment trials for unbound motors. Detachment trials for bound motors. (detachment rates are usually force-dependent) Figure 7. Synopsis of a simulation time-step. Sub-steps necessary to simulate a system of molecular motors and dynamic fibers. The collective mechanics corresponds to the algorithm described in the article. As a byproduct of calculating the mechanics, one gets the tensions in the fibers and the forces connecting the fibers. With this information, simulation sub-steps can be performed for the objects independently. Events such as the binding and the unbinding of motors and the nucleation of new filaments will most likely be modeled stochastically. Depending on the level of details required, less-discrete events may be simulated in a deterministic manner. For example, the active motion of molecular motors and the assembly dynamics of cytoskeletal fibers can be simulated as non-random processes characterized by a force-velocity curve. speed (turn/s) Figure 8. Numerical stability of the integration scheme. Top: A gliding assay where a filament is attached at its end (time-intervals of 5s). The motors pushing the fiber lead to the formation of a rotating spiral, as observed experimentally [25]. The rotation speed and maximum radius of the spiral can be calculated from the parameters of the system: 16000 motors cover an area of 2 × 2µm, and have the characteristics of kinesin: stall force f max = 5 pN , unloaded speed 0.4µm/s, binding rate 10 s −1 , unbinding rate p off = 0.5 s −1 exp(force/2.5 pN ), maximum binding distance 10 nm and stiffness 200 pN/µm. The microtubule of length 8 µm has a rigidity of 20 pN µm 2 . It is constrained at the minus end by a link of stiffness 4000 pN/µm. The effective viscosity is 0.02 pN s µm −2 . Bottom: The configuration is simulated for different values of the time-step τ , with accurate results for τ < 20 ms. The algorithm is numerically stable, and even produces a spiral with τ ∼ 0.5 s. However, the radius is then under-estimated, and the rotation speed overestimated. Another critical parameter, the distance ρ between the points on the fiber was also varied. The results shown for ρ = 0.1, 0.2, 0.4 and 0.5 µm (different lines) are similar, because all these values are appropriate. The calculations were inaccurate however with ρ = 0.8 µm (data not shown). This is expected considering that the radius of the spiral is ∼ 1.4µm. a(t) fiber origin δa Binding Motion load f a(t+τ) ε ε ε Figure 9. Molecular Motors. Top: An unbound motor (diamond) is represented by a position. Attachment occurs on the closest site on the fiber-segment, provided this site is within a distance (dashed lines). The capture regions of the segments are truncated such that they cover exactly the region located at a distance from a straight fiber. When the fiber is not straight, the gaps and overlaps exactly compensate each other. Bottom: A bound motor is represented by a pointer to a fiber, and by a curvilinear abscissa a(t) measured from a fixed origin on the fiber. This defines the position of the motor along the fiber independently of the mathematical representation of the fiber. 
The motor sub-model needs to decide whether the motor should detach during the interval of time τ, or it needs to calculate the displacement δa during the same interval. For this, it can use the load f calculated during the collective mechanics, and other properties associated with the fiber, such as the proximity of the ends, or information on the crowdedness of the binding sites on the fiber.

Figure 10 caption (divide-and-conquer search): To simulate the attachments to fibers, we must be able to find all the fiber-segments which are within a distance ε from an arbitrary position x. We can proceed according to the following two-step method.

Divide (left): A grid is set in space, each node of the grid being associated with a list of segments. The segments are recorded on the grid, at the nodes located at a distance h or less (h will be defined later). This operation is performed in 2D using standard rasterizer codes derived from computer graphics, which are optimized to scan all points with integer coordinates located inside an arbitrary polygon. We rasterize the rectangles built around the segments at a distance h. For example, on this diagram, the blue segment is recorded at the blue points, and the red segment at the red points. In 3D, the rectangular volume can be rasterized following the same principles as in 2D.

Conquer (right): After the segments have been distributed over the grid, one can quickly find which ones are near x: one needs to check only the segments recorded at the grid point g closest to x. One will find all segments located at distance h − d/2 or less from x, since |gx| < d/2, where d is the diagonal of the grid cell. Hence to find all the segments closer than ε, one sets h = ε + d/2 during the rasterizing operation.

Note: The grid does not need to be square (the unit cell can be rectangular) and it can be adjusted for optimal performance. If the grid is too fine, it will use a lot of memory, and rasterizing will be slow. If the grid is coarse (d large), the number of candidates returned for a point x will be larger. Experimentation may be necessary to optimize the grid, but the procedure provides exact results for any cell size.
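The divide-and-conquer search above maps naturally onto a uniform grid of buckets. The sketch below registers each segment on every grid node of its h-enlarged bounding box (a simpler stand-in for the graphics rasterizer mentioned in the text, here for 2D square cells with h = ε + d/2) and then tests the exact point-to-segment distance only for the candidates stored at the node containing the query point. All names are illustrative, not the cytosim code.

```python
import numpy as np
from collections import defaultdict

class SegmentGrid:
    """Find fiber-segments within epsilon of a point (2D sketch of fig. 10)."""

    def __init__(self, cell, epsilon):
        self.cell = float(cell)
        self.eps = float(epsilon)
        self.h = epsilon + 0.5 * cell * np.sqrt(2.0)   # h = eps + d/2, d = cell diagonal
        self.buckets = defaultdict(list)

    def _node(self, point):
        return tuple(np.floor(np.asarray(point) / self.cell).astype(int))

    def add_segment(self, a, b):
        """Divide: register the segment on all nodes of its h-enlarged bounding box."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        lo = self._node(np.minimum(a, b) - self.h)
        hi = self._node(np.maximum(a, b) + self.h)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                self.buckets[(i, j)].append((a, b))

    def near(self, x):
        """Conquer: test only the candidates stored at the node containing x."""
        x = np.asarray(x, float)
        close = []
        for a, b in self.buckets[self._node(x)]:
            ab = b - a
            t = np.clip(np.dot(x - a, ab) / np.dot(ab, ab), 0.0, 1.0)
            if np.linalg.norm(a + t * ab - x) <= self.eps:  # exact distance test
                close.append((a, b))
        return close
```

Binding trials would then be run over a shuffled copy of the returned candidates, accepting the first segment for which a uniform random number falls below τ·k_on.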
12,460
2007-11-30T00:00:00.000
[ "Physics" ]
Thermal and Hydrodynamic Characteristics of Graphite-H2O and CuO-H2O Nanofluids in Microchannel Heat Sinks In this study, nanofluids were used as coolant for high-heat dissipation electronic devices with nanoparticle volume concentrations from 1% to 5%. The results were compared to other conventional cooling systems. Graphite-H2O and CuO-H2O nanofluids were analyzed at inlet velocities of 0.1 m/s and 1.5 m/s in a rectangular copper shaped microchannel heat sink MCHS with a bottom size of 20mm×20mm. The results indicate that suspended nanoparticles significantly increase thermal conductivity, heat flux, pumping power, and pressure drop. For graphite-water and CuO-water nanofluids at 0.1m/s with 5.0% volume, the greatest percentage increase in thermal conductivity was 15.52% and 14.34%, respectively. Graphite-water at 0.1 m/s and 1.5 m/s with 5% volume fraction had a maximum heat flux of 18% and 3.46%, respectively. CuO-water at 0.1 m/s and 1.5 m/s inlet velocity with the same volume concentrations had a heat flux of 17.83% and 3.33%, respectively. For graphite-H2O and CuO-H2O at 0.1 m/s with 5% volume fraction, pumping power and pressure drop were 0.000695 W and 92.63 Pa, respectively. For inlet velocity of 1.5 m/s with same volume concentration were 0.156306 W and 1389.39 Pa, respectively. Introduction Nanofluids, so named by Argonne National Laboratory, are nanoparticle suspensions in a base fluid. Water, engine oil, and ethylene glycol are base fluids with low thermal conductivity. Nanometer-sized particles have higher thermal conductivity than base fluids. Increasing the nanoparticles in a base fluid, even if the volume concentration is low, significantly increases thermal performance [1]. Choi was the first person to use the term ''nanofluids". Nanofluid technology a mixture of liquidsolids in which metallic or nonmetallic nanoparticles are suspended to improve the heat transfer of conventional fluids. Heat fluxes from Modern electronic devices have increased significantly. For electronic component cooling, it is very important to manage heat fluxes. To dissipate heat fluxes conventional cooling systems (air cooling techniques) are inadequate. For many heat transfer applications, conventional techniques have been replaced by other cooling techniques. The dispersing solid particles into a base fluid (nanofluid) for heat transfer applications enhances heat transfer coefficients and thermal conductivity. It is essential to create efficient and high-performing heat transfer fluids for heat industrial processes. Electronic components deteriorate, decreasing component performance and increasing component failures due to overheating. To create high-performing electronic systems the heat dissipation from their components must be efficiently controlled. The average electronic chip heat flu x exceeds 150 (W/cm 2 ) [2]. Dissipating heat from electrical devices is an important factor in improving informatio n technology (IT). ADHAM [3] Carried out the investigation of refrigerant base nanofluid (Al2O3-NH3) as a coolant for electronic chips. He concluded that using (Al2O3-NH3) coolant will outperform other coolants like (SiC-H2O, TiO2-H2O, H2O and Al2O3-H2O) in terms of pumping power demand by up to 85%. Adham et al. [4] carried out an analytical study on the thermal resistance and pressure drop of a microchannel heat sink with rectangular shape utilizing ammonia as a coolant. 
They concluded a significant thermal resistant reduction with 0.213 o K/W for ammonia gas when compared to that of 0.266 o K/W for air. Sohel et al. [5] showed heat transfer improvements fro m the use of minichannel heat sinks electronic cooling with a Al2O3-H2O nanofluid coolant for volume fractions fro m 0.1 -0.25 %. The heat transfer coefficient was enhanced by 18%, heat sink base temperature was reduced by 2.7 o C, and thermal resistance was reduced by 15.72%. Li and Xuan [6] investigated the convective heat transfer of CuO-H2O base nanofluids in a tube. Their results showed that the use of nanofluids improved heat transfer rate compared to pure water. Nguyen et al. [7] reported the thermal behavior of Al2O3-H2O nanofluid as a microprocessor coolant. Their results indicated the enhancement of heat transfer coefficients by 40% compared to the base fluid. Lee et al. [8] presented that the thermal conductivity of CuOethylene glycol nanofluid with 4% particle volume concentration could be enhanced by up to 20%. Chen [9] analyzed forced convection heat transfer through microchannel heat sinks for electronic cooling systems. Gillot et al. [10] evaluated the use of single-phase and twophase micro heat sinks to cool power components. Chein and Huang [11] studied silicon microchannel heat sink performance using a CuO-H2O nanofluid as a coolant. They indicated that heat sink performance has significantly enhanced by the nanofluid. Ding et al. [12] investigated the heat transfer performance of CNT nanofluids flowing in a horizontal tube with an inner diameter of 4.5 mm. They were showed that increases in the heat transfer coefficient were much greater with increases in thermal conductivity. The study aims analytically examines nanofluid thermal conductivity, heat transfer coefficient, flow rate, pumping power, and pressure drop for a rectangular copper minichannel heat sink that used CuO-H2O and Graphite -H2O as coolants. In addition, it investigates the effect of using appropriate equations to calculate the thermophysical properties of the nanofluids on the overall performance of the considered system. Nanofluids In this study, CuO nanoparticles and graphite nanoparticles suspended in water were mathematically analyzed. The thermophysical properties of CuO, Graphite and water at 30°C were used [13]. Table 1 Lists the Thermophysical properties of the water and nanoparticles. The thermophysical properties of CuO-H2O and Graphite-H2O nanofluids were calculated using particle volume fractions of 1%, 2%, 3%, 4%, and 5%. The density [14], viscosity [15], specific heat, and thermal conductivity [16] were determined using Eq. (1) to (4): Nanoparticles were assumed to be spherical particles with n=3 Heat Flux This paper examined a copper minichannel heat sink. The dimensions of the copper minichannel heat sink were taken from Xie et al. [17] and are shown in Fig. 1. [19] Assumptions: The flow was laminar, incompressible, and steady state; the thermophysical properties of CuO-H2O and graphite-H2O were constant; and the effect of body force was neglected. The nanofluid Reynolds number was defined as [13]: Hydraulic diameter was the ratio between channel crosssectional areas and the perimeter [13], which was computed using Eq. (6): Where 0.1 m/s and 1.5 m/s are the mean velocities of CuO-H2O and graphite-H2O in the minichannel heat sink, respectively [17]. Nusselt number was a dimensionless parameter defined as the ratio of convective to conductive heat transfer [13]. 
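Equations (1) to (4) for the mixture properties did not survive extraction here, so the sketch below uses standard forms that are commonly paired in nanofluid studies: the volume-weighted mixture rules for density and heat capacity, the Brinkman viscosity model, and the Hamilton-Crosser conductivity model with shape factor n = 3 for spherical particles. These are assumptions and may differ in detail from the paper's exact correlations.

```python
def nanofluid_properties(phi, rho_f, cp_f, k_f, mu_f, rho_p, cp_p, k_p, n=3.0):
    """Thermophysical properties of a nanofluid at particle volume fraction phi.

    Subscript f: base fluid, subscript p: nanoparticle. Returns (rho, cp, mu, k)
    in SI units. n = 3 corresponds to spherical particles (Hamilton-Crosser)."""
    rho = (1.0 - phi) * rho_f + phi * rho_p                       # mixture density
    cp = ((1.0 - phi) * rho_f * cp_f + phi * rho_p * cp_p) / rho  # mixture heat capacity
    mu = mu_f / (1.0 - phi) ** 2.5                                # Brinkman viscosity (assumed)
    k = k_f * (k_p + (n - 1) * k_f - (n - 1) * phi * (k_f - k_p)) \
            / (k_p + (n - 1) * k_f + phi * (k_f - k_p))           # Hamilton-Crosser
    return rho, cp, mu, k
```

Evaluating this at phi = 0.01 to 0.05 with water and particle properties at 30 °C reproduces the qualitative trend discussed below: conductivity, density and viscosity all rise with volume fraction while the specific heat falls.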
The Nusselt number for nanofluid laminar flow through a minichannel heat sink was calculated using Eq. (8) Where αs is the channel aspect ratio The convective heat transfer coefficient h was evaluated from the Nusselt number using Eq. (9): The efficiency of copper MCHS was calculated using Eq. (10) and (11). η is fin efficiency, which was expressed as: Surface area was written as: where n is the number of cooling channels. There were 25 channels for the fixed width of the heat sink [17]. (13) where ̇ is the total coolant mass flow rate through channel inlets and Abm is the bottom area of a rectangular minichannel heat sink, which was calculated using Eq. (14) [13]: Overall thermal resistance Rt and temperature differences for heat generation rate Q were computed using Eq. (15): where ̇ is heat flux, Tmax is the maximu m bottom temperature, Tin is inlet fluid temperature, and Q is total heat transfer. where α is the channel aspect ratio Required pumping power was calculated using Eq. (19): Result and Discussion The results showed that the addition of graphite nanoparticles to the base fluid (water) had a significant effect on thermal conductivity. Fig. 2 shows variations in graphite-H2O thermal conductivity with different particle volume fractions. The thermal conductivity of graphite-H2O increased with increased particle volume fractions. The maximu m thermal conductivity for graphite-water was about 0.7128 W/m.K at 5% particle volume fraction and the greatest enhancement in thermal conductivity was 15.52%. In addition, the thermal conductivity of CuO-H2 O nanofluid was improved through the addition of nanoparticles. Figure 3 shows that the greatest improvement in thermal conductivity for CuO-H2O with 5% volume concentration was 14.34%. Thermal conductivity was computed based on Hamilton and Crosser model (Eq. (4)). Liu et al. [22] measured the thermal conductivity of CuO-water with a 5% volume fraction. Their results showed that an improvemen t of thermal conductivity of around 22.4%. In this study, the effect of Brownian motion was neglected but the effect of particle volume fraction on thermal conductivity and particle shape was taken into account. The measurement results show that nanofluid thermal resistance remarkably decreased with increased Reynolds numbers, while convective heat transfer coefficient increased. For inlet velocities of 0.1m/s and 1.5m/s for graphite-water and CuO-water nanofluids (Eq. (5) and (13)), thermal conductivity, heat transfer coefficient, and thermal resistance influenced each other. For example, the thermal conductivity of graphite-water nanofluid with 1% particle volume fraction was 0.6354 W/m. K with a 6533 W/m2.K heat transfer coefficient and an 0.0805 W/K thermal resistance. By increasing the particle volume fraction to 5% the heat transfer coefficient and thermal resistance changed to 7329W/m 2 . K and 0.0781 K/W, respectively. The same results occurred for CuO-water at inlet velocities 0.1 m/s and 1.5 m/s as shown in Fig.4 and Fig.5. As expected, mass flow rate was directly proportional to the heat transfer coefficient for graphite-water and CuOwater, as mass flow rate increased with increased heat transfer coefficients (Eq. (14) and (9)). In addition, nanofluid density increased when increased particle volume fractions were added to the base fluid, which increased the convection heat transfer coefficient and inlet velocity for graphite-water and CuO-water. Nanofluid density was computed using Eq. (1). 
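The chain of calculations in this section (Reynolds number, hydraulic diameter, Nusselt number to heat transfer coefficient, then pressure drop and pumping power) can be sketched as below. The fully developed laminar friction-factor form f = 64/Re is a common circular-duct assumption used here for illustration; it is not necessarily the paper's Eq. (17), and the rectangular-channel Nusselt correlation of Eq. (8) is not reproduced.

```python
def reynolds(rho, v, d_h, mu):
    """Re = rho * v * Dh / mu."""
    return rho * v * d_h / mu

def hydraulic_diameter(width, height):
    """Rectangular channel: Dh = 4 * area / wetted perimeter."""
    return 4.0 * width * height / (2.0 * (width + height))

def heat_transfer_coefficient(nu, k, d_h):
    """h from the Nusselt number definition Nu = h * Dh / k."""
    return nu * k / d_h

def pressure_drop_laminar(rho, v, mu, d_h, length):
    """Delta p = f * (L / Dh) * rho * v^2 / 2 with f = 64 / Re (assumed)."""
    f = 64.0 / reynolds(rho, v, d_h, mu)
    return f * (length / d_h) * 0.5 * rho * v ** 2

def pumping_power(delta_p, v, channel_area, n_channels=25):
    """Pumping power = volumetric flow rate * pressure drop."""
    return delta_p * v * channel_area * n_channels
```

With the nanofluid properties computed earlier, these relations reproduce the coupling described in the results: higher particle loading raises density and viscosity, and therefore both the pressure drop and the pumping power, alongside the gain in heat transfer coefficient.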
For instance, at 0.1m/ s the density of CuO-water nanofluid has 1050.84kg/m 3 with 1% particle volume concentration and a mass flow rate equal to 0.0079 kg/s with a 6519 W/m 2 . K convective heat transfer coefficient. At 5% volume fraction density was 1271.01 kg/m 3 with a mass flow rate of 0.0095kg/s and a volume fraction density of 7254 W/m 2 . K as shown in Figs. and 7. Increased volume concentrations enhanced the heat flu x of both nanofluids, which was calculated using Eq. (16). From this study it can be observed that the greatest improvement in heat flux with 1% particle volume concentration from the use of 0.1m/s graphite-water and CuO-water nanofluids were 17.83% and 18%, respectively, and 1.5m/s graphite-water and CuO-water was 3.33% and 3.46%, respectively, for both inlet velocities. For the CuOwater nanofluid the maximu m enhancement in heat flu x was 13.15% at 4% volume fraction while improvements from TiO2-water and Al2O3-water were 6.20% and 6.80%, respectively. The thermal conductivity of nanoparticles is higher than the base fluid (water). Thus, the addition of nanoparticles to the base fluid led increases its convective heat transfer coefficient, thermal conductivity, and heat flux while decreasing thermal resistance as shown in Fig.8 and Fig. 9. An important parameter for minichannel heat sinks is pressure drop. Pressure drop linearly increased with increased mass flow rates for both graphite-water and CuOwater nanofluids, which was computed using Eq. (17). Pressure drop is a function of inlet velocity and nanofluid density. For instance, at 0.1m/s inlet velocity with 1% concentration, the pressure drop for graphite-water nanofluid was 83.55 Pa with 1010.74 kg/m 3 density. On the other hand, at 5% volume fraction concentration the pressure drop was 92.63 Pa and the density was equal to 1070.51 kg/m 3 as shown in Fig. 10 and Fig. 11 Xie et al [17] studied a minichannel heat sink similar to the one used in this study. Their results showed that at 0.1m/s pressure drop was 70Pa with 5.3*10-4 W pumping power, and at 1.5m/s inlet velocity the pressure drop and pumping power were 1817 Pa and 0.205 W, respectively. At 0.1m/s with 1% vol. and 5% vol. the pumping power of the graphite-water and CuO-water nanofluids were 0.000627 W and 0.000695 W, respectively. The pumping power for both nanofluids at 1.5m/s with 1% and 5% of particles volume fractions were 0.140993 W and 0.156306 W, respectively. Pressure drop is related to pumping power, as when pressure drop increased pumping power increased for graphite-water and CuO-water nanofluids with 0.1 m/s and 1.5 m/s inlet velocities as shown in Figs. 12, 13, 14, and 15. For graphite-water and CuO-water nanofluids increases in the particle volume fraction present in the base fluid (water) increased thermal conductivity and convective heat transfer coefficient, which increased pumping power as pumping power linearly increases with increased heat transfer coefficients. Pumping power was computed using Eq. (19). Conclusions In summary, this paper investigated nanofluid thermal conductivity, heat flux, and pumping power. Two particular nanofluids, namely Graphite-H2O and CuO-H2O, were studied as coolants. The results illustrated that the dispersion of nanoparticles into the base liquid led to an increase in thermal conductivity. For graphite-H2O and CuO-H2O at 5% particle volume concentration, the greatest improvement in thermal conductivity was 15.52% and 14.34%, respectively. 
Significant improvements were observed for nanofluid thermal conductivity in comparison to pure water. The maximum enhancement of heat flux from the use of graphite-H2O with a 1% volume fraction at 0.1 m/s was 18% greater than the base fluid. At 1.5 m/s inlet velocity with the same volume concentration, the maximum rise in heat flux was 3.46%. For CuO-H2O nanofluids at 0.1 m/s and 1.5 m/s inlet velocity with a 1% volume fraction, heat flux was enhanced by 17.83% and 3.33%, respectively. It was found that the maximum pumping power and pressure drop from the use of graphite-H2O and CuO-H2O at 0.1 m/s inlet velocity with 5% volume fraction were 0.000695 W and 92.63 Pa, respectively. On the other hand, at 1.5 m/s the maximum increase in pumping power and pressure drop for both nanofluids were 0.156306 W and 1389.39 Pa, respectively, for a 5% nanofluid volume fraction.

Nomenclature: Abm, bottom area of minichannel heat sink (m²)
3,196.2
2020-03-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Fast protein structure searching using structure graph embeddings Comparing and searching protein structures independent of primary sequence has proved useful for remote homology detection, function annotation and protein classification. Fast and accurate methods to search with structures will be essential to make use of the vast databases that have recently become available, in the same way that fast protein sequence searching underpins much of bioinformatics. We train a simple graph neural network using supervised contrastive learning to learn a low-dimensional embedding of protein structure. The method, called Progres, is available at https://github.com/greener-group/progres. It has accuracy comparable to the best current methods and can search the AlphaFold database TED domains in a tenth of a second per query on CPU. Introduction A variety of methods have been developed to compare, align and search with protein structures [1] including comparing residue-residue distances [2,3,4], considering local geometry [5], coordinate alignment [6] and 3D Zernike descriptors [7,8].Since protein structure is more conserved than sequence [9] these methods have proved useful in remote homology detection [10], protein classification [11], inferring function from structure [12], clustering large databases [13,14] and assessing the accuracy of structure predictions.The highest accuracy methods tend to be careful comparisons based on coordinates like Dali [3], but searching large structural databases such as the AlphaFold Protein Structure Database [15,16] or the ESM Metagenomic Atlas [17] with these methods is slow.Recently Foldseek [18] has addressed this problem by converting primary sequence into a sequence of learned local tertiary motifs.It then uses the rich history of fast sequence searching in bioinformatics to dramatically reduce the pairwise comparison time of the query with each member of the database.It follows that to further reduce search time, the pairwise comparison step should be made even faster. Inspired by the impressive performance of simple graph neural networks (GNNs) using coordinate information for a variety of molecular tasks [19], we decided to train a model to embed protein structures into a low-dimensional representation.Two embeddings can be compared very quickly by cosine similarity and a query can be compared to each member of a pre-embedded database in a vectorised manner on CPU or GPU.It makes sense to use expertly-curated classifications of protein structures when training such an embedding [20,11,21]; we use supervised contrastive learning [22] to allow the embedding to be learned in a manner that reflects such an understanding of protein structure space and returns search results consistent with it. 
A number of recent methods have used protein structure graph embeddings [23,24] and contrastive learning [25,26].Embedding protein folds has also been done using residue-level features [27,14], and GNNs acting on protein structure have been used for function prediction [28].Other studies have used unsupervised contrastive learning on protein structures and show that the representations are useful for downstream prediction tasks including protein structural similarity [29,30,31].Contrastive learning using protein classifications has also improved language models for protein sequences, showing clustering that better preserves protein structure space [32].Protein structure has been incorporated into language models more broadly, often with the intention of searching for remote homology [33,34,35,36,37,38]. Results We trained a simple GNN, called Progres (PROtein GRaph Embedding Search), to embed a protein structure independent of its sequence (see Figure 1A).Since we use distance and torsion angle features based on coordinates the embedding is SE(3)-invariant, i.e. it doesn't change with translation or rotation of the input structure.As shown in Figure 1B, supervised contrastive learning [22] on SCOPe domains [20,39] is used to train the model, moving domains closer or further apart in the embedding space depending on whether they are in the same SCOPe family or not.Sinusoidal position encoding [40] is also used to allow the model to effectively use information on the sequence separation of residues.The main intended use of such an embedding is fast searching for similar structures by comparing the embedding of a query structure to the pre-computed embeddings of a database of structures.Our model does not give structural alignments, but if these are required they can be computed with tools like Dali after fast initial filtering with Progres.The impact of changing model hyperparameters is shown in Table S1.The distribution of values across the embedding dimensions is shown in Figure S1. In order to assess the accuracy of the model for structure searching, we follow a similar procedure to Foldseek [18].Since our model is trained on SCOPe domains it is important not to use domains for training that appear in the test set.We select a random set of 400 domains from the Astral 2.08 40% sequence identity set for testing.No domains in the training set have a sequence identity of 30% or more to these 400 domains.This represents the realistic use case that the query structure has not been seen during training -for example it is a predicted or new experimental structure -but other domains in the family may have been seen during training.The easier case of searching with the exact domains used for training gives superior results that are not reported here, and the harder case of searching with completely unseen folds is discussed later. 
As shown in Table 1 our model has sensitivity comparable to Dali [3] and Foldseek-TM [18] for recovering domains in SCOPe from the same fold, superfamily and family.Its strong performance at the fold level indicates an ability to find remote homologs.Progres is more sensitive than the EAT [32] and ESM-2 [17] protein language model embeddings, and also the baseline sequence searching method of MMseqs2 [41].This indicates the benefits of comparing structures rather than just sequences for detecting homology.Figure 2A-C shows the performance across different SCOPe classes, protein sizes and contact orders.Progres does particularly well on all β domains, smaller domains and domains with higher contact order.It has lower performance on membrane proteins and larger domains.As shown in Figure 2D, performance drops of significantly when the number of embedding dimensions is below 32. For searching a single structure against SCOPe on CPU the model is faster than Foldseek with most run time in Python module loading.For example, going from 1 to 100 query structures increases run time from 1.3 s to 2.4 s.When searching with multiple structures, most run time is in generating the query structure embeddings.Consequently, the speed benefits of the method arise when searching a structure or structures against the pre-computed embeddings of a huge database such as the AlphaFold database [8,13,14].The recent TED study split the whole AlphaFold database into domains using a consensus-based approach [43].We embed the TED domains clustered at 50% sequence identity and use FAISS [44] to considerably speed up the search time against the resultant database of 53 million structures.This allows a search time of a tenth of a second per query on CPU, after an initial data loading time of around a minute.Since we search exhaustively with FAISS, the results are not changed, though the approximate score calculation means the similarity score does vary slightly from the exact value.For the SCOPe test set used above, the mean difference between FAISS and exact similarity scores for the top hit is 0.006.As shown in Figure S2, the best TM-align score to the query among the top 5 hits has a mean of 0.80 across the SCOPe test set, with 94% being over 0.5.This indicates that searching is accurate even when using a large database. Figure 3 shows 2D t-SNE embeddings [45] of the 128 dimensions of our model embedding.This shows the lower-dimensional protein fold space [46,47,48] created by our embedding.SCOPe classes tend to cluster together, with α+β folds appearing between the all α and all β folds which show little overlap.There is a clear protein size gradient across the t-SNE embedding.A t-SNE embedding for the AlphaFold database TED domains compared to ECOD [21] and the AlphaFold 21 organisms set [16,49] shows the volume of new structural information available in the AlphaFold database.The Progres score between two embeddings is the cosine similarity score normalised to run between 0 and 1, with 1 indicating identical embeddings.As shown in Figure 3E a Progres score of 0.8 indicates that two proteins share the same fold, analogous to a TM-align score of 0.5. 
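The search step itself reduces to a normalised inner product of the query embedding against a pre-embedded database, optionally through FAISS as in the Methods. The sketch below shows both a plain NumPy version and an exhaustive IndexFlatIP version; mapping cosine similarity to a 0-1 score as (s + 1)/2 is an assumption about how the Progres score normalisation is done, and the embeddings are assumed to be unit-normalised as described in the Methods.

```python
import numpy as np
import faiss  # exhaustive inner-product search, as with IndexFlatIP(128) in the Methods

def progres_like_score(query, database):
    """Cosine similarity of a query embedding against a (N, 128) database,
    mapped to [0, 1] with the assumed (s + 1) / 2 normalisation."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return (db @ q + 1.0) / 2.0

def build_index(database):
    """Embeddings are unit vectors, so inner product equals cosine similarity."""
    index = faiss.IndexFlatIP(database.shape[1])
    index.add(np.ascontiguousarray(database, dtype=np.float32))
    return index

def search(index, query, k=5):
    q = np.ascontiguousarray(query[None, :], dtype=np.float32)
    scores, idx = index.search(q, k)
    return idx[0], (scores[0] + 1.0) / 2.0
```

Because IndexFlatIP searches exhaustively, the ranking is identical to the brute-force NumPy version; only the numerical precision of the scores differs slightly, as noted in the text.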
Discussion The model presented here is trained and validated on protein domains; due to the domain-specific nature of the training it is not expected to work without modification on protein chains containing multiple domains, long disordered regions or complexes.Fortunately, there are a number of tools such as Merizo [50], SWORD2 [51] and Chainsaw [52] that can split query structures into domains. Searching with domains separately can overcome issues that arise from searching with multiple domains at the same time, such as missing related proteins due to differing orientations of the domains.As shown in Figure S3 the Progres embeddings are fairly robust to truncating residues from the termini, with truncations of 20 residues giving an embedding with a similarity of 0.8 to the full length domain embedding for 89% of domains with 200-299 residues.This means that minor inaccuracies in predicting domain boundaries are unlikely to cause a problem. One issue with supervised learning on domains is whether performance drops when searching with domains that the model has not seen anything similar to during training.We trained an identical model on a different dataset where 200 domains were used for testing and domains were removed from the training set if they were from the same SCOPe superfamily as any of the testing domains.The fold, superfamily and family sensitivities analogous to Table 1 are 0.190, 0.383 and 0.546 respectively.This indicates similar performance at finding distantly related folds, the main use of structure searching over sequence searching, though there is a drop in performance at finding closely-related domains. Aside from searching for similar structures, an accurate protein structure embedding has a number of uses.Fast protein comparison is useful for clustering large sets of structures, for example to identify novel folds in the AlphaFold database [13,14,43].The embedding of a structure is just a set of numbers, and therefore can be targeted by differentiable approaches for applications like protein design. A decoder could be trained to generate structures from the embedding space [53,54], and a diffusion model to move through the embedding space.Properties of proteins such as evolution [55], topological classification [56], the completeness of protein fold space [57], the continuity of fold space [58], function [59] and dynamics could also be explored in the context of the low-dimensional fold space.Structure embeddings could also be used to identify regions of unknown density in cryo-electron tomography studies.We believe that the extremely fast pairwise comparison allowed by structural embeddings is an effective way to take advantage of the opportunities provided by the million structure era. Methods Training Structures in the Astral 2.08 95% sequence identity set including discontinuous domains were used for training [60].We chose 400 domains randomly from the Astral 2.08 40% sequence identity set to use as a test set (see below) and another 200 domains to use as a validation set to monitor training.We removed domains with 30% or greater sequence identity to these 600 domains using MMseqs2 [41], and also removed domains with fewer than 20 or more than 500 residues.This left 30,549 domains in 4,862 families for training. 
mmCIF files were downloaded and processed with Biopython [61]. Some processing was also carried out with BioStructures.jl [62]. Cα atoms were extracted for the residues corresponding to the domain. Each Cα atom is treated as a node with the following features: number of Cα atoms within 10 Å divided by the largest such number in the protein, whether the Cα atom is at the N-terminus, whether the Cα atom is at the C-terminus, and a 64-dimensional sinusoidal positional encoding for the residue number in the domain [40].

PyTorch was used for training [63]. The neural network architecture was similar to the E(n)-equivariant GNN in Satorras et al. 2021 [19]. We used a PyTorch implementation (https://github.com/lucidrains/egnn-pytorch) and a configuration similar to the molecular data prediction task, i.e. not updating the particle position. In this case the model is analogous to a standard GNN with relative squared norms inputted to the edge operation [19]. Edges are sparse and are between Cα atoms within 10 Å of each other. 6 such layers with residual connections are preceded by a one-layer multilayer perceptron (MLP) acting on node features and followed by a two-layer MLP acting on node features. Node features are then sum-pooled and a two-layer MLP generates the output embedding, which is normalised. Each hidden layer has 128 dimensions and uses the Swish/SiLU activation function [64], apart from the edge MLP in the GNN which has a hidden layer with 256 dimensions and 64-dimensional output. The final embedding has 128 dimensions. Supervised contrastive learning [22] is used for training. Each epoch cycles over the 4,862 training families. For each family, 5 other families are chosen randomly. For each of these 6 families, 6 domains from the family present in the training set are chosen randomly. If there are fewer than 6 domains in the family, duplicates are added to give 6. This set of 36 domains with 6 unique labels is embedded with the model and the embeddings are used to calculate the supervised contrastive loss with a temperature of 0.1 [22]. During training only, Gaussian noise with variance 1.0 Å is added to the x, y and z coordinates of each Cα atom. Training was carried out with the Adam optimiser [65] with learning rate 5 × 10⁻⁵ and weight decay 1 × 10⁻¹⁶. Each set of 36 domains was treated as one batch. Training was stopped after 500 epochs and the epoch with the best family sensitivity on the validation set was used as the final model. Training took around a week on one RTX A6000 GPU.

Testing

For testing a similar approach to Foldseek was adopted [18]. The 15,177 Astral 2.08 40% sequence identity set domains were embedded with the model. The embeddings are stored as Float16 to reduce the size of large databases on disk, but this has no effect on search performance as shown in Table S1. 400 of these domains were chosen randomly and held out of the training data as described previously. Like Foldseek, we only chose domains with at least one other family, superfamily and fold member. For each of these 400 domains, the cosine similarity of embeddings to each of the 15,177 domains was calculated and the domains ranked by similarity with the query domain included. For each domain, we measured the fraction of TPs detected up to the first incorrect fold detected. TPs are same family in the case of family-level recognition, same superfamily and not same family in the case of superfamily-level recognition, and same fold and not same superfamily in the case of fold-level recognition.
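The batch construction and loss described above can be sketched schematically. Here domains are represented only by their Cα coordinate tensors, the GNN itself is omitted, and the supervised contrastive loss is written in its standard form [22]; this is an illustrative re-implementation rather than the Progres training code.

```python
import random
import torch

def make_batch(families, n_families=6, n_per_family=6, noise_std=1.0):
    """families: dict mapping family id -> list of (L, 3) float Cα coordinate tensors."""
    chosen = random.sample(list(families), n_families)
    coords, labels = [], []
    for label, fam in enumerate(chosen):
        domains = families[fam]
        if len(domains) >= n_per_family:
            picks = random.sample(domains, n_per_family)
        else:  # pad with duplicates when the family has fewer than 6 domains
            picks = domains + [random.choice(domains) for _ in range(n_per_family - len(domains))]
        for c in picks:
            coords.append(c + noise_std * torch.randn_like(c))  # Gaussian noise augmentation
            labels.append(label)
    return coords, torch.tensor(labels)

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """z: (B, D) L2-normalised embeddings; labels: (B,) integer family labels on the same device."""
    sim = z @ z.T / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))            # exclude self-comparisons
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor.mean()
```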
All CPU methods were run on an Intel i9-10980XE CPU with 256 GB RAM. Progres, Foldseek and MMseqs2 were run on 16 threads. The GPU methods were run on an RTX A6000 GPU. Progres was run with PyTorch 1.11. Foldseek version 8.ef4e960 was used. For TM-align we used the fast mode, which has similar performance to the normal mode [18]. For 3D-SURFER the neural network model and mainchain atoms were used. EAT was run with the "-use tucker 1" flag. ESM-2 embeddings used the esm2_t36_3B_UR50D model which has a 2560-dimensional embedding. The mean of the per-residue representations was normalised and comparison between sequences was carried out with cosine similarity. For MMseqs2, easy-search with a sensitivity of 7.5 was used.

For contact order, all residue pairs with Cβ atoms (Cα for glycine) within 8 Å are considered. The contact order of a structure is then defined as CO = (1/(L N)) Σ_i S_i, where S_i is the sequence separation of the residues in contacting pair i, N is the number of contacting pairs and L is the sequence length of the protein.

Databases

The AlphaFold database domain embeddings were prepared from the TED set of domains [43] using cluster representatives from clustering at 50% sequence identity. Clustering was carried out with MMseqs2 using the command "mmseqs easy-cluster ted 100.fasta clusterRes tmp --min-seq-id 0.5 -c 0.9 --cov-mode 5 -s 7.5". This gave 53,344,209 clusters, fewer than the TED analysis due to the use of easy-cluster over easy-linclust. The FAISS [44] index was prepared using "IndexFlatIP(128)", which carries out exhaustive searching using the same cosine similarity as Progres.

Figure S3: The effect of truncating domains on Progres embeddings. For each of the 400 domains in the SCOPe test set a number of residues were removed from the N-terminus or the C-terminus and the truncated domain was embedded. The Progres similarity score to the full length domain was then computed. The results are categorised by the number of residues in the full length domain. The line shows a similarity score of 0.8, indicating the same fold (see Figure 3E).

Figure 1: Protein structure embedding. (A) Protein domains are treated as a graph with Cα atoms as nodes and edges between Cα atoms within 10 Å. A GNN embeds the graph into a 128-dimensional representation. This can be compared quickly to a pre-embedded search database. (B) Supervised contrastive learning [22] is used to train the model, with embeddings for domains in the same SCOPe family pushed together and embeddings for domains in different SCOPe families pushed apart.
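A direct implementation of this contact order definition is shown below; it assumes the side-chain reference coordinates (Cβ, or Cα for glycine) have already been extracted into an array, which is an assumption of the sketch rather than part of the published pipeline.

```python
import numpy as np

def contact_order(coords, cutoff=8.0):
    """coords: (L, 3) array of Cβ coordinates (Cα for glycine), in residue order."""
    L = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.triu_indices(L, k=1)                 # each residue pair once, i < j
    in_contact = dist[i, j] <= cutoff
    separations = (j - i)[in_contact]              # sequence separation S_i of each contacting pair
    n_contacts = len(separations)
    if n_contacts == 0:
        return 0.0
    return separations.sum() / (L * n_contacts)    # CO = (1 / (L N)) * sum_i S_i
```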
Figure 3: Exploring Progres embeddings. (A) 2D t-SNE embedding of the 128 dimensions of our model embedding for the Astral set of SCOPe domains clustered at 40% sequence identity (15,177 domains). The domains are coloured by SCOPe class. t-SNE was carried out using a perplexity value of 30. (B) The same data coloured by number of residues in the domain. The median length of domains is 149 residues. For colouring, the maximum number of residues in a domain is treated as 400. (C) 2D t-SNE of the AlphaFold database TED domains [43] clustered at 50% sequence identity and the ECOD F70 set of domains in the PDB [21]. 5m (9%) of the TED domains are chosen randomly for the t-SNE for computational reasons. (D) A similar comparison of the TED domains to the AlphaFold 21 model organisms set [16,49]. (E) Comparison of Progres score to TM-align score. For each of the 400 domains in the test set the top 200 matches in the Astral 40% sequence identity set according to TM-align are considered. The Pearson correlation coefficient is 0.60. The green line shows the TM-align score threshold of 0.5 indicating the same fold. The purple line shows the Progres score threshold of 0.8 indicating the same fold.

Figure S1: Values of embedding dimensions. Each protein in the Astral 40% sequence identity set is embedded with Progres and the distribution of values in each of the 128 dimensions is shown.

Figure 2: Performance on different protein types. In each case the "Any" category is the same as in Table 1. (A) Sensitivity for fold searching by SCOPe class. (B) Sensitivity for fold searching by protein sequence length. (C) Sensitivity for fold searching by contact order, a measure of the sequence separation of contacting residues. (D) Sensitivity for fold searching across different embedding sizes. A model was trained from scratch for each embedding size. See Table S1 for further ablations.

Table 1: Comparison of ability to retrieve homologous proteins from SCOPe. A similar procedure to Foldseek [18] is followed with a set of 400 domains. For each domain the fraction of true positives (TPs) detected up to the first incorrect fold is calculated (higher is better). TPs are same family in the case of family-level recognition, same superfamily and not same family in the case of superfamily-level recognition, and same fold and not same superfamily in the case of fold-level recognition. The mean of this fraction over all 400 domains is reported. Run time (single) is the time taken to search a structure of 150 residues (d1a6ja in PDB format) against all the 15,177 Astral 2.08 40% sequence identity set domains, with the database pre-prepared. Run time (all-v-all) is the time taken to calculate all pairwise distances between the 15,177 domains from structure. EAT, ESM-2 and MMseqs2 use sequence not structure for searching. 3D-SURFER and EAT are trained with structural information and may have seen proteins in the test set during training.
4,552.8
2024-04-18T00:00:00.000
[ "Computer Science" ]
Quasi-droplet Microbubbles for High Resolution Sensing Applications

Optical properties and sensing capabilities of fused silica microbubbles were studied numerically using a finite element method. Mode characteristics, such as quality factor (Q) and effective refractive index, were determined for different bubble diameters and shell thicknesses. For sensing applications with whispering gallery modes (WGMs), thinner shells yield improved sensitivity. However, the Q-factor decreases with reduced thickness and this limits the final resolution. Three types of sensing applications with microbubbles, based on their optimized geometrical parameters, were studied. Herein the so-called quasi-droplet regime is defined and discussed. It is shown that the best resolution can be achieved when microbubbles act as quasi-droplets, even for water-filled cavities at the telecommunications C-band.

Introduction

The benefits of high quality (Q) factors in whispering gallery mode resonators (WGRs) have been studied intensively during the last several decades [1]. This unique feature of WGRs is a key factor in the study of low-threshold microlasers [2], for nonlinear effects [3,4], in cavity quantum electrodynamics [5,6], and for optomechanics [7]. WGRs are typically micron-scale dielectric structures that can confine light internally by a process of continuous total internal reflection. Light circulates around the boundary and forms whispering gallery modes. Wavelengths of WGMs are highly dependent on the geometrical size and refractive index of the resonator. Resonant frequencies of WGMs are also very sensitive to external influences and this leads to ultrahigh sensitivity in sensing applications. By now, sensing has been accomplished using various WGRs, such as microspheres [8] and microtoroids [1,9]. A large diversity of physical quantities can be sensed, e.g. biochemical changes [8,10], gas [11], temperature [12-14], pressure [15], and force [16]. Due to the high Q and small mode volume of WGRs, even single molecule detection has been achieved [9,10].

For the sake of general discussion, let us consider refractive index sensing as an example. The sensitivity of a resonator relies on the portion of the electromagnetic (EM) field distributed outside it, or in other words, the tunneling depth of the evanescent field. To extend the evanescent field, several methods have been developed, such as PDMS-coated microspheres [12] or plasmonic enhancement in metal-coated microresonators [17-19]. By having different dielectric layers, the WGM EM field distribution can be tailored. Liquid core optical ring resonator (LCORR) sensors [20] are an alternative type of WGM resonator. Such devices can be viewed as hybrid microresonators, since the evanescent light field can penetrate into the liquid in the core. In LCORRs, however, a high Q is maintained because most of the WGM energy still propagates in the shell structure. Therefore, LCORRs have outstanding sensing properties.

In this paper, one type of LCORR is discussed, namely the microbubble resonator [21,22]. They are made by locally heating a fused silica microcapillary with a CO2 laser while internally pressurizing the cavity. By controlling the size of the CO2 laser heating zone, a spherical shape, with a controllable shell thickness, can be created. Single- and double-pass structures, i.e.
spherical shells with one or two openings, can be made. For double-pass structures, liquid can be injected through the cavities using a syringe pump. Similar to other LCORRs, the core liquid is sensed by an EM field traveling internal to the cavity core. Sensitivity of the WGM to changes in the liquid core can be improved by making the shell thinner, thereby increasing the WGM EM field intensity in the liquid. However, when the shell thickness decreases to near or less than the wavelength of the light propagating in the WGM, tunneling can occur, causing the resonant line width to broaden, thereby limiting the total resolution. As a result, a tradeoff must be achieved between shell thickness and the fixed size of the microbubble in order to optimize the Q-factors and, hence, the sensitivity of the device. To our knowledge, this optimization of the LCORR has not been reported in the literature to date. Herein, an axi-symmetric finite element model (FEM) is used to investigate the optical mode properties of microbubble WGRs. Propagation constants of different order modes are strongly related to bubble size and shell thickness. Therefore, controllability of the modes is made possible by changing the coupling conditions. When the shell thickness is subwavelength and the bubble contains a high-index liquid inside, a quasi-droplet regime is defined. Based on different physical sensing applications, optimized parameters for microbubbles are determined.

FEM simulations of microbubbles

Images of typical microbubbles are shown in Fig. 1(a) and the schematic cross-section of a microbubble is shown in Fig. 1(b). For simplicity we assume that the microbubble is a spherical shell formed by fused silica and surrounded by air, with core materials that can be varied. A working wavelength of 1.55 µm was chosen, as it is commonly used in WGM experiments. FEM simulations of a 3D structure consume a lot of computational resources even for micron-scale objects.

The microbubble is rotationally and axially symmetric, so by utilizing a newly developed FEM [23], the 3D problem is reduced to 2D and is solvable in seconds with a smaller computational memory requirement. The method in [23] is based on the weak form of the Helmholtz equation [24], in which ε is the effective permittivity and α is the penalty factor first introduced in [23].

In spherical coordinates (r, θ, φ), WGMs propagate azimuthally to the rotational symmetric axis, as illustrated in Fig. 1(b). This gives rise to a field phase varying term, exp(imφ). Here, m is the azimuthal mode order of the WGM. In the simulation, m is varied and the eigenfrequencies of the corresponding fundamental modes are determined for different EM field distributions along the radial direction (Fig. 1(c)-(e)). The effective index of the mode is estimated by N_eff = mλ/(2πR), where R is the outer radius of the microbubble.

For WGRs, the Q-factor is a very important parameter. The total intrinsic loss of a WGR originates from radiation loss (tunneling loss), material loss, surface roughness, and contamination. Here, only radiation and material losses are considered. The surface roughness is very small due to the fabrication method used. In experiments, a high Q absorption limit for microbubbles has been reported [25]. Radiation loss is caused by leakage from evanescent light into a free space mode.
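As a small worked example of the effective-index estimate N_eff = mλ/(2πR) quoted above, the snippet below evaluates it for an assumed azimuthal mode order and bubble radius; the specific numbers are illustrative only, not simulated values from this work.

```python
import math

wavelength = 1.55e-6        # working wavelength in metres
outer_radius = 25e-6        # a 50 µm diameter microbubble (illustrative value)
m = 130                     # assumed azimuthal mode order for illustration

n_eff = m * wavelength / (2 * math.pi * outer_radius)
print(f"N_eff = {n_eff:.3f}")   # ~1.28 for these illustrative numbers
```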
The upper and lower bounds of the Q-factor can be estimated with a closed resonator model. In this work, a more precise method was used, in which a perfectly matched layer (PML) along the boundary of the computation domain is introduced, see Fig. 1(c)-(e). A properly set PML can be treated as an anisotropic absorber, simulating radiation tunneling to infinity within a limited calculation domain. An accurate determination of the Q-factor in a microsphere has been reported recently using this modified method [24]. In order to match the model with a realistic situation, material absorption is introduced as an additional imaginary part of the resonator permittivity. For fused silica and a 1.55 µm wavelength, the imaginary part is estimated to be ε_i = −3.56 × 10⁻¹⁰, which is calculated from the absorption coefficient. As will be demonstrated in the following, the radiation loss is dominant when the diameter of the microbubble is less than ∼30 µm. The Q-factor increases exponentially with diameter such that it saturates when R > 30 µm, and is then only limited by the material absorption loss.

When solving the eigen-equations using FEM software, such as COMSOL, with complex material permittivity and PMLs, the eigenfrequencies (f_r) are complex, with the real parts representing resonant frequencies and the imaginary parts representing total intrinsic losses. Therefore, the Q-factor is defined as Q = Re(f_r)/(2 Im(f_r)). For the material absorption term, the upper bound is limited to around 10⁹, which will be shown in the following simulation results. For investigating WGM properties, air (ε = 1) is initially chosen as the core material in this section.

Microbubbles with different diameters (10-60 µm) and wall thicknesses (500 nm to 3 µm) were simulated (Fig. 2). Similar to solid microspheres, Q-factors increase with diameter. Exponential curves for diameters below 30 µm indicate that radiation losses dominate. When microbubbles are larger, radiation losses diminish and become negligible compared to material losses. As expected, Q-factors for large diameters do not exceed the absorption Q-factors of solid microspheres (10¹⁰).

Two microbubbles with different shell thicknesses were compared. It is clear that when the shell is thinner, the mode tunnels more into the core, increasing the radiation loss and reducing the Q-factor. Therefore, to design high Q-factor microbubbles, larger diameters and thicker shells are required. In the following calculations, 50 µm has been selected as a reasonable microbubble size, since it can be easily fabricated and the shell thickness can be controlled in fabrication [13].

Before discussing the shell thickness relationship, it is necessary to note that in addition to fundamental modes, other modes also exist in microbubbles (cf. Fig. 1(d) and (e)). These can be denoted as higher radial modes (q = 2, 3, ...) or higher azimuthal modes (l = m ± 1, m ± 2, ...).
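The Q-factor extraction from a complex FEM eigenfrequency described above amounts to a one-line calculation; the sketch below applies it to an assumed complex eigenvalue that is given purely as an example, not a simulated result.

```python
# Q-factor from a complex eigenfrequency f_r, using Q = Re(f_r) / (2 Im(f_r)) as defined above.
f_r = 1.934e14 + 1.0e5j      # illustrative complex eigenfrequency in Hz (not a simulated value)

Q = abs(f_r.real) / (2 * abs(f_r.imag))
print(f"Q = {Q:.3e}")        # ~9.7e8 for this illustrative eigenvalue
```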
For the first radial, fundamental TE mode, when the thickness is less than 1 µm, the Q-factor drops extremely sharply (Fig. 3). The TM mode has a lower Q-factor and it drops when the shell thickness is less than 1.3 µm. At a wall thickness of ∼600 nm, the Q of the TM mode drops to a very low value, implying that the microbubble can only hold the TE mode. For shell thicknesses less than 500 nm, even the TE mode has a very low Q-factor and microbubbles cannot hold any high Q WGMs. It is worth noting that, when the thickness is larger than the working wavelength (1.55 µm in this paper), microbubbles can even hold second order radial modes (q = 2). The radial mode distribution is dependent on the medium along the radial direction. If the shell becomes even thicker, microbubbles should be able to hold even higher radial modes until they become the same as solid microspheres. In other words, single radial mode operation is only possible for microbubbles with subwavelength shells.

For real sensing applications, light has to be coupled in and out of the WGR for detection. Many coupling methods have been developed and, among them, tapered fibers exhibit high efficiency as evanescent probes that are widely used for WGRs [26]. In order to effectively couple light, the cavity mode must be sufficiently spatially overlapped with the mode from the tapered fiber and a phase matching condition must be met, i.e. the effective index of the WGM must equal that of the tapered fiber mode. To verify efficient coupling in the microbubble tapered fiber system, the effective index of the fundamental TE mode of a 50 µm microbubble was calculated (Fig. 4). For comparison, the index for different tapered fiber diameters is also shown. Note that this index is calculated when the fiber is in contact with the microbubble. To tune the effective index, one can control the taper/microbubble gap or change the taper diameter.

From Fig. 4 it is clear that for thinner microbubbles the effective index decreases, which is also due to more of the EM field being distributed in the core. For a 50 µm microbubble, the effective index of the TE mode varies from 1.20 to 1.35. Phase matching can be realized if the taper diameter is controlled between 1.4 µm and 1.8 µm. The second order mode has an even lower effective index, ranging from 1.05 to 1.28, so a thinner taper is required to efficiently couple with this mode. In the following discussion, we assume that only the first order fundamental TE mode is of concern, since it has a larger Q-factor than the higher order modes. Efficient coupling to such modes is realized and controlled by selecting the size of the tapered fiber.

Quasi-droplet regime of microbubbles

The foregoing discussion has been centered on the WGM properties of empty (air-filled) microbubbles; however, it is of more significance to investigate microbubbles filled with liquid. Since the refractive index of a liquid is higher than that of air and a spherical boundary can be shaped if the liquid forms a droplet, WGMs can be found in such a droplet. Such droplet WGRs have been studied for lasing [27] and nonlinear effects [28]. Indeed, droplet-like WGMs can also be found in microbubbles.
If the shell of a microbubble is very thick, most of the EM field of the mode will propagate within the shell, so the microbubble behaves like a solid microsphere. As the shell gets thinner, the mode extends more into the core. In the extreme situation, t → 0, the mode propagates almost entirely in the liquid core, where a droplet-like condition is satisfied, given that the inner boundary is spherical. Between these two situations, there exists a region where the shell starts to lose the ability to confine WGMs. This region has been dubbed the quasi-droplet regime [29]. It is worth noting that higher radial modes occupy more space than first order modes; therefore, they cannot exist in air-filled microbubbles with very thin shells. However, the core of a liquid-filled bubble provides the space required for higher order modes to propagate, so higher modes can be supported in thin-walled, liquid-filled bubbles. For example, in Lee et al. [29], the quasi-droplet regime mentioned is only for the q = 2 mode. Therefore, it is very important to clarify the definition of the quasi-droplet regime for different modes. In the following discussion, the quasi-droplet regime is defined in terms of the effective index.

To numerically simulate this regime, the permittivity of the inner core material was replaced with that of a liquid (for example, water with ε_real = 1.33² = 1.7698) and substituted into the FEM equation (Eq. 1). The absorption of the liquid can be omitted in the model for calculating the mode eigenfrequency and field distribution, since it only changes the imaginary part of the eigenfrequency simulated by the FEM software. This will be discussed in the next section. The radial EM field distribution for q = 1 was determined by extracting the intensity value along the radial direction (cf. Fig. 5(a)-(d)). It is distributed along the radius, with part inside the core area, part in the shell and an evanescent tail tunneling to the outer environment.

The estimated percentage of the WGM's EM field in the core was found by integrating the EM field intensity in the core and shell separately. The percentage of energy in the core for the first three radial modes, for a shell thickness varying from 300 nm to 3 µm, is calculated for fixed diameter microbubbles, see Fig. 5(e). It can be seen that, when the shell thickness is 500 nm or less, up to 85% of the q = 1 WGM propagates in the core, while if the shell thickness is more than half of the working wavelength (1.55 µm), more than 80% of the light travels in the shell.

To describe the quasi-droplet regime more precisely, we resorted to a quantified definition for the fundamental TE mode. The idea is based on the well-known interpretation of the radial mode number.
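The core-energy fraction described above can be estimated from a sampled radial intensity profile by integrating the intensity over the core and over the whole mode. The sketch below assumes a simple 1D radial profile with spherical volume weighting, which is a simplification of the full vectorial FEM post-processing.

```python
import numpy as np

def core_energy_fraction(r, intensity, core_radius):
    """r: radial sample points (m); intensity: |E|^2 sampled at those points;
    core_radius: inner boundary of the shell (m)."""
    weighted = intensity * r**2            # spherical volume weighting
    total = np.trapz(weighted, r)
    in_core = r <= core_radius
    core = np.trapz(weighted[in_core], r[in_core])
    return core / total
```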
For a liquid microsphere, i.e. a droplet, the radial field distribution has a maximum inside the droplet close to the boundary. Analogous to a droplet, when the peak is inside the core of a microbubble, the core is equivalent to the droplet while the shell is the new boundary. This can be used as a criterion for the quasi-droplet regime. It can be physically interpreted as light traveling in the water, with the field distributed in the shell being the evanescent component tunneling into the shell. According to this definition, for a shell thickness of less than 300 nm, the microbubble is driven into the quasi-droplet regime for its fundamental TE mode. However, such a defined thickness does not apply for the q = 2 and q = 3 modes, as those modes have multiple peaks along the radial direction, so they are more complicated than the q = 1 case. Where the percentages of the EM field for the q = 2 and q = 3 modes are plotted, even when the shell is as thick as 1 µm and 1.5 µm, respectively, the proportion of the EM field in the core does not drop to less than 80% (Fig. 5(e)).

To have a general definition that applies to different radial modes, the effective indices for the q = 1, 2, and 3 modes in microbubbles were calculated. For comparison, the effective index of a droplet of the same diameter was simulated and is shown together with those for a solid silica microsphere (Fig. 6). The effective index for a microbubble in the quasi-droplet regime, as defined above, is only slightly higher than for the droplet modes, proving that, in this case, the shell is negligible and the microbubble acts like a droplet. For thicker shells (>2.5 µm), the effective indices of all bubble modes are the same as the corresponding modes in a solid silica microsphere of the same size. The q = 1 mode does not reach the droplet index unless the shell thickness is less than 500 nm. On the other hand, higher order modes, especially the q = 2 modes, exhibit a much wider range of effective indices corresponding to those of the droplet. The effective index of the q = 2 mode changes abruptly to more closely resemble a solid microsphere when the shell is thicker than 1.5 µm. Accordingly, a new definition for the quasi-droplet regime could be the range of shell thicknesses up to the point where the effective index starts to rapidly approach that of a silica microsphere.

So far the quasi-droplet regime has been described in two ways and we have shown that, for a very thin shell, a microbubble filled with liquid behaves very similarly to a droplet WGR. This may be very interesting in applications such as sensing or nonlinearity. The quasi-droplet resonator has advantages since its shape is protected by the shell. Changes to the resonator shape through surface evaporation can thereby be avoided, and coupling to external waveguides is easier than low efficiency, free space excitation [27].
Optimizing microbubble geometry for high resolution sensing applications

Sensing applications for WGRs are mostly based on the following principle: changes in the environment cause a shift in the resonant frequency of the whispering gallery modes. This is detected by scanning and recording the transmission spectrum of the WGR. The sensitivity, S, is defined as the shift rate of the WGMs. However, if the line width of the resonance dip is broad, the shift cannot be resolved, thereby limiting the resolution of WGR-based sensors. Unfortunately, improving sensitivity is nearly always in conflict with achieving higher Q-factors. In order to overcome this limitation, it is important to have a method of determining optimal parameters for WGR sensors.

In general, the resolution, ℜ, is defined in terms of U, the physical quantity (e.g. temperature or pressure) that causes the frequency shift, and λ, the working wavelength (Eq. 2). Here ∂λ(U)/∂U is the wavelength shift caused by the change of the physical quantity, i.e. the sensitivity. In practice, there are two ways to induce the frequency shift and both will be addressed in the following subsections.

Pressure sensing

The resonant frequency of a microbubble WGR can be tuned by manipulating the compression or tension of the device [30]. Alternatively, a mode frequency shift of hundreds of GHz can be generated in a single-pass microbubble by gas pressure [31]. Two different mechanisms are dominant. The first is size expansion caused by applying aerostatic pressure from inside the bubble. The second is a possible refractive index change due to strain and stress on the resonator material.

For a given material the elasto-optic coefficient (C) and shear modulus (G) are constants, so the pressure sensitivity is given by Eq. 3 (neglecting external pressure), where n_0 is the refractive index. From Eq. 3, the sensitivity of pressure sensing is proportional to the geometrical parameters R and t, and a relative sensitivity, S_r, is defined by Eq. 4. For simplicity, in the following content, S_r is used for sensitivity, and relative sensitivity is not distinguished from absolute sensitivity. Using Eq. 2 and Eq. 4, ℜ can be calculated. Note that since it is deduced from the relative sensitivity, it is a relative resolution. Again, for simplicity, ℜ represents relative resolution in the following sections. It was plotted as a function of shell thickness, incorporating the Q-factor plotted in Fig. 3, and this is shown in Fig. 7. It can be seen that the best resolution is obtained with a shell thickness of about 1.4 µm. The resolution worsens when the shell thickness is less than 1 µm. This is due to the exponentially decreasing value of Q with decreasing shell thickness. When the shell is thicker than 1.5 µm, the resolution also worsens, as the sensitivity to pressure diminishes with increasing shell thickness. The Q-factor reaches the material limit when the shell thickness is more than 1.4 µm. The sensitivity changes as the inverse cube of the shell thickness, which causes a less severe deterioration of the resolution.
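The optimization just described can be illustrated with a short numerical sketch. Since Eq. 2-4 are not reproduced in the extracted text, the sketch assumes the common definition that the smallest resolvable wavelength shift is of order λ/Q, giving a relative resolution ℜ(t) proportional to λ/(Q(t)·S_r(t)); the thickness-dependent Q and S_r curves are rough placeholders standing in for simulated data, not the paper's values.

```python
import numpy as np

# Placeholder thickness-dependent curves (illustrative only, not the paper's simulated values).
thickness = np.linspace(0.5e-6, 3.0e-6, 26)                       # shell thickness in metres
Q = 1e9 / (1.0 + np.exp(-(thickness - 1.0e-6) / 1e-7))            # Q rising then saturating with thickness
S_r = 1.0 / thickness**3                                          # sensitivity falling as the inverse cube of t

wavelength = 1.55e-6
resolution = wavelength / (Q * S_r)                               # assumed form of the relative resolution

best = thickness[np.argmin(resolution)]
print(f"best shell thickness ~ {best * 1e6:.2f} um")
```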
Refractive index sensing

Observation of frequency shifting induced by subtle refractive index variations is the most common sensing mechanism for WGRs [8-10], especially for LCORRs [13,20,31-34], where different materials can be injected into the interior core volume. A change in the core material due to changes in concentration, temperature, or pressure causes a small change in the effective index of the mode. This is sufficient to generate a WGM frequency shift. Intuitively, for higher sensitivity, the light in the microbubble should be distributed in the inner core as much as possible.

This can be qualitatively described by Eq. 5, in which κ_c and κ_s are the proportions of the EM field in the core and shell respectively, while n_c and n_s are their refractive indices. For simplicity, we assume that the physical quantity only changes the core material index, so that the sensitivity is proportional to κ_c only. This means that the sensitivity follows the same trend with thickness as shown in Fig. 5. To achieve high sensitivity, e.g. for the fundamental mode, it is better to have a very thin shell (<300 nm) where more than 80% of the light is in the core; in other words, the bubble is in the quasi-droplet regime. However, in general, the core material has higher absorption (e.g. water) than the silica shell, thus limiting the Q-factor.

Similar to Section 4, an optimum resolution of the microbubble sensor exists, and it depends on the shell thickness. A precise value of the sensitivity is obtained by FEM simulations. In order to implement a numerical optimization that can deal with all kinds of index sensing, a detailed index-quantity relationship is not introduced. S_r is defined by the ratio of frequency shift to refractive index change, ∂λ/∂n. By introducing a small index change to the core material (0.001) in the COMSOL model, the frequency shift is calculated and plotted (Fig. 8). To determine the Q-factor, it is assumed that the core is water with a high absorption at 1.55 µm (ε_i = −3.577 × 10⁻⁶, estimated from the water absorption coefficient α at 1.55 µm).

As discussed for Eq. 5, the simulation shows the same result as in Fig. 8(a), that S_r increases with decreasing shell thickness. When the thickness is larger than 2.5 µm, most of the light is distributed in the shell, yielding a zero frequency shift. For different microbubble sizes, the sensitivity is slightly different. The Q-factors of the corresponding microbubbles are also calculated and presented in Fig. 8(b). It should be mentioned that for a 20 µm diameter microbubble, the Q-factor is lower than for larger sizes. This is due to higher water absorption plus high radiation loss, since light is not so well confined in the shell for such a small bubble. Therefore, in the following discussion, only diameters larger than 20 µm are considered.
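Eq. 5 itself is not reproduced in the extracted text; a plausible form consistent with the description above, namely a mode-overlap-weighted combination of core and shell index changes, would be

Δλ/λ ≈ (κ_c Δn_c + κ_s Δn_s)/n_eff,  with κ_c + κ_s ≈ 1,

which is offered here only as an assumed reconstruction of the qualitative relation, not as the paper's exact expression.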
Using the results from Fig. 8(a) and (b), optimized shell thicknesses and microbubble sizes for the best resolution can be obtained. The optimized thickness is about 1 µm for microbubbles with diameters between 30-50 µm. Around this thickness, the resolution does not vary much with bubble size, which is also due to the exponential dependencies of Q and sensitivity on shell thickness. The optimized microbubble for the fundamental first order mode is not within the quasi-droplet regime, due to a poor wavelength selection (1.55 µm). As has been discussed, the higher order modes are in the quasi-droplet regime over a larger range than the q = 1 mode, so for thicknesses less than 2.5 µm the sensitivity will not drop too much. Using the same method, ℜ is calculated for the q = 2 mode in a 50 µm diameter microbubble and plotted together with the q = 1 mode in Fig. 8(d). From this plot one can see that the resolution remains unchanged for thicknesses ranging from 500 nm to 3 µm. Therefore, in a practical sensing application, the second order mode is recommended, since it decreases the difficulty of controlling the shell thickness when fabricating the device. The resolution can be further improved by using an alternative core material or a light source at a different wavelength where the absorption loss is lower.

So far it has been shown that high resolution refractive index sensing can be obtained by using a microbubble operating close to the quasi-droplet regime. It is assumed that index changes occur only in the core region. In some other situations, such as thermal sensing [31], changes in both the shell and core refractive index must be considered. The thermo-optical coefficient of silica is positive, which leads to a red shift of the modes if the temperature rises. The core is often filled with a negative thermo-optical liquid, such as water, ethanol, or acetone. The net thermal shifting of a bubble is determined by the proportion of intensity in the shell and core, as in Eq. 5. This can be tuned by selecting the shell thickness. Recent experimental results have proven that a thermally induced red shift of silica can be compensated for [31] and it is even possible to obtain a large inverse blue shift [13].

Nanoparticle sensing

As an extension of the optimization method discussed in this paper, let us now consider nanoparticle sensing in microbubbles as a final example. Nanoparticle detection and biomolecular sensing have been realized in other WGRs [9,10,35] and have also been generally discussed for LCORRs [36]. Usually, the particles, the WGR, and the evanescent coupler are in an aqueous environment so that the particles can be delivered to the sensing devices. In practice, microbubbles can benefit from their hollow structure, so that various liquids can be passed inside the device while the optical readout occurs outside (by taper coupling, for example) without being influenced by the liquid. For a simple estimation of this effect, suppose a nanoparticle with a radius r_0 is attached to the inner surface of the shell and the core is filled with water (see inset of Fig. 9(a)). The nanoparticle possesses a refractive index difference to water, expressed as a permittivity difference ∆ε(r_0). This small perturbation by the particle causes a frequency shift δω (Eq. 6). The frequency shift of the fundamental TE mode is calculated assuming that ∆ε is a very small perturbation (∆ε = 0.005) in Eq. 6. The sensitivity is then plotted (Fig. 9(a)).
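Eq. 6 is likewise missing from the extracted text. The standard first-order cavity perturbation result for a small dielectric particle, which matches the description above of a shift proportional to ∆ε and to the local field intensity, is

δω/ω ≈ − ∫_{V_p} Δε(r) |E(r)|² dV / (2 ∫ ε(r) |E(r)|² dV) ≈ − Δε |E(r_0)|² V_p / (2 ∫ ε |E|² dV),

where V_p is the particle volume. This is given as an assumed reconstruction rather than the paper's exact expression, and it makes explicit why the sensitivity tracks |E(r_0)|² at the particle position in the discussion that follows.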
Since the nanoparticle changes the refractive index within the evanescent field that penetrates into the liquid core, the sensitivity is much higher for thinner shells. Subwavelength thickness is required for particle sensing; otherwise the sensitivity goes to zero. The resolution determined from Eq. 2 and the Q-factor data presented in Fig. 8(b) is plotted in Fig. 9(b). The resolution worsens exponentially when the shell thickness increases for a 50 µm microbubble. This is quite similar to the refractive index sensing situation, but with an even more sensitive dependency on thickness. It can be understood by considering the concepts discussed in Section 3. From Eq. 6, the nanoparticle is sensed through the value of |E(r)|², which means that a high sensitivity is achieved when the radial maximum covers the position of the nanoparticle. As discussed in Section 3 and shown in Fig. 5, if the microbubble is in the quasi-droplet regime, the maximum is shifted from inside the shell to the inner boundary of the microbubble, near the position of the nanoparticle. If the relative position of the maximum to the nanoparticle changes, it leads to an exponential increase in |E(r)|². This is the origin of the exponentially improved resolution. Sensing nanoparticles with second order modes is also shown in Fig. 9. The simulation shows that there is an increase in sensitivity when the shell thickness is around 1.5 µm and a minimum in relative resolution at 1 µm. This corresponds to the multiple maxima in the core discussed in Section 3. It is also obvious that, for microbubbles with the same shell thickness, both the sensitivity and resolution of the q = 2 mode are better than for the q = 1 mode. Within the simulation range from 500 nm to 1.5 µm, the microbubble is in the quasi-droplet regime for the q = 2 mode while out of this regime for the q = 1 mode, thereby proving that the quasi-droplet regime is quite important for high sensitivity particle sensing. Microbubbles in the quasi-droplet regime have other advantages. For example, for higher order modes in the quasi-droplet regime, more mode maxima lie in the core, implying a deeper penetration of the mode into the liquid. Even if one requires a method for sensing particles that are not attached to the inner surface, high sensitivity is still achievable if it is done using the appropriate higher order mode and a carefully designed shell thickness. This is of more practical significance in biochemical sensing applications.

Here, the absolute frequency shift due to the presence of a single particle was not discussed, since the specific material and geometrical properties of the nanoparticle were not assigned in our simulations. However, the calculation method used herein is universal and, therefore, it should be capable of simulating such a case. A complicated modification to introduce arbitrary nanoparticles near the surface of a toroidal cavity to break the axial symmetry has been reported [37]. With some modifications, this method should also be suitable for microbubbles.
Conclusions

The WGM optical properties of microbubble WGRs have been studied with numerical simulation results based on FEM. When the shell thickness diminishes to the subwavelength scale, microbubbles operate in the so-called quasi-droplet regime, where the WGMs are dominated by the presence of the liquid core. This provides an ultra-sensitive way to detect liquid optical properties. Optimization was performed to achieve the best resolution for three types of sensing applications. This method can be further developed for a wide range of sensor optimization designs with microbubbles, such as newly developed optomechanical, microfluidic devices [25] and group velocity dispersion control for utilization in optical frequency comb generation [38].

… for fruitful discussions and Mr. Nitesh Dhasmana for his help in preparing this manuscript.

Correspondence

Correspondence and requests for materials should be addressed to Y. Y. (email: yong.yang@oist.jp).

Figure 1: (a) Images of double-pass and single-pass microbubbles. (b) Whispering gallery modes …

Figure 2: Q-factors of microbubbles drop exponentially with decreasing radii due to greater ra…

Figure 3: Q-factor versus shell thickness for a 50 µm diameter microbubble; circles (TM) and triangles (q = 2 TE). Due to inner surface tunneling loss, higher radial modes have lower Q-factors than lower modes. The TE mode is higher than the TM mode, especially when the shell is thin. The maximum Q-factor is limited by the silica absorption.

Figure 4: Effective index of a 50 µm diameter microbubble for different shell thicknesses and dif… µm, where the air-filled bubble starts to support high order modes. The taper effective index for a fiber waist of 0.5-1.0 µm radius is also presented (dashed pink line). Once the geometry of a microbubble is set, a proper taper size can be chosen to satisfy the phase matching condition.

Figure 8(d): ℜ with q = 1 (black squares) and q = 2 (red circles) for the same microbubbles.

Figure 9: (a) Sensitivity of a microbubble for nanoparticle sensing. A relative frequency shift … for a 50 µm microbubble for sensing the 500 nm nanoparticle. The resolution axis is plotted on a log scale, which implies that the resolution improves nearly exponentially for a thinner shell for the first order fundamental mode (black squares). The first order mode is plotted as black squares while the red circles represent the second order mode. Lines joining the data points are simply guides for the eye.
7,436.8
2014-02-03T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
A Review of Indoor Positioning Systems for UAV Localization with Machine Learning Algorithms

The potential of indoor unmanned aerial vehicle (UAV) localization is paramount for diversified applications within large industrial sites, such as hangars, malls, warehouses, production lines, etc. In such real-time applications, the autonomous UAV location is required constantly. This paper comprehensively reviews radio signal-based wireless technologies, machine learning (ML) algorithms and ranging techniques that are used for UAV indoor positioning systems. UAV indoor localization typically relies on vision-based techniques coupled with inertial sensing in indoor Global Positioning System (GPS)-denied situations, such as visual odometry or simultaneous localization and mapping employing 2D/3D cameras or laser rangefinders. This work critically reviews the research and systems related to mini-UAV localization in indoor environments. It also provides a guide and technical comparison perspective of different technologies, presenting their main advantages and disadvantages. Finally, it discusses various open issues and highlights future directions for UAV indoor localization.

Introduction

In order to enhance the connectivity of the wireless communication network, various network enhancement methodologies can be adopted, such as the millimeter-wave (mm-wave) frequency band [1,2], massive multiple-input multiple-output (M-MIMO) [3], relay nodes (RN), the Internet of Things (IoT) [4,5], heterogeneous networks (HetNet) [6], mobile ad hoc networks (MANETs) [7,8], device-to-device communication (D2D) [9], power optimizations [10], handover processes [11], and interference cancellation [12]. Some of the latest approaches, like Artificial Intelligence (AI)-enabled micro base stations [13], machine learning [14], unmanned aerial vehicles (UAVs) [15], blockchain [16], and human-centric communication [17], are potential concepts that can be used to design efficient next generation networks [18,19]. However, some global areas require remote or temporary connectivity, such as a terrestrial location where construction work is in progress, a distant sports activity, indoor localization, remote health monitoring, or a war zone which requires communication devices for to-and-fro messaging, among others. For such scenarios, the utilization of UAVs can play an important role [20,21]. The coverage and user capacity can be defined by the location, position, and altitude of the UAVs.
When discussing position tracking, we are usually asked why a Global Positioning System (GPS) cannot be used indoors [22]. GPS is now widely used and recognized for outdoor location applications such as car navigation. In indoor settings, however, GPS technology has difficulties establishing a signal and maintaining accuracy [23]. GPS cannot be utilized indoors due to poor signal strength and low accuracy. The GPS satellite's signal strength is weak, and after a long journey, the signal strength reaching the GPS receiver is significantly weaker, and barely strong enough to be useful [24]. Any barrier in the line of sight between the antenna and the sky further weakens the signal. Walls usually reflect or obstruct GPS signals indoors, preventing them from penetrating the area. As a result, satellite signals cannot be properly picked up, and the room's poor signal makes it difficult to pinpoint one's location. While certain GPS devices can be put near a window to receive satellite signals, this is not always feasible or practicable in every building or indoor situation. In an open outdoor environment, GPS can reach 5-10 m precision, which is far from the half-meter accuracy required for many industrial use cases [25]. Indoor precision is degraded even further. Both indoor location tracking and indoor navigation are in high demand for a wide range of applications. The UAV can surely provide a better solution for indoor localization. Figure 1 [26] shows the various UAV indoor location application scenarios.
Background

UAVs exist in many different forms and sizes. The sizes can range from several meters long to a few centimeters, and in terms of forms, they range from fixed-wing aircraft to blimps and multi-rotor UAVs [27,28]. Fixed-wing aircraft and blimps offer longer flight times, but multi-rotor UAVs have been popular in research because they offer higher maneuverability and control. Among the multi-rotor UAVs, a quadrotor configuration has been relatively more popular, with hexacopters and octocopters being other popular configurations [29]. A study conducted by [30] surveys recent trends in UAV research in terms of the types of UAV systems, their applications in different research areas, trends in UAV research over the years, flight time, degree of autonomy, etc. UAVs can also be employed in the form of multi-UAV systems or swarms. For ease of deployment in most applications, some degree of autonomous operation of a UAV system is often desirable [31]. Autonomy, in terms of robotics, is the ability of a system to operate without the intervention of a human operator, and the same definition applies in the context of UAVs. Vision-based autonomous navigation and control strategies for autonomous operation are some very important aspects of UAV implementation [32].

UAVs have grown in popularity in various military and civilian sectors in recent years. They may work in hazardous settings where entry would be dangerous or impossible. The mini-UAV is a novel technology with the potential to be used for a range of activities such as precision farming, building maintenance, and surveillance missions [33]. Mini-UAVs are particularly intriguing to academics due to their tiny size, great agility, and inexpensive cost, in addition to being excellent for indoor use. Most UAVs employ a GPS module to establish their location; however, GPS does not work well indoors [34]. If the mini-UAV cannot get location data, it will struggle to fly independently.
Signal processing has provided a set of tools that have been refined and utilized to great advantage over the last fifty years for UAVs [35]. Different tools are used to solve diverse problems, and these tools are periodically combined to construct signal processing systems. Speech and audio, autonomous driving, picture processing, wearable technologies, and communication systems are all powered by signal processing. Currently, signal processing is influenced by deep learning [36]. Deep learning for signal data requires extra steps compared to applying deep learning or machine learning (ML) to other data sets. Obtaining good-quality signal data is challenging because of the noise and fluctuation present in UAV communication, especially in indoor environments. Most signal data include undesirable elements such as wideband noise, jitters, and distortions.

Following the above introduction, this paper presents a review of UAV localization in indoor environments. The summary of the contribution of this work is as follows:
• It discusses the various existing surveys which work on indoor localization.
• It examines several ML-based indoor localization approaches.
• It explores various open issues and suggests future directions for ML-based indoor localization approaches.

An Indoor Localization Strategy for a Mini-UAV in the Presence of Obstacles

Long Cheng et al. [37] propose a new approach to mini-UAV localization in a wireless sensor network. To solve the problem of localization in non-line-of-sight (NLOS) environments, they present NLOS identification and a maximum joint probability algorithm. The proposed method requires only the RSS estimation model parameters to detect the propagation condition and a particle swarm optimization (PSO)-based maximum joint probability procedure to assess the position of the mini-UAV. The outcome evidences the higher output rate for the NLOS environment. Furthermore, the maximum joint probability approach based on PSO surpasses other techniques.

An IMU/UWB/Vision-Based Extended Kalman Filter for Mini-UAV Localization in Indoor Environments Using an 802.15.4a Wireless Sensor Network

Alessandro Benini et al. [38] describe a method for indoor localization of a mini-UAV using Ultra-Wide Band technology, a low-cost inertial measurement unit (IMU), and vision-based sensors. An extended Kalman filter (EKF) is presented in this paper as a potential method for improving localization. The suggested method permits using a low-cost IMU in the estimated measure and built-in visual odometry to detect markers near the touchdown area. Ranging measurements allow for the reduction in inertial sensor errors caused by the limited performance of accelerometers and gyros.

Classification of Indoor Environments for IoT Applications: A ML Approach

Mohamad Ibrahim Alhajri et al.
[39] present a ML approach for indoor environment classification based on real-time radio frequency (RF) signal measurements in a realistic setting. Various ML techniques, like decision trees (DTs), Support Vector Machines (SVM), and k-nearest neighbors (k-NN), were investigated using various RF features. The findings prove that a ML approach based on weighted k-NN, the channel transfer function (CTF), and the frequency coherence function (FCF) surpasses other techniques in detecting further indoor environment types with a 99.3% classification accuracy. The estimated time is set to less than 10 s, showing that the applied method is feasible for real-time implementation developments. Their study's goal was to outline the process and underline the advantages of ML as a cutting-edge technology and a useful tool for classifying indoor settings.

Mini-UAVs Detection by Radar

Miroslav Kratky and L. Fuxa [40] present possible methods for UAV recognition, especially within the radar frequency spectrum. The detection range of sensors will only gradually and insufficiently expand in the future, necessitating the development of alternative solutions. According to this paper, one option is to link them into a connected, complex surveillance system. Their common interconnectedness, interoperability, and modularity should result in synergistic effects such as increased detection probability and decreased false alarms. When using radar to detect Low, Small, and Slow (LSS) targets, the target's radar cross-section (RCS) is the limiting factor. The carrier frequency f, or wavelength of the radar, is its limitation. Additional internal and external influences include specific radar technical solutions, tactical employment within the terrain and combat formation, atmospheric conditions, and crew proficiency, among others. The capacity of radar systems to identify small, sluggish, and low-flying targets has been verified, which is the main contribution of this research to military science.

Indoor UAV Localization Using a Tether

Xuesu Xiao et al. [41] present an approach to localizing a UAV in indoor environments using only a quasi-taut tether. They propose a new sensor modality for tethered UAV indoor localization that uses tether-based feedback instead of GPS, inertia, and vision-based sensing. This localizer uses tether sensory information, including tether length, elevation, and azimuth angles, and is based on the transformation of polar to Cartesian coordinates. The authors show in Figure 2 that, when the tether is long and dragged down by gravity, it forms an arc instead of an ideal straight line. A mechanics model is built to quantify the inevitable tether deformation. This model can correct the calculated altitude angle and tether length, enhancing localization precision. Tests demonstrate enhanced localization precision on the Fotokite Pro, a physically tethered UAV. The findings demonstrate that the model can successfully reduce the detrimental effect of increased tether length on localization outcomes and boost localization accuracy by 31.12%. The floating strength tolerance of the particular UAV platform determines how much this proposed strategy can tolerate hovering stability mistakes.
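The core of such a tether-based localizer, before any sag correction, is a polar-to-Cartesian conversion from the sensed tether length, elevation and azimuth to a 3D position relative to the anchor point. The sketch below shows only that idealized straight-tether conversion; the catenary mechanics model from the cited work is not reproduced, and the angle conventions are an assumption of the example.

```python
import math

def tether_position(length_m, elevation_rad, azimuth_rad):
    """Idealized straight-tether localization: UAV position relative to the tether
    anchor point, assuming no sag (the cited work corrects for sag separately)."""
    horizontal = length_m * math.cos(elevation_rad)
    x = horizontal * math.cos(azimuth_rad)
    y = horizontal * math.sin(azimuth_rad)
    z = length_m * math.sin(elevation_rad)
    return x, y, z

# Example: 10 m of tether, 60 degrees elevation, 30 degrees azimuth.
print(tether_position(10.0, math.radians(60), math.radians(30)))
```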
Multi-Ray Modeling of Ultrasonic Sensors and Application for Micro-UAV Localization in Indoor Environments Lingyu Yang et al.
[42] suggest a method that is based on an IMU and four ultrasonic sensors. It is suitable for use in a light Micro Air Vehicle (MAV) because it accurately approximates the beam pattern while maintaining low computational complexity. An EKF is utilized, and a quick approach for constructing the Jacobian matrix of the measurement function from the IMU is described. The model's accuracy is tested using a MaxSonar MB1222 sensor, and a simulation and experiment are run using the Thales II MAV platform. To achieve higher-precision positioning, sonar and IMU sensor measurements are fused. A jump filter is used to suppress abnormal, large differences between estimates and measurements. The proposed methods are validated using simulations, and the findings show that the model has a localization precision of about 20 cm and that its computational complexity is sufficiently low to run on the STM32 platform. An investigation conducted with an unmodeled obstacle indicates that the suggested method is robust, as the obstacle does not affect the findings.

Indoor Positioning Using Bluetooth Technology By utilizing the RSSI, the authors have suggested a low-cost indoor positioning system (IPS) for UAVs that is based on Bluetooth low energy (BLE) beacons [43]. BLE is a low-power technology designed to send sparse quantities of data. The relationship between the RSSI and the distance between two or three Bluetooth devices, an onboard receiver, and transmitters placed in the interior operating field is examined using a mathematical model. The experimental findings and system performance analysis of this project demonstrate the viability of its methods. The initial examination validates the device's attributes and operation and the correctness of the approaches used.

UAV Localization Using Ultra-Wideband (UWB) The design and assessment of a realistic and collaborative UWB positioning system employing commercially released, integrated radio-frequency devices and antennas are covered in the study of [44]. The UAV equipment uses GNSS-emulation signals to maintain its location. In addition to other aspects such as antenna characteristics, constellation-aware parameters have been considered. A non-line-of-sight rejection has been implemented based on the relationship between the initial path and the strength of the aggregated channel impulse response. In order to obtain a large sample set of findings to evaluate the system's accuracy in actual usage, an experiment using a variety of locations and orientations is carried out. In a preliminary experiment, the model achieves a root-mean-square error, with a probability of 95%, of less than 10 cm in the horizontal plane and less than 20 cm in 3D space. A GNSS emulation system is installed on an exploratory UAV carrier to evaluate the real-time in-flight deployment of the UWB locating mechanism. It is a proof of concept that GNSS emulation might be used with commercially available UAV devices to provide those systems with the ability to navigate indoors. In order to enhance flying performance, additional study is needed to improve the processing for the UAV-specific navigational controller and to address magnetometer problems indoors, possibly with the use of inertial measurement unit (IMU) sensor fusion.
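To make the RSSI-distance relationship used in the Bluetooth positioning study above concrete, the following sketch shows a log-distance path-loss model. The reference power at 1 m (tx_power_dbm) and the path-loss exponent are illustrative assumptions for a generic indoor channel, not values from [43], and would have to be calibrated for real beacons.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance (m) from an RSSI reading with the log-distance
    path-loss model: RSSI = tx_power - 10 * n * log10(d).
    tx_power_dbm is the expected RSSI at 1 m; both parameters must be
    calibrated for the actual beacons and environment."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# Example: three beacon readings (dBm) converted to range estimates (m)
readings = np.array([-65.0, -72.0, -80.0])
print([round(rssi_to_distance(r), 2) for r in readings])
```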
Magnetic Field Measurements Based Indoor Positioning The authors in [45] offer an indoor location system for a UAV. Magnetic field observations are the primary source of data used to determine where the platform is located. This quantity is both cheap to measure and simple to obtain, and the approach has resulted in a versatile measuring system. The measuring system comprises three-axis electronic magnetometers, a battery-powered microprocessor system with an SD memory card, and an LCD display. The validation experiments were carried out in a dedicated room, and permanent magnets were used to modify the local magnetic field so that they could act as beacons.

Interference and Energy-Based Approach for UAV Localization Learning Techniques The authors in [46] design a prediction-based proactive drone management framework to decrease network interference and enhance energy efficiency in multiple drone small cell (DSC) scenarios. The system proactively determines whether a DSC should be awake or asleep based on the predicted user positions at the next timeslot. An RF-based mobility prediction model with a high accuracy of 93.14% is built from a small data sample in the offline phase. In the online phase, the wake/sleep schemes of the DSCs are proactively determined according to the predicted user positions. This study only addresses the interference and energy consumption problems of DSC-to-user access links.

Table 1 summarizes the comparison, with advantages, disadvantages, and limitations, of the existing studies on indoor localization. Among its entries:
• Mini-UAVs detection by radar: faster UAV detection is possible using high-frequency radar [40].
• Indoor UAV localization using a tether: prediction time is reduced in the real environment; a power optimization approach is required for more sophisticated indoor radio channels [41].
• Multi-ray modelling of ultrasonic sensors for micro-UAV localization in indoor environments: the proposed approach is efficient for localizing the UAV in a moving frame; tracking accuracy could be enhanced by using an ML approach [42].
• Indoor positioning using Bluetooth technology: Kalman filtering is applied to clean the collected data of noise, drift, and bias errors; outdoor tests for a safe landing area determination system remain to be addressed [43].
• UAV localization using ultra-wideband (UWB): better positioning accuracy using global navigation satellite system (GNSS) emulation; the filtering for the UAV-specific navigation controller and indoor magnetometer issues still require optimization [44].
• Magnetic field measurements based indoor positioning: higher attainable accuracy compared to infrared or sound-wave based positioning systems; the impact of powered and operating electronic devices needs to be explored [45].
• Interference and energy-based approach for UAV localization learning techniques: reduces interference and enhances energy efficiency in multiple drone small cell (DSC) scenarios; computational complexity increases and needs to be mitigated for higher interference scenarios [46].

Machine Learning-Based Indoor Localization Several studies provide a thorough examination of ML-enabled localization methods using the most popular wireless technologies [47]. In addition, several authors discuss various ML techniques (supervised and unsupervised) that could effectively address indoor localization challenges, such as NLOS, device heterogeneity, and environmental variations [48]. These authors also discuss various ML
techniques, supervised and unsupervised, that could relieve various indoor localization challenges to achieve a comprehensive indoor positioning system (IPS) [49]. Therefore, the following sub-sections discuss various ML models, focusing on several learning-based approaches.

k-Nearest Neighbor (k-NN) The k-nearest neighbor algorithm, also called k-NN, uses distance to classify or predict the grouping of individual data points using supervised learning [50]. With k-NN, all computation is delayed until the function is evaluated and locally approximated. If the features are measured in distinct physical units or differ widely in scale, normalizing the training data can drastically improve the precision of this method, which relies on distance for categorization [51]. Weighting neighbor contributions can be a beneficial strategy for both classification and regression, enabling nearby neighbors to contribute more to the average than remote neighbors. The k-NN is a fundamental ML technique based on a supervised learning approach; it assumes that new and existing instances are comparable and allocates a new instance to the category closest to the existing categories [52]. After saving all prior data, new data points are classified using the k-NN algorithm based on similarity. This means that when new data is received, it may be swiftly sorted into the proper category using the k-NN approach. Although the k-NN approach may be used for both classification and regression problems, it is most regularly employed for classification tasks [53]. The k-NN does not make assumptions about the data distribution because it is non-parametric. Since it does not immediately begin learning from the training dataset, this method is also referred to as a lazy learner algorithm [54]. Instead, it makes use of the stored dataset to carry out an action when categorizing data. The k-NN technique keeps the data throughout the training phase and classifies fresh data into the category that is closest to the newly received data.

Support Vector Machine (SVM) The SVM is an intelligent, fast, and highly adaptable ML algorithm that can be used for regression and classification (linear and nonlinear) and for detecting outliers [55]. It is one of the most well-liked ML models due to these features [56]. SVMs are divided into support vector classification and support vector regression, both based on supervised ML. Face recognition, text and hypertext categorization, image classification, bioinformatics, protein fold and remote homology detection, and handwriting recognition are just a few applications of SVMs [57]. The SVM algorithm finds an N-dimensional hyperplane that splits the input points into distinct categories. In addition to regression, classification, and outlier detection, the SVM is a supervised learning algorithm that performs effectively in high-dimensional environments [58]. When there are more dimensions than samples, this method is also effective. It is also memory-efficient since the decision function only uses a subset of the training points. However, when the number of features is significantly larger than the number of samples, SVMs employing kernel functions avoid over-fitting only if the kernel and the regularization term are chosen carefully [59]. Furthermore, SVMs do not yield probability estimates directly; instead, they require a time-consuming five-fold cross-validation procedure.
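As a minimal illustration of the weighted k-NN and SVM classifiers discussed above, the sketch below trains both on synthetic RSSI fingerprints. The feature dimensions, class labels, and scikit-learn usage are assumptions made purely for illustration and do not reproduce the setups of the cited works.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic RSSI fingerprints (dBm) from 4 anchors, one cluster per room/class
centers = np.array([[-60, -72, -78, -81],
                    [-75, -61, -70, -79],
                    [-80, -77, -62, -68]], dtype=float)
X = np.vstack([c + rng.normal(0, 3, size=(200, 4)) for c in centers])
y = np.repeat([0, 1, 2], 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Distance-weighted k-NN: nearby fingerprints contribute more to the vote
knn = make_pipeline(StandardScaler(),
                    KNeighborsClassifier(n_neighbors=5, weights="distance"))
# RBF-kernel SVM: the regularization parameter C controls over-fitting
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

for name, model in [("weighted k-NN", knn), ("SVM", svm)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```

Normalizing the features (here with StandardScaler) matters for both models, in line with the scaling remark above.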
Decision Tree A decision tree model is an approach used in ML that presents possibilities based on the characteristics of the input [60]. It follows the "branch node theory", according to which each branch stands for both a decision and a variable. Decision tree model methods come in different varieties. Some of these algorithms have been applied to categorize radiological pictures, and as a result, they may be found in radiology or radiology-related computer science articles. Decision tree models, unlike most other ML algorithms, apply explicit rules and are hence understandable [61]. Instead of diagrams of linked nodes, decision tree models can also be shown using partitioned graphs and perspective plots. A decision tree is a tree-like diagram in which the internal nodes describe a test of a certain feature, the branches reflect the test results, and the leaves indicate the classification labels. In the decision tree approach, categorization aims to organize the data in a structure that includes both the root node and the leaf nodes [62]. Decision trees can analyze data to find key system features that point to potentially dangerous behavior. Consequently, evaluating the arrangement of intrusion identification information increases the value of distinct security frameworks. The growth of attack signatures, various checking activities, and patterns and instances that trigger checking may all be recognized [63]. The usage of decision trees differs from other methods in that it offers a complete set of rules that are straightforward to put into practice and that are easily connected with real-time technology.

A DT represents a set of conditions which are hierarchically organized and successively applied from the root to the leaf of the tree. DTs are easy to interpret, and their structure is transparent [64]. DTs produce a trained model that can represent logical rules, and the model is used to predict new datasets through the repetitive process of splitting. In a decision tree method, the features of the data are referred to as predictor variables, whereas the class to be mapped is the target variable. For regression problems, the target variables are continuous. For a UAV localization application, a simple example of a decision tree to predict 3D location is depicted in Figure 3.
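A minimal sketch of a decision tree used as a regressor for 3D position, in the spirit of the example described above and depicted in Figure 3. The synthetic range features, anchor layout, and tree depth are assumptions chosen only for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
# Features: noisy range measurements (m) to 4 fixed anchors
anchors = np.array([[0, 0, 0], [10, 0, 3], [0, 10, 3], [10, 10, 0]], dtype=float)
true_pos = rng.uniform([0, 0, 0], [10, 10, 3], size=(500, 3))      # targets: x, y, z
ranges = np.linalg.norm(true_pos[:, None, :] - anchors[None], axis=2)
ranges += rng.normal(0, 0.2, ranges.shape)                          # measurement noise

# A multi-output regression tree predicting (x, y, z) from the 4 ranges
tree = DecisionTreeRegressor(max_depth=10).fit(ranges[:400], true_pos[:400])
pred = tree.predict(ranges[400:])
err = np.linalg.norm(pred - true_pos[400:], axis=1).mean()
print("mean 3D error (m):", round(err, 2))
```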
Extra-Trees Extra-Trees, or extremely randomized trees, is an ensemble learning approach. This approach creates a collection of decision trees [65]. When constructing each tree, the decision rule is chosen at random. This technique and the random forest are quite similar, except for the randomly selected split values. In order to produce an ensemble of unpruned decision or regression trees, the Extra-Trees methodology uses the usual top-down construction process [66]. It splits nodes by choosing cut-points randomly and builds the trees using the whole learning sample, both of which are key differences from earlier tree-based ensemble techniques. In addition, the Extra-Trees splitting method can be used for numerical features. This method has two parameters: the minimum sample size for splitting a node and the number of randomly selected attributes for each node. These parameters are repeatedly combined with the initial learning sample to create an ensemble model [67]. In classification problems, the final prediction is produced by the majority vote, whereas in regression problems, it is produced by the arithmetic average.

Random Forest Using decision tree methods, a supervised ML methodology known as the random forest has been developed [68]. This strategy is used to anticipate behaviors and outcomes in a wide range of industries, including banking and e-commerce [69]. The method employs ensemble learning, which combines several classifiers to tackle challenging problems. There are numerous decision trees in a random forest. The "forest" of the random forest algorithm is trained using bagging, or bootstrap aggregation [70]. Bagging is an ensemble meta-algorithm that improves the precision of ML systems. Based on the predictions provided by the decision trees, the algorithm chooses the result, generating predictions by majority-voting or averaging the output of multiple trees. As the number of trees increases, the output's accuracy increases, while overfitting of the dataset is decreased. This method produces predictions without needing a complex set of package settings. Compared to the decision tree technique, it is more accurate and handles missing data well [71]. Without the requirement for hyper-parameter tuning, this method can deliver a decent forecast. Moreover, the random forest addresses the decision tree generalization issue. Each random forest tree selects a sample of features at random at the node's splitting point. Before training, three critical hyper-parameters for random forest algorithms must be set [72]. Among these are node size, tree count, and sampled feature count. The random forest classifier may then be used to tackle classification or regression problems.
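The following sketch contrasts the Extra-Trees and random forest ensembles described above on the same kind of synthetic ranging data used for the single decision tree. The dataset, the hyper-parameters (tree count, minimum samples per split), and the error metric are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor

rng = np.random.default_rng(2)
anchors = np.array([[0, 0, 0], [10, 0, 3], [0, 10, 3], [10, 10, 0]], dtype=float)
pos = rng.uniform([0, 0, 0], [10, 10, 3], size=(1000, 3))
ranges = np.linalg.norm(pos[:, None, :] - anchors[None], axis=2)
ranges += rng.normal(0, 0.2, ranges.shape)
X_tr, y_tr, X_te, y_te = ranges[:800], pos[:800], ranges[800:], pos[800:]

models = {
    # Extra-Trees: split thresholds chosen at random, whole sample used per tree
    "Extra-Trees": ExtraTreesRegressor(n_estimators=100, min_samples_split=4,
                                       random_state=0),
    # Random forest: bootstrap samples plus random feature subsets at each split
    "Random forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    err = np.linalg.norm(m.predict(X_te) - y_te, axis=1).mean()
    print(f"{name}: mean 3D error = {err:.2f} m")
```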
Neural Network (NN) A neural network is a circuit or network of artificial neurons or nodes, forming an artificial neural network (ANN) [73]. ANNs are also used to address difficult problems in AI [74]. ANNs are a collection of algorithms that use a technique inspired by the way the human brain functions to find undiscovered connections in a batch of data. In order to uncover relationships among huge amounts of data, neural networks simulate the functioning of a brain [75]. As a result, they are frequently described as mimicking the synapses and neural connections seen in the brain. Applications in the financial sector include forecasting, market research, fraud detection, and risk assessment. Deep learning methods use neural networks with multiple processing layers, sometimes referred to as "deep" networks [76]. To produce the final output (last layer), the input data (first layer) is processed through a hidden layer (middle layer); it is possible to have several hidden layers. Figure 4 [77] shows a simplified view of a feed-forward artificial neural network.

Feed-Forward Neural Network (FFNN) A feed-forward neural network (FFNN) is a kind of artificial neural network in which there are no cycles in the connections between the nodes [78]. Numerous linked neurons make up an ANN. Each neuron receives a set of floating-point numbers and multiplies them by a set of weights, which are also floating-point numbers [79]. The weights serve as a method to emphasize or disregard particular inputs.

Table 2 shows the summary, along with the advantages and disadvantages, of the various studies above wherein ML algorithms are used for UAV indoor localization. Among the advantages and disadvantages listed there: for the decision tree, missing data in the collection do not have any impact, but a small change in the data can significantly alter the structure of the tree, calculations can eventually get very complicated, training the model takes longer, and the method is expensive; for the Extra-Trees, both continuous and categorical variables can be monitored, imputing the missing data is not essential, and less coding and comparison are needed during pre-processing, but it takes up memory, takes more time, a small modification in the data can significantly impact the tree structure, and the space and time complexity is higher; further entries note the extra time needed for convergence and the vanishing and exploding gradient issue.

Performance Evaluation Metrics for ML Models This section briefly describes the performance evaluation metrics for machine learning models in indoor UAV localization systems [80]. The performance evaluation metrics of machine learning are divided into regression and classification metrics, as described below. Error localization refers to the process of (automatically) identifying the fields in an edit-failed record that need to be imputed. To ensure the final (corrected) record will not fail the edit, a minimum set of fields is imputed using an optimization algorithm [81].
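To make the feed-forward description concrete (inputs multiplied by weights and passed through hidden layers to an output layer, as in Figure 4), here is a minimal numpy forward pass. The layer sizes, the activation choice, and the random weights are assumptions for illustration only, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """One forward pass through a feed-forward network: each layer multiplies
    its input by a weight matrix, adds a bias, and applies an activation."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]          # linear output layer (x, y, z)

# Assumed architecture: 4 range inputs -> 16 -> 16 hidden units -> 3 outputs
sizes = [4, 16, 16, 3]
weights = [rng.normal(0, 0.3, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

x = np.array([5.2, 7.1, 6.4, 8.0])               # e.g. four anchor ranges (m)
print("untrained network output:", forward(x, weights, biases).round(2))
```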
Mean Squared Error The mean squared error, or MSE, measures the accuracy of statistical models. It assesses the average squared difference between the anticipated and observed values. When a model is error-free, the MSE is equal to 0, and its value increases as model inaccuracy grows. The mean squared error is also known as the mean squared deviation (MSD). The MSE, or the second moment of the error, includes both the variance of the estimator, which shows how widely spread estimates are from one data sample to the next, and the squared difference between the average estimate and the true value. The units of the MSE are the square of the units of the value being assessed [82].

Root Mean Square Error The root mean square error (RMSE) is the standard deviation of the residuals. The RMSE is a measure of how spread out the residuals are, that is, of how distant the data points are from the regression line; it reflects the typical distance between measured true values and forecasts. To determine the RMSE, compute the difference between the prediction and the actual value for each data point, square these residuals, take the mean of the squared residuals, and take the square root of that mean. It is very beneficial to have a single number when evaluating a model's performance in machine learning, whether at training, cross-validation, or monitoring after deployment, and the root mean square error is one of the most used metrics for this [83].

R-Squared The R-squared statistic indicates how much variation in a dependent variable is described by one or several independent variables within a regression model. In the context of investing, R-squared is commonly regarded as the proportion of a fund or security's movements that can be explained by changes in a benchmark index. An R-squared of 100% means that changes in the index accurately describe all changes in the security [84].

Accuracy In science and engineering, accuracy refers to the degree to which the measurements of a quantity are near the actual value of that quantity. Accuracy is thus the difference between a set of measurements (observations) and the actual value, and as a classification metric it takes a value between 0 and 1. The accuracy measure falls short when working with unbalanced data and with models that provide a probability score. Inadequate accuracy results in a discrepancy between the result and the real value; high precision and trueness are required for greater accuracy [85].

Precision and Recall Precision is the degree to which the measurements of corresponding objects agree with one another. The degree to which repeated data points under the same conditions yield the same findings represents the precision of a measuring system, which is linked to reproducibility and repeatability. Accuracy is not necessary for precision: observations can be accurate without being precise, and they can be highly precise but not remarkably accurate. The highest-caliber scientific observations are both precise and accurate [86].

F1-Score The F1-score metric employs a combination of precision and recall. In other words, the F1-score is the harmonic mean of the precision and recall values [87].

Confusion Matrix A confusion matrix is a tabular visualization of the ground-truth labels versus the model predictions. Each row of the confusion matrix describes the instances in a predicted class and each column describes the instances in an actual class [88].
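A compact sketch of the regression and classification metrics defined above (and of the AUROC discussed next), computed with scikit-learn on made-up predictions; the example values are arbitrary and only the metric computations are the point.

```python
import numpy as np
from sklearn.metrics import (mean_squared_error, r2_score, accuracy_score,
                             precision_score, recall_score, f1_score,
                             confusion_matrix, roc_auc_score)

# Regression metrics (e.g. predicted vs. true coordinates along one axis)
y_true = np.array([1.0, 2.5, 3.0, 4.2])
y_pred = np.array([1.1, 2.3, 3.4, 4.0])
mse = mean_squared_error(y_true, y_pred)
print("MSE:", mse, "RMSE:", np.sqrt(mse), "R^2:", r2_score(y_true, y_pred))

# Classification metrics (e.g. predicted room labels) plus scores for the AUROC
c_true = np.array([0, 0, 1, 1, 1, 0])
c_pred = np.array([0, 1, 1, 1, 0, 0])
c_score = np.array([0.2, 0.6, 0.8, 0.9, 0.4, 0.1])   # predicted probability of class 1
print("accuracy:", accuracy_score(c_true, c_pred),
      "precision:", precision_score(c_true, c_pred),
      "recall:", recall_score(c_true, c_pred),
      "F1:", f1_score(c_true, c_pred))
print("confusion matrix:\n", confusion_matrix(c_true, c_pred))
print("AUROC:", roc_auc_score(c_true, c_score))
```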
AUROC (Area under the Receiver Operating Characteristic Curve) The AUROC is a combined measure of sensitivity and specificity. It is a measure of the overall performance of a diagnostic test and is interpreted as the average sensitivity value over all possible specificity values [89]. Table 3 describes the performance evaluation metrics for machine learning models.

Localization algorithms can be classified as range-free or range-based. In range-based localization, TDOA, TOA, TOF, AOA, RSSI, and CSI are used as the distance measurement technologies [90][91][92]. Though TOA, TDOA, and AOA provide high accuracy, they require complex hardware arrangements, while CSI and RSSI require a simplified hardware setup with good accuracy [93]. In addition, the performance may suffer from strong multipath and NLOS propagation in urban scenarios. Furthermore, RSSI has been recommended for many localization systems [94]. Proposed UAV indoor localization algorithms can also be categorized as deterministic, probabilistic, filter-based, or ML-based algorithms [95,96]. Proposed location estimation algorithms based on filters are highly mathematical and mostly impractical to implement on real hardware devices. Related works show that ML-based algorithms provide a promising localization performance in terms of localization accuracy [97]. Moreover, ML/DL-based systems are easy to deploy on edge devices with clouds. Furthermore, for the massive data volumes of multi-story buildings, the deep neural network (DNN) emerges as a modern approach [98,99]. A DNN can function well with fewer training data dimensions and can extract more useful features from subsequent samples. Statistical and empirical methodologies will provide extremely useful guidance on the various techniques for indoor location studies [100]. Localization strategies will face challenges with protocols, delays, and radio waves. Additionally, the performance of the algorithms and the variety of applications both affect accuracy. The research trend towards location-aware computing and navigation path prediction is still reaffirmed [101,102].

Wireless Technologies and Hardware Design for Future UAV Localization Systems Designing the hardware setups for ML-algorithm-based UAV localization systems for disaster management applications is critical [103]. The system's wireless technology should be carefully decided considering the sensing range and power consumption, among other factors [104]. Wireless technologies such as UWB, LTE, Bluetooth, LoRa, or ZigBee can be proposed along with ML approaches [105,106]. Since the wide bandwidth makes detecting the time-delayed versions of the transmitted signal easier, UWB achieves excellent multipath resistance and good material penetrability [107]. Since the introduction of the Bluetooth 4.0 standard protocol, Bluetooth, another wireless technology standard for sharing data over short distances, has grown in popularity. A version of Bluetooth called Bluetooth low energy (BLE) is designed for low-power applications and enables some applications to run continuously for several months [108]. However, BLE is not recommended for UAV localization due to its short-range communication. Further, when deploying many nodes in the network, multipath, channel fading, and co-channel interference may occur [108]. Therefore, applying co-channel interference mitigation techniques is essential [109].
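Returning to the range-based localization idea summarized above, the following least-squares trilateration sketch estimates a 2D position from distances to known anchors. The anchor coordinates and noise level are assumptions, and scipy's generic least-squares optimizer stands in for whatever estimator a real system would use.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed anchor positions (m) and a true UAV position used to simulate ranges
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 6.5])
rng = np.random.default_rng(4)
measured = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.1, 4)

def residuals(p):
    # Difference between predicted and measured anchor distances
    return np.linalg.norm(anchors - p, axis=1) - measured

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("estimated position:", est.round(2),
      "error (m):", np.linalg.norm(est - true_pos).round(3))
```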
Since the nodes are deployed in a harsh environment, power management is critical in these hardware systems when applying ML approaches [110,111]. This ensures that the localization system with a deep reinforcement learning approach continues to function normally even if some measuring units are destroyed, malfunction due to the harsh environment, or run out of energy [112]. A placement technique with high robustness could work even when some signals are not accessible; positioning approaches must therefore be able to employ such incomplete information. To prolong the battery life, power management techniques such as deep sleep modes could be applied [113].

Signal Conditioning Techniques for Future UAV Localization Systems Ranging measurements, including RSSI, fluctuate strongly due to the multipath environment [114]. Therefore, strong signal processing techniques should be applied before using ML algorithms for training [115]. Related works have proposed many linear and nonlinear filters, such as moving average, Gaussian, particle, and Kalman filters, for smoothing the signals [116]. Filters similar to the moving average are easy to implement on hardware setups; the Kalman filter, however, can be impractical to implement due to its complexity [117]. A cost function has also been proposed for the linearized maximum likelihood algorithm with a mean square error [118].

Privacy Concerns and Security in Future UAV Localization Systems In some applications, UAV operation calls for recording user preferences, activity history, present location, and prior movements [119]. The development, adoption, and expansion of UAV applications may be severely constrained by the risks connected with the violation of location privacy when applying ML approaches [120]. Additionally, UAVs require that the user reveal their location to enable personalization [121]. Location data may be kept, used, and sold by service providers. These possible hazards may deter users. Unrestricted access to a person's location information could result in dangerous interactions.

Network attacks are a high threat in UAV localization systems [122]. Even if secure communication is carried out between anchor nodes or UAVs and user devices, attacks may still be possible when users' devices communicate with service servers [123]. The attacker will be able to determine the UAV location information if the attacker intercepts the communication between the user and the service server. Therefore, the service provider should implement protection against man-in-the-middle (MITM) attacks [124]. MITM protection measures use mutual authentication methods such as public key infrastructures (PKI) or stronger mutual authentication based on secret keys or passwords [125].
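A minimal sketch of the moving-average smoothing mentioned above for noisy ranging signals such as RSSI; the window length and the simulated signal are illustrative assumptions, and more elaborate filters (Gaussian, particle, Kalman) would replace this simple smoother in practice.

```python
import numpy as np

def moving_average(x, window=5):
    """Simple moving average: each output is the mean of `window` consecutive
    samples, a cheap smoother suitable for embedded hardware."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(5)
true_rssi = -65.0 + 2.0 * np.sin(np.linspace(0, 4 * np.pi, 200))   # slow variation
noisy = true_rssi + rng.normal(0, 3.0, true_rssi.size)              # multipath-like noise
smoothed = moving_average(noisy, window=9)

# Compare against the (roughly centre-aligned) true signal
rms_before = np.sqrt(np.mean((noisy - true_rssi) ** 2))
rms_after = np.sqrt(np.mean((smoothed - true_rssi[4:196]) ** 2))
print("RMS error before: %.2f dB, after smoothing: %.2f dB" % (rms_before, rms_after))
```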
Conclusions The detailed description of radio-wave signals for indoor positioning, based on widely used technologies and efficient positioning techniques, was reviewed in this study. The introduction also outlined how non-radio-wave transmissions behave. As previously indicated, the current positioning algorithms have to deal with the inaccurate positioning problem brought on by the signal variation caused by multipath propagation, the hardware and software complexity, real-time processing, and the dynamically changing environment. Several methods for the localization of mini-UAVs in indoor environments are based on ML approaches, where each technology comes with its own unique set of limitations. Compared to the traditional localization algorithms proposed, ML algorithms are more accurate, less complex, and easier to deploy on real hardware devices (edge computing). Consequently, choosing a suitable remedy for each particular circumstance is the best course of action. In certain ways, developing low-cost positioning technologies to improve positioning accuracy is severely constrained due to the high expense of high-precision indoor positioning technology, which necessitates additional auxiliary equipment or a great deal of simulated processing. Localization strategies will face challenges with protocols, delays, and radio waves. Additionally, the performance of the algorithms and the variety of applications both affect accuracy. The research trend towards location-aware computing and navigation path prediction is still reaffirmed. Integrating and using various technologies will be the future trend to achieve complementary advantages. Furthermore, when deploying many nodes in the network, multipath, channel fading, and co-channel interference may occur; therefore, applying co-channel interference mitigation techniques is essential. In general, the less expensive the technology, the less accurate it is, but the easier it is to popularize. Signal core features are extracted from signals using classifier algorithms in localization. ML for signal processing is a promising technology that can be used to detect the position of a UAV. Integration of a range finder could significantly improve overall performance in indoor environments.

Figure 2. UAV flying in a motion capture studio with a tether dragged down by gravity [41]. Figure 4. A simplified view of a feed-forward artificial neural network [77]. Table 1. Summary of existing studies on indoor localization. Table 2. ML algorithms used for UAV indoor localization. Table 3. Performance evaluation of machine learning models.
TeV scale leptogenesis via dark sector scatterings We propose a novel scenario of generating lepton asymmetry via annihilation and coannihilation of dark sector particles including t-channel processes. In order to realistically implement this idea, we consider the scotogenic model having three right handed neutrinos and a new scalar doublet, all of which are odd under an in-built Z_2 symmetry. The lightest Z_2 odd particle, if electromagnetically neutral, can be a dark matter candidate, while annihilation and coannihilation of different Z_2 odd particles into standard model leptons serve as the source of lepton asymmetry. The light neutrino masses arise at one loop level with Z_2 odd fields going inside the loop. We show that experimental data related to light neutrinos, dark matter relic abundance and baryon asymmetry can be simultaneously satisfied in the model for two different cases: one with fermion dark matter and the other with scalar dark matter. In both scenarios, t-channel annihilation as well as coannihilation of Z_2 odd particles play a non-trivial role in producing the non-zero CP asymmetry. Both scenarios remain allowed by DM direct detection while keeping the scale of leptogenesis as low as a TeV or less, lower than the one for the vanilla leptogenesis scenario in the scotogenic model, along with the additional advantage of explaining the baryon-dark matter coincidence to some extent. Due to such a low scale, the model is testable through rare decay experiments looking for charged lepton flavour violation.

Introduction There has been significant progress in the last few decades in gathering evidence suggesting the presence of a mysterious, non-luminous form of matter, known as dark matter (DM), in the present universe, whose amount is approximately five times more than the ordinary luminous or baryonic matter density Ω_B ≈ 5% [1].
Among different beyond standard model (BSM) proposals for DM, the weakly interacting massive particle (WIMP) paradigm remains the most widely studied scenario, where a DM candidate, typically with electroweak (EW) scale mass and an interaction rate similar to EW interactions, can give rise to the correct DM relic abundance, a remarkable coincidence often referred to as the WIMP miracle. On the other hand, out of equilibrium decay of a heavy particle leading to the generation of baryon asymmetry has been a very well known mechanism for baryogenesis [2,3]. One interesting way to implement such a mechanism is leptogenesis [4], where a net leptonic asymmetry is generated first which gets converted into baryon asymmetry through B + L violating EW sphaleron transitions. The interesting feature of this scenario is that the required lepton asymmetry can be generated within the framework of the seesaw mechanism that explains the origin of tiny neutrino masses [5], another observed phenomenon which the SM fails to address. Although these popular scenarios can explain the phenomena of DM and baryon asymmetry independently, it is nevertheless an interesting observation that the DM and baryon abundances are very close to each other, within the same order of magnitude, Ω_DM ≈ 5 Ω_B. Discarding the possibility of any numerical coincidence, one is left with the task of constructing theories that can relate the origin of these two observed phenomena in a unified manner. There have been several proposals already, which mainly fall into two broad categories. In the first one, the usual mechanism for baryogenesis is extended to apply to the dark sector, which is also asymmetric [6][7][8][9]. The second one is to produce such asymmetries through annihilations [10][11][12], where one or more particles involved in the annihilations eventually go out of thermal equilibrium in order to generate a net asymmetry. The so-called WIMPy baryogenesis [13][14][15][16][17][18] belongs to this category, where a dark matter particle freezes out to generate its own relic abundance and then an asymmetry in the baryon sector is produced from DM annihilations.

While there is no evidence yet for the seesaw mechanism, recently the so-called scotogenic model [19], as an alternative to the canonical seesaw mechanism, has been extensively studied, where Majorana light neutrino masses can be generated at one loop level with the DM particle in the loop. In the scotogenic model, the required lepton asymmetry can be generated through right handed neutrino decays at a low scale M_N ∼ 10 TeV at the cost of a strongly hierarchical neutrino Yukawa structure [20,21], but it can not explain the coincidence of baryon asymmetry and DM abundance. An interesting question is then whether leptogenesis through (co-)annihilations of Z_2 odd particles can be realised while lowering the scale of leptogenesis further compared to vanilla leptogenesis in the scotogenic model. Answering this question is the main purpose of this work. We examine how the (co-)annihilations of Z_2 odd particles can produce the lepton asymmetry while keeping the correct DM abundance, and show that the DM relic abundance is correlated with the baryon asymmetry in this scenario. To include all possible annihilations producing lepton asymmetry, we consider the annihilations and coannihilations of all Z_2 odd particles instead of restricting them to the lightest Z_2 odd particle, which is also the DM candidate.
If we consider the neutral component of the Z_2 odd scalar doublet as DM, then there exist s-channel coannihilation diagrams between DM and the right handed neutrinos which can produce a net leptonic asymmetry. While the model satisfies correct DM abundance and lepton asymmetry, the DM sector can be probed at direct detection experiments as well as colliders due to the electroweak gauge interactions of the scalar doublet DM. We then consider the fermion DM scenario where the lightest right handed neutrino plays the role of DM. While the annihilation of a pair of the fermionic DM can not produce a net lepton asymmetry in this case, the annihilation of the Z_2 odd scalar doublets can contribute to the generation of lepton asymmetry. Due to the natural absence of typical s-channel diagrams of scalar doublet annihilations leading to lepton asymmetry, here we show how t-channel diagrams (both tree level and one loop level) can play a non-trivial role in creating the required asymmetry. As far as we know, the contributions of (co-)annihilations of dark sector particles to lepton asymmetry in this minimal model were not considered before. In both the scenarios we address here, the criteria for "on-shell"-ness of loop particles in one loop annihilation diagrams dictate the particle spectrum and hence the nature of the dark matter candidate. We show that it is possible to satisfy the requirements of baryon asymmetry, light neutrino mass and DM related constraints in both scenarios while keeping the scale of leptogenesis as low as 5 TeV, lower than the scale of vanilla leptogenesis in the same model [21,22]. Due to such a low scale, the model has the additional advantage of predicting observable rates of charged lepton flavour violation accessible by the sensitivity of future experiments. This paper is arranged as follows. In Sect. 2, we briefly review the minimal scotogenic model, followed by a detailed discussion of leptogenesis from annihilation and coannihilations in this model in Sect. 3. We finally conclude in Sect. 4.

Minimal scotogenic model The minimal scotogenic model [19] is the extension of the SM by three copies of right handed singlet neutrinos N_i, i ∈ 1, 2, 3 and one scalar field η transforming as a doublet under SU(2)_L. An additional discrete symmetry Z_2 is incorporated, under which these new fields are odd, giving rise to the possibility of the lightest Z_2-odd particle being a suitable DM candidate. The Lagrangian involving the newly added singlet fermions is
$$ \mathcal{L} \supset \frac{1}{2} M_i \overline{N_i^c} N_i + y_{\alpha i}\, \bar{L}_\alpha \tilde{\eta} N_i + \text{h.c.} $$
The electroweak symmetry breaking occurs due to the nonzero vacuum expectation value (VEV) acquired by the neutral component of the SM Higgs doublet, while the Z_2-odd doublet η does not acquire any VEV. After the EWSB these two scalar doublets can be written in the following form in the unitary gauge,
$$ H = \begin{pmatrix} 0 \\ \frac{v+h}{\sqrt{2}} \end{pmatrix}, \qquad \eta = \begin{pmatrix} \eta^{+} \\ \frac{\eta_R + i\,\eta_I}{\sqrt{2}} \end{pmatrix}. $$
The scalar potential of the model is
$$ V = \mu_H^2 |H|^2 + \mu_\eta^2 |\eta|^2 + \lambda_1 |H|^4 + \lambda_2 |\eta|^4 + \lambda_3 |H|^2 |\eta|^2 + \lambda_4 |H^\dagger \eta|^2 + \frac{\lambda_5}{2}\left[ (H^\dagger \eta)^2 + \text{h.c.} \right]. $$
The masses of the physical scalars at tree level can be written as
$$ m_h^2 = 2\lambda_1 v^2, \quad m_{\eta_R}^2 = \mu_\eta^2 + \tfrac{1}{2}(\lambda_3+\lambda_4+\lambda_5)v^2, \quad m_{\eta_I}^2 = \mu_\eta^2 + \tfrac{1}{2}(\lambda_3+\lambda_4-\lambda_5)v^2, \quad m_{\eta^\pm}^2 = \mu_\eta^2 + \tfrac{1}{2}\lambda_3 v^2. $$
Here m_h, m_{η_R}, and m_{η_I} are the masses of the SM like Higgs boson and of the CP even and CP odd scalars from the inert doublet, respectively, and m_{η±} is the mass of the charged scalar. Without any loss of generality, we consider λ_5 < 0, λ_4 + λ_5 < 0 so that the CP even scalar is the lightest Z_2 odd particle and hence a stable dark matter candidate.
Denoting the squared physical masses of the neutral scalar and pseudo-scalar parts of η as m²_{R,I} = m²_{η_R, η_I} and the mass of the right handed neutrino N_k in the internal line as M_k, the one loop neutrino mass can be estimated as [19]
$$ (M_\nu)_{ij} = \sum_k \frac{y_{ik}\, y_{jk}\, M_k}{32\pi^2} \left[ L_k(m_R^2) - L_k(m_I^2) \right], $$
where M_k is the mass eigenvalue of the right handed neutrino mass eigenstate N_k in the internal line and the indices i, j = 1, 2, 3 run over the three neutrino generations. The function L_k(m²) is defined as
$$ L_k(m^2) = \frac{m^2}{m^2 - M_k^2} \ln\frac{m^2}{M_k^2}. $$
From the physical scalar masses given above, we note that m²_{η_R} − m²_{η_I} = λ_5 v². In this model, for the neutrino mass to match the experimentally observed limits (∼ 0.1 eV), Yukawa couplings of order 10⁻³ are required if M_k is as low as 1 TeV and the mass difference between η_R and η_I is kept around 1 GeV. Such a small mass splitting between η_R and η_I corresponds to a small quartic coupling λ_5 ∼ 10⁻⁴. Thus, one can suitably choose the Yukawa couplings, the quartic coupling λ_5 and M_k in order to arrive at sub-eV light neutrino masses. To be in exact agreement with light neutrino masses, we first rewrite the neutrino mass given above in Eq. (5) in the form of a type I seesaw formula, where we have introduced the diagonal matrix M̃ with appropriately defined elements. The light neutrino mass matrix (7) is diagonalised by the usual Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix U, which is determined from the neutrino oscillation data (up to the Majorana phases). Then the Yukawa coupling matrix satisfying the neutrino data can be written in terms of an arbitrary complex orthogonal matrix O; this is the equivalent of the Casas-Ibarra parametrisation [23] for the scotogenic model [24].

Leptogenesis from annihilations In the minimal scotogenic model discussed in the previous section, there are different types of annihilation processes which violate lepton number. They are, namely, 1. the annihilation process of the scalar doublet η: ηη → L_α L_β; 2. the coannihilation process of the scalar doublet and one of the singlet fermions. Interestingly, if we put the additional constraint that such lepton number violating annihilations and coannihilations also generate a non-zero CP asymmetry, they lead to two different DM possibilities, namely, 1. the lightest neutral component of the inert scalar doublet η as DM, 2. the lightest right handed neutrino N as DM. The Boltzmann equation for the leptonic asymmetry is given as follows, where z = M_DM/T, M_PL is the Planck mass and Y = n/s denotes the comoving number density, the ratio of number density to entropy density. The details of the derivation of this Boltzmann equation, as well as the relevant equations for DM, are presented in appendix A. In the above equation, ε_ηη and ε_{N_iη} will be given appropriately later and ε_{N_i} is taken from [21]. M_DM in the definition of z is the mass of the Z_2 odd particle, taken appropriately depending on the scenario mentioned above. Here K_n is the nth order modified Bessel function of the second kind. The details of the Boltzmann equations are given in appendix A. We will now discuss this general framework in the context of the two specific scenarios of DM mentioned above in the upcoming subsections. The above Boltzmann equation contains the next-to-leading order (NLO) contributions to the lepton asymmetry. For usual type I seesaw leptogenesis, such NLO effects have been calculated already, for example, see [25] and references therein. It is therefore necessary to include or consider all the diagrams which can either contribute to the lepton asymmetry or to washout at the same order of couplings.
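As a rough numerical cross-check of the standard scotogenic one-loop mass formula quoted in Sect. 2, the sketch below evaluates the loop function and the resulting single-generation mass scale. The benchmark numbers (Yukawa coupling, TeV-scale masses, and λ_5-induced splitting) are assumptions chosen only for an order-of-magnitude illustration and are not the paper's benchmark points.

```python
import numpy as np

def L_loop(m2, Mk2):
    """Loop function L_k(m^2) = m^2/(m^2 - M_k^2) * ln(m^2/M_k^2)."""
    return m2 / (m2 - Mk2) * np.log(m2 / Mk2)

def mnu_one_loop(y, Mk, mR2, mI2):
    """Single-generation estimate of the scotogenic one-loop neutrino mass,
    m_nu ~ y^2 * M_k / (32 pi^2) * [L_k(m_R^2) - L_k(m_I^2)], masses in GeV."""
    Mk2 = Mk ** 2
    return y ** 2 * Mk / (32 * np.pi ** 2) * (L_loop(mR2, Mk2) - L_loop(mI2, Mk2))

# Assumed benchmark: TeV-scale masses, lambda_5 ~ 1e-4 so that m_R^2 - m_I^2 = lambda_5 v^2
y, Mk, m0, lam5, v = 1e-3, 1000.0, 1000.0, 1e-4, 246.0
mR2 = m0 ** 2 + 0.5 * lam5 * v ** 2
mI2 = m0 ** 2 - 0.5 * lam5 * v ** 2
print("m_nu ~ %.1e eV" % (mnu_one_loop(y, Mk, mR2, mI2) * 1e9))
```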
We first show only the annihilations of dark sector particles into SM ones in Fig. 1 for the case of scalar doublet dark matter (Fig. 1: Feynman diagrams contributing to σv_{DM DM → X L} and to the asymmetry at leading order, with X ≡ γ, W±, Z). We do not present the usual, well-known two body decay diagrams (both tree and one loop) for simplicity. Based on the diagrams shown in Fig. 1, one can construct several other diagrams just by interchanging initial and final state particles. For example, it is straightforward to consider the three body decay diagram of N_i into η, L, X with X ≡ γ, W±, Z. This diagram, which contributes to the asymmetry at order O(y⁴g²), will be suppressed due to not only phase space but also an additional O(g²) suppression. Now, although the 3-body decay and the co-annihilation are of the same order, their contributions to the Boltzmann evolution are different, as they enter the Boltzmann equation differently. The 3-body decay enters the equation as an addition to the 2-body decay, whose contribution dominates. As will be shown later, the contribution of coannihilation to the lepton asymmetry may dominate over that from the decay process. This is due to the fact that the imaginary part of the interference is not suppressed, as after cutting the loop diagrams it becomes two 2 → 2 processes (referring to the diagrams in the second line of Fig. 2, leading to the interference at order O(y⁴g²)), both mediated by leptons, whereas in the case of the decay the cut loop diagram becomes a 1 → 2 and a 2 → 2 process, in which the 2 → 2 part is a t-channel process mediated by the right-handed neutrino. We present all such 1-loop diagrams contributing to the asymmetry, arising from the interference at order O(y⁴g²) and O(y⁶), in Fig. 2. Similarly, there exist several washout processes that can be constructed by swapping initial and final state particles. This is true for fermion dark matter as well, which we discuss in one of the upcoming sections. To point out the significance of such processes at the same NLO, we therefore make a comparison of different washout processes (both inverse decay and scatterings) and show them in Fig. 3 (lightest scalar as dark matter) and in Fig. 6 (lightest fermion as dark matter).

Scalar doublet η as dark matter In this scenario the only way to get the asymmetry is through coannihilations. Pure scalar annihilations give rise to vanishing leptonic asymmetry if η is the lightest Z_2 odd particle, as required for it to be the DM candidate. This is particularly due to the fact that the "on-shell" criteria of the loop particles can not be realised in such a case, resulting in a vanishing CP asymmetry, as we discuss below. The relevant coannihilation processes, both at tree level and at one loop level, are shown in Figs. 1 and 2. One may notice that the one loop self-energy diagrams, arising from the lepton propagator, do not contribute to the lepton asymmetry because the processes occur before electroweak symmetry breaking and thus the lepton mass is zero, giving rise to a vanishing self-energy loop contribution. Thus, only the interference between the tree level diagram and the one loop vertex correction can give rise to the CP asymmetry. For this scenario the Boltzmann equations for the Z_2 odd particles take the following form: The CP asymmetry arising from the interference between the tree and 1-loop diagrams in Figs. 1 and 2 can be estimated as follows, with the details of the asymmetry shown in appendix B.
Although the above expression is an s-wave approximation of the full expression shown in appendix B, we have used the full expression in our analysis. It should be noted that in the above expression always 1 ≤ x_j ≤ x_i, where j stands for the N_j inside the loop while i stands for the N_i as one of the initial state particles, shown in Figs. 1 and 2. This is simply to realise the "on-shell"-ness of the loop particles in order to generate the required CP asymmetry. There are several wash-out processes in this scenario, categorised as follows: • ΔL = 2: Lη → Lη and ηη → LL are purely wash-out processes. • ΔL = 1: there are two main sources of such wash-out. We have taken them into account in our numerical calculations.

Adopting the Casas-Ibarra parametrisation given in Eq. (11), we see that the CP phases in U do not contribute to ε_{N_iη}, but the complex variables in the orthogonal matrix O can lead to a non-vanishing value of ε_{N_iη}. This is similar to leptogenesis from pure decay in this model [21] where, in the absence of flavour effects, the orthogonal matrix O played a crucial role. In general, this 3×3 orthogonal matrix O can be parametrised by three complex parameters of the type θ_αβ = θ^R_αβ + i θ^I_αβ, with θ^R_αβ ∈ [0, 2π] and θ^I_αβ ∈ R [26]. In general, the orthogonal matrix O for n flavours can be written as a product of nC2 rotation matrices describing rotations in the α − β plane. For example, taking α = 1, β = 2 we have
$$ O_{12} = \begin{pmatrix} \cos\theta_{12} & \sin\theta_{12} & 0 \\ -\sin\theta_{12} & \cos\theta_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix}. $$
The above asymmetry, evaluated with one such rotation at a time, takes a form in which the m_i are the light neutrino masses and the remaining quantities are those defined above in Eq. (5); a summation over the index j on the right hand side is implicit.

As an example, we have taken the benchmark values shown in Table 1 to compute the baryon asymmetry as well as the scalar DM relic abundance numerically. Here we consider η_I as the DM candidate (η_I ≡ DM, corresponding to a positive value of the quartic coupling λ_5), which is similar to the inert doublet model discussed extensively in the literature [19,28,29]. Typically there exist two distinct mass regions, M_DM ≤ 80 GeV and M_DM ≥ 500 GeV, where the correct relic abundance criteria can be satisfied. In both regions, depending on the mass differences m_η± − m_ηI and m_ηR − m_ηI, the coannihilations of η_I, η± and η_R, η_I can also contribute to the DM relic abundance [30,31]. As for the mixing angles in the PMNS matrix U, we took the best fit values obtained from the recent global fit analysis [32], shown in Table 2. To perform the numerical analysis, we implement the model in SARAH 4 [33] and extract the thermally averaged annihilation rates from micrOMEGAs 4.3 [34] to use while solving the Boltzmann equations above.

In Fig. 3, we plot the comoving number densities of all Z_2 odd particles taking part in generating the lepton asymmetry, along with the generated asymmetry ΔL, as functions of temperature. The left panel corresponds to the normal hierarchy (NH) of the neutrino mass spectrum and the right panel to the case of inverted hierarchy (IH). The horizontal solid black line labelled "ΔL observed" corresponds to the value of ΔL that is partially converted into the observed baryon asymmetry via the electroweak sphaleron processes, with the conversion factor c_sph = (8 N_f + 4 N_H)/(22 N_f + 13 N_H), where N_f and N_H are the number of fermion generations and Higgs doublets respectively [35]. While sphalerons violate B + L, they conserve B − L symmetry.
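To illustrate the complex-angle parametrisation of the orthogonal matrix O described above, the sketch below builds O as a product of three rotations with complex angles and checks the complex orthogonality condition O Oᵀ = 1; the specific angle values are arbitrary assumptions, not the paper's benchmark choices.

```python
import numpy as np

def rotation(alpha, beta, theta, n=3):
    """Rotation in the (alpha, beta) plane by a (possibly complex) angle theta."""
    R = np.eye(n, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    R[alpha, alpha] = R[beta, beta] = c
    R[alpha, beta], R[beta, alpha] = s, -s
    return R

# Arbitrary complex angles theta = theta_R + i*theta_I (illustrative values only)
th12, th13, th23 = 0.3 + 0.5j, 1.1 + 0.2j, 0.7 - 0.4j
O = rotation(0, 1, th12) @ rotation(0, 2, th13) @ rotation(1, 2, th23)

# O O^T = 1 holds even for complex angles, although O is then not unitary
print("O O^T = 1:", np.allclose(O @ O.T, np.eye(3)))
print("max |O O^dagger - 1| =", np.abs(O @ O.conj().T - np.eye(3)).max().round(3))
```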
The sphaleron processes are effective with a thermal rate Γ_sph ∼ (α_2 T)⁴, where α_2 is the SU(2) gauge coupling constant, at high temperature until the EW phase transition, while they are exponentially suppressed due to the finite gauge boson masses after EW gauge symmetry breaking. The shaded regions in the panels correspond to the temperature below which the sphaleron processes become inoperative (T ≲ 200 GeV [36]). The benchmark parameters are chosen in such a way that the lepton asymmetry ΔL generated by the epoch of sphaleron freeze-out is sufficient to produce the observed baryon asymmetry. The dashed horizontal black line corresponds to the observed DM relic abundance in the present universe [1]. As can be seen from this plot, the lepton asymmetry grows as the temperature cools down due to the contributions from the co-annihilation diagrams. While the lepton asymmetry gets converted into the baryon asymmetry at the EW phase transition temperature, it takes a while for the DM to freeze out. Since the non-zero CP asymmetry arises from coannihilations between η and the heavier right-handed neutrinos N_{2,3}, the lepton number generating processes freeze out much earlier than the DM self-annihilations. This makes sure that a net lepton asymmetry is created. (Table 3 caption: The range of λ_5 allowed by the phenomenological requirements of satisfying direct detection bounds and generating the required lepton asymmetry.) The allowed values of λ_5 for the benchmark of Table 1 are set by two constraints. One is the mass difference between the neutral scalar and pseudo-scalar (Δ = m_ηR − m_ηI), which needs to be more than approximately 100 keV in order to avoid Z-mediated inelastic direct detection scattering of DM off nucleons, as we discuss below. The other constraint comes from the required leptonic asymmetry with maximal CP asymmetry. The range is shown in Table 3. Since the DM in this scenario has electroweak gauge interactions, we include DM direct detection constraints arising from the tree-level Z-boson-mediated process η_R n → η_I n, n being a nucleon (shown in Fig. 4). The cross section for the Z-mediated inelastic process is given in [38]. We note that one can forbid such scattering if δ = m_ηR − m_ηI > 100 keV. Using the expressions for the physical masses above, this leads to a lower bound on the dimensionless quartic coupling λ_5. This lower limit on λ_5 becomes weaker for heavier DM masses. Now, if we avoid this bound, i.e., the mass difference between the scalar and pseudo-scalar dark matter is above 100 keV, then the direct detection is dominated by the first process shown in Fig. 4. Now, we move on to the discussion of fermion singlet DM, where the direct detection constraints are less severe due to its gauge singlet nature. This is the topic of our next subsection. Right-handed neutrino as dark matter If the lightest Z_2 odd particle is the lightest of the right-handed neutrinos (and hence the DM candidate), then the annihilation processes responsible for creating a non-zero lepton asymmetry are shown in Fig. 5. Once again, pure self-annihilation of DM cannot provide the asymmetry due to the absence of the "on-shell" condition for the loop particles. If the scalar doublet η is the next-to-lightest Z_2 odd particle, then the annihilation processes shown in Fig. 5 can produce the required lepton asymmetry. For this scenario the Boltzmann equations for the Z_2 odd particles take the following form. Now, in this scenario, along with the co-annihilation channels discussed earlier, the annihilation channels shown in Fig. 5 also contribute to the asymmetry.
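As a rough numerical illustration of the λ_5 lower bound discussed above for the scalar DM case, the sketch below assumes the standard inert-doublet mass relation m_ηR² − m_ηI² = λ_5 v² (an assumption, not the paper's exact expression); the 870 GeV value is only the benchmark mass scale quoted in the conclusions.

# Sketch: lower bound on lambda_5 from requiring the eta_R - eta_I splitting
# to exceed ~100 keV, assuming m_etaR^2 - m_etaI^2 = lambda_5 v^2 (assumption).
v = 246.0          # GeV, electroweak vacuum expectation value
delta_min = 1e-4   # GeV, i.e. 100 keV
m_dm = 870.0       # GeV, illustrative DM mass (benchmark scale from the text)

# delta ~ lambda_5 v^2 / (2 m_DM)  =>  lambda_5 > 2 m_DM delta / v^2
lambda5_min = 2.0 * m_dm * delta_min / v**2
print(f"lambda_5 >~ {lambda5_min:.1e}")   # ~ 2.9e-06 for these illustrative inputs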
The CP asymmetry coming from the interference of the tree diagrams (Fig. 5) and the loop diagrams (bottom two diagrams in Fig. 2), leading to O(y⁶), is given in Eq. (25). At this point one may notice that the asymmetry expression given in Eq. (B9) is much more complicated than the one given in Eq. (24), which is purely due to the presence of several coannihilating particles. As pointed out earlier, in this scenario we require at least one of the right-handed neutrinos to be lighter than the scalar doublet η, whose annihilations are responsible for creating the asymmetry. In our case we have considered only N_1 to be lighter than η and the rest to be heavier. Alternatively, if N_{2,3} are lighter than η, then their (co)annihilations can contribute more to the generation of the asymmetry. For our chosen benchmark points, as shown in Table 4, the contribution of N_{2,3} annihilations to the lepton asymmetry is sub-dominant compared to η annihilations as well as η − N_k (k = 2, 3) coannihilations. In fact, in the fermion DM scenario, both η − N_k (k = 2, 3) coannihilations (shown in Fig. 1) and η annihilations (shown in Fig. 5) can contribute to the lepton asymmetry. It is worthwhile to note that the η annihilations shown in Fig. 5 cannot contribute to the lepton asymmetry in the scalar DM scenario due to the absence of the "on-shell" condition for the loop particles. The washout effects in this scenario are categorised as follows: • ΔL = 2: there are two processes of this type, ηη → LL and Lη → Lη, where the former is also responsible for the source of the asymmetry while the latter is purely wash-out. • ΔL = 1: there are two main sources of such washout processes: 1. the inverse decays (N_k → Lη) and (η → L N_k), the second of which is purely wash-out and does not contribute to the asymmetry; 2. the inverse process of co-annihilation. In Fig. 6, we plot the predictions for the relic abundance of all Z_2 odd particles taking part in generating the lepton asymmetry, along with the generated asymmetry ΔL, as functions of temperature, for the fermion DM scenario. The upper left panel corresponds to the NH of neutrino masses, whereas the upper right panel corresponds to the IH of neutrino masses. In the lower panel, we compare the relative contributions to the lepton asymmetry from η − η annihilations (ΔL_η), from η − N_k coannihilations (ΔL_{Nη}), and from N_2 decay (ΔL_N), as functions of temperature. Similar to the scenario of scalar DM, here also the lepton asymmetry grows as the temperature cools, due to contributions from the annihilations and coannihilations, and saturates around the temperature where the processes responsible for creating the asymmetry tend to go out of equilibrium. Also, the lightest right-handed neutrino freezes out to give the required relic abundance for DM in the universe. The shaded regions in the panels correspond to the temperature below which the sphaleron processes become inoperative. The first bump in the curve for the lepton asymmetry shown in the upper panel plots of Fig. 6 arises from the coannihilation diagrams, while the decay contribution enters later and further increases the asymmetry at later stages. Here we notice that the mass of the lightest right-handed fermion is very close to those of its Z_2 odd scalar counterparts in order to enhance the coannihilations. This is evident from the bottom panel plot of Fig. 6, which shows that the co-annihilation between the lightest N and the Z_2 odd scalar components contributes dominantly to the lepton asymmetry. Along with that, we have an interesting feature of λ_5.
In this case we have seen that, for the particular parameter set shown in Table 5, the upper bound on λ_5 is set by the requirement of generating the leptonic asymmetry needed to explain the observed baryonic asymmetry in the Universe, and the lower bound is set by the lepton flavour violating process BR(μ → eγ) < 4.2 × 10⁻¹³ [39], as we discuss below. Since the dark matter candidate is a fermion singlet in this case, the parameter λ_5 is not constrained by dark matter direct detection. While there exist some washout effects for the coannihilating processes, there is not much washout for the annihilation processes, as those processes are already out of equilibrium when the wash-out becomes effective. In this case, large Yukawa couplings are required to achieve successful leptogenesis, which in turn leads to very small λ_5, yet this does not affect the fermion DM phenomenology much. The Yukawa couplings can be represented by the following matrix for the chosen benchmark point in this scenario. We then use the SPheno 3.1 [37] interface to check the constraints from flavour data. We particularly focus on three charged lepton flavour violating (LFV) decays, namely μ → eγ, μ → 3e, and μ → e (Ti) conversion, which have strong current limits as well as good future sensitivity [24]. The present bounds are: BR(μ → eγ) < 4.2 × 10⁻¹³ [39], BR(μ → 3e) < 1.0 × 10⁻¹² [40], CR(μ, Ti → e, Ti) < 4.3 × 10⁻¹² [41]. While the future sensitivity of the first two processes is around one order of magnitude below the present branching ratio limits, the μ to e conversion (Ti) sensitivity is expected to improve by six orders of magnitude [24], making it a highly promising test to confirm or rule out different TeV scale BSM scenarios. It should be noted that such charged LFV processes arise in the SM at one loop level and remain suppressed by the smallness of neutrino masses, far beyond the current and near-future experimental sensitivities. Therefore, any experimental observation of such processes is definitely a sign of BSM physics, like the one we are studying here. We show the predictions for LFV processes in our model in Fig. 7, also highlighting the second benchmark point (BP2) mentioned above. The corresponding contributions for the BP1 scenario remain far more suppressed due to the smallness of the corresponding Yukawa couplings. The scatter plot in Fig. 7 is obtained by varying only μ_η from 100 GeV to 1 TeV and fixing the other parameters to the values presented in Table 4. (Fig. 7 caption: Predictions for LFV processes for 10² GeV < M_DM < 10³ GeV. The two benchmark points are highlighted with red and blue coloured points. μ_η is varied from 100 GeV to 1 TeV, and the other parameters are taken as presented in Table 4.) It can be seen that some part of the parameter space, especially the region which generates the correct DM abundance, lies close to the current experimental limits. As mentioned earlier, this bound on BR(μ → eγ) decides the lower bound on the parameter λ_5 shown in Table 5. If λ_5 is lower than the one chosen in BP2, the corresponding Yukawa couplings will be larger (from the Casas-Ibarra parametrisation), enhancing the decay rate. For μ → eγ, the latest MEG 2016 limit [39] can already rule out several points. The promising future sensitivity of the μ to e conversion (Ti) will be able to probe most of the parameter space.
Conclusion We have proposed a scenario where baryogenesis via leptogenesis can be achieved through annihilations and coannihilations of particles belonging to a Z_2 odd sector, including t-channel processes. We have considered a popular model known as the scotogenic model to implement the idea, and addressed the possibility of explaining the coincidence of the DM abundance and the baryon asymmetry in the present universe along with non-zero neutrino masses. Pointing out two different possible scenarios corresponding to scalar and fermion DM respectively, we show the non-trivial role played by t-channel annihilation as well as coannihilation processes between different Z_2 odd particles. For the two benchmark points chosen in our work, we could obtain successful leptogenesis along with other requirements like the DM relic abundance, DM direct detection, and light neutrino masses for M_DM ∼ 870 GeV, whereas vanilla leptogenesis in the scotogenic model works for M_1 ≥ 10 TeV. Another interesting feature is the testability of the model at DM direct detection and rare decay experiments. Even though the particle spectrum is in the few O(100) GeV regime or above, away from the reach of current collider experiments, the model can still be tested at near-future runs of rare decay experiments looking for charged lepton flavour violation, such as μ → eγ, μ → 3e, μ to e conversion, etc. We highlight our interesting results by adopting benchmark points here and leave a detailed numerical analysis of this scenario to an upcoming work. For the scattering processes the thermally averaged reaction rate is given as an expression in which K_n is the nth-order modified Bessel function of the second kind, λ(a, b, c) = a² + b² + c² − 2ab − 2ac − 2bc, and s_min = Max{(m_{i1} + m_{i2})², (m_{f1} + m_{f2} + ⋯)²}. Now, we present the explicit expressions of the BEs for N_k and η in the case of scalar dark matter, and similarly the expressions of the BEs for N_k and η in the case of fermion dark matter. Finally, the expressions of the BEs for the lepton numbers follow, where the leptonic comoving number densities are defined accordingly. The rates γ_D(N_j → L_α η) correspond to the decay process considered in [21] and γ_s^eq(ηη → LL) to the processes shown in Fig. 5, whereas γ_s^eq(ηN_i → L SM) and γ_s^eq(N_i X → ηL) are shown in Fig. 1. The processes γ_s^eq(N_i X → ηL) and γ_s^eq(ηX → N_i L) are the same as in Fig. 1 with one of the initial-state particles interchanged with a final-state particle. Finally, the processes γ_s^eq(ηL → ηL) and γ_D(η → N_1 L) are shown in Fig. 8. There are no further processes contributing to the above Boltzmann equations. Now, taking the difference between Eq. (A8) and Eq. (A9) and keeping the asymmetry terms to leading order (i.e. neglecting Y_ΔL² relative to (Y_L^eq)² and Y_L^eq Y_ΔL), one obtains the final Boltzmann equation for the asymmetry, Eq. (12). One would notice that in Eqs. (A8), (A9) and (A10) the index j runs from 1 to 3 if the scalar is the dark matter, in which case the last decay term γ_D(η → N_1 L) = 0. But for the case of N_1 as the dark matter, j runs from 2 to 3 and the last decay term γ_D(η → N_1 L) ≠ 0. One may also notice that, from CPT invariance and unitarity, the terms proportional to N_i L and ηL cancel out exactly. Hence, the effects coming from γ_s^eq(N_i X → ηL) and γ_s^eq(ηX → N_i L) contribute only to the wash-out, and this contribution is suppressed compared to the inverse decay processes, as shown in Figs. 3 and 6. Appendix B: Details of the asymmetry In this section we give the details of the asymmetry shown in Eq. (B9).
We first start with the basic general expression giving the asymmetry, in which W corresponds to the wavefunction of the incoming and outgoing particles, the C's correspond to the couplings of the tree (C_0) and loop (C_1) diagrams, and the A's correspond to the rest of the respective amplitudes. Now, we start with the tree-level amplitudes, where the amplitude M_0 corresponds to the tree diagram shown in Fig. 1. The corresponding amplitudes for the loop correction are given in Eq. (B3). The cross term coming from the above two expressions then gives the following, where
8,390.2
2020-06-01T00:00:00.000
[ "Materials Science" ]
Using a Virtual Patient via an Artificial Intelligence Chatbot to Develop Dental Students’ Diagnostic Skills Knowing how to diagnose effectively and efficiently is a fundamental skill that a good dental professional should acquire. If students work through a greater number of clinical cases, they will improve their performance with patients. In this sense, virtual patients with artificial intelligence offer a controlled, stimulating, and safe environment for students. To assess student satisfaction after interaction with an artificially intelligent chatbot that recreates a virtual patient, a descriptive cross-sectional study was carried out in which a virtual patient was created with artificial intelligence in the form of a chatbot and presented to fourth and fifth year dental students. After several weeks interacting with the AI, they were given a survey to find out their assessment. A total of 193 students participated. A large majority of the students were satisfied with the interaction (mean 4.36); the fifth-year students rated the interaction better and showed higher satisfaction values. The students who reached a correct diagnosis rated this technology more positively. Our research suggests that the incorporation of this technology in dental curricula would be positively valued by students and would also ensure their training and adaptation to new technological developments. Introduction Diagnosis is the foundation on which all medical treatments are based. Making a correct, effective, and efficient diagnosis is a fundamental skill that dental students must acquire to be good practitioners. Diagnostic learning in the undergraduate curriculum can be effectively developed through repeated practice of clinical cases with subsequent feedback from faculty, as well as by encouraging self-evaluation to hold students accountable for their deficiencies [1,2]. During undergraduate training it is common to focus on elaborate clinical cases in which trainees must rely on several diagnostic tests before they can make their diagnostic judgment. But it has been questioned whether an extremely detailed anamnesis can be counterproductive if trainees get lost in irrelevant details [3]. In fact, authors such as Bordage [4] urge practice with more focused cases that are based on important discriminative symptoms, so that the student can practice with a larger number of clinical cases, a fundamental requirement for acquiring diagnostic competence [2]. In dental education, as a medical discipline, much of the students' professional development occurs when they begin to interact with patients [5], i.e., when they begin to develop interpersonal communication. However, sometimes patients with good cases, from a teaching point of view, are not available for all students, and this limits the possibilities of student interaction with a large number of cases [5]. This is why, in recent years, the use of simulation for the development of students' psychomotor skills has become standard in dental education, because it allows them to follow an appropriate learning curve in a more controlled and less stressful environment than a clinic [6].
Simulation in which interactions with patients are recreated, such as role-plays with teachers, with patient-instructors, or standardized patients, are already commonly used in dental schools [7] and are perceived by students as very positive because of their similarity to their professional practice [8] and also allow increasing the realistic self-assessment of the students [7]. In order to perform these simulations of personal interaction with a standardized patient, a high level of planning and training is required by the organizers [9], which could make it difficult to perform them regularly, as well as the appearance of variables that are not foreseen in the original script that can cause the simulation to fail. In this sense, virtual patients (VP) are part of the integration of new technologies in patient simulations and could favor a greater practice of clinical cases by students, printing knowledge more effectively [10], facilitating the planning of cases to teachers, and with less budget and infrastructure [11]. With the use of VPs, students can perform learning with greater self-nomination [12]; the learning of a strategic and self-reflective nature with the advantage of the ubiquity that is provided by technology [13]. It is, therefore, an excellent resource as a complement to interaction with real patients [14] when direct contact with the patient is not yet possible [10] due to a lack of preparation of the student or situations such as that which was caused by the COVID-19 pandemic, also allowing the recreation of unusual clinical cases in daily practice [15]. In general, VPs are usually well perceived by students because of all the advantages that were previously pointed out [16], but they are not free of limitations such as a disconnect between the available VP programs and the needs of educators [17], or that VPs are usually concentrated on a single pathology while in reality different pathologies can coexist at the same time [18]. Moreover, it should be taken into account that, according to different studies [19,20], students prefer certain features in VP design such as relevance, an adequate level of difficulty, feedback, high interactivity, and above all realism [16]. In this sense, artificial intelligence (AI), defined as that technology that uses machines to mimic intelligent human behavior [21], offers a range of possibilities in the development of VPs due to the ability of AI to allow a computer system to perform perceptual processes that are typical of a human being [22][23][24], offering more realism to the interaction with the VP, in addition to being part of the most promising areas of medicine [25]. In recent years, it has been observed how young people invest less time in learning and more in the use of their cell phone [26]. In this context, chatbots or conversational agents through an instant messaging service are presented in the literature as an application of the emerging field of AI [27] that could attract the attention of students and, therefore, be an interesting alternative in the development of VPs [6,28,29]. 
In relation to education, despite the fact that advances in clinical dentistry have been adapting to digital technological developments that integrate the areas of diagnosis and treatment [30,31], it is suggested that there is a need for more research at the academic level on the impact of the use of these digital technologies in clinical practice, with special attention to the ethical issues that may arise, as well as the need for dental educators to integrate them into the curriculum [31]. The integration of technology into dental education also makes it possible to implement improvements in patient safety, as it allows practice in scenarios in which the health of a real patient is not compromised [32]. In the specific field of dentistry, some works [6,32-34] investigate the use of VPs, but no studies were found that integrated VPs with AI. For all of the above, the creation and assessment of a VP through an AI chatbot for the development of diagnostic skills of pulp pathology in dental students was proposed as the objective of the present study. Materials and Methods The present descriptive cross-sectional study was approved by the research committee of the Universidad Europea de Madrid (CIPI/22.142). Participants Students in the 4th and 5th year of the degree in dentistry at the Universidad Europea de Madrid who were taking practical courses with patients participated in the study. All the students who wished to take part in the study had to sign an informed consent form in which they were informed about the study and were assured that their data would be treated anonymously. Sample Size With a total of 457 students of the 4th and 5th year of dentistry enrolled in the subjects with clinical practice at the Universidad Europea de Madrid, the formula shown in Figure 1 was applied to calculate the sample size. A confidence level of 95% and a margin of error of 6% were taken into account, and a minimum of 169 students was needed for the sample to be representative.
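The sample-size formula itself (referred to as Figure 1) is not reproduced in the text; as a sketch, the standard finite-population sample-size formula with an assumed proportion p = 0.5 reproduces the minimum of 169 students quoted above.

import math

def sample_size(population, z=1.96, margin=0.06, p=0.5):
    # n = N z^2 p(1-p) / (e^2 (N-1) + z^2 p(1-p))  -- finite-population formula (assumed)
    num = population * z**2 * p * (1 - p)
    den = margin**2 * (population - 1) + z**2 * p * (1 - p)
    return math.ceil(num / den)

print(sample_size(457))  # -> 169 for a 95% confidence level and a 6% margin of error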
Conceptualization To create the virtual patient that we called Julia, we chose to create a conversational chatbot with AI. To this end, a working group was created with two professors of dentistry from the Universidad Europea de Madrid to begin the conceptualization work and define everything that was necessary for Julia to present as a patient. In this study, it was decided that she suffered from reversible pulpitis. After an analysis of the literature [33,34], five main categories were defined for her to answer: anamnesis, description of the pain, relationship of the pain with stimuli, previous dental treatments, and intraoral exploration. In order to establish a dose of reality and to create more interest among the students, it was decided to create the chatbot using informal language and to let it answer some questions unrelated to the clinical case (Figure 2). Subsequently, work was done to create sub-categories in which the most frequently used expressions were included, with more informal linguistic variations, to which a response was associated in order to establish a flow of dialogue (Table 1). Chatbot Design The Dialogflow® application (Palo Alto, Santa Clara County, CA, USA) was used for the creation of the chatbot conversational flows through the use of intuitive artificial intelligence [35] capable of understanding the nuances of human language by learning through action and feedback. Since the people who created the chatbot were not experts in the field, it was decided to design the chatbot in a simple way. To do this, we defined the "intents" (what the user wants to say), added all the expressions that a user could use to express that "intent", which the group of experts had defined in the previous phase, into the "training phrases" space, and then associated a specific response with that intent. Through natural language processing algorithms, the AI is able, with a few training phrases, to learn the different ways of asking the same question (Table 2 and Figure 3). (Table 2, Question-answer sequence of the chatbot, lists intents with their training phrases and answers; for example, the intent "Cold" (pulp response to cold application) includes training phrases such as "Does it hurt if you drink something cool?", "Does it hurt if you drink something with ice?", "Does it hurt more with cold?", and "If you drink something cold, do you feel it?".)
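To illustrate the intent/training-phrase/response structure described above, here is a minimal self-contained sketch of keyword-style intent matching; it does not use the actual Dialogflow API, and the intents, phrases, and responses are invented examples in the spirit of Table 2.

from difflib import SequenceMatcher

# Illustrative intents only -- not the actual intents defined for "Julia".
INTENTS = {
    "cold_stimulus": {
        "training_phrases": [
            "does it hurt if you drink something cold",
            "does it hurt more with cold",
            "if you drink something cold do you feel it",
        ],
        "response": "Yes, it stings for a moment with cold drinks, but it goes away quickly.",
    },
    "pain_description": {
        "training_phrases": ["what does the pain feel like", "describe your pain"],
        "response": "It is a short, sharp pain; it is not there all the time.",
    },
}
FALLBACK = "Sorry, could you ask me that in another way?"

def match_intent(message, threshold=0.6):
    # Return the response of the best-matching intent, or a fallback reply.
    best_score, best_response = 0.0, FALLBACK
    for intent in INTENTS.values():
        for phrase in intent["training_phrases"]:
            score = SequenceMatcher(None, message.lower(), phrase).ratio()
            if score > best_score:
                best_score, best_response = score, intent["response"]
    return best_response if best_score >= threshold else FALLBACK

print(match_intent("Does it hurt when you drink something cold?"))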
Once the chatbot was created, it was integrated with an instant messaging application (Telegram) because it was intended to offer this experience easily and quickly, using an application that was frequently used by students, also giving them the possibility of interacting with Julia at any time. In order to carry out the integration of Julia in Telegram, the application was accessed and then the following steps were followed: 1. Create a name ending in "bot". 5. Telegram then generates a token to access the HTTP API. 6. In Dialogflow, go to "Integrations" and then click on the Telegram icon. 7. Paste the token in the corresponding field and click on "start". In order for Julia to generate curiosity among the students, and given the possibility that some questions would not be focused on the clinical case, "intents" were created for various questions such as "Do you want to go out with me?", generating natural answers that would lead the student back to the main objective of the chatbot, the pulp diagnosis: "I'm a computer virus that right now is deleting all the papers you had to submit... it's a joke! I'm an artificial intelligence named Julia and I've been created for you to learn pulp diagnosis well. You will thank me when you are in the clinic. So focus well and ask me about pulpal diagnosis". When students gave an incorrect diagnosis, Julia encouraged them to keep asking: "I'm not an expert... but that diagnosis sounds weird to me". In the case of a correct answer, Julia replied and closed the chat: "Thank you! I will make an appointment to see you" (Figure 4). Start-Up The operationalization was carried out in two phases. In the first instance, a panel of experts consisting of 5 professors and doctors of dentistry interacted with Julia. All of the failed interactions or evidenced errors were reported for further adjustment to improve the chatbot conversation flow. For this purpose, the Dialogflow training function was used to test those interactions with users that the AI itself considers should be revised. In this way, the AI learns from the actions that it performs and the feedback we give it (Figure 5).
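Returning to the Telegram integration steps listed above, the sketch below shows the generic Telegram Bot API calls involved (setting a webhook and sending a message); the token, URL, and chat id are placeholders, and this is not the authors' actual deployment.

import requests

TOKEN = "123456:ABC-PLACEHOLDER"              # token issued by BotFather (placeholder)
API = f"https://api.telegram.org/bot{TOKEN}"

# Point incoming Telegram updates at the service hosting the chatbot logic.
requests.post(f"{API}/setWebhook", data={"url": "https://example.org/julia-webhook"})

# Send a reply to a user; chat_id would normally come from the incoming update.
requests.post(f"{API}/sendMessage",
              data={"chat_id": 42, "text": "Hi! I'm Julia. Ask me about my tooth pain."})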
When the validation by expert judgment was positive, we proceeded to a second phase in which Julia was sent to the 4th and 5th year dental students, with all the information and the route to interact with Julia via Telegram. (Figure 5 caption: example in which the user misspelled a word and the AI was able to identify that it was an error and associate it with the correct intent.) Survey After four weeks of operation, the students who were interested in participating in the study were asked to fill out an eleven-question questionnaire, in which nine questions dealt with their experience after their interaction with Julia and two were open-ended questions (Tables 3 and 4). Table 3. Questions of the questionnaire with possible answers (only one answer per question): 1. Were you satisfied when interacting with the artificial intelligence? 2. Did the artificial intelligence answer all your questions about the pulp pathology it presented? 3. Did the language used by the artificial intelligence seem natural and realistic to you? 4. Do you feel that this type of teaching methodology can help you improve your communication skills? 5. Do you think this type of teaching methodology can help you feel more confident and secure when treating patients? 6. Do you think that this type of teaching methodology could help you grow as a future professional? 7. Did you manage to ask all the necessary questions to reach a pulp diagnosis? 8. Would you recommend this artificial intelligence-based technology to other students? 9. Do you think that interaction with artificial intelligences should be part of the dental degree curriculum? Open questions: What pulp pathology do you think the patient had? What would you modify or add after interacting with this artificial intelligence? Statistical Analysis The questionnaire responses were collected and the data were entered into a Microsoft Excel spreadsheet. They were then analyzed using SPSS software (IBM SPSS Statistics, Version 20.0, Armonk, NY, USA: IBM Corp). The Kolmogorov-Smirnov test was performed to evaluate whether the samples met the normality criterion. For comparisons between the courses and between sexes, the Student's t-test was used for those samples that had a normal distribution and the Mann-Whitney U test for those that did not; for the association between the qualitative variables, the chi-square test was used, considering a p-value ≤ 0.05 as statistically significant. Results The sample size of the study was 193 subjects, of whom 58 belonged to the fourth year and 135 to the fifth year. There were 109 females and 84 males. In the fourth year, women accounted for 55.2% and men for 44.8% of the sample, while in the fifth year, women accounted for 57.04% and men for 42.96%.
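As a sketch of the statistical pipeline described in the Statistical Analysis subsection above (normality check, then a parametric or non-parametric group comparison and a chi-square test of association), the equivalent tests are available in scipy; the arrays and counts below are invented placeholders, not the study's data.

import numpy as np
from scipy import stats

# Placeholder Likert responses (1-5) for two groups -- not the study data.
fourth_year = np.array([4, 3, 5, 4, 4, 3, 5, 2, 4, 4])
fifth_year = np.array([5, 4, 5, 5, 4, 5, 3, 5, 4, 5])

# Normality check against a fitted normal distribution (Kolmogorov-Smirnov).
_, p_norm = stats.kstest(fourth_year, "norm",
                         args=(fourth_year.mean(), fourth_year.std(ddof=1)))

if p_norm > 0.05:
    stat, p = stats.ttest_ind(fourth_year, fifth_year)      # parametric comparison
else:
    stat, p = stats.mannwhitneyu(fourth_year, fifth_year)   # non-parametric comparison

# Chi-square test of association, e.g. correct diagnosis vs. year (invented counts).
contingency = np.array([[30, 28],     # 4th year: correct / incorrect
                        [110, 25]])   # 5th year: correct / incorrect
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)
print(p, p_chi)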
Global Data The results of the responses to the questionnaire, which were measured with a Likert scale (1-5), are shown in Table 5 and in Figures 6 and 7. When comparing the responses to the questionnaire by course, statistically significant differences were found, with fifth-year students showing the highest satisfaction values (Tables 6 and 7; Table 6 reports the Mann-Whitney U-test results). When the chi-square test (χ²) was performed, the results showed that the fifth-year students got the diagnosis right more frequently (p-value = 0.005) than the fourth-year students. When comparing between sexes, females failed more often than males (p-value = 0.000). We also examined whether there was a correlation between establishing a correct diagnosis and a higher score on the questionnaire. When the chi-square (χ²) test was performed, it was observed that a correct diagnosis implied a higher score on the questionnaire items (Table 8). In the second free field of the questionnaire, the students were asked what could be modified or added to the AI. The responses are shown in Table 9. Table 9. Responses to the free text field in which students could add their impressions after the interaction: The colloquial language should be expanded. It should answer several questions at the same time. The language is very complete but does not always respond to colloquial phrases. Lack of feedback, although being like a real patient it is logical that you do not get it. Very curious. Very interesting. It would have been nice to see it in pre-clinical courses. Should not replace a patient. Cannot establish the diagnosis because the patient did not define the time of pain or the duration in the cold sensitivity test. Does not resemble a patient. Should have the possibility to add images. We should be able to make an appointment. I would like to get the right answer. I would want an option to know the correct diagnosis after mine. X-rays. You could have many to practice. Super interesting to practice. A simpler patient. Fourth Year Student Data When the Mann-Whitney U test was used to compare the values of each of the responses to the questionnaire items by sex, no significant differences were obtained. When the χ² test was performed to compare the correct diagnosis by sex, no significant differences were obtained. Fifth Year Student Data When comparing the values of the items by sex through the Mann-Whitney U test, statistically significant values were obtained in the item "realistic natural language" (p = 0.022), with women scoring higher, and in the item "complete all the questions" (p = 0.042), with men scoring higher. When the χ² test was performed to compare the correct diagnosis vs. sex, women failed more often (19.26%) than men (5.19%) (p = 0.004). Discussion Universities must respond to the dynamic needs of ongoing technological change.
In this sense, AI presents itself as a novel and unfamiliar resource for many trainers, but it has the potential to achieve effective learning [36,37]. In fact, it is claimed that students can improve their skills and knowledge if, in addition to interacting with human teachers, they interact with technological trainers who have reasoning and decision-making capabilities that are similar to human ones [6,36,38,39]. AI has experienced great advances in recent years, causing a great impact on science, economics, and education [36]. In reference to the field of education, in some previous studies with students of health branches [40,41], they valued very positively, as in the present study, the interaction with artificial intelligences. Moreover, as in this study, they affirmed the need to implement this technology in the curricula. However, AI also presents certain limitations. It has been shown that a possible limitation would be related to the knowledge about artificial intelligence and machine learning of students [42]. In addition, it has been observed that some students may be reluctant to accept these technological developments as they consider that they have greater learning with a teacher interacting face-to-face and not on-line, being interaction and error correction one of the basic learning points for them [43,44]. Moreover, students who teach with patients highly value observational or vicarious learning [45] together with their fellow trainees. All of these reasons may explain the lack of updating in these developments in dental school [40]. Any simulation-based learning should be based on sound principles of prior knowledge [46], so this study was conducted with final-year dental students treating real patients, as there is an integration of theory with practice. In addition, students often present difficulties in diagnostic competence and VPs offer more practical opportunities to improve their future performance with patients [6]. This may be the reason why discrepancies between diagnostic successes are observed, with final year students scoring clearly higher than fourth year students. With real patients, situations are very changeable, so varying degrees of difficulty, and these situations can be counterproductive for students due to the frustration and distress they may be subjected to [8]; in this sense, VPs can recreate in a controlled, stimulating, and safe environment, the doctor-patient relationship [47] and encourage reflective learning [6,41]. In the dental students' interaction with the virtual patient Julia, we focused on the ability to obtain a preliminary diagnosis with the data provided in a direct conversation because the collection of information during the patient interview significantly influences the quality of the diagnosis [48]. As the preliminary diagnosis must be confirmed with complementary tests [49], Julia requested a subsequent appointment at a clinic when the diagnosis was correct. In relation to the development budget, the economic view of this technological resource cannot be ruled out since it has been shown that virtual simulation minimizes the cost of the activity compared to simulation that is based on traditional simulators (mannequins), high-fidelity simulators, haptic simulators, as well as the use of standardized patients (actors) [6,11]. 
In the present study, the high economic investment that is traditionally also associated with innovative developments was ruled out, since it was possible to recreate a VP using the free version of a very intuitive software. In order to carry out the step-by-step creation of Julia and its integration into the instant messaging program, the indications of the numerous free tutorials that are available online were followed. During the testing phases and in the first days of operation, it was observed that not being able to identify users increased the risk of asking controversial questions, offtarget questions to make Julia feel bad, or simply questions that were asked to observe the possible reaction of the artificial intelligence. Due to this, a collection of insults, rude phrases, out-of-place comments, etc. was also carried out in order to redirect the users. During the implementation, it was possible to see how a small group of users tried to "troll" Julia and how she redirected the user to the activity using a sarcastic text. The fifth year students showed greater satisfaction in all the items of the questionnaire, perhaps due to their almost two years of practice on patients and the global vision of curricular development that can be perceived when graduation is near. In addition, in the free text field, they were the ones who expressed greater satisfaction with the interaction and proposed the possibility of implementing this technology in pre-clinical courses. On the contrary, the fourth year students rated the interaction with Julia worse, being more critical with the difficulty of the case, with the language that was used, and they also needed the possibility that the patient could answer several questions at the same time, etc. All the data that were collected in the study lead us to think that VPs through chatbot with AI should be adapted to each course and type of student. In the case of fourth year students, who are beginning to have contact with real patients, perhaps it should be more oriented towards practice and the development of anamnesis skills during medical history taking so that they could practice more times and thus feel more confident with their first patients. On the contrary for fifth year students, more complex and challenging scenarios should be developed by providing complementary material such as radiographs, laboratory tests, photographs, etc. Authors such as Joda et al. [50] also propose increasing the realism of VPs with avatars in which skin and tissues are replicated by superimposing and merging 3D images, these lines of research continue to be developed and it is hoped that, in the near future, it will be part of the curriculum for dental students as a complement to faceto-face interaction with patients. In relation to this last point, we should emphasize the importance in dental practice of the dentist's empathy, the ability to recognize nonverbal communication, establish bonds of trust with patients, know their expectations and fears, etc. [21], feelings that today no machine can replicate as they are exclusive to human beings [51]. Conclusions Our results highlight the usefulness of simulating a VP with AI by giving students the possibility of multiple clinical cases to practice, as well as offering an engaging and personal experience to students because of the interface and the natural language that are used, without underestimating the economic and space savings for universities. 
Therefore, our research suggests the need to incorporate AI into dental curricula while also ensuring that students are at the forefront of current technological developments. Informed Consent Statement: Informed consent was obtained from all subjects that were involved in the study. Data Availability Statement: Not applicable.
6,850.2
2022-07-01T00:00:00.000
[ "Medicine", "Computer Science" ]
A Multi-Class Neural Network Model for Rapid Detection of IoT Botnet Attacks The tremendous number of Internet of Things (IoT) devices and their widespread use have made our lives considerably more manageable and safer. At the same time, however, the vulnerability of these innovations means that our day-to-day existence is surrounded by insecure devices, thereby facilitating ways for cybercriminals to launch various attacks by large-scale robot networks (botnets) through IoT. In consideration of these issues, we propose a neural network-based model to detect IoT botnet attacks. Furthermore, the model provides multi-classification, which is necessary for taking appropriate countermeasures to understand and stop the attacks. In addition, it is independent and does not require specific equipment or software to fetch the required features. According to the conducted experiments, the proposed model is accurate and achieves 99.99%, 99.04% as F1 score for two benchmark datasets in addition to fulfilling IoT constraints regarding complexity and speed. It is less complicated in terms of computations, and it provides real-time detection that outperformed the state-of-theart, achieving a detection time ratio of 1:5 and a ratio of 1:8. Keywords—Internet of Things (IoT); IoT botnets; IoT security; intrusion detection system; deep learning; neural network I. INTRODUCTION The dominant features of the modern era can be illustrated by the abundant data that are collected and monitored via Internet of Things (IoT) devices, as well as by the endless functionalities enabled by this innovation. As estimated by experts [1], the number of IoT devices is expected to reach 30 billion by 2020-an important development given that these widespread and convenient technologies have strongly influenced many aspects of people's lives. At the same time, however, they have also compounded the consequences of security threats. Given the innumerable IoT devices that are constantly running and accessible over the public Internet, such innovations have become an attractive platform for cybercriminals. The hack value of IoT devices is not confined to the critical information stored, collected, or monitored by these technologies but extend to any other assets that can be breached via large-scale botnets. This problem is further exacerbated by the fact that the IoT ecosystem imposes constraints on security techniques because of limited resources with respect to central processing units (CPUs), memory, and power consumption. These shortcomings render the battle against IoT botnets a critical and challenging issue. An equally significant concern is the higher risk that IoT devices present compared with that arising from generalpurpose computers. This threat stems from numerous factors [2]. First, the requirements for IoT applications are extremely heterogeneous in terms of device types, communication protocols, and operating systems. Second, the global distribution of IoT devices translates to monitoring by different parties, thereby preventing the establishment of well-defined perimeters among these overseers. From the involvement of multiple parties comes user and device mobility, which causes continuous changes in perimeters. Third, IoT devices lack strong authentication and authorization mechanisms, as reflected by the tendency of most IoT users to employ weak passwords and default account settings. 
Devices equipped with IoT technology usually do not require user permission or direct interaction for the installation of software or the modification of settings, thus facilitating malware propagation through application programming interfaces (APIs) and firmware. Finally, vendors experience difficulties in patching software vulnerabilities. As a result, the conventional security techniques developed for general-purpose computers, such as antivirus programs or hostbased intrusion detection systems, are inadequate measures for securing IoT. The threat model included in this study consists of attackers with no physical access to the IoT devices connected to home routers, functioning as network gateways or other middleboxes. The actualization of a threat is described as follows: An attacker needs to exploit the vulnerabilities of different IoT devices to gain access to them, but it must first discover the existence of such devices by sending probes to certain ports. The probes initially pass through a network gateway before reaching the destination. Most IoT communications are executed through cloud API services [3] instead of proceeding directly from one local IoT device to another. In this process, therefore, a network gateway occupies a vantage point from which it can inspect every network packet. The use of this point has been increasingly emphasized in the implementation of different intrusion detection techniques. Furthermore, a network gateway provides a homogeneous and lightweight defensive mechanism and policy enforcer that protects devices from being assimilated into a botnet without interrupting their normal functionality. This study focused only on the detection techniques applicable to network gateways. Neural networks and deep learning have demonstrated promising outcomes in many fields, especially in developing accurate anomaly-based intrusion detection systems [4]- [7]. Unfortunately, they require high computational use, and it takes a long time to train a model and detect an attack. At the same time, rectifying the problem of IoT botnets necessitates specialized solutions that take into account IoT's own constraints. An adequate number of research projects have been tailored toward the detection and prevention of IoT botnet attacks using machine or deep learning. However, to the best of our knowledge and according to the provided literature review [8]- [13], we found there were no studies considered the IoTs requirements for real-time detection and lightness while taking into account the multi-classification issue. Although, it is a critical point to recognize the attack type and then take the appropriate countermeasures to prevent any intrusions. Motivated by these issues, our study provides an independent, accurate, real-time, and lightweight model applicable to IoT gateways that is able to multi-classify the IoT network traffic. Therefore, the main contribution of this study is to adapt the fast, accurate, stable, tiny gated recurrent neural network (FastGRNN) [14] algorithm, which is dedicated to text classification, for use with intrusion detection by treating network packets as sentences and headers as words. The objectives of this study were to • provide a model that has accurate detection of IoT botnets, • decrease the training time, • decrease the detection time, and • decrease the model complexity. 
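To make the "packets as sentences, headers as words" encoding described above concrete, the following is a minimal sketch of a recurrent multi-class classifier over integer-encoded header tokens; a plain Keras GRU cell stands in for FastGRNN here, and the vocabulary size, number of classes, and layer sizes are placeholder values, not the paper's configuration.

import tensorflow as tf

VOCAB_SIZE = 1500   # number of distinct header-field tokens ("words"); placeholder
N_CLASSES = 10      # benign traffic plus attack categories; placeholder

# Each packet is represented as a fixed-length sequence of integer-encoded header fields.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 32),
    tf.keras.layers.GRU(64),                 # a plain GRU stands in for FastGRNN
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()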
We also took into account that the model is independent and would only consider the features that are directly readable by the gateway and do not require additional equipment or a third party to fetch the features and target multi-classification. The results proved that using FastGRNN [14] provided high speed for training the model and detecting attacks, with much less complexity compared to the state-of-the-art while also preserving a high F1 score, where it attained a score of 99.04% with the RGU dataset [8] in comparison to the gated recurrent unit (GRU) model's 97.82%, and the long short-term memory (LSTM) model's 98.60%. Furthermore, the FastGRNN-based model completed its detection within 29 seconds for the entire test set for both datasets, while the model proposed by Hwang et al. [9] took 245, 249 seconds for detection. The rest of the paper is organized as follows. Section II presents a detailed background on IoT botnets, with particular attention paid to how they operate and what destructive effects they exert. The section also introduces the FastGRNN [14] algorithm. Section III consists of a literature review and a comparison of the proposed model and the current state-ofthe-art models. Section IV describes the methodology and the propose model. Section V summarizes the results and findings, and Section VI concludes the paper. A. IoT Botnets A botnet basically consists of compromised devices called bots, each running malicious code under a botmaster's command and control (C&C) [2]. Specifically, a bot can propagate throughout the network. To do so, it scans the entire network ranges and exploits the known vulnerabilities or weak credentials of devices. After breaking into an unprotected gadget, the bot embeds itself into the equipment and waits for instructions from a botmaster to perform malicious activities. An example of these attacks is the collaborative flooding of a target (an IoT or non-IoT device) with numerous illegitimate requests, thus preventing the device from processing legitimate ones and causing a distributed denial-of-service (DDoS) attack. The other ill-intentioned activities of IoT botnets [2] include cryptocurrency mining, password cracking, and email spam sending, keylogging. Although the first IoT botnet, Linux.Hydra, was discovered in 2008 [15], the security community did not realize the seriousness of this issue until the emergence of the Mirai botnet [16]. In September 2016, a Mirai attack was directed against the Krebs on Security blog, generating 620 Gbps of traffic. The availability of Mirai's original source code led to the development of dozens of variants and inspired the creation of many other botnets. For instance, the following month saw a Mirai variant take down the service provider Dyn, representing the largest DDoS attack in history. This event engendered other destructive outcomes, which were summarized by [17]. Mirai was merely the tip of the iceberg, as predicted by Vlajic and Zhou [18] and indeed we are now witnessing progressively sophisticated IoT botnet attacks with considerably more critical victims. In the same year, Rapidity Networks discovered Hajime, which has a decentralized (or peer-to-peer [P2P]) architecture in contrast to the centralized structure of Mirai [19]. 
The year 2017 saw a demonstration of BrickerBot's ability to permanently destroy an IoT device through a permanent denial-of-service (PDoS) attack [20], and 2018 witnessed Radware's honeypot capture JenX, which uses servers to scan vulnerable IoT devices and propagates itself within such equipment. The centralized scanning mechanism of JenX enables attackers to offer botnet-for-hire and DDoS-for-hire services [21]. Other attacks were explored by [18] and [22], who inquired into potential attacks employing IoT as a reflector of DDoS attacks, which are very difficult to trace. Adding to our understanding of cyberattacks, Soltan et al. [23] examined a possible attack in which a botnet utilizes high-wattage IoT devices to manipulate demand and thus disrupt power grid operations. Scrutinizing the distinctive behaviors of IoT botnets plays a crucial role in endeavors to combat them. Generally, the lifecycle of an IoT botnet consists of two main phases, namely, the botnet establishment and attack launch phases (Fig. 1). These stages are described below; in the attack launch phase, the bots act upon the corresponding commands from the botmaster, and the attack ranges from PDoS and DDoS attacks to cryptocurrency mining and so on. B. FastGRNN Algorithm A recurrent neural network (RNN) is a class of neural network proposed by Jeffrey Elman in 1990 [24]. RNNs have the ability to preserve learned information from the past (or previous output) and modify it regularly with the current input. This is done via a structure called hidden states, which are updated using different mechanisms or gates. A gate is simply a sigmoid neural net layer and a matrix multiplication. This ability to preserve historical data has meant that RNNs are well suited for tasks that process time series or sequence data, such as natural language processing (NLP). However, the traditional RNN is prone to the vanishing gradient problem that arises when long input sequences are processed, which is the problem that LSTM [25], a different algorithm from the RNN class, was designed to resolve. The complexity of LSTM and its number of computations led to the GRU [26], which is less complicated because it has only two gates instead of the three in LSTM. Basically, the GRU merges two gates, the forget and input gates, into an update gate. In addition, it combines the cell state from LSTM with the hidden state. FastGRNN goes further in decreasing model complexity and speeding up the learning process by adding a scalar weighted residual connection for each and every coordinate of the hidden state h_t. As shown in Fig. 2, FastGRNN reuses the low-rank, sparse, and quantized matrices W ∈ R^(D̂×D) and U ∈ R^(D̂×D̂) for the vector-valued gating function as well. In other words, instead of directly feeding the input x_t and the previous hidden state h_{t-1} into the gates or nonlinear function, these matrices squeeze those values into a smaller size before passing them to the sigmoid σ or tanh function. The learning process starts when Wx_t is added to Uh_{t-1}, and the result flows into the sigmoid σ and tanh functions, resulting in z_t according to Equation 1 and the candidate state h̃_t according to Equation 2. The outputs of both functions are used to calculate the final hidden state h_t, as shown in Equation 3. Notably, 0 ≤ ζ, ν ≤ 1 are trainable parameters, typically parameterized by the sigmoid function, along with the biases b ∈ R^(D̂). III. RELATED WORK Anomaly detection involves the adoption of various machine or deep learning algorithms. It centers on building a model of normal behavior for a device and then leveraging the model to detect outliers that could indicate potential attacks.
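Equations 1-3 referred to in Section II.B above are not reproduced in the text; for reference, the FastGRNN update equations as given in the original FastGRNN paper [14] are:

z_t = \sigma\left(W x_t + U h_{t-1} + b_z\right) \quad (1)

\tilde{h}_t = \tanh\left(W x_t + U h_{t-1} + b_h\right) \quad (2)

h_t = \left(\zeta\,(1 - z_t) + \nu\right) \odot \tilde{h}_t + z_t \odot h_{t-1} \quad (3)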
Undoubtedly, deciding on appropriate features will affect the model's speed and complexity and leverage strong results in the development of considerably reliable learning models. On this basis, relevant studies were reviewed to highlight the features and how they are selected. Each study was analyzed with regard to the following criteria: detection method, whether it is multi-classification or binary, whether it is independent or dependent, whether it is real-time or offline, and whether it is lightweight or not. That information was then used as a reference in drawing the contributions of this paper. IoTGUARD [10] observes diverse traffic types, including malicious and benign traffic, regardless of the source of flow; such traffic are fundamentally the dataset features collected from a gateway and each device log. A dataset is subjected to preprocessing steps, including oversampling and undersampling for the resolution of imbalance issues, feature extraction, analysis, and reduction techniques. Subsequently, fuzzy c-means (FCM) is used to cluster data according to self-similarities. The principal property of FCM is its ability to maintain a strong association within a cluster and weak associations with all other clusters. Weak associations facilitate task prediction because of their consideration of all clusters in determining labels for new, unknown traffic. A fuzzy interpolation scheme is then employed to ascertain the degree of malice in an attack and accordingly determine appropriate measures for various malicious traffic types. IoTGUARD has been evaluated using a dataset collected from consumer IoT devices. Aside from encompassing normal traffic, the dataset includes information on authentication attacks, botnet activities, port sweeps, port scans, spying, and worms. It achieves a high prediction accuracy with low false-positive-rate. Its operation makes minimal demands on systems because it undergoes preprocessing and reduction. However, IoTGUARD depends on features that are not directly readable and extracted via a gateway. Concentrating on generating more relative features, Moustafa et al. [11] proposed the use of statistical features in conjunction with an ensemble method to classify IoT network traffic. To derive the features, the authors used the Bro-IDS tool [27], and to acquire specific features, they employed a novel extractor. These features consist of flow-based, Message Queuing Telemetry Transport (MQTT), and service-based characteristics, which consist of DNS and HyperText Transfer Protocol (HTTP) features. Then, the authors applied the correntropy measure to evaluate the feature set. The most important features were selected, and the unnecessary ones were eliminated on the basis of correlation coefficients (CCs). According to the correntropy results, the difference between normal and attack vectors was very small, thereby giving rise to the need to use many classification techniques, each designed on a particular kernel, like a probability, weight or feature value. Given this issue and the need to increase the accuracy of detection, an ensemble method was used along with three classification techniques: a decision tree (DT), Naïve Bayes, and artificial neural networks (ANNs). Afterwards, AdaBoost was employed to distribute network data among the techniques. The ensemble method outperformed every individual approach in terms of accuracy and detection over two benchmark datasets, namely, UNSW-NB15 [28] and NIMS [29]. 
However, it required more time in detecting an attack than that needed by each individual classifier, except for ANNs. Similar to IoTGUARD [10], the ensemble method is typified by statistical features that are not directly readable by the gateway and whose extraction requires another party-deficiencies that disqualify this approach as a means of online detection. In contrast to IoTGUARD [10] and the ensemble method established by Moustafa et al. [11], the technique proposed by Doshi et al. [12] examines only flow-based features that are directly readable by most modern gateways. The dataset that comes with the approach includes classes of DoS attacks that might be generated by a Mirai-infected device; examples of such assaults are transmission control protocol synchronize (SYN) flooding, a user datagram protocol (UDP) flood, and an HTTP GET flood. The main contribution of Doshi et al.'s [12] work is feature engineering, which guides the feature extraction process. Selected features were either found in each packet's header or generated in flows from different packets. After this, evaluation was directed toward binary classification algorithms from among the following list: K-nearest neighbors (KNN), random forest, DT, support vector machines with linear kernel (LSVM), and deep neural networks (DNN). All the algorithms performed excellently, achieving an accuracy of 99%, with the exception of the LSVM, which exhibited the worst performance, possibly because the data could not be separated in a linear manner. The study also confirmed the effectiveness of neural networks despite their use with a small dataset that consisted of only 491,855 packets. The selected features are common among all protocols, indicating that Doshi et al.'s [12] proposed method is a protocol-independent technique. It supports low memory constraints because it depends on a stateless algorithm, but the accompanying dataset reflected only one phase of Mirai propagation, that is, the launch of a DoS attack. The dataset was also imbalanced, containing 459,565 malicious packets and only 32,290 benign packets, potentially adversely affecting the results. Given that flow-based approaches suffer significant detection delay, other researchers proposed to replace flow features with packet features. For instance, Pulse's dataset [13] comprises only the attack time, the destination IP address, the protocols used, and the packets size, as well as labels that indicate malicious or benign traffic. It is a Naïve Bayes classifier that focuses on botnets' primary behaviors, specifically network scanning, network probing, and DoS. The model was built using Weka [30], which in turn, imports the dataset collected from a testbed equipped with real IoT devices. The model is better at detecting probing attacks than it is for flood-type attacks, which might be due to insufficient feature vectors. The authors [13] chose Naïve Bayes because it outperforms other methods, but they did not specify which approaches were compared and what the results were. In like manner, McDermott et al. [8] introduced a novel approach wherein word embedding is applied on texts in network packets and fed into a bidirectional long short-term memory-based recurrent neural network (BLSTM-RNN). 
The main advantage of BLSTM-RNN over LSTM-RNN is its www.ijacsa.thesai.org Fuzzy C-means (FCM) clustering [11] Decision tree (DT), Naïve Bayes, Artificial neural networks (ANNs), and AdaBoost [12] K-nearest neighbor (KNN), random forest, DT Support vector machines with linear kernel (LSVM), and deep neural networks (DNN) [13] Naïve Bayes [8] Bidirectional long short-term memory (BiLSTM) [9] Long short-term memory (LSTM) Proposed model Fast, accurate, stable, and tiny gated recurrent neural network (FastGRNN) ability to accumulate contextual information from both the past and future. The framework consists of three modules. First, data preprocessing is completed on network packets to extract length, protocol, and payload information within the info field, after which word embedding is implemented on each token and encoded into an integer format. Next, packets are normalized and unnecessary ones are removed. Second, LSTM-RNN and BLSTM-RNN models are defined and evaluated, and third, a test dataset is used to determine the effectiveness of anomaly detection. The authors also provided the dataset named Mirai-RGU, which was generated using Mirai and IoT cameras. The traffic consisted of Mirai messages between a bot (an infected IoT) with a C&C. Additionally, four attack vectors were chosen, including User Datagram Protocol (UDP) flood, Acknowledgment (ACK) flood, DNS flood, and SYN flood attacks, as well as normal traffic generated by the cameras. A couple of experiments indicated that the accuracy and loss metrics exhibited by LSTMN-RNN and BLSTM-RNN were close but favor the latter. Nevertheless, the bidirectional model added to the overhead and increased processing time. Similar to [13] and [8], Hwang et al. [9] eliminated the time required for accumulating network packets to generate flow-based features by directing attention exclusively to the headers of individual packets. At the same time, the authors avoided the high cost of deep-packet inspection required by [8]. The central advantage here is that packet header fields are directly readable by gateways once they arrive, thus facilitating real-time detection. The authors put forward the application of word embedding on an incoming network packet to extract its semantic meanings, then adjusting three layers of LSTM to classify the packet as normal or malicious. To evaluate the model, a dataset called Mirai-CCU collected besides Mirai-RGU [8] and ISCX2012 [31] dataset. Primarily, the performance is affected by word-embedding and attack representation in the dataset. Unfortunately, the size of the input data exceeded the size of flow-based features. Thus, the time required for training was higher than usual, reaching 17 hours at 200 epochs on some datasets. As discussed in this section, a growing body of the literature has recognized the importance of developing machine or deep learning models to detect IoT botnet attacks. These efforts are confronted with critical challenges that also point to gaps in this prominent research area. Distinctly, most proposed mechanisms focus on accuracy and disregard the analysis of other important metrics, such as algorithmic complexity and speed. Thus, this study proposes a lightweight model that provides real-time detection. Table I shows the proposed model in comparison to the current state-of-the-art. IV. METHODOLOGY The proposed classification model follows the same principle as in [9]. 
Thus, it treats each packet as a single sentence and each packet header field as a word because the stringent order of fields serves as a grammar rule, which is in essence creating sentence patterns for benign or malicious traffic. Therefore, word embedding is used to derive the semantics and syntactical features of packets. In the following subsections we will discuss dataset selection, feature extraction, dataset sampling, input preprocessing, proposed architecture, and experimental setup. A. Dataset Selection The effectiveness of neural network or deep learning models hinges primarily on the quality and size of a dataset. Research on IoT security suffers from the absence of benchmark datasets, but recent endeavors have been initiated to publish datasets meant to overcome this issue. Nevertheless, certain drawbacks remain. For example, the effectiveness of the dataset put forward in [33] is impeded by highly imbalanced records because it has only 477 legitimate traffic samples and 3,668,045 attack traffic samples. Among these recent efforts, and for the purposes of this study, the MedBIoT [32] and Mirai-RGU [8] datasets were selected. These datasets have been selected for the following reasons: • A variety of IoT devices were used to generate the network traffic. • There was realism in the attacks because real botnet binary codes were used to launch the attacks. • Both phases of IoT botnet lifecycle are covered (see Section II-A). • There was a diversity of attacks that were launched. The MedBIoT dataset [32] is collected from a mediumsized network with 83 physical and virtual IoT devices, including switches, light bulbs, locks, and fans. Mirai [17], BashLite [15] and Torii [34] were used to initiate the malicious behavior of botnets. In contrast, the Mirai-RGU [8] dataset was generated using two Sricam AP009 IP cameras infected with Mirai source code that initiated different attacks against a raspberry Pi. B. Feature Extraction Both datasets consist of raw network packets as packet capture files (PCAPs). To provide an independent, lightweight, and real-time model, we needed to extract the features that are directly readable by the gateway. The required features were extracted from PCAPs using TShark [35] and converted into comma-separated values (CSVs). The extracted features were Ethernet, IP, TCP, and UDP headers, as displayed in Table II. C. Dataset Sampling A random undersampling technique followed to minimize the number of samples and to introduce some kind of balancing for the imbalanced classes. For MedBIoT [32], we split the dataset into two halves, normal and attacks, and then divided the attack classes equally. For the Mirai-RGU [8], we followed the attack vectors distribution of Mirai published by [17] to reflect a more realistic situation. Fig. 3, 4, 5, and 6 illustrate the undersampling effect on the classes of the datasets. D. Input Preprocessing Unlike LSTM-based model by [9], we did not duplicate any features. We only considered real packet headers because they require less preprocessing. To prepare a packet for embedding, all features were first converted into strings. Then, we split the dataset into training and testing sets in a ratio of 80:20, respectively. Afterward, tokenizer was applied to produce the dictionary and map each packet header with its associated integer number from the dictionary. Finally, we padded each packet to be the size of 32 words. E. 
Architecture Designing Basically, the proposed model consists of input layer, embedding layer, FastGRNN layer, dropout layer, and dense layer as illustrated in Fig. 7. First, vector of tokenized words or header fields with a size of 32 represented the input layer. The second layer was the random embedding layer that transferred each tokenized word n into a vector of size 64. Then, each embedded vector was passed into a FastGRNN cell with a hidden state of size 64. As mentioned in Section II-B, FastGRNN was selected due to its simplicity and lightness. Afterward, the dropout layer was used with 0.2 as the dropout rate to overcome the overfitting by dropping random neurons from the previous layer. To generate the desired output for the multi-classification task, a dense layer with Softmax as the activation function was used. Finally, to compile the model, a categorical cross entropy was used as the loss function in addition to RMSProp optimizer to adjust the learning rate. F. Experimental Setup The model was written in Python 3.7.3 and TensorFlow 1.15.0 [36] with Keras 2.2.4 [37]. All the experiments were done using Tesla K20 GPU, with 2496 CUDA cores and 5 GB memory besides 96 GB RAM. V. RESULTS AND DISCUSSION To evaluate the model against the desired objectives, we needed to calculate the correctness of classification and the required time for training and prediction. Because both datasets were imbalanced and the model targets multi-classification, F1 score was the most appropriate metric to use. The F1 score was calculated according to Equation 4. In addition, wall time was considered when calculating the time required for training and detection. Furthermore, the model was compared with the LSTM-based model proposed by [9], but because there is no published information regarding time or the MedBIoT dataset in the paper by [9], we implemented their model and trained it ourselves. Moreover, we implemented the proposed architecture once with LSTM as a replacement for FastGRNN and called it the LSTM-based model, and then we implemented it once with GRU and called it the GRU-based model. We trained both of those architectures with both datasets, and the results are summarized in Table III. As shown in Table III, our FastGRNN achieved the lowest training time for MedBIoT at only 1 hour, 18 minutes, and 51 seconds (1:18:51), while the second-lowest one was the LSTM-based model at 4 hours, 3 minutes, and 5 seconds (04:03:05). In addition, FastGRNN had the fastest detection speed of only 29 seconds for the entire test set, compared to the second-lowest time which was 53 seconds for the GRU. The reason the GRU had a longer training time than the LSTM is that the GRU needed more epochs to train before stopping. Actually, GRU takes about 25 minutes to complete one epoch, while LSTM completes an epoch in about 27 minutes. The LSTM-based model proposed by [9] had the slowest performance in training and detection due to using multiple LSTM layers and a large hidden states size, which makes the computations more expensive. For the F1 scores, all of the models had F1 scores of 99.99%, which might be due to the balancing of the attack classes. Afterward, we followed another strategy of balancing classes with Mirai-RGU, as mentioned in Section IV-C. Again, the proposed model completed training within 2:0:41 while the second-fastest one, which was GRU, took 3:51:2. Also, work by [9] took the longest time to train, 10:42:38. 
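For a concrete picture of the pipeline evaluated here, the sketch below assembles the preprocessing of Section IV-D and the layer stack of Section IV-E in Keras. It is a simplified reconstruction, not the authors' code: the packet strings and labels are hypothetical placeholders, the FastGRNNCell is a minimal full-rank reimplementation of the update in Section II-B (no low-rank, sparse, or quantized weights), and the sizes (32-token input, 64-dimensional embedding, 64-unit hidden state, 0.2 dropout, softmax output, categorical cross-entropy, RMSProp) follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

class FastGRNNCell(layers.Layer):
    """Minimal full-rank FastGRNN cell (Section II-B), usable inside layers.RNN."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units

    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W = self.add_weight(name="W", shape=(d, self.units))
        self.U = self.add_weight(name="U", shape=(self.units, self.units))
        self.b_z = self.add_weight(name="b_z", shape=(self.units,), initializer="zeros")
        self.b_h = self.add_weight(name="b_h", shape=(self.units,), initializer="zeros")
        self.zeta = self.add_weight(name="zeta", shape=(), initializer="ones")
        self.nu = self.add_weight(name="nu", shape=(), initializer="zeros")

    def call(self, inputs, states):
        h_prev = states[0]
        pre = tf.matmul(inputs, self.W) + tf.matmul(h_prev, self.U)
        z = tf.sigmoid(pre + self.b_z)                          # Equation 1
        h_tilde = tf.tanh(pre + self.b_h)                       # Equation 2
        zeta, nu = tf.sigmoid(self.zeta), tf.sigmoid(self.nu)   # keep zeta, nu in [0, 1]
        h = (zeta * (1.0 - z) + nu) * h_tilde + z * h_prev      # Equation 3
        return h, [h]

# Tiny dummy packets so the sketch runs end to end; real inputs are the
# TShark-extracted header fields of Section IV-B (the strings below are made up).
train_packets = ["ip tcp 192.168.0.2 443 60", "ip udp 192.168.0.7 53 90"] * 8
labels = tf.keras.utils.to_categorical([0, 1] * 8, num_classes=2)

# Preprocessing (Section IV-D): header-field strings -> integer ids -> pad to 32 "words".
tokenizer = Tokenizer()
tokenizer.fit_on_texts(train_packets)
X = pad_sequences(tokenizer.texts_to_sequences(train_packets), maxlen=32)

# Layer stack (Section IV-E): embedding(64) -> FastGRNN(64) -> dropout(0.2) -> softmax.
model = models.Sequential([
    layers.Embedding(len(tokenizer.word_index) + 1, 64),
    layers.RNN(FastGRNNCell(64)),
    layers.Dropout(0.2),
    layers.Dense(labels.shape[1], activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="rmsprop", metrics=["accuracy"])
model.fit(X, labels, validation_split=0.2, epochs=1, batch_size=4)
```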
Regarding the detection time, FastGRNN succeeded in reaching a detection time of 29 seconds, while GRU needed about 55 seconds. The proposed model outperformed GRU and LSTM in F1 score as well, with 99.04%, 97.82%, and 98.60%, respectively. Furthermore, FastGRNN achieved an F1 score close to that of LSTM. Finally, Fig. 8 and 9 illustrate the performance of the proposed model in terms of training and detection time compared to other models. VI. CONCLUSION IoT botnets are increasingly recognized as a serious worldwide cybersecurity concern. Investigating machine and deep learning is a continuing concern in relation to intrusion detection approaches against IoT botnets, but such exploration involves several issues. This study focused on developing a lightweight multi-classification neural network-based model with the aim of providing fast training time, real-time detection, and accuracy. According to the experiments, we proved that the proposed FastGRNN outperformed the other models when benchmarking both datasets by decreasing training and detection time while also preserving a high F1 score. Specifically, the proposed model completed training in 1:18:51 and 2:0:41 for the MedBIoT and RGU datasets, respectively. Detection was completed by FastGRNN within 29 seconds for the entire test set. Moreover, our model had competitive F1 scores of 99.99% and 99.04% for multi-classification of MedBIoT and RGU, respectively. Finally, distinct technologies, along with IoT botnet detection measures, may be adopted. As future work, we will look into the opportunities engendered by federated learning. Because we aim to centralize the learning process using FastGRNN on the grounds of a fog or cloud and distributing a collection of network traffic data among several nodes or gateways, this direction would promote the application of collaborative intrusion detection approaches.
6,981.8
2020-01-01T00:00:00.000
[ "Computer Science" ]
Automorphisms of Kronrod-Reeb graphs of Morse functions on 2-sphere Let $M$ be a compact two-dimensional manifold and, $f \in C^{\infty}(M,\mathbb{R})$ be a Morse function, and $\Gamma_f$ be its Kronrod-Reeb graph. Denote by $\mathcal{O}_{f}=\{f \circ h \mid h \in \mathcal{D}\}$ the orbit of $f$ with respect to the natural right action of the group of diffeomorphisms $\mathcal{D}$ on $C^{\infty}(M,\mathbb{R})$, and by $\mathcal{S}(f)=\{h\in\mathcal{D} \mid f \circ h = f\}$ the corresponding stabilizer of this function. It is easy to show that each $h\in\mathcal{S}(f)$ induces a homeomorphism of $\Gamma_f$. Let also $\mathcal{D}_{\mathrm{id}}(M)$ be the identity path component of $\mathcal{D}(M)$, $\mathcal{S}'(f)= \mathcal{S}(f) \cap \mathcal{D}_{\mathrm{id}}(M)$ be group of diffeomorphisms of $M$ preserving $f$ and isotopic to identity map, and $G_f$ be the group of homeomorphisms of the graph $\Gamma_f$ induced by diffeomorphisms belonging to $\mathcal{S}'(f)$. This group is one of the key ingredients for calculating the homotopy type of the orbit $\mathcal{O}_{f}$. Recently the authors described the structure of groups $G_f$ for Morse functions on all orientable surfaces distinct from $2$-torus $T^2$ and $2$-sphere $S^2$. The present paper is devoted to the case $M=S^{2}$. In this situation $\Gamma_f$ is always a tree, and therefore all elements of the group $G_f$ have a common fixed subtree $\mathrm{Fix}(G_f)$, which may even consist of a unique vertex. Our main result calculates the groups $G_f$ for all Morse functions $f:S^{2}\to\mathbb{R}$ whose fixed subtree $\mathrm{Fix}(G_f)$ consists of more than one point. Introduction f (x, y) = f (z) + g z (x, y), where g z : R 2 → R is a homogeneous polynomial without multiple factors. for each critical point z of f . In that case, due to Morse Lemma, one can assume that g z (x, y) = ±x 2 ± y 2 . Let f ∈ C ∞ (M, R), Γ f be a partition of the surface M into the connected components of level sets of this function, and p : M → Γ f be the canonical factor-mapping, associating to each x ∈ M the connected component of the level set f −1 (f (x)) containing that point. Endow Γ f with the factor topology with respect to the mapping p: so a subset A ⊂ Γ f will be regarded as open if and only if its inverse image It is well known, that if f ∈ F (M, R), then Γ f has a structure of a one-dimensional CW-complex called the Kronrod-Reeb graph, or simply the graph of f . The vertices of this graph correspond to critical connected components of level sets of f and connected components of the boundary of the surface. By the edge of Γ f we will mean an open edge, that is, a one-dimensional cell. Denote by H(Γ f ) the group of homeomorphisms of Γ f . Notice that each element of the stabilizer h ∈ S(f ) leaves invariant each level set of f , and therefore induces a homeomorphism ρ(h) of the graph of f , so that the following diagram is commutative: Let also D id (M) be the path component of the identity map id M in D(M). Put Thus, G f is the group of automorphisms of the Kronrod-Reeb graph of f induced by diffeomorphisms of the surface preserving the function and isotopic identity. Since G f is finite and ρ is continuous, it follows that ρ reduces to an epimorphism of the group π 0 S ′ (f ) path components of S ′ (f ) being an analogue of the mapping class group for f -preserving diffeomorphisms. 
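The commutative diagram announced just above ("so that the following diagram is commutative:") appears to have been lost in extraction; written out as a formula, the compatibility it expresses between a diffeomorphism h ∈ S(f), the quotient map p : M → Γ_f, and the induced homeomorphism ρ(h) is

\[
\rho(h)\circ p \;=\; p\circ h,
\qquad\text{equivalently}\qquad
\rho(h)\bigl(p(y)\bigr)=p\bigl(h(y)\bigr)\ \text{for all } y\in M,
\]

so that ρ : S(f) → H(Γ_f) is a homomorphism and G_f = ρ(S′(f)).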
Algebraic structure of the group π 0 S ′ (f ) of connected components of S ′ (f ) for all f ∈ F (M, R) on orientable surfaces M distinct from 2-torus and 2-sphere is described in [11], and the structure of its factor group G f is investigated in [7]. These groups play an important role in computing the homotopy type of the path component O f (f ) of the orbit of f , see also [8], [9], [1], [2], [3]. The purpose of this note is to describe the groups G f for a certain class of smooth functions on 2-sphere S 2 . The main result Theorem 1.4 reduces computation of G f to computations of similar groups for restrictions of f to some disks in S 2 . As noted above the latter calculations were described in [7]. First we recall a variant of the well known fact about automorphisms of finite trees from graphs theory. Lemma 1.3. Let Γ be a finite contractible one-dimensional CW-complex ( a topological tree ), G be a finite group of its cellular homeomorphisms, and Fix(G) be the set of common fixed points of all elements of the group G. Then Fix(G) is either a contractible subcomplex or consists of a single point belonging to some edge E an open 1-cell), and in the latter case there exists g ∈ G such that g(E) = E and g changes the orientation of E. Suppose f : S 2 → R belongs to F (M, R). Then it is easy to show that Γ f is a tree, i.e., a finite contractible one-dimensional CW-complex, and by Remark 1.2 G f is a finite group of cellular homeomorphisms of Γ f . Therefore, for G f , the conditions of Lemma 1.3 are satisfied. Note that according to Remark 1.2 the second case of Lemma 1.3 is impossible, and hence G f has a fixed subtree. In this paper we consider the case when the fixed subtree of the group G f contains more than one vertex, i.e. has at least one edge. Let us also mention that D id (S 2 ) coincides with the group D + (S 2 ) of diffeomorphisms of the sphere preserving orientation, [12]. Therefore S ′ (f ) consists of diffeomorphisms of the sphere preserving the function f and the orientation of S 2 . is an isomorphism of groups. Proof. Then By definition, ρ(h)(x) = x, whence ρ(h) either preserves both Γ A and Γ B or interchange them. We claim that , which contradicts to our assumption. Thus Γ A and Γ B are invariant with respect to the group G f . Now we can show that A and B are also invariant with respect to h. By virtue of the commutativity of the diagram (1.1) ρ(h)(p(y)) = p(h(y)) for all y ∈ Γ . In particular: Therefore, h(A) = p −1 (Γ A ) = A. The proof for B is similar. Thus, A and B are invariant with respect to S ′ (f ). (2) Notice that the function f takes a constant value on the simple closed curve p −1 (x) being a common boundary of disks A and B, and does not contain critical points of f . Therefore, the restrictions f | A , f | B satisfy the conditions 1) and 2) the Definition 1.1, and so they belong to F (M, R) and F (M, R) respectively. (3) We should prove that the map φ : First we will show that φ is correctly defined. Let γ ∈ G f = ρ(S ′ (f )), that is, γ = ρ(h), where h is a diffeomorphism of the sphere preserving the function f and isotopic to the identity. We claim that h| A ∈ S ′ (f | A ) = S(f | A ) ∩ D id (A). Indeed, for each point x ∈ A we have that: Moreover, since h preserves the orientation of the sphere, it follows that h| A preserves the orientation of the disk A, and therefore by [12], Similarly γ| Γ B ∈ G f | B , and so φ is well defined. Let us now verify that φ is an isomorphism of groups, that is, a bijective homomorphism. Let δ, ω ∈ G f . 
Thus φ is a homomorphism. Let us show that ker φ = {id_Γ}. Indeed, suppose γ ∈ ker φ, that is, γ|_{Γ_A} = id_{Γ_A} and γ|_{Γ_B} = id_{Γ_B}. Then γ is fixed on Γ_A ∪ Γ_B = Γ, and hence it is the identity map. Surjectivity of φ is implied by the following simple lemma, whose proof we leave to the reader. Lemma 1.5. Suppose f : D² → R belongs to the space F(M, R). Then for arbitrary α ∈ G_f there exists a ∈ S′(f) fixed near the boundary ∂D² and such that α = ρ(a). Let (α, β) ∈ G_{f|_A} × G_{f|_B}; then by Lemma 1.5 there exist a ∈ S′(f|_A) and b ∈ S′(f|_B) fixed near ∂A = ∂B = p⁻¹(x) and such that α = ρ_A(a) and β = ρ_B(b). Define h by the following formula: h(x) = a(x) for x ∈ A, and h(x) = b(x) for x ∈ B. Then h is a diffeomorphism of the sphere preserving the function and the orientation, whence h ∈ S′(f).
2,146.2
2019-03-22T00:00:00.000
[ "Mathematics" ]
Stability Analysis of Linear Fractional-Order Neutral Systems with Time Delay In this paper, we mainly study the Lyapunov asymptotical stability of linear and interval linear fractional order neutral systems with time delay. By applying the characteristic equations of these two systems, some simple sufficient Lyapunov asymptotical stability conditions are deserved, which are quite different from other ones in literature. In addition, some numerical examples are provided to demonstrate the effectiveness of our results. Introduction Fractional order systems have many obvious advantages since fractional order differential is more adequate to describe real word problems because it has more degrees of freedom. At the same time, a memory is also included in the model. Therefore, fractional order systems have gained important applications in various sciences such as signal processing, viscoelasticity, electroanalytical chemistry, electric conductance of biological systems, modeling of neurons, diffusion processes, damping laws, rheology physics, electrode electrolyte polarization, electromagnetic wave, etc. For more details, please see [1][2][3][4]. Time delay may have considerable impacts on the stability of the system because it often presents in real processes due to transportation of materials or energy. Thus, most fractional systems may contain delay terms, such as fractional order neutral systems or some other fractional order delay systems. If the system contains delays both in its states and in the derivatives of its states, then the system is called a neutral type delay system. Neutral type delay systems are very common in realities. Stability analysis is one of the most important issues in the theory of differential equations and their applications for both deterministic and stochastic cases. Stability analysis of fractional differential equations is more complex than that of classical differential equations, because fractional derivatives are nonlocal and have weakly singular kernels. The stability analysis of time delay systems can be generally classified as two types: the time delay dependent criteria and the time delay independent stability. As there is no the upper limit to time delay, time delay independent results can be regarded as conservative in practice. Because of the complex definition of fractional order integral, the analysis of fractional order equations is more difficult than that of integral equations. Nowadays, various stability analysis techniques have been used to derive stability criteria for the fractional system. The most well-known one is Matignon's stability theorem [5]. This theorem permits us to determine the stability of the linear fractional order system through the location in the complex plane of the dynamic matrix eigenvalues of the state space like system representation. Matignon's theorem is the starting point of several results in the field of linear fractional order system stability analysis. In addition, Lambert functions approach ( [6,7]), Lyapunov's second approach [8], Matrix measure approach ( [9,10]), Bellman-Gronwall's approach ( [11]) and LMI approach ( [12]) are also used to investigate the stability of fractional order linear systems. All of these approaches have their own advantages and disadvantages. Recently, a finite-time stability analysis of fractional order time delay systems is firstly presented and reported on paper [13]. But till now, only a few papers studied the stability of fractional neutral systems with delay. 
Lyapunov approach of nonlinear fractional order neutral system were extended in paper [14]. However, it is difficult to use Lyapunov method to study the stability of fractional order neutral systems with delay for the complicated of the fractional derivatives. All of those have motivated our research. In this paper, we are interested in the Lyapunov asymptotical stability of linear fractional order neutral systems with time delay. By using the characteristic equation of the system, some simple sufficient Lyapunov asymptotical stability conditions are deserved. In addition, we studied the Lyapunov asymptotical stability of interval linear fractional order neutral system with time delay. Finally, two examples are provided to demonstrate the effectiveness of our results. The rest of the paper is organized as follows. In Section 2, we give some notations and lemma, recall some concepts and preparation results. In Section 3, using the characteristic equations of the systems, we study the Lyapunov asymptotical stability of linear and interval linear fractional order neutral systems with time delay. Some sufficient conditions are deserved. In Section 4, two numerical examples are provided. Problem Formulation and Preliminaries In this section, we introduce some notations, definitions, and preliminary facts needed in this paper. The idea of fractional calculus has been known since the development of the regular calculus, with the first reference probably being associated with Leibniz and L'Hospital in 1695 where half-order derivative was mentioned. The differ-integral operator, denoted by a t D α , is a combined differentiation and integration operator commonly used in fractional calculus, which is defined by , 0 Beyond all doubt, there are different definitions for fractional derivatives, see [15]. The most commonly used definitions are the Grunwald-Letnikov, Riemann-Liouville and Caputo definitions. Riemann-Liouville and Caputo definitions are often used in pure mathematicians, and the last one is often adopted by applied scientists, because Caputo definitions is more convenient in engineering applications. The Caputo definition is sometimes called smooth fractional derivative in literature because it is suitable to be treated by the Laplace transform technique, while the Riemann-Liouville definition is unsuitable. Here we only discuss Caputo derivative, so in the rest of the paper, D α is used to denote the Caputo fractional derivative of order α . Define In engineering physics and economics, the fractional order α often lies in (0, 2), so in this paper we always suppose on the case that the fractional order is 0 2 Firstly, let us consider the linear fractional order neutral system with time delay described by the following form: are the constant matrices, and matrix E is singular, that means rank 1 E n n = < , and 0 τ > is the pure time delay. If matrices A and B are uncertain, then the interval linear fractional order system with time delay above can be described by the state space equation of the following form This kind of matrices are called interval matrices. Throughout this article, let ( ) A ρ be the spectral radium of the matrix A , |A| denote the modulus matrix of the matrix A , and let ( ) e s ℜ be the real part of s . First, let us recall a known lemma about matrix theory. To prove the main results in the next section, we need this very important lemma. Lemma 2.1 ( [15]). 
Let R, T, and Main Results In this section, we consider the stability of linear fractional neutral system (1) and interval linear fractional neutral system (2). Here, we always assume that these two fractional neutral systems have unique continuous solutions Stability of Linear Fractional Neutral Systems with Time Delay In this subsection, several sufficient conditions of stability of linear fractional order neutral systems with time delay are given. (1) is Lyapunov asymptotically stable. Proof. Similar to [16], by taking Laplace transform of the linear fractional order system (1), we can easily prove this theorem. Next, we assume the matrix pair ( , ) E C is regular with index one, then there exist nonsingular matrices , In addition, let According to (4) and (5) where M G is the matrix formed by taking the maximum magnitude of each element of the following matrix Proof. If the condition (1) in Theorem 3.3 holds, then the following matrix Here If the condition (2) in Theorem 3.3 holds, then using the Lemma 2.1, we can obtain Then, according to (6) and (7), we can get So we have ( ) 0 D s ≠ for any ( ) 0 e s ℜ ≥ . Thus we complete the proof. Stability of Interval Linear Fractional Order Neutral System with Time Delay In this subsection, several sufficient conditions of stability of interval linear fractional order neutral systems are given. Now, we consider the interval linear fractional order neutral system (2). Start with, we need to give some definitions. Let . We have the following theorems about the interval linear fractional order neutral system (2). According to (8) and (9) If the condition (2) According to (10) and (11), we can get ( ) 0 D s ≠ for ( ) 0 e s ℜ ≥ . So when that two conditions in Theorem 3.5 hold, we can obtain that all the roots of the characteristic equation of system (2) have negative real parts, then the interval system (2) is Lyapunov asymptotically stable. Thus the proof is completed. Numerical Examples In this section, some numerical examples are given to demonstrate the effectiveness of those theorems in section 3. Example 4.1 Consider the stability of the following linear fractional order neutral system with time delay . Firstly, note that Conclusions In summary, this paper mainly presents some brief sufficient conditions for the stability of a class of linear fractional order neutral system with delay and linear interval fractional order neutral system with delay. The proposed method here is quite different from other ones in literature. Two simple examples also demonstrate that this method is feasible.
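For reference, since the displayed formula in Section 2 did not survive extraction, the Caputo fractional derivative of order α used throughout (with n − 1 < α < n, n a positive integer) is

\[
{}^{C}\!D^{\alpha}f(t)
=\frac{1}{\Gamma(n-\alpha)}\int_{0}^{t}\frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau ,
\]

which for 0 < α < 1 reduces to \( \frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}(t-\tau)^{-\alpha}f'(\tau)\,d\tau \). As a small numerical companion, the sketch below checks the classical eigenvalue-argument criterion of Matignon [5] for the delay-free linear case \(D^{\alpha}x = Ax\): asymptotic stability holds if and only if \(|\arg\lambda| > \alpha\pi/2\) for every eigenvalue λ of A. It does not implement the neutral-delay conditions of Section 3, and the test matrix is illustrative rather than taken from the paper's examples.

```python
import numpy as np

def matignon_stable(A, alpha):
    """Delay-free check: D^alpha x = A x (0 < alpha < 1) is asymptotically
    stable iff |arg(lambda)| > alpha*pi/2 for every eigenvalue lambda of A."""
    eigvals = np.linalg.eigvals(np.asarray(A, dtype=float))
    return bool(np.all(np.abs(np.angle(eigvals)) > alpha * np.pi / 2))

A = [[-1.0, 0.5],
     [0.2, -2.0]]                        # illustrative matrix, not from the paper
print(matignon_stable(A, alpha=0.8))     # True: both eigenvalues are negative real
```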
2,108.4
2017-03-06T00:00:00.000
[ "Mathematics" ]
VCC-DASH: A Video Content Complexity-Aware DASH Bitrate Adaptation Strategy : Traditional DASH (dynamic adaptation streaming over HTTP(i.e., HyperText Transfer Protocol)) bitrate strategy cannot di ff erentiate segments with di ff erent complexities of video content, resulting in the user’s QoE (quality of experience) of segments with high content complexity as worse than that with low content complexity. In case of this, this paper firstly studies video coding and puts forward the definition of video content complexity. Then the e ff ects of content complexity on user’s QoE is analyzed and the QoE utility function of the segment is formulated based on its MOS (mean opinion score, related to the content complexity and bitrate) and bitrate switching between consecutive segments. Last, in order to maximize user’s QoE, this paper proposes VCC-DASH (video content complexity-aware DASH bitrate adaptation strategy) under the constraints of the network bandwidth and the bu ff er occupancy. In simulations, we compare VCC-DASH with the classical bitrate adaptation strategy proposed by Liu et al. (LIU’s strategy, for short). The simulation results show that the two strategies have similar performances in bitrate switching numbers, playback interruption times, and bu ff er lengths. In addition, it is more important for simulation results to reveal that VCC-DASH’s average bitrate is much higher than that of LIU’s strategy, which means that VCC-DASH can make fuller use of the network bandwidth than LIU’s strategy does. Moreover, the MOS distribution of the VCC-DASH is more concentrated on the better scores “4~5”, which profit from its content complexity-aware adaptation to allocate more bandwidth resources to high-complexity segments. same. Traditional DASH bitrate adaptation strategies (e.g., [5][6][7][8]) adapt to the network bandwidth to select the media representation for each segment and ignore the differentiated requirement of the content complexity on the bitrate, which results in much worse playback quality for segments with high content complexity than those with low content complexity. As a result, the QoE of the whole video is fluctuant. Inspired by the MOS (mean opinion score) metrics provided by Klaue et al [10], we get the MOS of segments with different bitrates and content complexities. In addition, Kim et al [11] indicates that the bitrate switching between consecutive segments can reduce users' QoE. Thus, at each decision epoch, the time-varying QoE of a viewer is counted by accounting for the MOS of the segment and the bitrate switching loss between segments. Based on the network bandwidth and the buffer occupation, the bitrate is adapted to maximize the resulting QoE. In case of this, the bitrate adaptation strategy VCC (video content complexity)-DASH is proposed. The contributions of this paper are as follows: (i) defines the content complexity of video using its encoding information; (ii) formulates QoE utility function of the segment based on its MOS (relates to its content complexity and bitrate) and the bitrate switching between consecutive segments; and (iii) establishes an QoE optimization model under the constraints of the network bandwidth and the buffer occupation to adjust the bitrate dynamically and accordingly maximize users' QoE. Background and Key Issues Here, we analyze the feature of video content and propose how to measure the complexity of video content. 
Furthermore, we extend the MPD file by adding a VCC attribute for each segment The above-mentioned DASH bitrate adaptation strategies [5][6][7][8] can offer best-effort viewing quality assurance, but they cannot optimize the playback quality based on the video content complexity, which is a feature of the video content to reflect the motion intensity of video sequences. To get similar playback quality, videos with high content complexity need more encoding bit numbers since they carry more information [9]. This is done assuming that the video is divided into small segments with different media representations, and the complexity of each segment is not the same. Traditional DASH bitrate adaptation strategies (e.g., [5][6][7][8]) adapt to the network bandwidth to select the media representation for each segment and ignore the differentiated requirement of the content complexity on the bitrate, which results in much worse playback quality for segments with high content complexity than those with low content complexity. As a result, the QoE of the whole video is fluctuant. Inspired by the MOS (mean opinion score) metrics provided by Klaue et al [10], we get the MOS of segments with different bitrates and content complexities. In addition, Kim et al [11] indicates that the bitrate switching between consecutive segments can reduce users' QoE. Thus, at each decision epoch, the time-varying QoE of a viewer is counted by accounting for the MOS of the segment and the bitrate switching loss between segments. Based on the network bandwidth and the buffer occupation, the bitrate is adapted to maximize the resulting QoE. In case of this, the bitrate adaptation strategy VCC (video content complexity)-DASH is proposed. The contributions of this paper are as follows: (i) defines the content complexity of video using its encoding information; (ii) formulates QoE utility function of the segment based on its MOS (relates to its content complexity and bitrate) and the bitrate switching between consecutive segments; and (iii) establishes an QoE optimization model under the constraints of the network bandwidth and the buffer occupation to adjust the bitrate dynamically and accordingly maximize users' QoE. Background and Key Issues Here, we analyze the feature of video content and propose how to measure the complexity of video content. Furthermore, we extend the MPD file by adding a VCC attribute for each segment functioning as its video content complexity tag for the convenience of the client to differentiate segments with different content complexities. The Analysis and Measurement of Video Content Complexity The content complexity is a feature of the video to reflect its motion intensity [12]. On the server side, the video is encoded into different code rate versions, which are related to the coded bits number, bit rate, frame rate, etc. According to the video compression standard MPEG (moving pictures experts group) [13], the video is encoded in units of GOP (group of pictures). A GOP is a set of consecutive frames consisting of I-frames, P-frames, and B-frames. The I-frame uses intra-frame compression, which contains a large amount of information and reflects the texture characteristics of the video. The P-frame and B-frame use inter-frame prediction coding to compress pictures by sufficiently reducing the time redundancy between frames, both of which contain less information and reflect the motion characteristics of the video. 
In general, the encoded bits number of an I-frame is much larger than the P-frame and B-frame within a GOP. So, the video content complexity, which reflects the motion intensity of the video, can be characterized by the GOP-related ratio r of the average encoded bits number of P-frames and B-frames R P,B and the average encoded bits number of I-frames, which can be expressed as follows. where R P,B and R i are defined as in Equation (2). N I , N P , and N B are the number of I-frames, P-frames, and B-frames within a GOP, respectively. R I,i , R P,i and R B,i are the coded bits number of the i-th I-frame, the i-th P-frame, and the i-th B-frame within a GOP, respectively. All the classic video sequences used in this paper are from the website (i.e., http://trace.kom.aau. dk/yuv). According to Equations (1) and (2) above, we get the GOP-related ratio r of the sequences Akiyo, Container, Foreman, Coastguard, Soccer, and Football with different average bits/frame and draw the scatter plot as shown in Figure 2. Electronics 2020, 9, x FOR PEER REVIEW 3 of 13 functioning as its video content complexity tag for the convenience of the client to differentiate segments with different content complexities. The Analysis and Measurement of Video Content Complexity The content complexity is a feature of the video to reflect its motion intensity [12]. On the server side, the video is encoded into different code rate versions, which are related to the coded bits number, bit rate, frame rate, etc. According to the video compression standard MPEG (moving pictures experts group) [13], the video is encoded in units of GOP (group of pictures). A GOP is a set of consecutive frames consisting of I-frames, P-frames, and B-frames. The I-frame uses intra-frame compression, which contains a large amount of information and reflects the texture characteristics of the video. The P-frame and B-frame use inter-frame prediction coding to compress pictures by sufficiently reducing the time redundancy between frames, both of which contain less information and reflect the motion characteristics of the video. In general, the encoded bits number of an I-fram e is much larger than the P-frame and B-frame within a GOP. So, the video content complexity, which reflects the motion intensity of the video, can be characterized by the GOP-related ratio r of the average encoded bits number of P-frames and B-frames , P B R and the average encoded bits number of I-frames, which can be expressed as follows. , and Ri are defined as in Equation (2). All the classic video sequences used in this paper are from the website (i.e., http://trace.kom.aau.dk/yuv). According to Equations (1) and (2) above, we get the GOP-related ratio r of the sequences Akiyo, Container, Foreman, Coastguard, Soccer, and Football with differen t average bits/frame and draw the scatter plot as shown in Figure 2. As usual, the P-frame and the B-frame reflect the motion characteristics of the video, and the higher motion intensity of the video means its content complexity is higher. Known from Equation (1), the bigger the GOP-related ratio r is, the higher the encoding bits number of P-frames and Bframes is relatively, and then the higher the content complexity of the video. As shown in Figure 2, the GOP-related ratio r of the sequence Football is the biggest, so its content complexity is the largest. As usual, the P-frame and the B-frame reflect the motion characteristics of the video, and the higher motion intensity of the video means its content complexity is higher. 
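As a concrete illustration of Equations (1) and (2), the short sketch below computes the GOP-related ratio r from the per-frame coded sizes of one GOP; the frame types and bit counts are made-up values, not measurements from the listed test sequences.

```python
def gop_ratio(frame_types, frame_bits):
    """GOP-related ratio r: mean coded bits of P/B frames over mean coded bits
    of I frames within one GOP (Equations (1)-(2))."""
    i_bits  = [b for t, b in zip(frame_types, frame_bits) if t == "I"]
    pb_bits = [b for t, b in zip(frame_types, frame_bits) if t in ("P", "B")]
    r_i  = sum(i_bits) / len(i_bits)       # average I-frame size R_I
    r_pb = sum(pb_bits) / len(pb_bits)     # average P/B-frame size R_{P,B}
    return r_pb / r_i

# Illustrative GOP (I B B P B B P); sizes in bits are fabricated for the example.
types = ["I", "B", "B", "P", "B", "B", "P"]
bits  = [120_000, 8_000, 7_500, 30_000, 9_000, 8_200, 28_000]
print(round(gop_ratio(types, bits), 3))    # higher r -> higher content complexity
```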
Known from Equation (1), the bigger the GOP-related ratio r is, the higher the encoding bits number of P-frames and B-frames is relatively, and then the higher the content complexity of the video. As shown in Figure 2, the GOP-related ratio r of the sequence Football is the biggest, so its content complexity is the largest. On the contrary, the GOP-related ratio r of the sequence Akiyo is the lowest, so its content complexity is the lowest. In addition, r becomes larger and the gradient becomes slower and slower as the average bits/frame increases. However, since the content complexity is a feature of the video sequence, the mode value of r under average coded bits per frame for each video sequence is token to value its video content complexity in statistics. If more than one mode number exists, then the average number is taken. The content complexity of each video sequence is shown in Table 1. Furthermore, we obtained the content complexity of more video sequences according to the method proposed above and used the k-means clustering algorithm to classify them into three levels: low-level complexity (VCC = 1), middle-level complexity (VCC = 2), and high-level complexity (VCC = 3), as shown in Table 2. Tagging VCC for DASH Segments In ISO/IEC MPEG-DASH standard [2], an MPD file on the server side describes the collection of encoded and deliverable versions of media content. The basic structure and components of the XML (i.e., Extensible Markup Language)-schema MPD are shown in Figure 3. The sequences of Period in the timeline make up the Media Presentation. A Period typically represents a media content period during which a consistent set of encoded versions of the media content is available. Within a Period, material is arranged into an Adaptation Set, which depicts a set of interchangeable encoded versions of one or several media content components. An Adaptation Set contains a set of Representations, which describes a deliverable encoded version of one or several media content components. Typically, this means that the client may switch dynamically from one Representation to other Representation within an Adaptation Set in order to adapt to network conditions or other factors. Within a Representation, the content may be divided into multiple segments for proper accessibility and delivery. In order to access a segment, its URL (i.e., Uniform Resource Locator) is provided explicitly. Consequently, a segment is the largest unit of data that can be retrieved with a single HTTP request. Proposed Algorithm Here we firstly define the QoE utility function of the segment as the user's satisfaction with watching it, which is decided by its MOS and the bitrate switching between consecutive segments. Then in order to maximize the user's QoE, we propose the bitrate adaptation strategy VCC-DASH under the constraints of the network bandwidth and the buffer occupancy, which can be used to select media representation. Last, we present the implement of the proposed VCC-DASH. QoE Utility Function In a video streaming session, the actual viewing quality experienced by users (i.e., QoE) greatly depends on the segment's MOS [14] and the segment bitrate switching loss [15]. The former refers to the average value of the subjective score offered by a group of non-professionals after watching the segment in a standard test environment, which can be used to evaluate the subjective quality of the segment. 
According to the MOS metrics method provided by Klaue et al [10], we choose CIF (common intermedia format) video sequences with different VCC, use the open source video quality evaluation tool-set EvalVid [16] to calculate the MOS, and then draw a scatter plot illustrating the relationship between the MOS, the VCC, and the encoding bitrate as shown in Figure 4. An MOS curve against bitrate is shown in Figure 4, and MOS grows close to a logarithmic rate when the VCC is fixed. Therefore, for the segment with the same VCC, the relationship between MOS and its bitrate can be formulated by a logarithmic function. The distribution trend of MOS curve varies distinctly with different VCC, which means that the logarithmic fitting functions of MOS-bitrate under different VCC should have different fitting parameters, which is expressed as follows. Electronics 2020, 9, x FOR PEER REVIEW 5 of 13 MPEG-DASH is an open standard that allows the extension of MPD file for adding components of the media content as needed. In each segment, we add a VCC attribute functioning as its video content complexity tag for the convenience of the client to differentiate segments with differen t content complexities, as shown in Figure 3. When the video streaming session starts, the MPD file is downloaded to the client, and the corresponding VCC attributes are parsed out as an input parameter for the decision of the media representation. Proposed Algorithm Here we firstly define the QoE utility function of the segment as the user's satisfaction with watching it, which is decided by its MOS and the bitrate switching between consecutive segments. Then in order to maximize the user's QoE, we propose the bitrate adaptation strategy VCC-DASH under the constraints of the network bandwidth and the buffer occupancy, which can be used to select media representation. Last, we present the implement of the proposed VCC-DASH. QoE Utility Function In a video streaming session, the actual viewing quality experienced by users (i.e., QoE) greatly depends on the segment's MOS [14] and the segment bitrate switching loss [15]. The former refers to the average value of the subjective score offered by a group of nonprofessionals after watching the segment in a standard test environment, which can be used to evaluate the subjective quality of the segment. According to the MOS metrics method provided by Klaue et al [10], we choose CIF (common intermedia format) video sequences with different VCC, use the open source video quality evaluation tool-set EvalVid [16] to calculate the MOS, and then draw a scatter plot illustrating the relationship between the MOS, the VCC, and the encoding bitrate as shown in Figure 4. An MOS curve against bitrate is shown in Figure 4, and MOS grows close to a logarithmic rate when the VCC is fixed. Therefore, for the segment with the same VCC, the relationship between MOS and its bitrate can be formulated by a logarithmic function. The distribution trend of MOS curve varies distinctly with different VCC, which means that the logarithmic fitting functions of MOS-bitrate under different VCC should have different fitting parameters, which is expressed as follows. Table 3. In Figure 4, the MOS-bitrate curves under different VCC are fitted according to the nearest neighbor principle. In Equation (3), the fitting parameters a VCC i and b VCC i are shown in Table 3. 
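To make the fitting step concrete, the sketch below estimates the parameters for one VCC level by least squares. The extracted text does not preserve the exact form of Equation (3), so the form MOS = a·ln(bitrate) + b and the sample points are assumptions consistent with the logarithmic behaviour described above.

```python
import numpy as np

def fit_mos_curve(bitrates_kbps, mos_scores):
    """Least-squares fit of MOS = a*ln(bitrate) + b for a single VCC level."""
    a, b = np.polyfit(np.log(np.asarray(bitrates_kbps, dtype=float)),
                      np.asarray(mos_scores, dtype=float), deg=1)
    return a, b

# Made-up sample points for one VCC level (not the paper's EvalVid measurements).
a, b = fit_mos_curve([180, 360, 540, 720], [2.9, 3.8, 4.3, 4.6])
print(a, b)   # predicted MOS at bitrate R is then a * np.log(R) + b
```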
In DASH standard [2], the video is divided into segments with different media representations, and so the client may switch from one representation to another representation in order to adapt to network conditions and playback environment. Therefore, for two consecutive segments, their bitrates may be different, and if bitrate switching exists then this should worsen the user's QoE [15]. In this paper, we define the reduction of QoE resulting from the bitrate switching as a loss, which is greatly decided by the switching times and the switching range [17]. As usual, the wider the switching range, the bigger the loss value and the sharper the gradient. For simplicity, this relationship between switching range and QoE loss is expressed by an exponential function. As a whole, the loss of the i-th segment is expressed as follows. where γ 1 and γ 2 are the QoE loss weight of video bitrate switching and the switching range, QoE Optimization Model In nature, the main idea of the proposed bitrate strategy VCC-DASH is to discriminately select an optimized bitrate for each requesting segment with different content complexity under the constraints of the network bandwidth and the buffer occupancy, whose final purpose aims at maximizing users' QoE during a video streaming session. Hence, the bitrate adaptation of VCC-DASH can be formulated as the following optimization model. In our model, each segment is assigned the same duration τ seconds assumed that the selected bitrate of the i-th segment is BS i , which satisfies BS i ∈ , where = {B 1 , B 2 . . . , B M } is the set of available bitrates. The delivery time of the segment is t i seconds, which is the time from when the client sends out an HTTP-get message for the i-th segment to when the segment is successfully received by the client. In this way, the current bandwidth BC i for delivering the i-th segment is estimated as τ t i BS i , which is used as the decision basis for the selection of the bitrate of the (i+1)-th segment. Further, the buffer occupancy after the i-th segment is received is expressed as Electronics 2020, 9, 230 Equation (6) represents the VCC differentiation constraint of VCC-DASH (i.e., how to discriminately select bitrate for a segment with different content complexity). For low-complexity segments, it requires that the selected bitrates from the available bitrate set should be no more than the estimation of the current bandwidth BC i . For high-complexity segments, the differences between the selected bitrate BS i of the prior ω segments and the estimated network bandwidth BC i for delivering the prior ω segments are counted as the bitrates surplus, i.e., Equation (7) represents the constraint on buffer occupancy (i.e., how to avoid buffer overflows and underflows). The buffer occupancy is the total playing duration of the segments loaded in the buffer, which is expressed as i (τ − t i ). When the duration of the segment τ is greater than the download time of segment t i , the buffer occupancy increases. If the buffer occupancy continues to increase and exceed the buffer size, then buffer overflow will occur, causing waste of resources. Conversely, if the duration of the segment τ is less than the download time of the segment t i , the buffer occupancy decreases. Obviously, the continuous decrease of buffer occupancy causes buffer underflow, which then causes playback interruptions. 
Therefore, VCC-DASH sets an upper bound λ_max and a lower bound λ_min on the number of buffered segments to impose restrictions on the selection of the media representation and to avoid bandwidth waste and playback interruptions. Equation (8) represents the constraint on the available bandwidth, which requires that VCC-DASH take both the available network bandwidth BC_i and the buffer occupancy Σ_{j≤i−1} (τ − t_j) into account when selecting the media representation. In fact, this constraint guarantees that the requested segment arrives at the buffer before the buffer queue runs empty.
VCC-DASH Implementation Here we present the implementation of the proposed strategy VCC-DASH, as shown in the textbox of Figure 5. The bitrate of the first segment is initialized as the minimum bitrate B_1 in the available bitrate set R. VCC-DASH then selects the bitrate for each segment. Lines (5-17) obtain the available bitrate set under the constraint of Equation (6). Lines (18-23) show the joint constraints of Equations (7) and (8) on the bitrate selection. If there is no available bitrate, VCC-DASH directly assigns the lowest bitrate B_1 to the segment, as shown in lines (24-27). Otherwise, the representation with the maximized QoE is selected, as shown in lines (28-36). Assume that the video consists of N segments and that the number of bitrate representations in the available bitrate set is M; as usual, M is much smaller than N. When the network bandwidth is so stable and sufficient that the biggest bitrate B_M in R is available, VCC-DASH has to traverse the available bitrate set to calculate the QoE for each segment according to the optimization model and then select the representation with the optimal QoE. In this case, the time complexity is O(N·M²), which means that the proposed VCC-DASH is fast enough for practical deployment.
Figure 5 (algorithm inputs and output). Input: the available bitrate set R = {B_1, B_2, ..., B_M} (kbps); the total number of segments N; the segment duration τ (s); the VCC_i of the i-th segment; the delivery time of the i-th segment t_i; the number of segments ω used for the bitrate surplus; the loss weight of bitrate switching γ1; the loss weight of the switching range γ2. Output: the selected bitrate of the i-th segment BS_i.
Performance Evaluation In this section, we evaluate the proposed VCC-DASH and compare it with a classic bitrate adaptation strategy, known as LIU's strategy [5], under the same simulation scenario setup. The simulation results show that the two strategies have similar performance in the number of bitrate switches, the number of playback interruptions, and the buffer lengths. Moreover, the bitrate and the MOS of the segments selected by VCC-DASH are distinctly higher than those of LIU's strategy, which means that VCC-DASH offers users better QoE.
Simulation Scenarios Setup The proposed strategy VCC-DASH is performed on 100 video segments with different content complexities extracted from the video sequences Akiyo, Container, Foreman, Coastguard, Soccer, and Football, respectively, which are re-arranged according to the VCC distribution shown in Figure 5b. For a fast start, it is assumed that the video begins to play, and the proposed strategy VCC-DASH comes into effect, once there are five segments with the lowest bitrate B_1 in the buffer.
The parameter settings are shown in Table 4. Since LIU's strategy [5] is one of the best-known existing DASH bitrate adaptation strategies, our simulations regard it as a benchmark against which to investigate the performance of VCC-DASH. In fact, both strategies adaptively match the media representation to the network condition and pursue the same goals, including fewer switches, a higher average bitrate, no buffer overflow or underflow, and no playback interruptions. Different from LIU's strategy, VCC-DASH differentiates segments by content complexity and considers the constraint of buffer occupancy. In addition, the comparisons of the two strategies are done under the same scenario setup, including the network bandwidth, buffer size, segment duration, content complexity, available bitrate set, etc.
Simulation Results Analysis Under a worst-case network condition (as shown in Figure 6a), where large-amplitude bandwidth fluctuations occur frequently, our simulations compare the performance of the two strategies in terms of the selected bitrate, the QoE items (including MOS and loss), and the buffered media time. If a strategy acts well under this worst-case condition, it can also adapt to normal conditions. In the simulations, the VCC distribution of the 100 segments is shown in Figure 6b. (1) The selected bitrate: The bitrates selected by the two strategies are shown in Figure 6c. The average bitrate of VCC-DASH is 492.77 Kbps, which is significantly higher than the 452.67 Kbps of LIU's strategy. The statistical distribution of the bitrate is shown in Figure 7a. The bitrate selected by VCC-DASH is concentrated at 540 Kbps and 720 Kbps, while that of LIU's strategy is concentrated at 360 Kbps and 540 Kbps. The reason is that LIU's strategy deploys a step-wise switching-up and aggressive switching-down method to change the media representation and prevent buffer underflow, which means the bitrate switches down easily but switches up cautiously, so the bitrate is concentrated at relatively low values and the average bitrate is low. VCC-DASH directly selects the representation with the best QoE under the constraints of the network bandwidth and buffer occupancy, without limiting the switching range between consecutive segments, so the network bandwidth is well used to transmit segments at higher bitrates and the selected bitrate is concentrated at higher values. (2) QoE items: Statistics of the QoE items are shown in Table 5. For the two strategies, there is no significant difference in the total number of switches and the switching range, and both are at a rather small level. By contrast, the sum of the MOS and the sum of the QoE of VCC-DASH are clearly higher than those of LIU's strategy. Moreover, the distribution of MOS is shown in Figure 7b: the proportion of the subjective opinion "excellent" is as high as 86% for VCC-DASH, which is significantly higher than that of LIU's strategy. The advantage stems from the fact that VCC-DASH collects the bitrate surpluses of the prior segments and provides them to segments with high VCC, so that the MOS of the high-VCC segments is visibly enhanced and, at the same time, the QoE of the whole video is more equalized. In general, the proposed VCC-DASH can improve users' QoE and offer an equalized viewing experience.
In addition to the above comparison, we add a comparative experiment under the constraint of equal transmitted bits for each requested segment, in order to further illustrate the advantage of VCC-DASH relative to LIU's strategy. According to VCC-DASH, we first obtain the bitrate selection decision (i.e., the bitrate sequence for the 100 segments under two network conditions). Then, in the experiment, LIU's strategy requests and delivers each segment based on the bitrate sequence pre-defined by VCC-DASH. Consequently, each segment received and played by LIU's strategy has the same number of bits as the corresponding segment under VCC-DASH. Under this associated VCC-DASH bitrate sequence, we obtain the QoE statistics of LIU's strategy as shown in Table 6. Compared with the results in Table 5, we observe that LIU's QoE under the associated VCC-DASH bitrate sequence is much worse than the QoE obtained under the respective optimal bitrate sequences independently determined by each strategy's own adaptation policy. Numerically, LIU's QoE sum over the 100 segments at the associated VCC-DASH bitrate sequence (about 327.32 in Table 6) is lower by 24.83% than the one at LIU's optimal bitrate sequence (about 408.55 in Table 5), and lower by 29.06% than VCC-DASH's QoE sum at its optimal bitrate sequence (about 422.44 in Table 5). As shown in more detail by Figures 7b and 8b, the number of excellent-level MOS for LIU's strategy drops from 74 to 31 when the bitrate selection of segments changes from being adjusted by LIU's strategy to being predefined by VCC-DASH. Here, most of the excellent-level MOS for LIU's strategy degrade to good-level (about 24) and fair-level (about 19) ones. The degradation of MOS levels for LIU's strategy stems from the mismatch between the associated VCC-DASH bitrate sequence and LIU's own decision-making under the network bandwidth condition shown in Figure 6a. Hence, the QoE performance of LIU's strategy is much worse than that of VCC-DASH in the case of exactly the same received bits. (3) Buffered media time: The buffer occupancy is shown in Figure 6d, where both strategies maintain the buffer occupancy at a proper level and neither overflow nor underflow appears. For LIU's strategy, the results stem from two causes. The first is its step-wise switching-up and aggressive switching-down method, which avoids buffer underflow. The second is that the client pauses for a certain period of time before issuing the next request if the buffer occupancy is large enough to cover the maximum draining of buffered media time while fetching the segment, which prevents buffer overflow. For VCC-DASH, the results stem from one cause: VCC-DASH sets the upper bound λ_max and the lower bound λ_min for the buffer occupancy to prevent buffer overflow and underflow, respectively. For example, when the network bandwidth suddenly drops from 500 Kbps to 0 Kbps at 105 s, the buffer occupancy decreases rapidly but remains above 0, which means that playback interruptions do not occur even though the network performs extremely badly. As a whole, Figure 6d shows that the two strategies can control the filling level of the client buffer well and avoid overflow and underflow.
Conclusions Traditional DASH bitrate adaptive strategies (e.g., LIU's strategy) only adapt to the network bandwidth to download segments and cannot differentiate segments with different content complexities.
Our proposed VCC-DASH strategy makes full use of the network bandwidth to download segments and allocates more bandwidth resources to segments with high video content complexity, thus offering users a better QoE. The simulation results show that it performs remarkably well even under a highly variable network throughput. Apart from content complexity, many other features of video content can be studied and integrated with the DASH bitrate adaptation strategy to optimize users' QoE in future work. Moreover, other QoE items may be introduced to measure users' subjective satisfaction. Conflicts of Interest: The authors declare no conflict of interest.
7,995.4
2020-01-31T00:00:00.000
[ "Computer Science" ]
Characterizing a CCD detector for astronomical purposes: OAUNI Project Caracterizando un detector CCD para uso astronómico: proyecto OAUNI This work verifies the instrumental characteristics of the CCD detector which is part of the UNI astronomical observatory. We measured the linearity of the CCD detector of the SBIG STXL6303E camera, along with the associated gain and readout noise. The response of the detector to incident light is extremely linear (R² = 99.99%), its effective gain is 1.65 ± 0.01 e-/ADU, and its readout noise is 12.2 e-. These values are in agreement with the manufacturer's specifications. We confirm that this detector is extremely precise and suitable for measurements for astronomical purposes.
INTRODUCTION The National University of Engineering (UNI) has an astronomical observatory project (OAUNI, [1]) in the Peruvian central Andes (Huancayo, 3300 m a.s.l.). This ongoing effort aims to provide a facility to develop science, teaching, and outreach programs in astronomy. The observatory has several instruments, the most important being the telescope and the detector. The proper selection of both is necessary to acquire astronomical images of good quality. CCD detectors are the standard device to register optical digital images in practically all types of applications, ranging from domestic to scientific ones. They are bidimensional arrays of detection elements (or pixels) that convert the photons incident on them into electrons. In particular, their use in professional astronomy has enabled a huge enhancement in the precision of photometric and spectroscopic measurements of astronomical objects over the last decades. The great advantage of CCDs compared with other detectors (such as photographic emulsions, for example) is the linearity of their response to incident light. In addition, their better sensitivity over the optical spectral range makes CCDs the natural choice for astronomical applications [2]. There are several types of CCDs with different approaches (front-illuminated, back-illuminated, full frame, frame transfer, etc.; [3], [4]) to enhance the sensitivity and linearity depending on the particular application. In general, each CCD is characterized by its linearity, quantum efficiency (sensitivity), gain, and readout noise. The gain refers to the conversion between the number of electrons (e-) recorded by the CCD and the number of analog-to-digital units (ADU) contained in the CCD image. Gain is given in e-/ADU. The three main sources of noise in CCD measurements are the readout noise, the thermal noise, and the photon noise. The readout noise (R) is present in all images in the same amount regardless of integration time. It represents the on-chip noise source that affects the measurement and is given in e- RMS. The thermal noise (ND) depends on the temperature of the chip and corresponds to electrons (D) created over time independently of the light falling on the detector. It can be reduced at lower temperatures. Finally, the photon noise (N) depends on the amount of light hitting the chip (S). The last two noises follow Poisson statistics (ND² = D, N² = S). All the noises, given in electrons, are added in quadrature to obtain the total noise, as follows.
N²_tot = N² + ND² + R², or, equivalently, N_tot = sqrt(S + D + R²). The CCD camera of the OAUNI project is extremely sensitive, and its characterization is needed in order to use it properly in the observational programs, which include photometry and spectroscopy. The camera is an SBIG STXL-6303E (Fig. 1, top) from the Diffraction Limited / SBIG provider, and it includes the front-illuminated CCD chip Kodak KAF-6303E (Fig. 1, middle). This work presents systematic laboratory tests to characterize the linearity, gain, and readout noise of the CCD KAF-6303E. These values are then compared with the specifications given by the manufacturer. Finally, the conclusions about the feasibility of the OAUNI detector for precise astronomical measurements are summarized.
MEASUREMENTS Measurements using the STXL-6303E camera were performed under controlled conditions. They consisted of sequences of flat field and dark current images taken at a CCD temperature of -5 °C at several integration times. The image acquisition was done using the software CCDOps ver. 5.55. The flat field images were used to compute the linearity and gain of the CCD; the dark current images were used to calculate the readout noise. A white LED source was used to obtain a homogeneous illumination of the lab roof for the flat fields. The camera was pointed at the roof with proper lateral protection to avoid reflected light. In order to allow larger integration times, an astronomical broadband blue filter was placed in front of the CCD. The effective central wavelength of this filter is 4353 Å with a bandwidth of 781 Å. The attenuation was enough to perform integration times (IT) between 1 and 14 seconds (in steps of 0.5 s), covering the whole dynamical range of the CCD. A short shot of 0.4 s was also gathered. For each IT, 5 different shots were taken. The 4.0 s sequence was incidentally lost and was not considered in the analysis below. Dark current images were acquired with the lights off and the entrance of the camera closed. Again, 5 shots per IT were taken for 0.4, 1, 2, 3, 5, 8, 11, and 14 s. All the images were acquired including the overscan mode of the CCDOps software. This makes it possible to include and measure the electronic offset associated with the readout process. This is done on each image in a proper section of inactive pixels. This section is defined by the manufacturer, and for the CCD KAF-6303E it is indicated in Fig. 1 (bottom). All the images were reduced using the IRAF 2.16 software on a Linux/Ubuntu 12.04 operating system. The first step was to perform the overscan correction and to select the optimal region for the data (or trimming section). These sections are indicated in Table 1. Flat field and dark images were then corrected by overscan, subtracting the mean value computed on the overscan region for each individual image.
Fig. 2 shows a histogram of a typical overscan region. The mean value is approximately 1000 ADU. After the overscan correction, cropping was applied using the trimmed section to recover the operational working area of the chip in each image. In principle, a zero bias level may still prevail after the overscan correction. This zero-level calibration is obtained by taking images with zero integration time. In other words, this level accounts for the electronic offset on the active pixels. Several zero bias images must be taken and then properly combined to get an averaged zero bias image. In our case, the CCDOps software only allows 0.4 s as the minimum integration time. We therefore used the 0.4 s dark current images, after overscan correction, as representative of the zero bias level. The sequence of 5 images was averaged, and Fig. 3 shows the result. A residual mean bias value of ~11 ADU is detected. After that, all the flat field and dark images were corrected by subtracting this mean zero bias from each image. The 5-image sequences of dark current images were then combined and averaged for each integration time. To test the linearity, two approaches were used with the flat fields. The first included only the mean zero bias correction. The second considered only the dark current correction. This let us determine whether the temperature dependence of the dark current is important for the linearity of the detector at the temperature used in these tests. With this, each 5-image sequence of flat fields for each type of correction was properly combined and averaged.
RESULTS The linearity test [7] was performed on a homogeneous subsection of the flat field images (as seen in Fig. 4). The mode of the pixels in this subsection of the chip was computed for each averaged flat image and for each available integration time. This step was done for both flat field corrections described above. The results are shown in Table 2. Only 8 points are available with the dark current correction because only eight dark current sequences are available. The linear fit for the flat fields corrected by dark current is plotted in Fig. 6. Again, the coefficient of determination (R² = 99.999%) indicates a sharp linearity for the chip. The effect of the dark current correction does not seem important for the temperature used in these tests (-5 °C). In principle, scientific data can be reduced with only the overscan and bias corrections when dark current frames are not available. Of course, flat fielding is also necessary for a complete calibration process. As shown in Eq. 1, the signal (S) on each pixel can be represented by Poisson statistics after correction for the readout and dark current noises, i.e., N² = S. This applies when the signal and the noise are counted in electrons. As the gain (g) relates electrons and ADUs, the above expression can be written as (gN')² = gS', or N'² = (1/g) S', where N' and S' are in ADUs. Therefore, by measuring the signal and its noise (both in ADUs) for different signal levels (or integration times, in our case), the inverse of the slope gives the gain. The final noise is computed on the difIT image because the subtraction eliminates all the noise sources except the Poisson noise. The statistics are indicated in Table 3.
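As an illustration of this mean/variance method (detailed further in the next paragraphs), here is a minimal sketch, not the paper's actual reduction scripts: the array names, the use of NumPy, and the synthetic flats are our assumptions. The paper fits N'² versus S' over several integration times and takes the inverse slope; the sketch collapses that to a single flat pair and neglects readout noise.

```python
import numpy as np

def gain_from_flat_pair(flat1: np.ndarray, flat2: np.ndarray) -> float:
    """Estimate the gain (e-/ADU) from two flats with equal integration time.

    S' = mean of the averaged image (ADU); N'^2 = var(flat1 - flat2) / 2 (ADU^2).
    Since N'^2 = S'/g for Poisson-dominated noise, the gain is g = S' / N'^2.
    """
    avg = 0.5 * (flat1 + flat2)      # signal image
    diff = flat1 - flat2             # removes fixed structure, keeps photon noise
    signal_adu = avg.mean()
    noise_sq_adu = diff.var() / 2.0  # /2 because differencing doubles the variance
    return signal_adu / noise_sq_adu

# Example with synthetic Poisson flats at ~30,000 e- and a true gain of 1.65 e-/ADU
rng = np.random.default_rng(0)
g_true = 1.65
f1 = rng.poisson(30000.0, size=(500, 500)) / g_true
f2 = rng.poisson(30000.0, size=(500, 500)) / g_true
print(round(gain_from_flat_pair(f1, f2), 2))  # recovers ~1.65
```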
In order to get an estimate of the gain [8], we compute the average and the difference images of two flats with the same integration time: avgIT = (flat1IT + flat2IT) / 2 and difIT = (flat1IT - flat2IT). We used the flat fields corrected by dark current for this exercise. The statistics were also computed in the same subsection used for the linearity tests. The average image represents the signal (S'), S' = avgIT, and the variance (i.e., the square of the standard deviation) of the difIT image divided by 2 is associated with the squared signal noise (N'²): N'² = variance(difIT) / 2 = stddev²(difIT) / 2. The slope of the linear fit in Fig. 7 yields the gain for the chip (g = 1.654 ± 0.012 e-/ADU). The nominal gain, as it appears in the manufacturer's specifications, is 1.47 e-/ADU, and the difference with respect to our value is 12%. Our tests are in reasonable agreement with the nominal gain. In order to compute the readout noise of our chip, we used the dark current images obtained at several integration times [9]. Five readout noise images were constructed for each IT by subtracting each individual dark frame from the averaged dark current image, following RON_IT,i = darkIT_avg - darkIT_i. A total of (8 × 5 =) 40 RON images were computed. As each dark current image only includes the thermal signal, without incident light on the chip, the subtraction of the averaged image leaves only one term of the total noise in Eq. 1, the readout noise R. With this, the mean value of each RON image is of course close to zero, but its standard deviation represents the readout noise. The statistics were computed over the full frame. Fig. 8 shows the readout noise computed for all the dark current images following the above procedure, using images without overscan correction. Considering the 40 RON images, the mean readout noise value is 7.382 ± 0.034 ADU. The dispersion is small and the result is very precise. The same exercise using dark current images with overscan correction yields a similar result. In order to compare with the readout noise given by the manufacturer (11 e-, [9]), we convert this value to ADU using the manufacturer's gain (11 e- / (1.47 e-/ADU) = 7.48 ADU). The accuracy of our result is evident. Finally, if we transform our RON result using our own gain value computed before, we obtain RON = 12.2 e-. This compares very well with the manufacturer's value, the difference being only 11%.
CONCLUSIONS We present calibration tests of the SBIG STXL-6303E camera. These include a verification of the main specifications of the CCD chip, such as linearity, gain, and readout noise. Our tests indicate a rigorously linear detector (R² = 99.99%) between 3 and 90% of its full well capacity. The gain and readout noise computed in this work are slightly higher (by ~11-12%) than the manufacturer's values. The gain computed was 1.654 ± 0.012 e-/ADU and the readout noise was 12.2 e-. In general, these values are in agreement with the nominal values for this detector. The performed calibration tests show the feasibility of the chip KAF-6303E for precise measurements for astronomical purposes.
Fig. 2. Histogram for the overscan region. The mean value is 1002.00 ± 10.08 ADU. The error is the sample standard deviation.
Fig. 3. (top) Mean zero bias image. (bottom) Histogram of the mean zero bias image. The mean value along the chip is 11.04 ± 3.95 ADU. The error is the sample standard deviation.
Fig. 4. Averaged flat field image (for IT = 14 s) corrected by overscan and dark current, in false color to highlight the typical pattern. The black box indicates the 500x500 pixel subsection where the mode of the signal was computed for the linearity tests. The linearity of the chip KAF-6303E including the bias correction is plotted in Fig. 5; the dynamical range sampled goes from 2k to 61k ADU, and the reference full well capacity for this chip is 100k e-.
Fig. 5. Linearity tests for the CCD KAF-6303E detector using flat field images at several integration times. Data include the overscan and bias corrections. Mean values for a regular section of 500x500 pixels are shown (black dots). The coefficient of determination for the linear fit (red) is also shown (R² = 99.998%).
Fig. 7. Gain for the CCD KAF-6303E. The gain computed from the inverse of the slope is 1.654 ± 0.012 e-/ADU.
Fig. 8. Computation of the readout noise for the CCD KAF-6303E. The standard deviations of the differences between individual darks (without overscan correction) and their mean averaged images are shown for several integration times (black dots). The mean values for each integration time are also shown (red dots). The mean standard deviation (7.382 ± 0.034 ADU) over all the measurements is indicated (solid line) along with its one-sigma dispersion (dotted lines). With the gain value (1.654 e-/ADU) computed before, the mean readout noise is 12.209 e-.
Table 2. Flat field statistics. The signal is measured by the mode.
3,217.2
2016-12-01T00:00:00.000
[ "Physics" ]
Potassium Iodide Doping for Vacancy Substitution and Dangling Bond Repair in InP Core-Shell Quantum Dots This work highlights the novel approach of incorporating potassium iodide (KI) doping during the synthesis of In0.53P0.47 core quantum dots (QDs) to significantly reduce the concentration of vacancies (i.e., In vacancies; VIn−) within the bulk of the core QD and inhibit the formation of InPOx at the core QD–Zn0.6Se0.4 shell interfaces. The photoluminescence quantum yield (PLQY) of ~97% and full width at half maximum (FWHM) of ~40 nm were achieved for In0.53P0.47/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 core/multi-shell QDs emitting red light, which is essential for a quantum-dot organic light-emitting diode (QD-OLED) without red, green, and blue crosstalk. KI doping eliminated VIn− in the core QD bulk by forming K+-VIn− substitutes and effectively inhibited the formation of InPO4(H2O)2 at the core QD–Zn0.6Se0.4 shell interface through the passivation of phosphorus (P)-dangling bonds by P-I bonds. The elimination of vacancies in the core QD bulk was evidenced by the decreased relative intensity of non-radiative unpaired electrons, measured by electron spin resonance (ESR). Additionally, the inhibition of InPO4(H2O)2 formation at the core QD and shell interface was confirmed by the absence of the {210} X-ray diffraction (XRD) peak intensity for the core/multi-shell QDs. By finely tuning the doping concentration, the optimal level was achieved, ensuring maximum K-VIn− substitution, minimal K+ and I− interstitials, and maximum P-dangling bond passivation. This resulted in the smallest core QD diameter distribution and maximized optical properties. Consequently, the maximum PLQY (~97%) and minimum FWHM (~40 nm) were observed at 3% KI doping. Furthermore, the color gamut of a QD-OLED display using R-, G-, and B-QD functional color filters (i.e., ~131.1%@NTSC and<EMAIL_ADDRESS>provided a nearly perfect color representation, where red-light-emitting KI-doped QDs were applied. Introduction Among various luminescent materials, quantum dots (QDs) have been intensively researched owing to their simple wavelength tunability, high efficiency, high stability, and high color purity [1][2][3][4][5][6][7][8][9][10].Consequently, in recent decades, research on the application of QDs in fields such as solar energy [11][12][13][14], bioimaging [15,16], and displays [10,[17][18][19][20][21] has been vigorously pursued.In particular, luminescent materials for display applications, such as quantum-dot organic light-emitting diodes (QD-OLED) and quantum-dot light-emitting diodes (QLED), are required to achieve a photoluminescence quantum yield (PLQY) of >95% and a full width at half maximum (FWHM) of <30 nm [1,10,[22][23][24].In addition, recent studies have explored the integration of QDs with GaN-based LEDs for full-color displays.For instance, Zhou et al. [25] investigated high-efficiency GaN-based green LEDs utilizing InGaN quantum wells with varying indium content, demonstrating improvements in light output power and reduction in efficiency droop.Fan et al. 
[26] examined the development of efficient full-spectrum WLEDs through monolithic integration of III-nitride quantum structures via bandgap engineering, achieving notable advancements in color rendering and luminous efficacy.These studies indicate ongoing efforts to enhance display technologies by combining QDs with GaN-based LEDs.For instance, quantum dots based on II-VI materials, such as cadmium selenide (CdSe) QDs, exhibit excellent optical properties, including a high PLQY (>95%), high stability, and a narrow FWHM (>25 nm) [2,[27][28][29][30].However, owing to environmental concerns, the use of Cd in display applications has been restricted according to the specifications of the European Restriction of Hazardous Substances Directive (RoHS) [31][32][33][34].In contrast, luminescent nanomaterials for display applications based on III-V materials, such as indium phosphide (InP), comply with RoHS standards.In particular, InP-based QDs, which are III-V materials with a bulk bandgap energy of approximately 1.35 eV [35], allow for the adjustment of their energy from nearblue (~2.5 eV) to near-infrared (~1.7 eV) through control of the core diameter [36][37][38][39].Although the environmentally friendly InP-based QDs have been actively researched, their high lattice covalency presents a challenge for achieving a high PLQY of >95%, leading to an inherently low PLQY for InP-based QDs [3,[40][41][42][43][44][45][46].Lattice covalency, which can be characterized by the Phillips' ionicity, represents the quantified value of the type of chemical bonding between ionic bonding (i.e., higher Phillips' ionicity) and covalent bonding (i.e., lower Phillips' ionicity) [47,48].The Phillips' ionicity of InP-based QDs is relatively low due to the covalent bonding between In and P atoms, resulting in a higher lattice covalency compared with CdSe-based QDs.The Phillips' ionicity values for InP and CdSe are 0.421 and 0.699, respectively [49].A high lattice covalency of InPbased QDs inherently requires a high growth temperature (i.e., >300 • C) and a reactive P 3− precursor (i.e., tris(trimethylsilyl) phosphine; (TMS) 3 P) [10,47,[50][51][52][53][54][55].InP-based QDs grown at a high temperature exhibit internal defects such as vacancies in the InP core QDs (i.e., In − vacancies (V In − ) and P + vacancies (V P + ) in the core QDs) during nucleation and growth [56][57][58], resulting in a low PLQY and wide FWHM [53,[59][60][61][62]. Recent advancements in doping methods for InP QDs have shown promising improvements in their optical properties.For instance, various metal impurities such as Cu, Ag, and Au have been successfully introduced into semiconductor nanocrystals, demonstrating control over the bandgap and Fermi energy, which significantly influences the photoluminescence (PL) and electronic properties of the QDs [63,64].The introduction of dopants during the synthesis process or post-synthesis treatment has been explored to enhance the PLQY and stability of QDs by minimizing the non-radiative recombination sites through improved crystallinity and surface passivation [64].Additionally, doping strategies involving surface ligand exchange and electrochemical doping have been reported to tailor the electronic and optical properties of QDs, thus optimizing their performance in various applications including display technologies and bioimaging [63,64].In addition, the surface of the InP-core QDs can be readily oxidized, resulting in InPO x [4,50,56,[65][66][67][68]. 
Thus, optimal hydrofluoric acid (HF) treatment is generally introduced prior to shell growth [10,[69][70][71][72][73][74][75][76][77].Moreover, halide ion diffusion and passivation after InP/ZnSeS/ZnS core/shell growth were applied to inhibit the oxidation of the InP core surface, reduce the number of interface defects between the InP core and ZnSeS shell and between the ZnS outer shell, and diminish vacancies such as V In − and V P + [78].However, the action mechanism has not been elucidated.In our study, doping with a metal halide (i.e., potassium iodide; KI) was precisely designed during In x P 1−x core synthesis, followed by optimal HF treatment and the multishell growth of a Zn 0.6 Se 0.4 /Zn 0.6 Se 0.1 S 0.3 /Zn 0.5 S 0.5 nanolayer, as shown in Figure 1a.Note that the In x P 1−x core QDs and Zn 0.6 Se 0.4 /Zn 0.6 Se 0.1 S 0.3 /Zn 0.5 S 0.5 multi-shells were precisely designed to maximize the PLQY (i.e., ~97%) and minimize the FWHM (i.e., ~40 nm) under 622 nm red-light emission.As shown in Figure 1a, the In x P 1−x core doped with an optimal concentration of KI had a diameter of 4.1 nm ± 0.5 nm.Following the shell growth process described in the next Section 2.2, Zn 0.6 Se 0.4 , Zn 0.6 Se 0.1 S 0.3 , and Zn 0.5 S 0.5 multi-shell layers were grown, resulting in a final core-shell QD structure with dimensions of approximately 7.5 nm.Moreover, Figure 1b,c schematically represent the concept of improving optical properties through our designed doping process, illustrating the aim to obtain structure; Figure 1b shows the In x P 1−x core with internal vacancies and their substitution by KI doping, while Figure 1c demonstrates the passivation of surface oxidation, specifically inhibiting the formation of oxidized InPO x on the core QD surface through KI doping.In particular, among metal halides, KI was selected to minimize the ionic-radius mismatch between vacancies (i.e., V In − or V P + ) and metal halides (i.e., K + or I − ), minimizing the degree of non-radiative recombination, where the ionic radii of In 3+ , P 3− , K + , I − , V In − , and V P + were 80, 212, 138, 220, 155, and 100 pm, respectively [79,80], as shown in Figure 1b. 
In addition, the effect of K + or I − doping on the passivation efficiency of V In − and V P + in only the In x P 1−x core QDs was investigated as a function of the KI doping concentration. The core QD average diameter and diameter distribution were investigated using high-resolution transmission electron microscopy (HR-TEM), and the relative vacancy concentration was investigated using electron spin resonance (ESR). Moreover, to examine the effect of the KI doping concentration on the photoelectric performance enhancement, the photo-optical properties of the In x P 1−x /Zn 0.6 Se 0.4 /Zn 0.6 Se 0.1 S 0.3 /Zn 0.5 S 0.5 core/multi-shell QDs were estimated as a function of the KI doping concentration by measuring the emission wavelength, PLQY, FWHM, and exciton lifetime using time-resolved photoluminescence (TRPL) spectra. Furthermore, to determine the mechanism whereby KI doping during core synthesis significantly enhanced the photo-optical performance, the dependence of the crystalline properties of the core/multi-shell QDs on the KI doping concentration was precisely characterized using X-ray diffraction (XRD). Finally, KI-doped InP-based red-light-emitting QDs were applied in a hybrid-display application that combined quantum-dot functional color filters (QDCF) and a blue OLED backlight unit (BLU) [81][82][83][84]. The color gamut performance was evaluated by comparing the CIE1931 x,y color coordinates with the Rec.2020 color standard and
the National Television System Committee (NTSC) color standard [85][86][87].The NTSC standard, established in 1953, defines a color gamut based on the RGB color model for CRT displays, while Rec.2020, introduced by the International Telecommunication Union (ITU) in 2012, encompasses a significantly larger range of colors for Ultra-High-Definition (UHD) television.The absence of crosstalk between the primary colors was confirmed by measuring the polarized R-, G-, and B-light photoluminescence (PL) spectra. Preparation of Stock Solutions For the synthesis of red-light-emitting QDs, a 0.316 M Zn(st) 2 precursor was prepared by dissolving 4.74 mmol of Zn(st) 2 in 15 mL of ODE, and the mixed solution was degassed and heated using the same conditions.A 0.2 M solution of (TMS) 3 P was prepared in a N 2 -filled glovebox by combining 2 mmol of (TMS) 3 P with 10 mL of TOP.A 1.79 M Se-TOP mixed solution was prepared in an N 2 -filled glovebox by dissolving 17.9 mmol of Se powder in 10 mL of TOP.A 0.1M HF-acetone mixed solution was prepared by dissolving 1.4 mmol of HF in 14 mL of acetone.After that, all mixed solutions were degassed at 200 • C for 30 min in an N 2 -filled glovebox. Synthesis of KI-Doped Red-Light-Emitting In x P 1−x Core QDs For the synthesis of KI-doped In 0.53 P 0.47 core QDs, 0.65 mmol of In(OAc) 3 , 0.002 mmol of KI, and 1.95 mmol of PA were loaded into a 100 mL, 3-neck flask with 14 mL of ODE at RT.The mixed solution in flask was heated up to 150 • C with stirring and degassed under a vacuum of 100 mTorr for 60 min.Afterward, the degassed mixed solution in flask was heated up to 320 • C to obtain a colorless transparent In(PA) 3 solution.After that, the mixed solution in syringe of 1.63 mL of 0.2 mM (TMS) 3 P precursor was rapidly injected into the flask at 320 • C. Next, the KI-doped In x P 1−x core was grown for 10 min at the same temperature.And then, to obtain red-light-emitting KI-doped In x P 1−x core QDs, the mixed solution in flask was cooled down to RT and then centrifuged twice with acetone to eliminate impurities generated from unreacted precursors and byproducts.Finally, the precipitated QDs were redispersed in 5 mL of toluene.internal quantum efficiency (IQE) values were converted to percentages and represented as whole numbers.Moreover, PLQY was measured using an integrating sphere setup, which calculates the ratio of the number of photons emitted to the number of photons absorbed, ensuring accurate absolute quantum efficiency measurements.Measurements were performed using a 150 mm integrating hemisphere in a liquid sample state, with fluorescein solution at 493 nm excitation wavelength as the reference.The internal quantum efficiency (yield) of fluorescein was calculated as 0.903 (concentration: 6.43 × 10 −6 mol•L −1 ), matching the literature value [88].The reference measurement was followed by the sample measurement, and a correction for re-excitation was applied to determine the final internal quantum efficiency. In particular, the exciton lifetime of the QDs was determined by using time-correlated single-photon counting measurements with single-photon avalanche diodes (PDL Series, from PicoQuant, Berlin, Germany) and a HydraHarp 400 multichannel (from PicoQuant, Berlin, Germany) picosecond event timer module.Furthermore, the crystallinity and crystal structure of the core/multi-shell QDs were characterized by X-ray diffraction (XRD). 
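As a brief illustration of this absolute, photon-ratio definition of PLQY (the de Mello-style integrating-sphere calculation), the sketch below shows how the ratio could be evaluated. It is not the instrument's software; the spectra, wavelength grid, and numerical values are illustrative assumptions only.

```python
import numpy as np

def plqy_from_sphere(wl_nm, excitation_blank, excitation_sample,
                     emission_sample, emission_blank):
    """Absolute PLQY = photons emitted / photons absorbed.

    All spectra are photon-counted intensities on the same uniform wavelength grid,
    recorded inside the integrating sphere with (sample) and without (blank) the QDs.
    """
    dwl = wl_nm[1] - wl_nm[0]
    absorbed = np.sum(excitation_blank - excitation_sample) * dwl  # drop of the excitation peak
    emitted = np.sum(emission_sample - emission_blank) * dwl       # gain in the emission band
    return emitted / absorbed

# Toy usage with made-up spectra (450 nm excitation, ~622 nm emission band)
wl = np.linspace(400.0, 750.0, 701)
exc_blank = np.exp(-0.5 * ((wl - 450.0) / 3.0) ** 2)
exc_sample = 0.4 * exc_blank                                       # 60% of excitation photons absorbed
em_blank = np.zeros_like(wl)
em_sample = 0.58 * (3.0 / 17.0) * np.exp(-0.5 * ((wl - 622.0) / 17.0) ** 2)
print(round(plqy_from_sphere(wl, exc_blank, exc_sample, em_sample, em_blank), 2))  # ~0.97
```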
Dependence of Defect Passivation Efficiency on KI Dopant Concentration for KI-Doped In x P 1−x Core QDs To estimate the effect of KI doping on the passivation of vacancies (i.e., V In − or V P + ) and surface defects (i.e., oxidation of P-dangling bonds), only KI-doped In x P 1−x core QDs were synthesized by varying the KI doping concentration from 1% to 7% ([KI/Indium precursor] molar ratio).The actual KI doping concentrations were confirmed by ICP-AES analysis, showing a linear increase in the K + /In 3+ molar ratio from 0 to 0.02 as the KI doping concentration increased from 0% to 7%, as detailed in the Figure S4.The dependence of the crystalline properties of the KI-doped core QDs on the KI doping concentration was examined using HR-TEM, as shown in Figure 2. The undoped core QDs were well crystallized, with a zincblende structure having a distance of 3.42 Å between {111}.The average QD diameter and diameter distribution were 3.7 and ±1.0 nm, as shown in Figure 2a.Note that the core average diameter and its deviation were determined by measuring the diameters of well-dispersed core QD particles within a defined window size of 100 nm × 100 nm.This window size was used to ensure a representative sample of the particle population.By measuring a sufficient number of particles (at least 200 particles) within this window, we obtained the average diameter and the standard deviation, reflecting the size distribution.The KI doping in the core QDs significantly increased the average core QD diameter from 3.7 to 4.6 nm, and all the KI-doped core QDs were well crystallized with a zincblende structure, having a distance of 3.38-3.42Å between {111}, as shown in Figure 2a-f.This result would indicate that the dissociated K + and I − not only substitute for V In − and V P + but also produce interstitial K + and I − within the In x P 1−x core QDs.The ionic radii of In 3+ , P 3− , K + , I − , V In − , and V P + were 80, 212, 138, 220, 155, and 100 pm, respectively.When the KI doping from 0% to 3%, the diameter distribution of the core QDs narrowed from ±1.0 to ±0.5 nm, as shown in Figure 2a-c,f.Then, it considerably widened from ±0.5 to ±1.4 nm when the KI doping concentration increased from 3% to 7%, as shown in Figure 2c-f.The minimum diameter distribution of the In x P 1−x core QDs was observed at a specific KI doping concertation (i.e., 3% KI).The results from increasing the KI doping concentration from 0% to 3% indicated that substituting K + and I − for V In − and V P + enhanced the uniformity of the core QD diameter distribution, as a reduction in V In − and V P + leads to more homogeneous QD growth.However, the increase in the KI doping concentration further deteriorated the uniformity of the QD diameter distribution because the generation of K + and I − interstitials, instead of substitutes, resulted in inhomogeneous In x P 1−x core QDs. 
To determine why the minimum diameter distribution of the In x P 1−x core QDs was observed at a specific KI doping concentration (i.e., 3% KI) via K + and I − substitutes, the dependence of the integrated ESR signal on the KI doping concentration was investigated for only KI-doped core QDs. Note that electromagnetic waves within the GHz (microwave) range were utilized in ESR spectroscopy. The signals arose from the interaction between the unpaired electrons in the sample and the applied external magnetic field, which was attributed to the Zeeman effect [89-94]. During the synthesis of the In x P 1−x core QDs, the internal defects such as V In − and V P + or K + and I − interstitials in the core bulk, as well as In- and P-dangling bonds on the core QD surface, can generate observable electron spin states in ESR analysis. These spin states, which are associated with the electrons captured around such internal defects and dangling bonds, indirectly reveal the chemical environment or structural imperfections related to V In − and V P + . Furthermore, the intensity of these signals is indicative of the concentration of internal defects and dangling bonds; in other words, a higher ESR signal intensity suggests a higher concentration of internal defects and dangling bonds. The integrated ESR signal intensity was calculated by integrating the ESR signal over the magnetic field between 3400 and 3650 mT, because the internal defects and dangling bonds produced a g-factor ranging from 1.909 to 1.939, as shown in Figure 3a. The g-factor, also known as the Landé g-factor, describes the ratio of the magnetic moment to the angular momentum, which determines the interaction of particles
with an external magnetic field. In ESR spectroscopy, the g-factor provides insights into the electronic environment of the sample, indicating the presence of unpaired electrons associated with defects [94]. In the context of InP quantum dots, bulk vacancy defects such as negatively charged indium vacancies (V In − ) and phosphorus-related dangling bonds (P-X) on the surface can significantly influence the ESR signals. These defects generate characteristic ESR signals, allowing for the identification and quantification of the defect concentration. The integrated ESR signal intensity decreased remarkably from 5.54 to 0.896 a.u. when the KI doping concentration increased from 0% to 3%. Subsequently, it increased slightly and became saturated when the KI doping concentration increased from 3% to 7%, as shown in Figure 3b. Thus, the lowest integrated ESR signal intensity was observed at a specific KI doping concentration (i.e., 3% KI).
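For reference, the relation commonly used to convert a resonance field into a g-factor is sketched below; it is not spelled out in the text, and the spectrometer's microwave frequency ν is left symbolic here as an assumption:

```latex
% Standard ESR resonance condition used to extract the Lande g-factor.
% h: Planck constant, \nu: microwave frequency, \mu_B: Bohr magneton, B_res: resonance field.
h\nu = g\,\mu_B B_{\mathrm{res}}
\quad\Longrightarrow\quad
g = \frac{h\nu}{\mu_B B_{\mathrm{res}}}
```

For a fixed microwave frequency, lower g-values within the reported 1.909-1.939 window therefore correspond to resonances at slightly higher magnetic fields.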
To comprehend why the integrated ESR signal intensity was minimized at a KI concentration of 3%, it is essential to theoretically analyze the dependence of this signal intensity on the number of internal defects and dangling bonds as a function of the KI doping concentration. During the growth of In x P 1−x core QDs, In 3+ and P 3− were derived from In(OAc) 3 and TMS 3 P, respectively, to facilitate QD synthesis. Meanwhile, the dopant KI dissociated into K + and I − with a dissociation energy of 3.4 eV [95]. During synthesis, K + and I − can replace V In − and V P + within the core QD bulk and passivate the In- and P-dangling bonds on the surface of the core QDs. The K + -V In − substitutes in the In x P 1−x core QD bulk would be preferentially produced over I − -V P + substitutes, as the diameter difference between K + and V In − is smaller than that between I − and V P + . In addition, because the surface ligands PA with carboxylic acid functional groups produce H 2 O in the synthesis solution [65,68,96,97], H 2 O chemically oxidizes the oxyphilic P-dangling bonds of the core QD surface, readily resulting in InPO x formation on the surface of the core QDs [4,50,56,[65][66][67][68]. The presence of I − in the synthesis solution can passivate P-dangling bonds and produce P-I bonds, prohibiting the formation of InPO x on the core QD surface. Note that the oxyphilicity of P is 0.7, whereas that of I is almost 0; thus, the P-I bonds located on the surface of the In x P 1−x core QDs are difficult to oxidize [65,67,68]. Because the average diameter of In x P 1−x core QDs increased almost linearly with an increase in the KI doping concentration, as shown in Figure 2, the number of dangling bonds on the core QD surface would increase with the KI doping concentration, scaling as the surface area 4π(diameter/2)². Thus, the degree of P-I passivation did not increase linearly with the KI doping concentration, yielding the minimum amount of InPO x formation on the core surface at a specific KI doping concentration (i.e., 3%), as proven later. The diameter distribution of the In x P 1−x core QDs was minimized at a specific KI doping concentration (i.e., 3%), indicating that the PLQY was maximized and the FWHM was minimized at the KI doping concentration of 3%. A narrower diameter distribution suggests a more mono-disperse QD population, leading to decreased variation in emission wavelengths, a narrower PL spectrum (FWHM), and an enhanced color purity. Conversely, a broader diameter distribution indicates a poly-disperse population. During the annealing step at 320 °C for the core nucleation and growth process, Ostwald ripening occurs, wherein smaller particles dissolve and redeposit onto larger particles due to their higher solubility [98]. This process not only increases the average particle size but also broadens the size distribution, resulting in a redshift of the PL peak and an increased FWHM, ultimately degrading the optical properties of the QDs. The number of K + -V In − substitutions increases with increasing KI doping concentration up to 3%, while the number of K
As a result, the PLQY peaks and the FWHM is minimized at a doping concentration of 3%, since a smaller diameter distribution of the InxP1−x core QDs results in a higher PLQY and a narrower FWHM [4,98-100]. Therefore, KI doping during the synthesis of InxP1−x core QDs can reduce the amount of VIn−, dominantly in the core QD bulk, and inhibit surface oxidation (i.e., InPOx) of the core QDs, thereby increasing the PLQY and significantly narrowing the FWHM. The maximum positive effects (i.e., an increase in PLQY and a decrease in FWHM) occur at the optimal KI doping concentration of 3%.

To evaluate the effects of KI doping on the photo-optical properties of InxP1−x/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 core/multi-shell QDs, the absorption and photoluminescence spectra, light-emitting wavelength, PLQY, and FWHM were measured with respect to the KI doping concentration of the InxP1−x core QDs. The PLQY measurements were conducted using the QE-2100 system, which determines the quantum yield by comparing the number of photons emitted to the number of photons absorbed by the sample. The KI-doped core QDs were sequentially subjected to optimal HF treatment and Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 multi-shell growth. Note that the optimal HF treatment proceeds by sequentially centrifuging the KI-doped core QDs, redispersing the QDs in an organic solvent, injecting HF (i.e., 0.14 mmol), and heating the QD solution up to 210 °C to grow the multi-shell structure. The first absorption peak redshifted linearly from 597 to 601 nm when the KI doping concentration increased from 0% to 7%, indicating that the average diameter of the core QDs increases with the KI doping concentration; this is strongly correlated with the dependence of the average diameter of the InxP1−x core QDs on the KI doping concentration in Figure 2f, as shown in Figure 4a and Table S1. The wavelength of the red-light emission increased almost linearly from 617 to 631 nm as the KI doping concentration in the core QDs increased from 0% to 7%, indicating that the energy bandgap of the KI-doped core QDs decreased with increasing KI doping concentration, as shown in Figure 3a,b. Moreover, the red-light emission occurred after excitation by Xe lamp light at a wavelength of 450 nm. Comparing the peak absorption wavelength with the red-light-emitting wavelength revealed that the Stokes shift increased from 21 to 30 nm as the KI doping concentration in the core QDs increased from 0% to 7%, implying that photon energy loss to the lattice atoms was enhanced as the energy bandgap of the KI-doped core QDs decreased with increasing KI doping concentration. However, the PLQY does not depend on the emission wavelength. Instead, the emission wavelength is more significantly influenced by the core diameter of the QDs and by the bandgap and thickness of the shelling material, which define the quantized quantum well. The PLQY, on the other hand, is determined by the efficiency with which the absorbed energy is re-emitted as photons at the band edge, rather than being trapped by defects or lost through non-radiative recombination processes due to vibrations or other factors. As the KI doping concentration in the core increases, the emission wavelength redshifts, primarily due to the increase in particle size, as observed in HR-TEM images.
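As a quick check on the numbers above, the Stokes shift can be expressed in energy as well as in wavelength using the standard approximation E(eV) ≈ 1240/λ(nm). The short Python sketch below recomputes the shifts from the reported absorption and emission peak wavelengths; only the wavelength values are taken from the text, and the 1240 eV·nm conversion constant is the usual approximation.

```python
# Stokes shift from reported absorption/emission wavelengths (values from the text).
HC = 1240.0  # eV*nm, approximate photon-energy conversion constant

samples = {
    "undoped (0% KI)": (597.0, 617.0),  # (first absorption peak, PL peak) in nm
    "7% KI":           (601.0, 631.0),
}

for label, (lam_abs, lam_em) in samples.items():
    shift_nm = lam_em - lam_abs
    shift_mev = (HC / lam_abs - HC / lam_em) * 1000.0  # energy lost to the lattice
    print(f"{label}: Stokes shift = {shift_nm:.0f} nm ({shift_mev:.0f} meV)")
```

The larger energy shift at 7% doping is consistent with the stronger photon energy loss to the lattice noted above.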
At the optimal KI concentration (3%), a minimization of vacancies and interstitial defects was observed, leading to a more uniform core size distribution. This uniform nucleation and growth resulted in the narrowest FWHM and the synthesis of highly uniform cores. By growing a shell layer on these uniformly prepared cores, growth steps such as Ostwald ripening, which can adversely affect optical properties, were minimized. Consequently, the PLQY reached its maximum value (~97%). This high PLQY indicates that the process of photon re-emission from the band-edge states was highly efficient, with minimal losses to non-radiative pathways. In addition, the PLQY increased notably from 74% to 97% when the KI doping concentration increased from 0% to 3%. Subsequently, it decreased significantly from 97% to 77% when the KI doping concentration increased from 3% to 7%. Thus, it was maximized at a KI doping concentration of 3%, i.e., 97%, as shown in Figure 4b. In addition, the FWHM decreased rapidly from 43 to 40 nm when the KI doping concentration increased from 0% to 3%. Subsequently, it increased from 40 to 44 nm when the KI doping concentration increased from 3% to 7%. Thus, it was minimized at a KI doping concentration of 3%, i.e., 40 nm, as shown in Figure 4b. In summary, while the emission wavelength redshift with increasing KI concentration is primarily due to particle size growth, the enhanced PLQY at the optimal doping concentration is a result of minimized defects and uniform core-shell structures that promote efficient radiative recombination.

Furthermore, the exciton lifetime of the InxP1−x/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 core/multi-shell QDs was measured using TRPL spectroscopy as a function of the KI doping concentration in the InxP1−x core QDs, as shown in Figure 4c. Note that a longer exciton lifetime via TRPL implies a higher degree of radiative exciton recombination. In particular, the τ1 component is associated with band-edge transition emissions, while the τ2 component corresponds to defect-associated emissions; an increase in the value of τ1 indicates a higher contribution from band-edge emissions [101], resulting in a higher PLQY [102,103]. The biexponential decay function used for fitting is

I(t) = A1 exp(−t/τ1) + A2 exp(−t/τ2),

where I(t) is the PL intensity at time t, A1 and A2 are the amplitudes, and τ1 and τ2 are the lifetimes of the fast and slow components, respectively. The average exciton lifetime (τavg) was then calculated using the equation

τavg = (A1τ1² + A2τ2²)/(A1τ1 + A2τ2).

Moreover, to compare the fractions of band-edge transitions and defect-associated emissions in the total lifetime, we calculated the fractions using

f1 = A1τ1/(A1τ1 + A2τ2) and f2 = A2τ2/(A1τ1 + A2τ2).

The detailed PL decay curve components extracted for each condition can be found in Table S2. When the KI doping concentration increased from 0% to 7%, the average exciton lifetime (τavg) increased from 43 ns to a peak of 49 ns at 3% KI doping and then decreased to 45 ns. The proportions of the τ1 and τ2 components showed that, for the optimal 3% KI doping, the τ1 fraction increased from 49% to 51%, while the τ2 fraction decreased from 51% to 49%. This indicates an enhancement in band-edge transition emissions and a reduction in defect-associated emissions at the optimal doping concentration. Thus, τavg was maximized at a KI doping concentration of 3%, i.e., 49 ns, as shown in Figure 4d. As expected, the exciton lifetime was well correlated with the PLQY, i.e., a longer exciton lifetime corresponded to a higher PLQY.
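To illustrate how the tabulated decay components could be extracted in practice, the sketch below fits a biexponential to a decay trace with scipy and derives τavg and the f1/f2 fractions from the fitted amplitudes and lifetimes. It is a minimal sketch: the arrays `t` and `intensity` stand in for an actual TRPL trace, and the initial guesses are illustrative values, not parameters from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Biexponential PL decay: I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Placeholder data: replace with a measured TRPL trace (time in ns).
t = np.linspace(0, 400, 800)
intensity = biexp(t, 0.6, 20.0, 0.4, 70.0) + np.random.normal(0, 0.005, t.size)

# Fit; p0 holds illustrative initial guesses for (A1, tau1, A2, tau2).
(a1, tau1, a2, tau2), _ = curve_fit(biexp, t, intensity, p0=(0.5, 10.0, 0.5, 50.0))

# Amplitude-weighted average lifetime and component fractions,
# following the equations given in the text.
w1, w2 = a1 * tau1, a2 * tau2
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (w1 + w2)
f1, f2 = w1 / (w1 + w2), w2 / (w1 + w2)
print(f"tau_avg = {tau_avg:.1f} ns, f1 = {f1:.0%}, f2 = {f2:.0%}")
```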
For the In0.53P0.47/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 core/multi-shell QDs, both the PLQY and average exciton lifetime were maximized (i.e., 97% and 49 ns, respectively), and the FWHM was minimized (i.e., 40 nm) at a specific KI doping concentration (i.e., 3%).

It could not be conclusively explained whether the PLQY, exciton lifetime, and FWHM were directly associated with VIn−, K+, and I− interstitials in the core QDs, with P-dangling bonds at the interface between the In0.53P0.47 core and the Zn0.6Se0.4 shell, or with both. Thus, for the core/multi-shell QDs, the crystalline properties were characterized using XRD as a function of the KI doping concentration in the core QDs. This could delineate the presence of P-dangling bonds at the interface between the InxP1−x core and the Zn0.6Se0.4 shell, as will be demonstrated. For the undoped InxP1−x core QDs grown with the multi-shell layers, an unknown crystalline plane peak and the {111}, {220}, and {311} peaks were observed at 2θ = 19.8°, 28.2°, 46.9°, and 55.7° (blue XRD intensity line in Figure 5a), indicating a typical zincblende crystalline structure (JCPDS 32-0452) of the core and multi-shell QDs [104]. Surprisingly, according to JCPDS 01-072-0144, the unknown crystalline plane detected at 2θ = 19.8° corresponds to indium phosphate dihydrate (InPO4(H2O)2), which has an orthorhombic crystalline structure. The existence of InPO4(H2O)2 in the core/multi-shell QDs revealed oxidation on the surface of the InxP1−x core QDs, even though HF treatment was performed before multi-shell growth. This result clearly proves that P-dangling bonds at the interface between the InxP1−x core and the Zn0.6Se0.4 shell were present and were chemically oxidized by H2O generated from the palmitic acid surface ligands. In contrast, for the KI-doped core QDs grown with multi-shell layers, the XRD peak intensity at {210} of InPO4(H2O)2 significantly decreased from 114 to 21 a.u. when the KI doping concentration of the core QDs increased from 0% to 3%.
It then considerably increased from 21 to 117 a.u. when the KI doping concentration increased from 3% to 7%, as shown in Figure 5a,b. Thus, the XRD peak intensity at {210} of InPO4(H2O)2 was minimized at a specific KI doping concentration (i.e., 3%). In addition, the XRD signal peak intensities corresponding to {111}, {220}, and {311} remained at ~300, ~150, and ~80 a.u., respectively, when the KI doping concentration in the core QDs increased from 0% to 7%, and they slightly decreased with a further increase in the KI doping concentration, as shown in Figure 5a,b. Thus, a KI doping concentration of >3% during core QD growth slightly degrades the zincblende crystalline properties. This result clearly demonstrates that the chemical oxidation degree of the P-dangling bonds on the InxP1−x core QD surface during core growth can be significantly reduced by passivating the P-dangling bonds with I− (i.e., forming P-I bonds) up to a KI doping concentration of 3%. Beyond this concentration, the passivation effect was noticeably diminished, as the increase in the number of P-dangling bonds significantly exceeded the increase in their passivation degree (i.e., P-I bonds). Note that the average diameter of the InxP1−x core QDs increased linearly and significantly with an increase in the KI doping concentration during core QD growth, as shown in Figure 2f. Therefore, the number of P-dangling bonds on the core QD surface was proportional to 0.75[CKI]², where CKI is the KI doping concentration, as shown in Figure S6. Meanwhile, the passivation degree of the P-dangling bonds was linearly proportional to the KI doping concentration.
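The competition described above, with dangling bonds growing roughly quadratically in the doping concentration (0.75·CKI²) while P-I passivation grows only linearly, is by itself enough to produce a minimum in the unpassivated-bond count at an intermediate concentration. The sketch below evaluates this trade-off numerically; the linear passivation coefficient `k` and the zero-doping baseline are assumed illustrative values chosen so the minimum falls near 3%, not fitted parameters from the study.

```python
import numpy as np

c_ki = np.linspace(0.0, 7.0, 701)       # KI doping concentration in %
dangling = 0.75 * c_ki**2                # P-dangling bonds ~ 0.75 * C_KI^2 (from the text)
baseline = 2.0                           # assumed bond count at 0% doping (illustrative)
k = 4.5                                  # assumed linear passivation coefficient (illustrative)
unpassivated = baseline + dangling - k * c_ki  # net oxidizable P-dangling bonds

c_min = c_ki[np.argmin(unpassivated)]
print(f"Unpassivated-bond minimum at C_KI ~ {c_min:.1f}%")  # ~3.0% with these constants
```

Analytically, minimizing 0.75·C² − k·C gives C = k/1.5, so any linear passivation rate fixes a single optimal concentration, mirroring the single optimum observed experimentally.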
A comparison of Figures 4b and 5b revealed that the dependence of the XRD signal peak intensity at {210} (i.e., InPO4(H2O)2) on the KI doping concentration was well consistent with the dependence of the PLQY, exciton lifetime, and FWHM on the KI doping concentration of the core QDs. In other words, a smaller amount of InPO4(H2O)2 at the interface of the InxP1−x core QD and the Zn0.6Se0.4 shell led to a higher PLQY, a longer exciton lifetime, and a narrower FWHM. These correlation results evidently prove that KI doping of the core QDs can inhibit the chemical oxidation (i.e., InPO4(H2O)2) of the surface of the core QDs prior to HF treatment, thereby significantly enhancing the PLQY and exciton lifetime and narrowing the FWHM [105,106]. Moreover, comparing Figure 2f with Figures 3b and 4d reveals that the average diameter and distribution of the InxP1−x core QDs evidently affected the number of non-radiative recombination events, the PLQY, and the FWHM. The PLQY, along with the exciton lifetime and FWHM, peaked at a specific average diameter of the core QDs (i.e., 4.1 nm) and a minimum core QD diameter distribution of ±0.5 nm, corresponding to the KI doping concentration of 3%. These correlations confirm that the number of internal defects, such as VIn−, VP+, K+ interstitials, and I− interstitials, in the InxP1−x core QDs directly influences the PLQY and FWHM. The optimal reduction of VIn− (via K+-VIn−) occurred at the KI doping concentration of 3%, showing a slightly increased average diameter (i.e., from 7.2 to 7.5 nm) and the minimum diameter distribution (i.e., from ±1.6 to ±0.9 nm). Thus, the maximum PLQY (i.e., 97%) and the minimum FWHM (i.e., 40 nm) were observed at the optimal KI doping concentration (i.e., 3%). However, over-doping of KI (i.e., >3%) increased the number of K+ and I− interstitials in the InxP1−x core QDs; thus, both the average diameter and the diameter distribution of the core QDs increased significantly, leading to an increase in the number of P-dangling bonds on the core QD surface. As a result, the PLQY decreased and the FWHM increased significantly beyond a KI doping concentration of 3%. Therefore, the above correlations clearly demonstrate that the PLQY and FWHM were principally and preferentially determined both by the reduction of VIn− via the formation of K+-VIn− substitutes in the InxP1−x core QD bulk and by the inhibition of the chemical oxidation of P-dangling bonds (i.e., InPO4(H2O)2) via the formation of P-I bonds.

The application of red-light-emitting KI-doped In0.53P0.47/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 QDs, which were carefully designed for precisely tuned optical properties (i.e., a narrow FWHM of ~40 nm and a high PLQY of ~97%) through the KI-doping process, demonstrated a significant improvement in color representation.
The color representation performance of the QDCF-OLED hybrid displays was assessed to evaluate their potential as ultra-high-definition (UHD) displays by comparing their color gamut coverage to the NTSC and Rec. 2020 standards [85], as shown in Figure 6. The structure of the QDCF-OLED hybrid display consisted of an emitting layer (QD functional CF) on the top and a backlight unit (blue-OLED BLU) on the bottom, as shown in Figure 6a. The top emitting layer, called a QD functional CF, was patterned with a mixture of optimized red-light-emitting In0.53P0.47/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 core/multi-shell QDs doped with 3% KI, green-light-emitting InP/ZnSe/ZnSeS/ZnS core/multi-shell QDs, and blue-light-emitting ZnSe/ZnS core/shell QDs with conventional R-, G-, and B-color filters. The B-, G-, and R-CFs coated on quartz glass used in this study exhibited broad transmittance spectra at 371-563 nm for blue, 478-595 nm for green, and >570 nm for red, with peak transmission at 451 nm for blue, 527 nm for green, and >631 nm for red, as shown in Figure 6b. In addition, the optical properties of the InP-based G-QDs and ZnSe-based B-QDs were characterized using PL spectroscopy. The blue and green peak wavelengths were observed at 449 and 538 nm, with FWHMs of 32 and 36 nm and absolute PLQYs of 71% and 94%, respectively, as shown in Figure S7. The bottom layer of the BLU contained a blue-light-emitting OLED with a PL spectrum that peaked at 446 nm and an FWHM of 96 nm. The properties of the blue OLED device are detailed in Figure 4b and the Supplementary Materials (see Figure S9).

The light-emitting mechanism of the B-, G-, and 3% KI-doped R-QD functional CFs was associated with energy-down-shift (EDS), which absorbed the blue-light energy from the blue-OLED BLU and emitted R-, G-, and B-light through the R-, G-, and B-QDCFs, respectively [107-109]. The PL peak wavelengths and FWHMs for the R-, G-, and B-emitting light were 635 and 28 nm, 533 and 22 nm, and 447 and 29 nm, respectively, as shown in Figure 6c and Table 1. The QDCF-OLED hybrid display clearly demonstrated the absence of crosstalk between the B- and G-light emissions, as well as between the G- and R-light emissions. In particular, the QD functional-CF OLED employing 3% KI-doped R-QDs presented a significantly narrower FWHM of 28 nm. The FWHM of red-light emission via the QD functional-CF OLED for 3% KI-doped R-QDs (i.e., 28 nm) was reduced by ~6 nm compared with that for undoped R-QDs (i.e., 34 nm), as shown in Table 1. In addition, to assess the color-representation performance of high-resolution displays, the color gamut performance of the R-, G-, and B-QD-functionalized CF using a blue OLED BLU was estimated. By comparing the color gamut performance of devices fabricated with undoped core/multi-shell QDs and those doped with 3% KI, the enhanced color representation achieved through doping was clearly demonstrated. The color gamut coverage of the QD functional-CF OLED with respect to the NTSC and Rec. 2020 standards [86,87,89] was 125.8% and 94.2% for devices using undoped R-QDs, and 131.1% and 98.1% for devices using 3% KI-doped R-QDs, as shown in Figure 6d and Table 1. The color gamut for the QD functional-CF OLED using 3% KI-doped R-QDs thus showed improvements of 5.3% and 4% for the NTSC and Rec. 2020 standards, respectively, compared with the undoped R-QD functional-CF OLED. This result, compared with recent research results summarized in Table 1, indicates that significantly reducing the FWHM of KI-doped In0.53P0.47 core QDs grown with multiple shells for red-light emission can substantially enhance the color gamut (i.e., NTSC 131.1%, Rec. 2020 98.1%) of QD functional CF-OLED hybrid displays using KI-doped R-, G-, and B-QD functional CFs and a blue OLED BLU, demonstrating improved color gamut coverage.
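Gamut figures such as those above are typically computed as the ratio of the area of the display's RGB primary triangle to the area of the reference triangle in a chosen chromaticity space. The sketch below does this with the shoelace formula in CIE 1931 (x, y) coordinates; the `display_rgb` primaries are placeholder values for illustration, not the measured chromaticities of the device, while the NTSC primaries are the standard 1953 ones.

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle in (x, y) chromaticity space."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# CIE 1931 primaries of the NTSC 1953 standard (R, G, B).
ntsc = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]

# Placeholder display primaries: substitute the measured QDCF-OLED values.
display_rgb = [(0.70, 0.29), (0.17, 0.76), (0.15, 0.05)]

coverage = triangle_area(*display_rgb) / triangle_area(*ntsc) * 100.0
print(f"Gamut area ratio vs. NTSC: {coverage:.1f}%")
```

Strictly speaking, "coverage" sometimes refers to the overlap area between the two triangles rather than the raw area ratio; the raw ratio, which can exceed 100%, is shown here for simplicity.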
Conclusions

For recently developed QD-OLED displays for TVs and monitors, the PLQY and FWHM of In0.53P0.47 core QDs grown with red-light-emitting Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 multi-shells must essentially be maximized and minimized, respectively. Internal defects in the core QD bulk (i.e., vacancies and interstitials) and dangling bonds at the core-ZnSe shell interface predominantly reduce the PLQY and broaden the FWHM.

In our study, KI doping during the growth of In0.53P0.47 core QDs significantly reduced the concentration of vacancies (i.e., VIn−) within the core QD bulk. This reduction in vacancies was evidenced by a decreased relative intensity of non-radiative unpaired electrons in the core QDs. Consequently, the average diameter of the In0.53P0.47 core QDs increased by 0.4 nm, and the diameter distribution narrowed by ±0.5 nm compared with undoped In0.56P0.44 core QDs, owing to the substitution of VIn− with K+ from KI doping during synthesis. The optimal KI doping concentration was found to be 3%, beyond which interstitial defects (i.e., K+ and I−) were generated, reducing the PLQY and increasing the FWHM due to an increase in non-radiative unpaired electrons and dangling bonds. In addition to reducing vacancies, KI doping effectively inhibited oxidation of the core QD surface during synthesis. P-dangling bonds on the surface of the core QDs were chemically oxidized by H2O from the surface ligands, forming InPO4(H2O)2, identified by the presence of the {210} peak in core QDs grown with multi-shells, which significantly reduced the PLQY and increased the FWHM. The dissociated I− from the doped KI passivated the P-dangling bonds on the QD surface by forming P-I bonds, significantly reducing InPO4(H2O)2 formation and enhancing the PLQY. The optimal KI doping concentration (3%) maximally inhibited InPO4(H2O)2 formation at the interface between the In0.53P0.47 core QD and the Zn0.6Se0.4 shell. Over-doping of KI (>3%) led to an increase in the average core QD diameter, which, in turn, increased the number of P-dangling bonds, ultimately resulting in higher InPO4(H2O)2 formation.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano14121055/s1. Table S1: Summary of optical properties of undoped and KI-doped red-light-emitting InP-based core/shell QDs; Table S2: Components of biexponential decay fitting for TRPL measurements of InP-based core/multi-shell QDs with varying KI doping concentrations.

Figure 1. Effects of KI doping on the InxP1−x core QDs. (a) Structure of red-light-emitting InxP1−x/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 core/multi-shell QDs, (b) substitution of vacancy defects in the zincblende crystalline structure of the InxP1−x core QDs, and (c) passivation of P-dangling bonds on the InxP1−x core QD surface.

Figure 2. InxP1−x core QDs' average diameter and diameter distribution depending on the KI doping concentration. HR-TEM images of the core QDs with different KI doping concentrations: (a) undoped, (b) 1% KI, (c) 3% KI, (d) 5% KI, (e) 7% KI, and (f) average InxP1−x core QD diameter and diameter distribution for various KI doping concentrations.
Figure 3. Relative unpaired electron concentration of the InxP1−x core QDs depending on the KI doping concentration. (a) ESR spectra with the g-factor of KI-doped core QDs and (b) relative integrated ESR signal under a magnetic field between 3400 and 3650 mT.

Figure 4. Optical properties of KI-doped red-light-emitting InxP1−x/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 core/multi-shell QDs depending on the KI doping concentration. (a) Absorption and PL spectra and (b) light-emitting wavelength, PLQY, and FWHM. (c) Normalized time-resolved PL decay curves and (d) relative exciton lifetime, including the calculated average lifetime (τavg) for each KI doping concentration and the fractions of the components (τ1% and τ2%).

Figure 5. Crystallinity of InP-based core and core/multi-shell QDs depending on the KI doping concentration. (a) XRD patterns of KI-doped red-light-emitting InxP1−x/Zn0.6Se0.4/Zn0.6Se0.1S0.3/Zn0.5S0.5 core/multi-shell QDs, and (b) relative XRD peak intensities on the crystalline planes.

3.3. Color Gamut Performance of the QD Functional CF-OLED Hybrid Display Using KI-Doped R-, G-, and B-QD Functional CFs and a Blue OLED BLU

Figure 6. Color gamut properties of the QD functional-CF OLED. (a) Schematics of the QDCF OLED; (b) PL spectra of the blue OLED and transmittance spectra of the B-, G-, and R-CFs; (c) PL spectra of the QD functional-CF OLED using B-, G-, and KI-doped R-QDCFs and a blue OLED BLU; and (d) comparison of the RGB primary color triangles between the QD functional-CF OLED and the KI-doped R-QDCF OLED under the blue OLED in the CIE 1931 color space.

Table 1. Optical characteristics, CIE coordinates, and color representation of the QD functional-CF OLED.
12,558
2024-06-01T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
An efficient method for network security situation assessment

Network security situation assessment, the core task of network security situational awareness, obtains the security situation by comprehensively analyzing the various factors that affect network status. Thus, network security situation assessment can provide accurate security state evaluation and security trend prediction for users. Although plenty of network security situation assessment methods have been proposed, many problems remain to be solved. First, because of the high dimensionality of the input data, the computational complexity of model construction can be very high. Moreover, most of the existing schemes trade computational overhead for accuracy. Second, due to the lack of a centralized standard, the weights of indicators are usually determined empirically or by the subjective opinions of domain experts. To solve the above problems, we propose a novel network security situation assessment method based on a stack autoencoding network and a back propagation neural network. To reduce the data storage overhead and improve computational efficiency, we use the stack autoencoding network to reduce the dimensionality of the indicator data, and the low-dimensional data output by the hidden layer of the stack autoencoding network serve as the input of the error back propagation neural network. Then, the back propagation neural network algorithm is adopted to perform network security situation assessment. Finally, extensive experiments are conducted to verify the effectiveness of the proposed method.

Introduction

With the prevalence of big data, the amount of services provided by the Internet has witnessed explosive growth. 1 This is due to the extension of Internet applications and their integration into various fields, such as national defense, the military, and public transportation. However, network security incidents occur frequently, and the techniques used in network attacks are becoming more and more complex. As a result, how to accurately and effectively evaluate security status has become a hot research topic in the field of network security, as it relates to the stability and security of network operation. 2 Therefore, it is necessary to adopt a holistic approach to effectively deal with situational awareness data. Thus, network security situational awareness (NSSA) emerges. 3 NSSA, 4 first proposed by T. Bass, provides decision-makers with knowledge of the most critical assets, threats, and related vulnerabilities, along with effective countermeasures and risk-mitigation technologies, so that they can respond to threats correctly and in a timely manner. 5 Network security situation assessment is the core of NSSA technology; it comprehensively analyzes all kinds of uncontrollable security factors and provides information about the current network security situation. When network threats arrive, proactive defense measures can be taken to ensure timely protection of network security. Network security situation assessment determines the performance of NSSA techniques, and it is of great importance for comprehensively understanding the state of the network environment, detecting network security issues, and handling network threat events. Network security situation assessment is a crucial part of network security and a useful technique for understanding the status and performance of a network, which is important for network management.
Network security situation assessment has been applied in many fields, such as electric power information networks, 6 naval systems, 7 aviation cyber security, 8 and vehicle networks. 9 Existing network security situation assessment methods can be summarized into the following categories: methods based on mathematical models (MM), approaches based on knowledge reasoning (KR), and methods based on pattern recognition (PR). 10 As a PR-based method, the back propagation neural network (BPNN) has a flexible network structure and a strong non-linear mapping ability. Depending on the specific situation, the number of intermediate layers and the number of neurons in each layer can be set arbitrarily. However, existing NSSA data exhibit characteristics such as complex structure, multiple sources, and massive volume. As a result, the high dimensionality of the input data leads to high complexity of model construction, huge CPU costs in model training, slow training speed, and numerous parameters, which ultimately affect the efficiency of the method. Therefore, it is necessary to perform dimensionality reduction to avoid the curse of dimensionality, improve computational efficiency, and reduce the probability of overfitting. The main contributions of this article are summarized as follows:

1. A stack autoencoding network (SAE) is used to reduce the dimensionality of non-linear data and the complexity of model construction before performing security situation assessment;
2. A loss function is used to determine the number of autoencoding network layers to ensure the information integrity of the data;
3. BPNN is adopted to perform network security situation assessment, and the contextual relevance of network security is fully taken into consideration.

The remainder of this article is organized as follows. In section ''Related work,'' we review the related work, and we give some preliminaries in section ''Preliminaries.'' Then, we propose the network security situation assessment method based on SAE + BPNN in section ''Proposed model.'' In section ''Experimental study,'' we briefly introduce the experimental environment and the data we used, and the experimental results are analyzed in detail. Finally, we conclude the article in section ''Conclusion.''

Related work

Network security situation assessment methods are usually divided into MM-, KR-, and PR-based approaches. Evaluation methods based on MM consider various factors to evaluate the situation, aiming to evaluate the network situation from different angles. Chen et al. 11 established a hierarchical quantitative assessment model for network security threat situations based on a bottom-up, local-first, and global strategy. Their method calculates the threat indicator by weighting the importance of attacks, services, hosts, and the whole network layer by layer, thus evaluating the security threat situation. Li et al. 12 used fuzzy c-means clustering and optimal clustering criteria to process the data, thus obtaining the optimal clustering center and number of clusters. Moreover, they combined the analytic hierarchy process (AHP) to establish a multi-factor two-level assessment model to obtain the final situation assessment result. Wang et al. 6 proposed a hierarchical chaos simulated annealing (CSA) method based on AHP and gray cluster analysis (GCA). In their method, AHP is used to build a hierarchical CSA to determine the weight of every threat, while GCA is used to build the standard layer of the indexing system. Bian et al. 13 proposed a multi-level fuzzy comprehensive network security situation evaluation model based on an improved AHP and the fuzzy comprehensive evaluation method.
Note that traditional network situation assessment methods cannot effectively assess the security situation under distributed denial of service (DDoS) attacks. Zhang et al. 14 proposed a DDoS attack security situation assessment model based on the fused features of fuzzy clustering algorithms. Their model can reasonably and effectively evaluate the security status under DDoS attacks and is more flexible than non-fuzzy methods. However, MM-based methods rely on expert knowledge in the process of index selection, index weight determination, and model construction. As a result, the evaluation results are easily affected by subjective factors.

KR-based network security situation assessment methods assume that there is a certain degree of correlation between the network security situation and the state of the network, which is susceptible to the influence of historical and current information. They use the theory of evidence, mathematical statistics, and fuzzy theory to learn from historical prior information and current information to infer the current security status of the network. Based on semi-supervised naive Bayes (NB) classifiers, Xu et al. 15 proposed an improved algorithm based on the confidence of data classification, which can achieve situational classification of air combat data. Jin et al. 16 proposed a network security situation assessment model based on random forests (RF). Their method is based on the idea of combining multiple classifiers, each consisting of decision trees. Each tree depends on an independent sample, and all the trees in a forest have the same random vector distribution. To effectively evaluate the impact of DDoS attacks on the network situation, Li et al. 17 computed the indicators representing the network situation in each layer, and then fused the indicators with Dempster-Shafer (D-S) evidence theory to evaluate the impact. Fu et al. 18 improved the optimal fuzzy gray model using a modified gray model GM(1,1) with residuals, and the optimal fuzzy gray model is used for the prediction of the network security situation. The Markov model also has important applications in network security situation assessment. The schemes in the literature 19,20 fully consider the interaction between the attacker and the defender and propose network security assessment models based on the Markov decision process and game theory. To solve the problem that hidden Markov model (HMM) parameters are difficult to configure, Li and Li 21 proposed an improved situational assessment method based on the HMM, which establishes the HMM by obtaining the observation sequence and combines an improved simulated annealing (SA) algorithm with the Baum-Welch (BW) algorithm. The HMM parameters are optimized, and the security situation value of the network is obtained by quantitative analysis, which more accurately reflects the security situation of the network. Liu and Liu 22 used attack graphs to describe the causality of attack behaviors and combined the HMM to establish a probability mapping between the observation sequence and the attack state. Moreover, the Viterbi algorithm was used to calculate the maximum-probability state transition sequence.
Li and Zhoa 23 pointed out that the network evaluation time period is greatly affected by human factors, and that the HMM state transfer matrix and observation symbol matrix are often determined empirically. To solve these problems, they used a sliding time window mechanism to extract observation values, and a hybrid multi-population genetic algorithm is adopted to train the HMM parameters to improve accuracy. Although KR-based methods perform well when analyzing security problems on small, low-dimensional datasets, their evaluation efficiency is relatively low when dealing with massive high-dimensional data.

PR-based network security situation assessment methods assume that the network security situation results can be obtained according to the degree to which the data match. PR-based methods divide different security situation levels by learning the characteristics of the data and match the data against each of the divided results, thus obtaining the network security situation. To obtain a globally optimal solution, Shi and Chen 24 proposed a twin support vector machine (SVM) model for learning from command information system security situation sample data and for parameter estimation, so as to evaluate the command information system security situation. Gao et al. 25 proposed an artificial fish swarm algorithm to optimize an SVM-based information system security risk assessment model. Their method uses the artificial fish swarm algorithm to optimize the penalty coefficient C and the kernel function of the SVM. The experimental results show that the method has high accuracy and a fast convergence speed. Song et al. 26 proposed an information security situation assessment model based on a genetic algorithm that optimizes the weights and thresholds of a BPNN. Compared with the standard BPNN, the BPNN optimized by the genetic algorithm (GA-BP) has lower simulation error and a better fitting effect. Li et al. 27 and Dong et al. 28 used the cuckoo search algorithm to optimize BPNN parameters to prevent BP from falling into local extrema, thereby improving training speed and evaluation accuracy; compared with the genetic-BPNN algorithm, their approach achieves better training time, error, and accuracy. Luo and Liu 29 used rough set attribute reduction and took the sample memberships calculated by a fuzzy method as the input of the neural network and expert values as the expected output of the network to improve training speed and accuracy. Zhang et al. 30 proposed a situation assessment method based on a deep autoencoding network to address the dependence of BPNN methods on labeled data. The deep autoencoder (AE) is used as the basic unit to construct a deep AE network, and the deep AE network is trained with expert experience and hierarchical evaluation methods to form a model capable of accurately evaluating the input situation data. However, the classification results are mostly obtained through machine learning techniques, and the middle part of the algorithm is difficult to interpret.

SAE

An AE is an unsupervised learning algorithm driven by the input data, which performs feature extraction on the data through self-supervision without labels, thus producing data with reduced dimensionality, that is, fewer but more important features. The self-encoding neural network maps the input data to the hidden layer to realize data encoding. Then, the corresponding decoded data are obtained by mapping the encoded data back, and the decoded data are regarded as the output data.
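Since the experiments later in the paper use TensorFlow and Keras, a single-hidden-layer autoencoder matching this description can be sketched as follows. The 9-dimensional input and 4-dimensional code mirror the indicator data used later; the activations, optimizer, and training settings here are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from tensorflow import keras

INPUT_DIM, CODE_DIM = 9, 4  # indicator dimension and reduced dimension (from the paper)

# Encoder: h = f1(W x + b); decoder: y_hat = f2(W' h + b').
inputs = keras.Input(shape=(INPUT_DIM,))
code = keras.layers.Dense(CODE_DIM, activation="sigmoid", name="code")(inputs)
outputs = keras.layers.Dense(INPUT_DIM, activation="linear")(code)

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, code)  # reused later to feed the BPNN

# Squared-error loss, matching the linear-output case described in the text.
autoencoder.compile(optimizer="adam", loss="mse")

# Placeholder data: rows are normalized indicator vectors in [0, 1].
x = np.random.rand(1000, INPUT_DIM).astype("float32")
autoencoder.fit(x, x, epochs=50, batch_size=32, verbose=0)  # target equals input
codes = encoder.predict(x, verbose=0)  # 4-dimensional representations
```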
The encoder is composed of an input layer, a hidden layer, and an output layer, in which the mapping from the input layer to the hidden layer is called the encoding process, while the mapping from the hidden layer to the output layer is called the decoding process, as shown in Figure 1. In general terms, the input is x = [x1; x2; . . . ; xn]T ∈ R^(n×1), and the target output is y = x. We define the weight matrix from the input layer to the hidden layer as W (W ∈ R^(d×n)) with bias b = [b1; b2; . . . ; bd]T ∈ R^(d×1), and the weight matrix from the hidden layer to the output layer as W′ (W′ ∈ R^(n×d)) with bias b′ ∈ R^(n×1). The output of the hidden layer of the autoencoder can be expressed as

h = f1(Wx + b),

where f1 represents the activation function of the hidden layer; ReLU, Sigmoid, Tanh, etc. can be selected according to the specific application. The output of the output layer of the autoencoder can be expressed as

ŷ = f2(W′h + b′).

In the process of training the neural network, to reduce the number of parameters that need to be trained in the model, the following constraint is usually imposed:

W′ = W^T.

At this point, the learning model contains three sets of parameters, W, b, and b′, and the adjustment of the parameters u = {W; b; b′} of the learning model is mainly realized by minimizing the error function, u* = arg min_u J(u).

1. When f2 is the Sigmoid activation function, the error function of the autoencoding network can be expressed as the cross-entropy

J(u) = −Σ_{i=1..n} [x_i log ŷ_i + (1 − x_i) log(1 − ŷ_i)].

2. When f2 is a linear activation function, the error function of the autoencoding network can be expressed as the squared error

J(u) = (1/2) Σ_{i=1..n} (x_i − ŷ_i)².

The overall error function over m training samples can be expressed as

J_total(u) = (1/m) Σ_{k=1..m} J(u; x(k)).

BP training is carried out using the stochastic gradient descent method combined with the error function to update the network parameters. The rules for the parameter update are defined as follows (where η represents the learning rate):

W ← W − η ∂J/∂W,  b ← b − η ∂J/∂b,  b′ ← b′ − η ∂J/∂b′.

The SAE 31 is an improvement on the self-coding network; it is a network constructed by connecting several ordinary autoencoders in succession. As shown in Figure 2, it consists of several layers of autoencoding networks and a Softmax layer. The training of the stacked autoencoder includes the following steps:

1. Input the original data, use an AE to train on the input data to obtain the corresponding network parameters, encode the original data with the trained AE network, and take the encoded output as the output of the first hidden layer;
2. Take the output of Step 1 as the input and continue to use the training method of Step 1 to optimize and update the network parameters of this layer. Repeat this step until the last hidden layer is trained;
3. Take the output of Step 2 as the input and use the labels corresponding to the original input data to train and optimize the network parameters of the Softmax layer;
4. Calculate the loss cost function of all hidden layers and the Softmax layer, and the partial derivatives of each parameter in the network;
5. Take the initial network parameters calculated in Steps 1, 2, and 3 as the initialization parameters of the whole network. Meanwhile, use the loss cost functions and partial derivatives obtained in Step 4 to update the network parameters and realize the parameter optimization of the whole network.
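A compact rendering of Steps 1-2 (greedy layer-wise pretraining) is sketched below, reusing Keras as above. Each layer is trained as a one-layer autoencoder on the codes produced by the previous layer; the 9-7-4 layer sizes follow a configuration mentioned later in the experiments, while the training hyperparameters are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras

def pretrain_layer(x, code_dim, epochs=100):
    """Train one autoencoder layer on x and return (encoder, encoded x)."""
    inp = keras.Input(shape=(x.shape[1],))
    code = keras.layers.Dense(code_dim, activation="sigmoid")(inp)
    out = keras.layers.Dense(x.shape[1], activation="linear")(code)
    ae = keras.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(x, x, epochs=epochs, batch_size=32, verbose=0)
    enc = keras.Model(inp, code)
    return enc, enc.predict(x, verbose=0)

# Greedy layer-wise pretraining for a 9-7-4 stack (Steps 1-2).
x = np.random.rand(1000, 9).astype("float32")  # placeholder indicator data
encoders, h = [], x
for dim in (7, 4):
    enc, h = pretrain_layer(h, dim)
    encoders.append(enc)
# h now holds the 4-dimensional codes; a Softmax layer trained on labels
# plus end-to-end fine-tuning would complete Steps 3-5.
```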
BPNN

The error back propagation neural network, also simply called the BPNN, is shown in Figure 3. As a supervised learning algorithm, the BPNN mainly uses the error between the actual output value and the expected output value for back propagation, which adjusts the connection weights and threshold parameters of the neurons in each layer of the network. The network is iterated on the input data until the error function is reduced to the allowable range of the network, at which point training is stopped and the relevant parameters of the network are saved. The specific training steps of the BPNN are as follows:

1. Initialize the network. Assume that the input vector is x = [x1; x2; . . . ; xn]T.
2. Perform forward propagation layer by layer, computing o_i^(l) = f(net_i^(l)), where net_i^(l) is the input of the ith neuron in the lth layer, and f(·) is the activation function of the neuron. The non-linearity of the neural network is mainly reflected in the selection of its activation function. When a linear activation function is adopted, a multi-layer neural network is equivalent to the complex linear function formed by the composition of multiple linear functions. In the process of selecting the activation function, a non-linear function can be chosen to give the neural network a certain non-linear capability.
3. Calculate the error function of the output layer and the hidden layers. Given the training samples m = {(x(1), y(1)), (x(2), y(2)), . . . , (x(m), y(m))}, let d(i) be the expected output generated by the input x(i). The BP algorithm adopts the gradient descent method to adjust the weight parameters of each hidden-layer neuron to ensure that the actual output of the neural network is close to the expected output. With the batch update method, for the given training samples m, the error function is defined as

E = (1/m) Σ_{i=1..m} E(i),

where E(i) = (1/2) Σ_j (d_j(i) − o_j(i))² is the training error of a single sample, o(i) is the actual network output for x(i), and E is the sample population error.
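A minimal BPNN classifier consistent with these steps can again be written with Keras; stochastic gradient descent on the batch error then implements the weight-update rule above. The hidden-layer width, activation, learning rate, and epoch count are illustrative assumptions, and the 4-dimensional input anticipates the SAE codes used later as the BPNN input.

```python
import numpy as np
from tensorflow import keras

NUM_CLASSES = 2  # assumed: attack vs. normal situation labels

# A small BPNN: 4-dimensional SAE codes in, situation class out.
bpnn = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="sigmoid"),             # hidden layer
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),   # output layer
])
bpnn.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1),  # gradient descent
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder training data: SAE codes and their situation labels.
codes = np.random.rand(1000, 4).astype("float32")
labels = np.random.randint(0, NUM_CLASSES, size=1000)
bpnn.fit(codes, labels, epochs=100, batch_size=32, verbose=0)
```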
Proposed model

Network structure

Due to the high dimensionality and complexity of the data, existing neural-network-based evaluation methods use networks with many layers and many neurons. However, these methods are not efficient. In this article, we propose a network security situation evaluation method based on SAE and BPNN. SAE is an unsupervised learning algorithm, which is mostly used for data denoising, dimensionality reduction of sparse high-dimensional data, and so on. In the field of network security situation assessment, the indicator data are high-dimensional and sparse. We use SAE to reduce the data dimensionality while ensuring that there is no information loss in the indicator data, and combine it with BPNN to conduct network security situation assessment. Meanwhile, we select commonly used security situation assessment methods, such as SVM and NB, for auxiliary verification. The experimental results show that the method has a fast convergence rate in the training phase and high accuracy in the evaluation phase, which makes it convenient for administrators to understand the network security status accurately. SAE-BPNN uses the coded data output from the last hidden layer of the SAE network as the input of the BPNN, which not only preserves the non-linear relationships in the data but also reduces the dimensionality of the input data, as shown in Figure 4.

The SAE-BPNN algorithm

The specific process of the SAE-BPNN evaluation method is as follows (see Figure 5).

1. Indicator data extraction and normalization: most NSSA data are generated in the form of network traffic, alarm logs, and so on. It is necessary to extract the perception data according to the indicator system and then normalize the indicator data according to the corresponding normalization criteria.
2. SAE dimensionality reduction: first, the combined formula of the hidden layer and the output layer is deduced according to the autoencoding formulas in section ''SAE.'' The number of layers of the SAE is determined by the information loss rate: the single-layer information loss rate measures the deviation between each layer's input values x_ij and reconstructed output values y_ij (the jth input and output values of the layer-i network), and the n-layer comprehensive information loss rate aggregates the single-layer rates. The number of network layers can be determined according to the range of the loss value. Finally, once the number of SAE layers has been determined by the loss range, the output h_n of the N layers, with h^(0) = h_n, is used as the input of the BPNN for the next step of the calculation.
3. BPNN situation assessment: input the data after non-linear dimensionality reduction, together with the corresponding labels, into the BPNN, obtain the optimal model through multiple iterations, and evaluate the security situation.

Experimental environment

We conduct experiments on a machine equipped with an NVIDIA TITAN XP GPU, running the Ubuntu 18.04 operating system, Python 3.6, and PyCharm Community 2017.3. Meanwhile, we use TensorFlow 1.4.1, the Keras library, and the machine learning library scikit-learn for model training.

Experiment dataset

To verify the validity of the SAE-BPNN algorithm, we select the Coburg Intrusion Detection Dataset-001 (CIDDS-001) 32 of Coburg University of Technology as the research object. CIDDS is an evaluation dataset created for anomaly-based network intrusion detection systems. The basic idea behind CIDDS is to use OpenStack to create labeled flow-based datasets in a virtual environment. The network topology of the CIDDS-001 dataset is divided into an internal network and an external network, as shown in Figure 6 (CIDDS-001 network topology). The internal environment includes multiple clients and typical servers, such as e-mail servers and Web servers. Network attacks include denial of service (DoS), brute force attacks, and port scans. Since the origin, target, and timestamp of each executed attack are known, it is easy to label the recorded NetFlow data. The CIDDS-001 dataset has a total of 14 attributes, as shown in Table 1; for example, attribute 13, AttackID, is the attack ID (all traffic data belonging to the same attack carry the same attack ID), and attribute 14, AttackDescription, gives attack parameter information (e.g. the number of attempts to guess passwords for SSH brute force attacks).

In this experiment, the Week 2 external stream data of the CIDDS-001 dataset are selected for analysis, and the external stream data flow attacks on the second day are shown in Table 2. The relevant information can be extracted from Table 2: attacks are initiated both before and after 12 o'clock. Therefore, the data stream after 12 o'clock is selected for training, and the data stream before 12 o'clock is used for testing; the ratio of the training set to the validation set is 2:1. The normalization scheme of the experimental indicator system is classified as follows:

1. The maximum values of six types of indicators (e.g. data stream duration, number of used protocols, number of source addresses, number of destination addresses, number of network ports, and type of data stream) are within a certain range. The normalization scheme uses the extreme value method, where x_i is the current value of the indicator and x̃_i is the value after normalization of the indicator.
2. The number of transmitted packets, the number of transferred bytes, and the amount of suspicious data vary over a wide range; as a result, their maximum values cannot be determined. Therefore, the inverse cotangent function method is adopted in the normalization scheme, where x_i is the current value of the indicator and x̃_i is the value after normalization of the indicator.
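The two normalization rules can be written compactly as below. The exact formulas are not reproduced in the text, so this sketch assumes the usual choices: min-max scaling over the extreme values for the bounded indicators, and the arctangent mapping x̃ = (2/π)·arctan(x) for the unbounded ones; both map values into [0, 1].

```python
import numpy as np

def normalize_bounded(x):
    """Extreme-value (min-max) scaling for indicators with a bounded range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def normalize_unbounded(x):
    """Arctangent mapping for indicators whose maximum cannot be determined."""
    return (2.0 / np.pi) * np.arctan(np.asarray(x, dtype=float))

durations = normalize_bounded([0.2, 1.5, 3.1, 0.9])    # e.g. data stream duration
n_bytes = normalize_unbounded([120, 4096, 1_500_000])  # e.g. transferred bytes
print(durations, n_bytes)
```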
Experiment results

Dimension reduction part. We first need to determine the number of hidden layers in the SAE. Then, following the SAE-BPNN evaluation process, we normalize the indicator data and use the SAE to perform dimensionality reduction in the training model; the experimental parameter settings are given in Table 3. First, we select the number of hidden layers for the SAE. Considering the memory space occupied by data storage after dimensionality reduction, the analysis is performed according to the theoretical space occupied, the actually tested occupancy, and the proportion of data actually stored. The theoretical storage space of the data is shown in Table 4, and the actual storage space of the data in file storage is shown in Table 5. Note that the data are of float-64 type, and each datum occupies 8 bytes. The initial dimensionality is 9. Data footprint = (dimension × number of data items) × storage space per unit datum. For example, 1000 samples of 9-dimensional data take up 1000 × 9 × 8 bytes = 72,000 bytes. To compare the occupancy of data storage, SAEs with 1, 2, and 3 hidden layers are used to encode and reduce the dimensionality of the indicator data, respectively. The input dimension is 9 and the output dimension is set to 4. Specifically, the one-layer SAE maps input to hidden layer as 9-4, the two-layer SAE as 9-7-4, and the three-layer SAE as 9-7-6-4. As shown in Figure 7, during the actual data storage process the data are stored in an Excel file; when the data are reduced from 9 to 4 dimensions, the average storage space of 1000 data items is reduced from 73,728 bytes to 65,536 bytes, saving about 11% of the storage space.

Figure 8 shows the loss incurred when constructing SAEs with 1, 2, and 3 hidden layers. According to the analysis results, when the indicator data are encoded for dimensionality reduction, the loss value is close to 0 when the number of SAE hidden layers is 1 and the number of iterations (epochs) is 400, which indicates that after dimensionality reduction with the one-layer SAE, the output data can well restore the input data, and the information integrity rate of the input data is close to 100%. When the number of hidden layers of the SAE is 2 or 3, the loss value tends to stabilize at epoch = 200, but the loss value remains above 0.2; the SAE hidden-layer output data then lose more than 20% of the feature information of the original input data. Through experimental analysis, the one-layer SAE is finally selected to encode the indicator data, and the number of SAE iterations is set to 600.

Evaluation. The data after SAE dimensionality reduction are input into the BPNN, and the BPNN parameter settings are shown in Table 6. To verify the effectiveness of SAE + BPNN, BPNN and SAE alone are used for comparison in evaluating the network security situation. The test experiment selects the external data stream from 9 am to 12 am on the Tuesday of the second week of the CIDDS-001 dataset. There are three attacks, between 9:46 and 9:48, 10:14 and 10:30, and 11:33 and 12:00, and the experimental comparison results are shown in Figure 9.
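The auxiliary comparison against SVM and NB can be run with scikit-learn, which the experimental environment already includes. The sketch below trains each classifier on the same SAE-reduced codes and reports accuracy; the placeholder arrays and the default hyperparameters are illustrative, not the paper's exact settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier  # stands in for the BPNN
from sklearn.metrics import accuracy_score

# Placeholder SAE codes and labels; replace with the real reduced dataset.
rng = np.random.default_rng(0)
x_train, y_train = rng.random((800, 4)), rng.integers(0, 2, 800)
x_test, y_test = rng.random((400, 4)), rng.integers(0, 2, 400)

models = {
    "SAE+BPNN": MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000),
    "SAE+SVM": SVC(),
    "SAE+NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(x_train, y_train)
    acc = accuracy_score(y_test, model.predict(x_test))
    print(f"{name}: accuracy = {acc:.3f}")
```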
From Figure 9, we can see that although SAE alone can roughly determine the attack situation during network security situation assessment, its evaluation results fluctuate considerably. BPNN can accurately detect attacks, but it produces a false positive within 120-140 min. SAE + BPNN can accurately determine the time of each attack, and its evaluation is the most accurate, exactly identifying the attack times.

Evaluation performance analysis. In addition to the comparison experiment combining SAE with BP, we also select SVM and NB to analyze the security situation and verify the effectiveness of the proposed method. From Table 7, we can easily find that the proposed method achieves a certain improvement in accuracy compared with BPNN, and the combinations of SAE with NB and SVM also improve the evaluation accuracy. Meanwhile, from Table 8, we can see that the running time of the methods after applying SAE dimensionality reduction is less than that of the BP, NB, and SVM methods without dimensionality reduction.

Conclusion

In this article, we propose a network domain security situation assessment method based on SAE-BPNN. First, the proposed method extracts the indicator data of the network domain and normalizes them. Then, SAE is used for dimensionality reduction and feature extraction. Moreover, the network security situation value is calculated by the BPNN algorithm, which can evaluate the network domain security situation quantitatively. Finally, through a series of comparative experiments, we proved that the proposed method based on SAE and BPNN can accurately evaluate the security situation of the network domain. The method reduces the dimensionality of the input data while preserving its useful features, which reduces the storage overhead and computing resources and improves the evaluation efficiency.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural
6,640.2
2020-11-01T00:00:00.000
[ "Computer Science" ]
Research on B2C Online Marketing Mode based on Multimodel Fusion and Intelligent Big Data Analysis Method

The B2C online marketing mode is the development trend of future marketing. The key to improving the efficiency of such a mode is to make efficient use of current large-scale data and mine the corresponding potential value. To control the cost of the B2C online marketing mode, this paper analyzes the UJRP model and proposes a hybrid bat difference algorithm (BADE). Experimental results verify the effectiveness of the proposed method in diversity and cost control. Furthermore, we utilize a multimodel fusion strategy (linear weighted fusion) to achieve better cost performance in B2C online marketing than the BADE method alone. Finally, in a random and diverse online marketing environment, such an improved method can provide the decision-makers of the B2C online marketing mode with more flexible choices.

Introduction

At present, there are generally two channels through which an online marketing mode forms. One is the offline traditional marketing mode (such as physical stores), and the other is born directly on an Internet platform (such as Amazon and Tmall). No matter which channel forms the online marketing model, its business model is inseparable from the Internet e-commerce models B2B and B2C. E-commerce originated in Europe and America. In the initial stage, the B2C online marketing mode was difficult to develop due to the lack of resources and relevant technical knowledge. Fundamentally speaking, people's understanding of it in this period was only as a supplement to the traditional offline marketing model, existing merely as an offline subsidiary. The online marketing model was still in a period of fragmentation and accumulation and had not yet formed a scale effect. Compared with B2B (business to business), which relies on the exchange of products and information between enterprises through the Internet, B2C (business to customer) has unique advantages for the terminal retail link of industry and commerce. B2C is what we often call the online commercial retail model of selling products directly to consumers [1]. For example, in China, through traditional B2C websites such as Tmall, customers can not only place online orders and make appointments through the network but also complete order binding, online payment, and other consumption behaviors on the third-party trading platform. With the increasing amount of data, B2C has increasingly become an important branch promoting the development of the world's e-commerce industry and has brought new opportunities to all walks of life, especially the development and transformation of the traditional manufacturing industry [2]. The B2C online marketing mode should be regarded as a new business form born under the continuous penetration and drive of Internet technology after the traditional industrial and commercial industries developed to a certain stage. Although it is an emerging model, it still has inherent shortcomings. The operation and development of the B2C online marketing mode is always inseparable from the traditional marketing mode and the offline resources of suppliers [3]. Today, with the rapid development of all walks of life, how to make up for these shortcomings is the focus of future development.
The joint replenishment problem on e-commerce with operational efficiency loss is an extension of the joint replenishment problem (JRP); it applies the joint replenishment problem with uncertain demands (UJRP) to the B2C online retail industry. The purpose is to coordinate the purchase frequency of different commodities by grouping them, so as to better realize the coordinated operation of "purchase, storage, and sales" for online retailers and thus effectively reduce the total cost of system operation [4]. Therefore, combined with the operational practice of the B2C online marketing mode, this paper puts forward an integrated optimization model for multiproduct procurement that takes operational efficiency loss as its starting point and integrates and optimizes the three core supply-chain links of purchasing, storage, and sales, so as to effectively reduce the overall operating cost of the system and realize an efficient and accurate B2C online marketing mode combined with big data analysis. Against the background of "Internet +" and big data, we explore, from the perspective of data analysis, the construction of an efficient utilization system for the B2C online marketing mode, with a view to improving the quality of the offline marketing mode in the information environment, enriching the practice of the B2C online marketing mode, and accelerating the informatization of this emerging mode. Related Work The core question of the B2C online marketing mode is how to efficiently use the massive data generated online. With the improvement of the industry ecological chain and the intensification of industry competition, the strategy of multicommodity joint procurement and operation is widely used in the B2C e-commerce industry in order to improve service levels and reduce the loss of platform operating efficiency. More scholars have also extended JRP to the field of coordinated supply-chain operation, focusing on the coordination and optimization of the overall production operation of the supply chain with inventory control as the core [5]. Literature [6] studies the joint procurement strategy under a three-level supply chain. It can be seen that JRP gradually expands toward the vertical integration and optimization of the supply chain by adopting more realistic assumptions and integrating upstream and downstream operations [7]. However, relevant research still focuses on traditional manufacturing industries. In recent years, the deep adjustment of global economic patterns has promoted the optimization and upgrading of industrial structure in various countries, and many scholars have begun to pay attention to the application of JRP in more industries [8]. Cui et al. [9] proposed an integrated optimization model combining multiproduct procurement and distribution for a supply chain consisting of one distribution center and multiple retailers, which has strong practical guiding significance in the fast-moving consumer goods industry; in the agricultural sideline and pharmaceutical sales industries, the quality and quantity of products easily change as purchase and marketing progress. Qin et al. [10] studied multiproduct joint pricing and inventory control when the quality and quantity of fresh agricultural and sideline products decline simultaneously over time. However, at present, there are few JRP studies based on B2C e-commerce.
The existing relevant studies mostly assume that the third-party platform is a free service provider, ignoring the important impact of the platform on the actual operation of online retailers [11]. Another bottleneck in the research of such problems is the design of an efficient solution algorithm [12]. JRP has been proved to be an NP-hard problem [13]. In solving the model, heuristic and metaheuristic algorithms [14,15] are mainly used, such as the RAND method [16], power-of-two policies, and the genetic algorithm (GA) [17]. For problems requiring exact solutions, heuristic algorithms have advantages in optimization speed and solution accuracy [18], but for JRP some metaheuristic algorithms have greater advantages [19]. Wang et al. used a differential evolution (DE) algorithm to solve a JRP model with fuzzy structure [17]. The algorithm shows good robustness and efficiency in solving the JRP model; however, as the problem scale increases, the accuracy and robustness of the DE algorithm weaken. More researchers have tried to introduce intelligent algorithms into the solution [19]. The bat algorithm (BA) is a swarm intelligence algorithm: inspired by the echolocation behavior of bats, it models the optimization and search process as the movement and prey-search process of a bat population [20]. At first, the algorithm was mainly used to solve structural design and optimization problems in the engineering field. Owing to its simple implementation and few parameters, it has gradually been applied to NP-hard problems in other fields, such as production scheduling, optimal location, and pattern recognition [20], but the robustness of BA is not as good as expected. Therefore, this paper proposes a two-stage hybrid bat difference algorithm (BADE), which differs from some existing hybrid algorithms in that it does not destroy the inherent evolution process of either algorithm. The two algorithms are combined to carry out a rough search and a fine search, respectively, for different types of decision variables, producing a joint effect that integrates the advantages of both to a certain extent. Method The relationship between the B2C online marketing mode and the traditional marketing mode can be summarized in Figure 1. The traditional marketing model focuses on the offline marketing system and has the service-experience and guarantee advantages of facing consumers directly, which is very important in the product supply chain. Although the online marketing model is growing rapidly, it cannot completely replace the role of traditional offline marketing stores, especially in terms of offline product supply and service-experience guarantees; the online marketing model lacks this immediate and consistent capability. The marketing of packaged and combined products in the online marketing mode, such as group buying, package pricing, and special customization, often depends on the offline marketing mode. Conversely, the offline traditional marketing model often also has an online marketing strategy. This choice and cooperation will eventually build consumer stickiness toward the marketed products. In this closed-loop model, offline traditional marketing, online marketing, product suppliers, and consumers are all indispensable components. Model Cost. The classical JRP model mainly studies multiproduct procurement decisions under deterministic demand; the demand assumption is relatively simple [21,22].
It is assumed that the annual fixed demand frequency of commodity i is D_i, the basic replenishment cycle of each business is t, and the joint replenishment frequency of commodity i is k_i. The inventory holding cost in the purchase cycle can be expressed through h_i and is given by formula (1). The corresponding ordering cost can be expressed as formula (2), which includes the primary ordering cost and the secondary ordering cost. The primary ordering cost is the fixed cost generated by each order; this cost is independent of how the joint orders are grouped. The secondary ordering cost accounts for the variable cost of each ordering process. Then, the total cost of the JRP model is the sum of the inventory cost and the ordering cost. With the help of mobile Internet technology and big data analysis, the B2C online marketing environment is more complex and changeable, and customer needs are more fragmented and randomized. UJRP introduces the assumption of random demand on top of JRP, so the problem is closer to real-life needs. Due to the uncertainty of actual demand, UJRP adds a shortage cost C_s to measure this uncertainty [23]. This, however, rests on the assumption that demand in the B2C online marketing mode is independent, identically distributed, and Gaussian; the maximum inventory level RI in the observation period can then be expressed accordingly, and based on this assumption the shortage cost C_s follows, where F(z_i) is the cumulative distribution function of the demand for commodity i and f(z_i) is the corresponding probability density function. Then, the total cost of the UJRP model is the sum of the ordering cost, the inventory holding cost, and the shortage cost. Model Design. As a population-based algorithm, the bat algorithm (BA) is inspired by the echolocation behavior of bats and models the optimization and search process as the movement and prey-search process of a bat population. The DE algorithm is a heuristic random search algorithm based on population differences and shows strong robustness in solving such problems. The BADE algorithm combines the global optimization ability of BA with the robustness of DE to form a two-stage algorithm. First, BA is used for a rough search to generate the current optimal bat positions together with their fitness values; this output is then used directly as the initial input of the DE stage. In the second stage, DE takes over and updates the best value [24]. Therefore, the BADE method retains the advantages of both BA and DE, and its effect is better than that of either single method. For the UJRP model, minimizing the total cost is the goal; when solving, the value of the objective function serves as the fitness value. The detailed flow chart of the model is shown in Figure 2. Specifically, the algorithm can be divided into the following steps: (1) Initialization. The population positions are initialized from a uniform distribution, and the rough search starts after initialization is complete. (2) Rough Search. Each bat position x_{t+1} is updated according to the standard BA frequency, velocity, and position update rules. (3) Differential Variation. The set of bat position vectors obtained in stage 1 is used as the DE population for the mutation operation; the specific mutation strategy is given by the corresponding formula, where pbest_{i,j}(t) is the optimal chromosome at the current time, F is the preset mutation operator, and V_{ij}(t+1) is the chromosome after mutation.
The mutation strategy ensures that the direction of gene mutation moves closer to the optimal chromosome. (4) Cross Recombination. Each dimension is compared according to formula (10) to construct a new chromosome. (5) Selection. The selection operation is carried out according to the greedy rule, and the fitness f(x_i(t)) of each chromosome is calculated. If the fitness of the newly constructed chromosome is less than that of the parent, the parent chromosome is replaced by the child chromosome, and the current optimal fitness pbest is output. If pbest is less than gbest, gbest is updated.
Figure 2: The framework of the model.
Multimodel Fusion Strategy. The linear weighted fusion strategy [25] takes a weighted average of the predictions of the single models. By giving different weights to the predictions of the single-model classifiers (the weights sum to 1), multiple single-model predictions are fused in order to obtain better results. The linear weighted fusion scheme can highlight the contribution of the single-model classifier with the better predictions, ensuring that the fusion strategy improves the final result. Specifically, for every single model, its model weight is multiplied by its sample probability, and the values obtained from all single models are summed to obtain the multimodel fusion result. In practical applications, the weights of the multimodel fusion strategy are generally assigned manually according to experience and the performance of each single model, so the effect depends largely on personal experience. Different weights are given to different models, and the fusion formula is defined accordingly, where weight_i is the model weight and prob_i is the output value of scheme i. Case Analysis For the research on the actual B2C online marketing mode, the experimental data are set with reference to previous research and real case data, as shown in Table 1. Based on a preliminary trial calculation with the initial parameter data, we can further determine the sample search space of the decision variables k_i and z_i. Here, k_i represents the joint replenishment frequency of commodity i within the purchase cycle, and k_i belongs to [1,5]. Given the transaction characteristics of the online marketing mode, the commodity types involved in joint replenishment are richer, so the frequency selection range should be wider. For the JRP and UJRP models, GA, BA, and DE are also used as comparison algorithms. The comparison results are shown in Table 2. The results in Table 2 show that, in solving the JRP model, among the four algorithms DE and BADE have better robustness, with the DE algorithm showing the smallest variance, the highest frequency of reaching the optimal value, and outstanding performance. When solving the UJRP model, the robustness of the GA and BA algorithms is significantly weakened compared with the JRP case, but the DE and BADE algorithms still show good stability. It is worth noting that, in terms of searching for the optimal value, the GA algorithm converges prematurely on both models and has the worst search performance; the BA algorithm reaches an optimal value similar to that of DE and BADE only on the JRP model; for UJRP, the BADE algorithm finds the best optimal value. It can be seen that the BADE algorithm has good potential for solving the UJRP model.
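To illustrate the two-stage BADE idea described above, the following NumPy sketch runs a bat-algorithm rough search (frequency, velocity, and position updates with loudness-gated local moves) and then refines the resulting population with a DE stage (mutation toward the current best, binomial crossover, greedy selection). The cost function, bounds, and all parameter values are placeholder assumptions for illustration, not the paper's UJRP model.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    """Placeholder smooth cost standing in for the UJRP total cost (not the paper's model)."""
    return np.sum((x - 2.0) ** 2 + 0.3 * np.sin(3.0 * x) ** 2, axis=-1)

def ba_rough_search(n_bats=30, dim=6, iters=200, lo=1.0, hi=5.0):
    """Stage 1: standard bat-algorithm updates drive the population toward good regions."""
    x = rng.uniform(lo, hi, (n_bats, dim))
    v = np.zeros((n_bats, dim))
    f = cost(x)
    best = x[np.argmin(f)].copy()
    A, r = 0.9, 0.5                                    # loudness and pulse rate
    for _ in range(iters):
        freq = rng.uniform(0.0, 2.0, (n_bats, 1))      # per-bat frequency
        v = v + (x - best) * freq
        cand = np.clip(x + v, lo, hi)
        local = rng.random(n_bats) > r                 # some bats do a local walk around the best
        cand[local] = np.clip(best + 0.01 * A * rng.normal(size=(local.sum(), dim)), lo, hi)
        fc = cost(cand)
        accept = (fc < f) & (rng.random(n_bats) < A)   # loudness-gated greedy acceptance
        x[accept], f[accept] = cand[accept], fc[accept]
        best = x[np.argmin(f)].copy()
    return x, f, best

def de_refine(x, f, best, iters=300, F=0.5, CR=0.9, lo=1.0, hi=5.0):
    """Stage 2: DE mutation toward the current best, binomial crossover, greedy selection."""
    n, dim = x.shape
    for _ in range(iters):
        for i in range(n):
            a, b = rng.choice(n, 2, replace=False)
            mutant = np.clip(best + F * (x[a] - x[b]), lo, hi)   # best/1 mutation
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True                        # guarantee one mutated gene
            trial = np.where(mask, mutant, x[i])
            ft = cost(trial)
            if ft < f[i]:                                         # greedy rule
                x[i], f[i] = trial, ft
        best = x[np.argmin(f)].copy()
    return best, f.min()

pop, fit, seed_best = ba_rough_search()
best, best_cost = de_refine(pop, fit, seed_best)
print("BADE best cost:", round(best_cost, 4))
```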
For the managers of the B2C online marketing model, the uncertainty of the online marketing environment is high, and the diversity of feasible solutions is very important. The analysis above mainly evaluates the robustness and optimization ability of the algorithms from the perspective of the optimal cost. Next, we further compare the diversity of the algorithms' solutions by examining the diversity of the product replenishment frequencies (i.e., k_i) in the feasible schemes. Based on the previous evaluation results, the BADE and DE algorithms are selected for comparative analysis. Taking 6 products as an example, 20 groups of feasible schemes whose cost function values lie within the error range of the optimum are randomly selected from the solution process. Figures 3 and 4 compare the diversity of feasible schemes for the JRP and UJRP models, presented as scatter diagrams. The more values k_i can take, the more the points fluctuate in the figure, which reflects the diversity of the schemes to a certain extent. We only plot commodities whose replenishment frequency fluctuates, ignoring those that do not change. Figures 3 and 4 show the fluctuation of the BADE and DE algorithms. Through comparison, it can be found that k_1, k_4, k_5, and k_6 in the BADE scheme show more volatility, while the DE algorithm shows volatility only in k_1, k_4, and k_5. In Figures 5 and 6, for the UJRP model, the volatility of k_4 and k_5 under the BADE algorithm is clearly stronger than under the DE algorithm, and the fluctuation range of k_4 is larger. Taken together, the diversity of the BADE algorithm is clearly stronger than that of the DE algorithm for both the JRP and UJRP models. This also means that the BADE method has stronger adaptability to practical problems and can better optimize and adjust the cost of the B2C online marketing model under randomness. In order to further improve the capability of the model, the linear weighted fusion strategy is used to fuse the outputs of the BADE and DE methods, and the final effect is verified on UJRP models of different problem sizes. The results are shown in Table 3. The following conclusions can be drawn from Table 3. For the small scale, the gap between the optimal cost and the average cost of the BADE method is significantly larger than that of the linearly weighted BADE method. The improved BADE method controls cost well, brings the average cost closer to the optimal cost, and shows good stability. As the problem scale increases, the improved BADE method also shows clear advantages in variance and small fluctuation, which indicates that this method retains good solution potential for large-scale online marketing modes. However, the model fusion also brings computational complexity, and the improved BADE method has no advantage in running time. In general, the improved BADE algorithm still has obvious advantages in performance and remains effective.
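The linear weighted fusion used here reduces to a weighted sum of the single-model outputs with weights summing to 1. The snippet below is only a generic illustration of that formula; the function name, weights, and values are arbitrary placeholders, since the paper assigns weights manually from experience.

```python
import numpy as np

def linear_weighted_fusion(outputs, weights):
    """Fuse single-model outputs: sum_i weight_i * prob_i, with the weights summing to 1."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "weights must sum to 1"
    return np.tensordot(weights, np.asarray(outputs, dtype=float), axes=1)

# Example: fuse the cost outputs of two single models (values are illustrative only).
bade_out = np.array([118.2, 121.5, 119.8])
de_out = np.array([120.1, 120.9, 121.3])
print(linear_weighted_fusion([bade_out, de_out], weights=[0.6, 0.4]))
```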
By combining the advantages of the BA and DE algorithms, such a method achieves good performance in cost control and solution diversity. A multimodel fusion strategy is further added to the proposed model to leverage the potential of all methods. With this strategy, we obtain better results than the BADE method alone on UJRP problems of different scales. Data Availability The data that support the findings of this study are available from the author upon reasonable request. Conflicts of Interest The author declares that there are no conflicts of interest.
4,739
2022-07-30T00:00:00.000
[ "Computer Science", "Business" ]
Ab initio theory of plasmonic superconductivity within the Eliashberg and density-functional formalisms We extend the two leading methods for the \emph{ab initio} computational description of phonon-mediated superconductors, namely Eliashberg theory and density functional theory for superconductors (SCDFT), to include plasmonic effects. Furthermore, we introduce a hybrid formalism in which the Eliashberg approximation for the electron-phonon coupling is combined with the SCDFT treatment of the dynamically screened Coulomb interaction. The methods have been tested on a set of well-known conventional superconductors by studying how the plasmon contribution affects the phononic mechanism in determining the critical temperature (\tc). Our simulations show that plasmonic SCDFT leads to a good agreement between predicted and measured \tc's, whereas Eliashberg theory considerably overestimates the plasmon-mediated pairing and, therefore, \tc. The hybrid approach, on the other hand, gives results close to SCDFT and overall in excellent agreement with experiments. I. INTRODUCTION Superconductors that do not fit into the standard BCS class have opened interesting routes for alternative pairing mechanisms, whose applicability is still under debate [1][2][3] . Developing a first-principles method for the accurate calculation of the critical temperature (T_C) would not only clarify the microscopic mechanisms of superconductivity, but also contribute to the search for higher-temperature superconductors. It has long been suggested that the key to high-temperature superconductivity might be a purely electronic mechanism that directly exploits the Coulomb repulsion between the electrons to provide their pairing. Many investigations have addressed the role of paramagnetic spin fluctuations in iron-based 4-6 and copper-oxide high-temperature superconductors 2,7 . Other proposals, instead, have focused on the effective attraction 8 appearing in the dynamically screened Coulomb interaction due to the exchange of excitons [9][10][11][12] or plasmons [13][14][15] . In particular, the plasmon mechanism has been extensively investigated, arguing that it could induce or significantly enhance superconductivity in many and very different classes of systems. These include perovskitic oxides [16][17][18] , metal chloronitrides 19,20 , organic superconductors 21,22 and light-element systems such as lithium metal and high-pressure hydride superconductors [23][24][25][26] . For conventional superconductors, calculations of T_C are commonly based on Eliashberg theory [27][28][29][30] . This is, in principle, a comprehensive theory of the superconducting state, including both electron-phonon and Coulomb effects. The usual application of Eliashberg theory to realistic systems is, however, oversimplified 31 in that the Coulomb interaction is assumed not to favor Cooper-pair formation, and is reduced to a single parameter µ* 30-32 . The standard Eliashberg framework, thus, is not suitable for a quantitative description of superconductivity supported by electronic mechanisms. Unlike Eliashberg theory, the extension of density functional theory to superconductors 33 (SCDFT) does not involve any semi-empirical approximation for the Coulomb interaction, and enables calculations of T_C entirely from first principles. Nevertheless, SCDFT was formulated to address conventional superconductivity 34,35 , so that it employs a static screening of the Coulomb repulsion 36 .
Recently, a generalization of SCDFT for applications to plasmonic superconductivity has been proposed 23,37 . However, in this theory plasmonic effects are included in the superconducting state but neglected in the normal state. In this work we extend Eliashberg theory (Sec. II) and SCDFT (Sec. III) to provide ab initio calculations of plasmonic effects on the superconducting properties of real materials. In both frameworks, retardation effects in the phonon-mediated and screened Coulomb interactions are treated on the same footing by keeping their characteristic frequency dependence. By applying these methods in Sec. V, we study how the plasmon contribution affects phonon-induced superconductivity for a set of materials representing the main families of conventional superconductors. II. ELIASHBERG THEORY Eliashberg theory works with the Nambu matrix Green's function Ḡ(k, iω_n), whose normal (diagonal, G) and anomalous (off-diagonal, F) components describe, respectively, single-particle electronic excitations and Cooper pairs. The matrix Green's function is determined via the Dyson equation (1), where Ḡ_0 is the normal-state Green's function of the noninteracting electron system and Σ̄(k, iω_n) = Σ̄_c(k, iω_n) + Σ̄_ph(k, iω_n) is the electron self-energy associated with the screened Coulomb and phonon-mediated interactions. Ḡ_0 can be constructed from the Kohn-Sham (KS) states |k⟩ ≡ |kl⟩ and eigenvalues ε_k of density functional theory (DFT) in the usual form, where τ_{0,...,3} are the Pauli matrices and the energy ε_k is measured relative to the chemical potential. The pairing mechanism is dominated by the phonon-mediated interaction that, being retarded, overcomes the (almost instantaneous) Coulomb repulsion between the electrons. Since the phonon energy scale, set by the Debye frequency ω_D, is much smaller than the electronic Fermi energy E_F, the method relies on Migdal's theorem to treat the electron-phonon interaction accurately to order ω_D/E_F. The key approximation consists in retaining for Σ̄_ph(k, iω_n) only the diagram for dressed-phonon exchange with the self-consistently dressed electron propagator (the ḠW approximation). Due to the absence of an analogous theorem, the treatment of the Coulomb interaction is much harder. In the standard approach, the possibility of a Coulomb enhancement of T_C is neglected: Coulomb effects are largely accounted for by normal-state parameters, i.e., the electron and phonon quasiparticle energies ε_k and ω_qν and the screened electron-phonon coupling g_kk′ν. In addition, there remains a static screened Coulomb repulsion W(k, k′), which counteracts superconductivity. Within the Eliashberg approximation, the phonon and Coulomb contributions to the electron self-energy read, respectively, as Eqs. (3) and (4). Following standard practice 30 , the anisotropic electron-phonon coupling λ_{k,k′}(iν_n) in Eq. (3) is defined by the spectral representation in terms of the Eliashberg function, where N(0) is the electronic density of states at the Fermi level. Eq. (4) includes the subtraction of the exchange-correlation potential of KS-DFT, v_xc, so that the resulting Coulomb self-energy is purely off-diagonal. This prevents double counting of Coulomb effects in the normal state, which are already included in the KS band structure ε_k entering Ḡ_0. The total self-energy is more conveniently rewritten in terms of three scalar functions given by the coefficients of its Pauli-matrix representation: these are the mass renormalization function Z(k, iω_n), the energy shift χ(k, iω_n), and the order parameter φ(k, iω_n).
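For orientation only, the snippet below iterates the textbook isotropic, Fermi-surface-averaged Eliashberg equations on the imaginary axis, with a single Einstein phonon mode and a constant Morel-Anderson pseudopotential µ*, solving for Z(iω_n) and the gap Δ(iω_n) = φ/Z at fixed temperature. It is a generic illustration of how such coupled Matsubara-frequency equations are handled numerically; it is not the anisotropic, energy-resolved plasmonic scheme developed in this work, and all parameter values are placeholders.

```python
import numpy as np

def solve_isotropic_eliashberg(T, lam=1.0, omega_E=0.02, mu_star=0.12,
                               omega_c=0.2, n_iter=300):
    """Iterate the standard isotropic Eliashberg equations on the Matsubara axis.

    Energies in eV; lam is the electron-phonon coupling, omega_E the Einstein
    phonon frequency, mu_star the Coulomb pseudopotential with cutoff omega_c.
    Returns the gap Delta(i omega_n) on the positive Matsubara frequencies.
    """
    kB = 8.617e-5                                   # Boltzmann constant in eV/K
    n_max = int(10 * omega_c / (2 * np.pi * kB * T))
    n = np.arange(-n_max, n_max)                    # fermionic indices
    wn = np.pi * kB * T * (2 * n + 1)               # Matsubara frequencies
    lam_nm = lam * omega_E**2 / (omega_E**2 + (wn[:, None] - wn[None, :])**2)
    mu_nm = mu_star * (np.abs(wn[None, :]) < omega_c)   # mu* acts below the cutoff
    delta = np.full_like(wn, 1e-3)                  # initial gap guess
    for _ in range(n_iter):
        denom = np.sqrt(wn**2 + delta**2)
        Z = 1 + (np.pi * kB * T / wn) * (lam_nm @ (wn / denom))
        phi = np.pi * kB * T * ((lam_nm - mu_nm) @ (delta / denom))
        delta = phi / Z
    return wn[wn > 0], delta[wn > 0]

wn, delta = solve_isotropic_eliashberg(T=10.0)
print("Delta at the lowest Matsubara frequency: %.2f meV" % (1e3 * delta[0]))
```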
Through the Dyson equation (1), the calculation of Ḡ is reduced to solving three coupled equations for Z, χ and φ. In particular, the function Δ(k, iω_n) = φ(k, iω_n)/Z(k, iω_n) plays the role of the superconducting energy gap, whereas the quantity χ(k, iω_n) leads to a shift of the chemical potential, which has little effect on the formation of the superconducting state. By neglecting χ, the equations of interest for the τ_0 and τ_1 components of the Eliashberg self-energy take the form of Eqs. (8) and (9). Note that, since retardation effects in the Coulomb repulsion are disregarded, Z(k, iω_n) is entirely determined by the phonon-mediated interaction 30 . Moreover, the Coulomb contribution to φ(k, iω_n), given by the second term of Eq. (9), is frequency independent. Several approximations are commonly employed in order to reduce the workload involved in solving the Eliashberg equations (8) and (9). Essentially, since the superconducting pairing occurs mainly within an energy window ∼ ω_D around the Fermi surface, the equations are simplified by averaging over k and k′ for electronic states on the Fermi surface (Eq. (11)). Although quite accurate for phonons, this approximation is not justified for the Coulomb interaction, which may remain large for energies up to E_F. In practice, following the arguments of Morel and Anderson 40 , W(k, k′) in Eq. (9) can be replaced by a strongly reduced pseudopotential µ*, with an energy cutoff ω_c ∼ 10ω_D, which effectively accounts for the Coulomb scattering of electrons far from the Fermi surface. The Morel-Anderson pseudopotential is defined by the usual expression in terms of µ ≡ N(0)⟨W(k, k′)⟩_FS. In most applications, however, µ* is treated as a semi-empirical parameter fitted so as to reproduce the experimental critical temperature. With the above-mentioned approximations, the Eliashberg approach involves solving numerically the corresponding isotropic equations.
Figure 1. Left: Eliashberg superconducting gap function φ for bulk Nb in the full dynamical (blue), weak-coupling dynamical (red) and static (black) Coulomb approach. Right: Eliashberg mass renormalization function Z and its decomposition (for the dynamical case) into Coulomb Z_c and phononic Z_ph components. All the quantities are computed in the low-temperature limit.
A. Plasmonic extension of Eliashberg Theory We go beyond Eq. (4) for the Coulomb self-energy by assuming the ḠW approximation, i.e., we still neglect vertex corrections, but introduce the dynamical screening of the Coulomb interaction through the frequency-dependent dielectric function. Hence, we consider the self-energy expression in which the Coulomb potential W_{k,k′}(iν_n) is obtained from the symmetrized dielectric function ε_{GG′}(q, iν_n). Here, Ω is the unit cell volume, G are reciprocal lattice vectors and q is the difference k − k′ reduced to the first Brillouin zone. The symmetric form of the dielectric function is defined in the standard way. For numerical convenience, we rewrite W_{k,k′}(iν_n) in a spectral form (Eq. (18)) whose first term contains the matrix elements of the bare Coulomb interaction and whose second term involves the spectral function of the electronic polarization. Note that the second term on the right-hand side of Eq. (18) is formally equivalent to the spectral representation of the anisotropic electron-phonon coupling (Eq. (5)). The screened Coulomb potential in its spectral representation can be separated into a static and a dynamical part, where the latter, given by Eq. (20), incorporates plasma oscillations. We approximate Eq.
(20) by its average taken over the corresponding surfaces of constant energy, ε, in k-space (Eqs. (22) and (23)). It should be observed that Eq. (22) is a generalization of Eq. (11) of the conventional isotropic Eliashberg theory. By using Eqs. (22) and (23) for the Coulomb interaction in the expression for the self-energy, we obtain the Coulomb contributions to the Eliashberg functions Z and φ given in Eqs. (24) and (25). The influence of these terms on T_C can be seen by considering that in the simple BCS limit one has T_C ∝ ω_D exp[−(1 + λ)/(λ − µ*)], where 1 + λ comes from the electron-phonon Z term in the Eliashberg equations. Here, the dynamical contribution to the anomalous kernel φ_c (given by the second term of Eq. (25)) enhances the Coulomb repulsion µ between the electrons on the energy scale of the plasmon frequency ω_pl. On the other hand, since ω_D ≪ ω_pl, the effective Coulomb repulsion decreases from the original value µ*, favoring superconductivity (higher T_C). This effect, however, is counteracted by the Coulomb correction Z_c to the effective mass, which adds to the phononic term (1 + λ) and contributes to the reduction of T_C. Fig. 1 shows the Matsubara frequency dependence of the mass renormalization Z and gap function φ for bulk Nb. The inclusion of retardation effects in the Coulomb interaction leads in φ to large negative tails at high energy. Since the high-energy gap function is negative, the plasmonic coupling serves as an effective attraction, which, according to Eq. (25), increases the value of φ at the Fermi level. This effect is less pronounced when Z_c is included. The right panel of Fig. 1 shows the decomposition of Z into phononic and Coulomb contributions. Z_ph has a peak at low frequency with an energy width of the order of the Debye energy and converges to 1 above ω_D. Z_c, instead, which is non-zero only in the dynamical approach, decays at the plasmonic energy scale. As evident from Fig. 1, the main difficulty in solving the Eliashberg Eqs. (24) and (25) is that the integration (both in ω_n and ε) has to be performed over a huge energy scale, and therefore cannot be tackled by brute-force computation. To give an indication, at T = 2.8 K the number of Matsubara points within the plotted energy range is of the order of 70 thousand, and reaching tight convergence would require an even larger energy window of several hundred eV. To overcome this slow-convergence problem in the numerical implementation of the equations we have adopted the following strategies: i) We have used a logarithmic ε integration mesh. This allows for a dense discretization at low energy, where variations in the functions have to be accounted for more accurately, but extends up to arbitrarily large energies with relatively few additional points. ii) Similarly, we have adopted a nonhomogeneous mesh of Matsubara points. Since Matsubara frequencies are fixed by the temperature, a non-linear mesh can be obtained by pruning points and redistributing their weight. The resulting Matsubara mesh at 2.8 K is indicated by orange ticks in the left panel of Fig. 1. iii) The dynamical Coulomb interaction itself depends on the (bosonic) Matsubara frequencies and on the energy. When computing the interaction from first principles, a huge computational cost is associated with the calculation of the matrix elements of the dielectric function at high frequency, with respect to the KS states at high energy. For this reason, we have introduced high-energy cutoffs in ω_n and ε (typically of the order of 50-100 eV).
Above this energy, the dielectric function of the material is replaced with that of the homogeneous electron gas in the plasmon-pole approximation. The parameters which enter the latter are fitted to the explicitly computed values of S(ε, ε′, ω) at the cutoff, so as to ensure a good overall match to the actual interaction. This approach not only reduces the numerical cost of computing the interaction, but also allows for the analytical integration of the Matsubara frequencies from the cutoff energy to infinity. We point out that these techniques do not introduce additional errors in the method. However, they involve convergence parameters that have to be chosen carefully in order to achieve the correct numerical result. III. DENSITY FUNCTIONAL THEORY FOR SUPERCONDUCTORS Density functional theory for superconductors (SCDFT) is an extension of conventional DFT for ab initio calculations of material-specific properties in the superconducting state 33 . The theory includes the superconducting order parameter χ_sc(r, r′) as an additional density. The corresponding non-interacting KS system then reproduces, in principle exactly, both the normal density and the superconducting order parameter of the real system. In the so-called decoupling approximation (on which Eliashberg theory is also based), the KS system is fully determined by solving the BCS-like gap equation (26), where E_k = √(ε_k² + |Δ_s,k|²) are the KS excitation energies and β is the inverse temperature. The kernel of the equation consists of a diagonal part, Z_k = Z^ph_k, and a nondiagonal part, K_{k,k′}. Z^ph_k plays the role of the renormalization function in the Eliashberg equations, whereas K_{k,k′} = K^c_{k,k′} + K^ph_{k,k′}, which includes both Coulomb and phonon-mediated effects, is responsible for the binding of the electrons in Cooper pairs. Compared to Eliashberg theory, SCDFT features two major advantages: (i) the treatment of the Coulomb repulsion does not rely on any empirical parameter µ*, and (ii) all the Matsubara frequency summations are evaluated analytically in the construction of the exchange-correlation (xc) kernels. As in Eliashberg theory, phonon dynamics is properly included, but at the same time the gap equation retains the form of a static BCS equation. Hence, Eq. (26) allows one to account for the full anisotropy of materials at a low computational cost. However, the accuracy of the method is bound by the quality of the available functionals. Making a connection to many-body perturbation theory, approximate xc kernels have been derived from approximations for the xc self-energy operator, via the Sham-Schlüter equation in Nambu space. The first SCDFT functional, by Lüders, Marques and co-workers (LM) 34,35 , employed the Ḡ_s W approximation for the self-energy in the statically screened Coulomb repulsion and the phonon-mediated interaction. By construction, this functional neglected higher-order processes included in Eliashberg theory through the self-consistent dressing of the electron Green's function in the ḠW self-energy. The LM approximation, thus, was not validated by Migdal's theorem, which made it of questionable accuracy for treating electron-phonon coupling effects. To solve this issue, Sanna, Pellegrini and Gross (SPG) 41 have recently introduced a parametrization of the functional based on the electron-phonon Eliashberg self-energy for a simplified (Einstein) phonon spectrum.
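To make the structure of such a BCS-like gap equation concrete, the sketch below iterates a generic, isotropic gap equation of the same form on an energy grid, Δ(ε) = −Z(ε)Δ(ε) − (1/2)∫dε′ N(ε′) K(ε, ε′) tanh(E′/2T) Δ(ε′)/E′ with E′ = √(ε′² + Δ(ε′)²), using a simple attractive model kernel confined to a Debye-like window. It only illustrates the fixed-point iteration; the kernel, density of states, and parameters are invented for the example and bear no relation to the ab initio Z_k and K_{k,k′} of SCDFT.

```python
import numpy as np

def solve_gap_equation(eps, N_eps, K, Z, T, n_iter=500):
    """Fixed-point iteration of a generic BCS-like gap equation on an energy grid:

        Delta(e) = -Z(e)*Delta(e) - 1/2 * sum_e' w(e') N(e') K(e, e')
                   * tanh(E'/(2T)) / E' * Delta(e'),   E' = sqrt(e'^2 + Delta(e')^2)

    (trapezoid-like weights w; energies and temperature in the same units, k_B = 1).
    """
    w = np.gradient(eps)                        # integration weights for the grid
    delta = np.full_like(eps, 1e-4)             # small initial gap
    for _ in range(n_iter):
        E = np.sqrt(eps**2 + delta**2)
        chi = np.tanh(E / (2.0 * T)) / E        # BCS pair-susceptibility factor
        rhs = -0.5 * K @ (w * N_eps * chi * delta)
        delta = rhs / (1.0 + Z)                 # diagonal Z term moved to the left side
    return delta

# Toy model: constant DOS, attractive kernel inside a Debye-like window, no Z term.
eps = np.linspace(-1.0, 1.0, 801)               # energies measured from the Fermi level
N_eps = np.ones_like(eps)                       # constant density of states
window = (np.abs(eps) < 0.05)
K = -0.6 * np.outer(window, window)             # attractive (negative) separable kernel
delta = solve_gap_equation(eps, N_eps, K, Z=np.zeros_like(eps), T=1e-3)
print("gap at the Fermi level:", delta[len(eps) // 2])
```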
The new SPG kernels give superconducting transition temperatures and gaps in excellent agreement with experiments 42-47 , while still having a simple analytic form. Further extensions and applications of the LM functional have addressed the description of superconductivity in the presence of magnetic fields 48,49 and in real space 50 , the inclusion of spin-fluctuation contributions to the pairing 51,52 and the treatment of the dynamical screening of the Coulomb interaction 20,[22][23][24]37 . In a first attempt to introduce plasmonic effects in SCDFT, Akashi and Arita 23,37 proposed a dynamical correction to the pairing kernel K^c_{k,k′} by retaining the frequency dependence of the Coulomb interaction at the RPA level in the exchange anomalous self-energy. The method, implemented in the multipole plasmon approximation, has given a systematic increase of the T_C by 10-20% in compressed sulfur hydrides H2S and H3S, and by a factor of 2 in Al and Li under pressure. A. Plasmonic SCDFT with mass term The results of plasmonic Eliashberg theory for Nb (Fig. 1) suggest that the Z_c term, stemming from the diagonal part of the dynamical Coulomb self-energy, should play a major role in determining T_C. Z_c can be viewed as the Coulomb counterpart of the mass renormalization enhancement, 1 + λ, that corrects the BCS predictions to their strong-coupling values in Eliashberg theory 30,53 . Accordingly, it is expected to be relevant for strong electron-plasmon interactions. As discussed above, the recently developed SCDFT scheme for plasmonic superconductivity neglects this contribution by assuming a null diagonal kernel Z_c and can, thus, be regarded as a weak-coupling plasmonic theory. In this section we propose a more general SCDFT approach, which also includes plasmonic corrections to the mass enhancement. By following a procedure analogous to that presented in Ref. 54 for the treatment of the electron-phonon coupling, we construct the SCDFT plasmonic kernels from the Ḡ_s W self-energy in the screened Coulomb potential. In the isotropic approximation (Eqs. (22) and (23)), we obtain the expressions for the plasmonic kernels (Eqs. (27) and (28)), where the quantity I(ε, ε′, ω) entering them is defined in terms of the Fermi and Bose distribution functions f and b.
Figure 2. Left: SCDFT gap function Δ_s for bulk Nb in the dynamical (blue), weak-coupling dynamical (red) and static (black) Coulomb approach. Right: SCDFT Z kernel and its decomposition (for the dynamical case) into Coulomb Z_c and phononic Z_ph components. All the quantities are computed in the low-temperature limit and using the SPG phononic functional.
In Fig. 2 we show the energy dependence of the calculated KS gap function and kernel Z for bulk Nb. As in Eliashberg theory, plasmonic contributions enhance the high-energy negative gap. The effect is much more pronounced in the weak-coupling approach, within which the value of the KS gap at the Fermi level almost doubles compared to the static and full dynamical cases. The kernels Z_c and Z_ph in the right panel of Fig. 2 have the shape of two superimposed peaks. The sharper peak is strongly temperature dependent and occurs very close to the Fermi level, at |ε| < 10⁻⁴ eV. This energy scale is not directly related to that of the couplings, but arises from the constraint that the KS system should reproduce the interacting normal and anomalous densities 41 . On the other hand, the broader peak decays on the energy scale of the interactions, i.e., at the plasmon energy in Z_c and at the Debye energy in Z_ph.
IV. HYBRID ELIASHBERG By virtue of Migdal's theorem 27,30 , the F W approximation for the anomalous self-energy in Eliashberg theory describes the phonon-mediated pairing very accurately. On the other hand, there is no a priori indication that the F W scheme improves over F_s W for the treatment of plasmonic effects. Here, we consider a hybrid Eliashberg-SCDFT theory in which the Coulomb part of the pairing self-energy is taken in the F_s W form, where F_s is the KS Green's function which reproduces the superconducting order parameter of the Eliashberg approximation. Since the KS system has the same anomalous density as the interacting system, the equality in Eq. (31) holds 34,35,41 , with F_s(ε, iω_n) = Δ_s(ε)/(ω_n² + E²) and E = √(Δ_s² + ε²). Using the Eliashberg Green's function F(ε, iω_n) = φ(ε, iω_n)/Θ(ε, iω_n) to compute χ_sc and evaluating the Matsubara frequency summation on the right-hand side of Eq. (31) yields the KS gap Δ_s(ε). The obtained F_s is then used as input for the Eliashberg equation which determines the Coulomb gap function, i.e., Eq. (25) is replaced by its F_s W counterpart.
Table I. Electron-phonon (λ) and electron-plasmon (Z_c0) coupling strengths for the test set of materials, with the associated experimental critical temperatures T_C^exp.
A. Material set In order to assess the accuracy of the methods discussed above, we have investigated the effect of the electron-plasmon coupling on the transition temperatures of a set of conventional superconductors. Our set includes experimentally well-characterized systems chosen to cover a wide range of properties, i.e., elemental (Al, Sn, Ta, Nb and Pb) and binary phonon-mediated superconductors (TaC, ZrN, V3Si and CaC6), ranging from weak to strong coupling. To keep the entire procedure ab initio, all the calculations have been performed at the theoretical lattice parameters obtained by means of the PBE functional 55 . The dielectric function entering the dynamical Coulomb kernel has been calculated within the random phase approximation (RPA), using the full-potential LAPW code Elk 56 . Fig. 3 shows the electron energy loss spectra in the low-q limit for the chosen set of materials. For both Al and Sn, which are free-electron-like metals, one observes, similarly to the homogeneous electron gas, a single pronounced plasmon peak, centered respectively at 16 and 14 eV. All the other systems show a more complex spectrum, varying from the two-peak structure of V3Si to the broad distributed structures of TaC. Apart from the low-energy plasmons of CaC6 and ZrN, the main plasmonic structures are located at energies above 10 eV. The electron-phonon and electron-plasmon coupling strengths for the chosen materials are summarized in Tab. I, together with the experimental T_C's. The electron-phonon coupling is expressed in terms of the BCS-like coupling constant λ, defined as the static limit of λ(iν_n) in Sec. II. The electron-plasmon coupling, being strongly energy dependent, cannot be reduced to a simple isotropic parameter, and is thus represented by the energy-integrated quantity Z_c0, defined as Z_c(ε, iω_n) of Eq. (24) computed at the Fermi level (ε = 0) and at ω_n = 0. B. Eliashberg In Fig. 4 and Tab. II (columns A-C, L) the experimental values of T_C for the chosen set of materials are compared to the values calculated within the Eliashberg approach from Eqs. (24) and (25), employing the static and dynamical screening of the Coulomb interaction.
For the dynamical case, the weak-coupling results obtained by neglecting the plasmon-induced mass renormalization term Z_c are also shown. It is evident that the static approximation gives better values for T_C, whereas the plasmonic theory systematically overestimates the experimental data. The inclusion of Coulomb retardation effects in the ḠW approximation yields predicted temperatures that are on average twice the corresponding experimental values. Notably, the discrepancy between theory and experiments becomes huge when the term Z_c is neglected. Since the inclusion of dynamical screening effects in the Coulomb interaction brings the theory a step closer to being exact, one would expect an improvement in the calculated values of the critical temperature. The apparent worsening of the results can be traced back to the neglect of vertex corrections in the Coulomb self-energy diagram 57,58 and/or the breakdown of the RPA for W. Regarding this point, we should mention that going beyond the RPA by using linear-response time-dependent DFT 59,60 within the adiabatic local density approximation in the calculation of the dielectric function does not significantly improve the quality of the results. These aspects will require further investigation. As a matter of fact, our ab initio treatment of the static Coulomb interaction in Eliashberg theory appears to be very accurate, confirming previous results along these lines 25,61,62 . C. SCDFT In this section we present the results obtained within the SCDFT framework. As for Eliashberg theory, we consider the static, plasmonic weak-coupling (Z_c = 0, Eq. (28) for K^c) and strong-coupling (Eqs. (27) and (28)) approximations. By using the phononic LM functional, the SCDFT results obtained within the static approximation for the screened Coulomb interaction underestimate the experimental values by an average error of 35%. The inclusion of plasmonic effects in both the normal and the superconducting state yields even lower T_C's, with an average error of 43%. On the other hand, if only the plasmonic contribution to the superconducting pairing is accounted for (i.e., Z_c = 0), the theoretical results systematically overestimate the experimental data by a factor of 2. In spite of these deviations, plasmonic SCDFT with the phononic LM functional gives T_C's closer to experiment than Eliashberg theory does. As already mentioned, the SPG functional improves over the LM approximation and is comparable in accuracy to conventional Eliashberg theory in describing electron-phonon effects 41 . Employing this functional together with the static Coulomb kernel gives results in good agreement with the experiments. The agreement worsens considerably when plasmonic effects are included in the weak-coupling approximation, as this leads to a sizable increase of the predicted T_C's. Nevertheless, adding the plasmonic mass renormalization factor Z_c suppresses the T_C values and increases the overall accuracy. The average percentage error in this latter dynamical approach is less than 20%. For the chosen set of materials, this approximation turns out to be the most accurate, as reported in Tab. II. However, it should be noticed that all the theoretical results have a non-negligible intrinsic error due to the approximations made in calculating the phonon spectral function. For this reason it is not possible to precisely rank the accuracy of the different methods.
Nevertheless, we can say that plasmonic effects can be safely incorporated in the SCDFT scheme, as they introduce a relatively weak correction to the phonon-induced T_C which appears to be consistent with the experimental results. In Sec. V B we mentioned that the failure of plasmonic Eliashberg theory could be ascribed to the RPA screening or to the neglect of Coulomb vertex corrections. The higher accuracy of plasmonic SCDFT, which employs the same Coulomb propagator, indicates that the RPA is not the main source of error. On the other hand, plasmonic SCDFT relies on the Ḡ_s W approximation for the Coulomb self-energy, whereas Eliashberg theory amounts to the fully self-consistent ḠW. This leads us to speculate that vertex corrections to the Coulomb self-energy might be largely cancelled by the self-consistent dressing of the KS electron Green's function in Ḡ_s W. D. Hybrid Eliashberg From the results of plasmonic Eliashberg theory it is evident that the F W approximation for the anomalous Coulomb self-energy significantly overestimates the T_C. Since Eliashberg theory is a routinely used method for the prediction of superconducting properties, this appears as a major drawback. A viable alternative is the hybrid Eliashberg-SCDFT approach proposed in Sec. IV, which employs the F_s W approximation and better describes the plasmonic contribution to the superconducting pairing. The T_C's calculated for our test set of materials are collected in Tab. II (columns J to K) and compared to the experimental data in Fig. 6. Consistently with all the previous weak-coupling calculations, the results without the plasmonic mass term tend to overestimate the T_C; in this case the overestimation is, on average, by about 60%, considerably improving over Eliashberg theory. On the other hand, the fully dynamical approach leads to predicted temperatures that are very close to the experiments.
Table II. |err| is the deviation from the experimental T_C. For each method and approximation we indicate the average percentage error av. %|err| = 100 |err|/T_C^exp, the average error (av. |err|), and the maximum error (max |err|) over the material set.
VI. CONCLUSIONS We have presented an extension of Eliashberg theory and SCDFT to include the dynamical screening of the Coulomb interaction. Our analysis points to the importance of the plasmonic mass terms, which largely counterbalance the effect of the plasmon-mediated attraction in the Cooper pair. The computational cost associated with the inclusion of the frequency-dependent Coulomb interaction is made affordable by employing an energy-resolved isotropic approximation and by using non-linear energy and frequency integration meshes. A hybrid Eliashberg-SCDFT scheme is also formulated, which combines the ME (Migdal-Eliashberg) approximation for the electron-phonon coupling with the SCDFT treatment of the dynamically screened Coulomb interaction. The accuracy of the approximations employed in the different methods has been assessed by calculating the plasmon contribution to the critical temperature for a set of classic superconductors. Our simulations show that the SCDFT plasmonic kernels, combined with the phononic SPG functional, yield good agreement between predicted and measured critical temperatures. Dynamical corrections turn out to be small but not negligible, being of the order of 10-15% of T_C.
Eliashberg theory, although accurate in the static limit of the screened Coulomb interaction, leads to a large overestimation of T_C (by an average factor of 2) when plasmonic effects are included. Dynamical Coulomb effects can nevertheless be included in Eliashberg theory by adopting the hybrid approach, which gives results close to SCDFT and overall in excellent agreement with experiments.
7,116.2
2020-07-25T00:00:00.000
[ "Physics" ]
Ultra-Deep Sequencing of Intra-host Rabies Virus Populations during Cross-species Transmission One of the hurdles to understanding the role of viral quasispecies in RNA virus cross-species transmission (CST) events is the need to analyze a densely sampled outbreak using deep sequencing in order to measure the amount of mutation occurring on a small time scale. In 2009, the California Department of Public Health reported a dramatic increase (350%) in the number of gray foxes in Humboldt County infected with a rabies virus variant for which striped skunks serve as the reservoir host. To better understand the evolution of rabies, deep sequencing was applied to 40 unpassaged rabies virus samples from the Humboldt outbreak. For each sample, approximately 11 kb of the 12 kb genome was amplified and sequenced using the Illumina platform. Average coverage was 17,448×, which allowed characterization of the rabies virus population present in each sample at unprecedented depth. Phylogenetic analysis of the consensus sequence data demonstrated that samples clustered according to date (1995 vs. 2009) and geographic location (northern vs. southern). A single amino acid change in the G protein distinguished a subset of northern foxes from a haplotype present in both foxes and skunks, suggesting this mutation may have played a role in the observed increase in transmission among foxes in this region. Deep-sequencing data indicated that many genetic changes associated with the CST event occurred prior to 2009, since several nonsynonymous mutations that were present in the consensus sequences of skunk and fox rabies samples obtained from 2003-2010 were present at the sub-consensus level (as rare variants in the viral population) in skunk and fox samples from 1995. These results suggest that analysis of rare variants within a viral population may yield clues to ancestral genomes and identify rare variants that have the potential to be selected for if environmental conditions change. Introduction Rabies virus (RABV) is one of the most deadly pathogens known and is able to infect a wide variety of mammalian hosts. RABV is present on all continents except Antarctica and has reservoirs in terrestrial species as well as bats (Chiroptera). Although vaccination and antibody therapy are effective in treating known exposures to RABV, an estimated 55,000 human deaths occur annually, mostly in developing countries [1]. RABV is a member of the Lyssavirus genus, family Rhabdoviridae. The genome is composed of negative-sense single-stranded RNA, about 12 kb in size, which codes for five proteins: nucleoprotein (N), phosphoprotein (P), matrix protein (M), glycoprotein (G) and polymerase (L). Like other RNA viruses, RABV has a high mutation rate due to the high error rate of the polymerase; thus populations of RABV exist as a mutant swarm, or quasispecies [2]. RABV evolution is believed to be driven predominantly by purifying selection, and RABV is not known to recombine [3][4][5][6]. Different RABV variants are associated with different reservoir hosts and geographical locations. Typically, interspecies transmission of rabies virus from a reservoir to a non-reservoir host produces a single fatal spillover event; secondary transmission has rarely been observed [7]. For example, a bat variant may infect and cause disease in skunks, but it does not transmit efficiently within the skunk population, and skunks would be considered a "dead-end host" for this variant.
The exception to this is the case of cross-species transmission (CST), where the variant from one species adapts to transmission by a new species [8]. For example, in 2001, bat-variant rabies adapted to transmission within the skunk population in Flagstaff, Arizona [9], and in 2009 this variant adapted to transmission by foxes [7]. These events demonstrate the capacity of rabies virus for CST, which may lead to increased exposure of humans to the pathogen and increase the geographical range of the virus. Greater than 90% of North American rabies cases occur in wildlife [7,9], and striped skunks (Mephitis mephitis) serve as the most frequent source of terrestrial rabies cases in California [10]. Rabies in striped skunks was first documented in California in 1899, and skunk rabies has been considered enzootic since the 1950s [11]. The Northern Pacific coast region (which includes Humboldt Co.) is unusual in that this is the only region of CA where large numbers of gray foxes (Urocyon cinereoargenteus) are known to be infected with the skunk rabies variant [11]. In 2009, the number of rabid foxes in Humboldt County infected with the CA skunk variant increased 356%, from an average of 1-2 per year in the preceding 15 years to 7 in the latter months of 2008 and 38 in 2009 (Annual Reports from California Department of Public Health, Veterinary Public Health Section). In 2009, only 2 skunks were reported rabid in Humboldt County, suggesting that rabies infections in foxes had fundamentally shifted from a typical pattern of spillover from skunks to foxes to one resulting from fox-to-fox transmission. The reported numbers underestimate the extent of the outbreak, since additional foxes exhibiting unusual or aggressive behavior were euthanized but not tested (S. Chandler, USDA, personal communication). This epizootic of rabies in Northern California raised concerns not only because the primary species involved was gray foxes (Urocyon cinereoargenteus) and not striped skunks, which are the terrestrial reservoir species in this region, but also because it led to a significant spike in attacks by rabid animals on humans and their pets [12]. The apparent sustained fox-to-fox transmission in this outbreak suggests that CST occurred and enabled this epizootic. We hypothesized that molecular changes in the viral genome would be associated with this event. While phylogenetic data support that rabies viruses have jumped species boundaries historically [5], such jumps are rare and have never been subject to comprehensive genetic analysis at the intra-host population level. To test our hypothesis and better understand the evolution of rabies, we applied deep sequencing to 44 unpassaged rabies virus samples from the Humboldt epizootic. Sequence data were generated by two different platforms (Illumina and 454) and by three different commercial services to determine reproducibility. For 40 of the samples, approximately 11 kb of the 12 kb genome was amplified and sequenced using the Illumina platform (the remaining 4 samples were sequenced using the 454 platform only). Average coverage was 17,448×, which allowed characterization of the rabies virus population present in each sample at unprecedented depth. Rabies virus tissue samples The tissue samples used in this study were obtained from the archived collection of the California Department of Public Health, Viral and Rickettsial Disease Laboratory (CDPH-VRDL).
Gray foxes (Urocyon cinereoargenteus) and striped skunks (Mephitis mephitis) displaying symptoms of rabies were submitted for rabies testing to the Humboldt Co. Public Health Laboratory between March 2009 and January 2010. Brain tissue samples that were laboratory-confirmed to be infected with rabies virus were forwarded to CDPH-VRDL for genetic characterization. Other, earlier skunk and fox tissue samples from Humboldt Co. were also available from the CDPH-VRDL archives. As part of routine rabies surveillance in California, the VRDL genotypes rabies-positive samples received from local public health laboratories by RT-PCR and performs sequence analysis on RT-PCR products using forward primer 1066deg 5'-GARAGAAGATTCTTCAGRGA-3' and reverse primer 304, targeting a portion of the nucleoprotein (N) gene as described in Trimarchi and Smith (2002) and Velasco-Villa et al. (2006) [13][14][15]. Approximately 1 gram of brain tissue from foxes and skunks infected with the California skunk rabies virus variant was placed in TRIzol LS Reagent (Invitrogen, Carlsbad, CA) and sent to LLNL for further analysis. RNA was extracted from the tissue sample following the manufacturer's protocol. Primer design Approximately 11 kb of the 12 kb rabies virus genome was amplified using degenerate primers (Table S1). Primers were designed to be as sensitive to target strain variants as possible, while still being specific enough not to cross-react with non-targets. Sensitivity was achieved by targeting regions of high sequence similarity, identified through a multiple sequence alignment (MSA) of the target sequences. Specificity was achieved by targeting regions that do not appear to be similar to any other organisms, determined by searching a database of known genome sequences. Primer candidates were selected based on the combined results of the MSA and the sequence searches. This technique is a modified version of the approach outlined in Slezak et al. [16], which accommodates degenerate primer design for diverse target genomes and places a lower relative priority on primer uniqueness with respect to other known genomes. Because rabies virus lacks perfect primer-length conservation around the genomic regions of interest, it was necessary to identify degenerate primers for many non-conserved primer regions. From the identified primer candidate regions, which included both perfectly conserved regions and degenerate regions, individual primer pairs were selected that provided overlapping coverage of the DNA being sequenced. Final checks were performed to help avoid hybridization problems such as primer dimerization. RT-PCR, cloning, and sequencing Reverse transcription was performed using random hexamers and the SuperScript III reverse transcriptase kit (Invitrogen). The rabies virus cDNA templates were amplified using the Phusion polymerase kit (New England BioLabs, Ipswich, MA), following the manufacturer's instructions. PCR conditions consisted of 98°C for 30 s, followed by 40 cycles of 98°C for 15 s, 64°C for 20 s, and 72°C for 1.2 min. The final cycle was 72°C for 10 min. A plasmid control was generated to determine the error rate of the PCR and sequencing steps as described previously [17]. PCR products were prepared for sequencing using the QIAquick PCR Purification kit (Qiagen, Valencia, CA). Sequencing of an aliquot of a subset of 40 samples was carried out by Eureka Genomics, Hercules, CA, using an Illumina Genome Analyzer IIx.
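As an aside, the conserved-window step of such MSA-guided degenerate primer design can be sketched as follows. This is a toy illustration only, not the pipeline of Slezak et al.: the window length, degeneracy cutoff, and sequences are made up, and the specificity search against other genomes is omitted.

```python
# IUPAC codes for each set of observed bases (standard degeneracy table).
IUPAC = {frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
         frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("GC"): "S", frozenset("AT"): "W",
         frozenset("GT"): "K", frozenset("AC"): "M", frozenset("CGT"): "B", frozenset("AGT"): "D",
         frozenset("ACT"): "H", frozenset("ACG"): "V", frozenset("ACGT"): "N"}

def candidate_primer_regions(msa, length=20, max_degeneracy=8):
    """Slide a window over a gap-free MSA and report windows whose degenerate
    consensus covers at most `max_degeneracy` distinct sequence combinations."""
    candidates = []
    n_cols = len(msa[0])
    for start in range(n_cols - length + 1):
        cols = [{seq[i] for seq in msa} for i in range(start, start + length)]
        if any("-" in c for c in cols):
            continue                                   # skip windows containing alignment gaps
        degeneracy = 1
        for c in cols:
            degeneracy *= len(c)                       # product of per-column base counts
        if degeneracy <= max_degeneracy:
            primer = "".join(IUPAC[frozenset(c)] for c in cols)
            candidates.append((start, primer, degeneracy))
    return candidates

# Toy alignment of three target variants (not real rabies sequences).
msa = ["ATGACCGTTACGGATCAGGTTACCATG",
       "ATGACCGTCACGGATCAGGTTACCATG",
       "ATGACCGTTACGGATCAAGTTACCATG"]
for start, primer, deg in candidate_primer_regions(msa, length=12, max_degeneracy=4):
    print(start, primer, deg)
```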
Another aliquot of the same samples, plus an additional 4 samples, was sent for 454 sequencing at the Brigham Young University DNA Sequencing Center. Sequencing was performed as described previously [17,18]. For all samples sequenced by Illumina (paired-end read technology), overlapping read pairs (ORPs), generated by combining short fragment libraries with long sequencing reads, were used to reduce sequencing errors and improve rare-variant detection accuracy. The quality-filtering procedure was also described in [17,18].

Author Summary

Understanding the role of genetic variants within a viral population is a necessary step toward predicting and treating emerging infectious diseases. The high mutation rate of RNA viruses increases the ability of these viruses to adapt to diverse hosts and cause new human and zoonotic diseases. The genetic diversity of a viral population within a host may allow the virus to adapt to a diverse array of selective pressures and enable cross-species transmission events. In 2009 a large outbreak of rabies in Northern California involved a skunk rabies virus variant that efficiently transmitted within a population of gray foxes, suggesting possible adaptation to a novel host species. To better understand the evolution of rabies virus that enabled this host jump, we applied deep-sequencing analysis to rabies virus samples from the outbreak. Deep-sequencing data indicated that many of the genetic changes associated with the host jump occurred prior to 2009, and these mutations were present at very low frequencies in viral populations from samples dating back to 1995. These results suggest deep sequencing is useful for characterization of viral populations and may provide insight into ancestral genomes and the role of rare variants in viral emergence.

Read mapping to reference

The open-source read-mapping software SHRiMP2, which was shown to have high read-mapping sensitivity [19], was chosen for its ability to map as many reads as possible in the face of individual errors within each read [20]. All rabies reads were initially mapped to GenBank rabies reference sequence GI:260063801. This reference sequence was used as the common coordinate system for comparing samples and identifying coding frames. Based on a later observation that the newly sequenced rabies virus genome could differ by approximately 9% from the previously sequenced reference fox rabies sequence we had selected, we checked whether observed error rates would increase after introducing random mutations at 9% of the control reference sequence; no noticeable increase in error rates was observed, suggesting that the read-mapping parameters were able to tolerate this rate of divergence.

The binomial error model defines the expected number of non-consensus bases that should occur, given the assumed PCR and sequencing error rate, for a given number of observed reads, using a preset P-value (set to 0.01 with a Bonferroni correction). Non-consensus base calls were made when the number of reads with the rare variant exceeded the expected count threshold [17] (sketched below). The sequencing data used in this study, including reads and the analysis files used to make all base calls, are available at NCBI under BioProject PRJNA216100.

Consensus agreement between sequencing runs

Data analysis included 44 samples sequenced using 454 across 10,330 genome positions and 40 samples sequenced using Illumina across 10,379 genome positions. The minimum coverage cutoff was 50×.
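The threshold logic of the binomial error model described above can be sketched in a few lines. This is a hedged reconstruction, not the authors' code: the per-base error rate below is an illustrative placeholder (the study estimated it empirically from the plasmid control and the ORP mismatches), while the P-value of 0.01, the Bonferroni correction, and the position count come from the text.

```python
# Hedged sketch of a position-wise binomial variant-calling threshold.
# error_rate is an illustrative assumption; the real rate was estimated
# empirically from the plasmid control and the ORP data.
from scipy.stats import binom

def variant_call_threshold(coverage, error_rate, n_positions, alpha=0.01):
    """Smallest non-consensus read count NOT explainable by PCR/sequencing
    error alone at the Bonferroni-corrected P-value."""
    p_corr = alpha / n_positions                    # Bonferroni correction
    # ppf gives the largest count errors reach with probability 1 - p_corr
    return int(binom.ppf(1.0 - p_corr, coverage, error_rate)) + 1

cov = 17448                                         # ~mean Illumina ORP coverage
thr = variant_call_threshold(cov, error_rate=1e-4, n_positions=10451)
print(f"call a variant at this locus if >= {thr} of {cov} reads disagree")
```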
After quality filtering, the mean coverage for the sequencing data was 980× for the 454 data and 17,448× for the Illumina ORP data; the median coverage was 777× for the 454 data and 15,758× for the Illumina ORP data (Fig. S1). In total, 10,451 positions of the rabies genome were sequenced by either 454 or Illumina. Of these, 10,330 positions (98.8% of the 10,451 loci) were covered by both platforms, though not necessarily for all samples; an additional 36 positions (loci 5199-5206, 5215, 5216, 5218-5220, 5224, 9542-9563) were covered only by 454, and 85 positions (loci 183-267) were covered only by Illumina, also not necessarily for all samples. The few disagreements in consensus base calls between 454 and Illumina were resolved either by taking the base call with far superior coverage or by omitting the base call from data analysis entirely (a minimal sketch of this rule appears below). In most cases the disagreement was due to low coverage of the locus by one platform (just above the 50× cutoff) compared with coverage by the other platform (>500×). Hence the final consensus sequences of the two data sets contain no disagreement and can be considered accurate with high confidence.

Rare variant detection

To differentiate rare variants from sequencing errors, methodologies were developed to measure and control for sequencing and PCR errors, as described in the Fig. S1 legend [17,18]. Briefly, all mismatched read pairs in the ORPs were identified as sequencing errors and removed from analysis. Erroneous matching read pairs in the plasmid control were used to estimate the overall PCR error rate [17,18]. The rates of these two types of errors were then combined in a position-dependent binomial error model to make variant calls.

Figure 1. The evolutionary history was inferred by using the Maximum Likelihood method based on the Tamura-Nei model. The tree with the highest log likelihood (-14363.5660) is shown. The percentage of trees in which the associated taxa clustered together is shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically as follows: when the number of common sites was <100 or less than one fourth of the total number of sites, the maximum parsimony method was used; otherwise the BIONJ method with an MCL distance matrix was used. The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. The analysis involved 32 nucleotide sequences. All positions containing gaps and missing data were eliminated. There were a total of 9669 positions in the final dataset. Bootstrap values (percentage from 500 replications) are shown for the relevant nodes. Evolutionary analyses were conducted in MEGA5 [28]. (Samples are listed in Table S2.)

Variation across samples in consensus sequence

Among all 10,451 genome positions sequenced, 243 positions contained more than one consensus nucleotide across the samples, and 4 of these positions showed 3 different consensus nucleotides across the samples. These consensus-level variations occurred in all five genes of the rabies genome as well as in four intergenic regions (Fig. S2). The intergenic regions tended to have higher rates of consensus-level mutations compared with the 5 genes, with the region between G and L being the most variable (0.06 mutations per nucleotide, Table 1). The intergenic region between G and L remained the most variable, followed by the intergenic region between P and M. The M protein had the highest rate of consensus-level variation across all samples, but the G protein had the highest rate of consensus-level variation in the outbreak samples.
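The platform-reconciliation rule described under "Consensus agreement between sequencing runs" can be sketched as follows. The 10× dominance ratio is our illustrative assumption: the text only contrasts coverage just above the 50× cutoff with >500×, and does not give an exact threshold.

```python
# Minimal sketch of the 454/Illumina consensus-reconciliation rule for loci
# called by both platforms; the dominance ratio is an illustrative assumption.
def reconcile(call_454, cov_454, call_ill, cov_ill, dominance=10.0):
    if call_454 == call_ill:
        return call_454                    # platforms agree
    if cov_ill >= dominance * cov_454:     # e.g. >500x vs just above 50x
        return call_ill
    if cov_454 >= dominance * cov_ill:
        return call_454
    return None                            # omit the locus from analysis

print(reconcile("A", 55, "G", 600))        # -> 'G' (Illumina call dominates)
print(reconcile("A", 90, "G", 120))        # -> None (ambiguous, dropped)
```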
Consensus sequence reconstruction in the genome coding regions

N protein. Sequence data were obtained from amino acid 76 through the end of the N protein coding region. One amino acid substitution, F80L, differentiated the 2003-10 samples from the 1995-96 samples (Table 2). The samples from 2000 also had an F at residue 80 but differed from the other Humboldt samples at residue 106, with D replaced by G. A subset of the 2009-10 samples from the southernmost region of the outbreak area, collected early in the outbreak (foxes 5, 2, and 20), had an N to S substitution at site 119.

P protein. Phylogenetic analysis grouped samples primarily according to date, with limited grouping according to geography. Only one amino acid change, R30K, differentiated the 2003-10 samples from the 1995-96 samples (Figure S3). This residue lies in a conserved region of the protein [21,22]. The samples from 2000 differed from the 1995-96 samples at two sites, R86K and E156G, and from the outbreak samples at amino acid 30, with an R at this site rather than a K. Sequence data for the P protein were available for other CA skunk samples in GenBank, including one from neighboring Trinity Co. collected in 1997 (V650 CASK); all had an R at site 30, thus the R30K change is unique to the 2003-10 outbreak.

G protein. The G open reading frame had low coverage due to difficulty obtaining PCR products for this region (Figure S1). The primers for this project were designed prior to the availability of genomic sequence for the CA skunk variant, making primer design especially problematic for variable regions of the genome. This likely impacts the number of sub-consensus variants detected in this region. Phylogenetic analysis grouped samples according to date and geography (Figure 3). The sequence data obtained from samples collected between 1995 and 1996 differed from the 2003-10 samples by four amino acid changes in the G protein (L12Q, D427G, P485L, and S501P), each involving a change in polarity (Table 2). One amino acid change, G428S, characterized samples obtained from the southern region versus the northern region. Samples from Arcata were also defined by nucleotide changes, and samples from the southern region of Humboldt Co. (Loleta, Fortuna, and Hydesville) grouped together, as did most of the Eureka samples (Figure 3, Figure S5). The exceptions were Fx44 from mid Eureka and Fx34 from just east of Eureka, which consistently grouped with the Arcata samples (Figures 1 and 2).

L protein. Phylogenetic grouping of the L protein sequences is shown in Figure S6. Skunk 2 from 1995 differed from all other samples at site 69, where a Y was present rather than an H (Table 2).

Consensus sequence reconstruction in the genome noncoding regions

Reads from the noncoding regions were concatenated, and consensus data from 454 and Illumina sequencing were compared (Fig. S7). In general, mutations in the consensus sequence could result either from selection for de novo mutations that occurred during host infection, or from enrichment of sub-consensus variants that originated from the transmitting host. Since it is difficult to determine de novo mutations without complete sampling of an outbreak, we focused on those consensus mutations that 1) showed inversion between the historical and outbreak samples and 2) led to changes in the amino acids. Historical samples were defined as those collected between 1995 and 2000. Eleven loci were found to have such amino acid inversions; their frequency distributions for the historical and outbreak haplotypes are shown in Figure 5 A and B, respectively.
Although the frequencies of these two haplotypes sum to 100% at a given locus in all samples (except for outlier entries colored white, indicating no data, and black, indicating absence of these two haplotypes and presence of a third haplotype), the dominant haplotype tended to be near 100% (values below 100% have a median of 99.86%) and the minor variant had an extremely low presence (median 0.15%, and therefore could not be visualized if only one heatmap were presented). The outbreak haplotypes (shown in black letters at the bottom of the heatmaps) began as low-frequency variants (0.04% to 6.03%) in the historical samples and rose sometime between 2003 and 2009 to become the dominant haplotypes (94%-100%) in most of the outbreak samples (Fig. 5B); the historical haplotypes (shown in red letters in the heatmaps) went from being dominant (98%-100% presence) in the historical samples to low-level variants (0.05%-1.95%) in most of the outbreak samples, and even disappeared completely in seven of the outbreak samples (Fig. 5A). These data highlight the dynamic evolution of the rabies genome over time. Among the 11 inversion loci, 6 were found in G, 2 in M, and 1 in each of N, P, and L. Two pairs of inversion loci include (4593, 4596).

Data from Illumina sequencing were used to test the hypothesis that sub-consensus variants that were later enriched to become consensus could be detected at higher frequencies in the historical samples than those sub-consensus variants that did not. Loci shown in Figure 5 where inversion of the dominant and minor variants occurred between the historical and later samples are referred to as "inversion sites". The frequencies of sub-consensus variants at the inversion sites were compared to those of variants at non-inversion sites present in the 1995 samples (but not enriched to the consensus level in the 2003-10 samples) to determine whether variants at inversion sites were present in higher numbers in the pre-outbreak samples. The average frequency of sub-consensus variants at inversion sites was slightly but significantly higher than that of sub-consensus variants at non-inversion sites (p = 0.03, one-tailed two-sample t-test; a minimal sketch of this test appears below). Samples of individuals from relatively early in the outbreak (prior to July 1, 2009) were more likely to have the 1995 consensus haplotype detected at the inversion sites as a sub-consensus mutation, although the difference fell short of statistical significance (p = 0.08). Some amino acid inversion sites were also associated with other parameters, such as geographic location.

Nonsense mutations generate defective RNAs that may or may not be functional. Stop codon mutations were shown to be maintained within dengue virus populations, leading to altered viral fitness and thus influencing transmission dynamics [23]. Likewise, it has been shown that human respiratory syncytial virus mutants lacking the G gene are still able to form infectious particles in vitro [24].

Across all samples and genome locations sequenced by Illumina, 5146 variants were detected at 2302 genomic locations, or 22% of the 10,451 positions sequenced. The frequency of these variants ranges from extremely rare (0.02% at Fx5 genome location 2130; high detection sensitivity was possible due to high coverage of >196,000×, see Figure S1) to very common (38% at Fx40 genome location 1021, measured at 32% by 454). The mean frequency of the variant pool is 0.3%, indicating that the bulk of the variants detected are ultra-rare.
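The inversion-site comparison reported above (p = 0.03) is a one-tailed two-sample t-test, which can be sketched as follows; the frequency arrays are hypothetical placeholders, not the study data, and `alternative="greater"` requires SciPy 1.6 or later.

```python
# Sketch of the one-tailed two-sample t-test described above: are 1995
# sub-consensus frequencies higher at inversion sites than elsewhere?
# The arrays below are illustrative placeholders, not the study data.
import numpy as np
from scipy.stats import ttest_ind

inversion = np.array([0.0004, 0.0060, 0.0021, 0.0015, 0.0032, 0.0044])
non_inversion = np.array([0.0002, 0.0005, 0.0011, 0.0003, 0.0007, 0.0004])

stat, p = ttest_ind(inversion, non_inversion, alternative="greater")
print(f"t = {stat:.2f}, one-tailed p = {p:.3f}")
```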
To examine where in the genome mutations are most likely to occur, only those mutations that occur at >1% in a sample, or that occur in multiple samples with a cumulative frequency >1%, were retained for further analysis. There are 248 loci with such high-occurrence/frequency variants ("one-percent variants", Figure S9); their distribution across the coding and non-coding regions of the rabies genome (Figure S10) is very similar to that of the 243 mutations occurring at the consensus level (Figure S2). In fact, 58 of these one-percent variants occur at loci with consensus-level mutations.

Table 1. Consensus-level mutations found in 5 coding and 4 non-coding regions of the rabies genome.

Table 2. Amino acid changes according to genomic position, sample collection date, and location. Outbreak samples from South Humboldt County include those from Hydesville, Fortuna, Loleta, and Eureka. Samples from North Humboldt County include those from Arcata, Trinidad, and Patrick's Point State Park. 1: No sequence data was obtained for the first 76 amino acids of the N gene. 2: Hydesville and Fortuna samples have an S residue at this site. 3: Fx 32 has an E residue at this site. doi:10.1371/journal.pntd.0002555.t002

Discussion

Although rabies virus has jumped species multiple times in the past [5,8,9,25], the event is relatively rare, and deep genome sequence analysis has never been applied to examine the role of intra-host viral populations in such an event. Importantly, the rabies outbreak samples collected by the CDPH were accompanied by epidemiologically important documentation, such as exact date and location for most of the samples. Additionally, the CDPH's ongoing surveillance efforts provided a unique repository of samples from previous rabies host-jumping events that failed to transmit efficiently within the gray fox population. Next-generation sequencing technology has recently been used to examine viral heterogeneity of rabies genomes present in infected tissues but has not yet been optimized for detection of rare genotypes (less than 1%) [26]. Deep genome sequencing of these recent and past samples allowed us to define the viral mutational dynamics associated with a skunk rabies virus variant that efficiently transmitted within a population of gray foxes, suggesting possible adaptation to a novel host species.

Historically, the skunk rabies virus variant present in Humboldt Co. has been detected in foxes more frequently than in any other region of the state, but not until 2009 did transmission shift so disproportionately to the fox population [11]. Our data indicate that the outbreak haplotype responsible is able to be transmitted readily by both skunks and foxes, since no genetic changes in the viral sequence differentiated the skunks from the foxes in the epizootic. These results are similar to those from a study describing the genetic changes associated with the Arizona CST events, in that the rabies genotype from the donor species (bats) could not be differentiated from that found in the recipient species (skunks or foxes) [8].

All of the Humboldt samples had an unusual sequence, ETGL, as the final four amino acids at the carboxyl end of the G protein. A recent study demonstrated that the last four amino acids in this region impact the virulence of the virus [27]; if the final sequence is ETRL, then the virus is attenuated due to induction of neuronal apoptosis.
According to this study, virulent, wild-type RABV haplotypes have QTRL as the terminal sequence and do not cause apoptosis of the host cell. The impact of the ETGL haplotype on viral virulence is unclear, although it did not perceptibly impact virulence in skunks and foxes. No amino acid changes were unique to the 2003 samples as compared to the 2009-10 outbreak samples from the Eureka area and further south (Table 2); however, distinctive nucleotide changes were present in the noncoding and coding regions. Temporal and genetic data indicate that the outbreak began in south Humboldt Co. and spread north to Arcata, and three amino acid changes characterized samples from the Arcata area and further north (Table 2). Whether or not these genetic changes contributed to the explosive increase in fox rabies that occurred primarily in Arcata during 2009 would require further study (Fig. 2). It seems likely that a subset of the foxes infected during 2009-10 were part of a fox-to-fox transmission cycle, with limited skunk-to-fox transmission occurring as well.

Although relatively few amino acid changes were associated with the 2003-10 host jump, it is possible that one or more of these changes may have been required for efficient transmission of the virus in the local gray fox population. Despite an extensive search of the rabies virus literature, none of these amino acid changes were described by other studies as being associated with a change in viral phenotype. Transmission studies using reverse genetics are required to identify which genetic changes are responsible for increased transmissibility. Information from this type of analysis may shed light on the risk of a similar host jump occurring in other regions, including regions where gray foxes overlap with populations of mesocarnivores that have threatened or endangered status.

Both consensus and deep-sequencing data indicate that the haplotype associated with sustained fox-to-fox transmission during the 2009 outbreak arose prior to 2009, since several nonsynonymous mutations that were present in the consensus sequences of skunk and fox rabies samples obtained from 2003-2010 were present at the sub-consensus level (as rare variants in the viral population) in skunk and fox samples from 1995 (Figure 5). Analysis of the Illumina ultra-deep sequencing data supported the hypothesis that variants that were later enriched to become consensus could be detected at higher frequencies than variants that did not. In particular, all of the mutations that distinguish the 1995-96 haplotype from the 2003-10 haplotype were present as rare variants in Fx 31, the only 2003 sample for which deep-sequencing Illumina data are available. These results suggest that analysis of rare variants within a viral population may yield clues to ancestral haplotypes and identify rare haplotypes that have the potential to be selected for if environmental conditions change.

Supporting Information

Figure S1 Coverage of the rabies genome by the two sequencing platforms. Red: Illumina (Eureka Genomics). Blue: 454. Each trace corresponds to coverage for one sample. Color bars at the bottom denote the locations of the 5 proteins in the rabies genome: N, P, M, G, L (from left to right). Illumina data were generated using overlapping read pairs (ORP). ORP analysis is a new method that assesses genome-position-specific sequencing error. The approach uses paired-end sequencing to sequence a single DNA fragment twice.
Sequencing is initiated once from each end of the DNA fragment to produce two distinct sequencer reads. In other applications, paired-end sequencing uses larger fragment sizes to ensure that each read generated from the same DNA fragment covers different parts of the molecule, to recover more of the original fragment. ORP uses shorter DNA fragments and longer read lengths to maximize the number of bases in the DNA fragment that are sequenced twice. The redundant sequencing means that reads with base calls that disagree with their overlapping pair are recognized as errors and discarded, effectively lowering the sequencing error rate. A position-specific base call supported by a read pair can still disagree with the consensus base call, leading to detection of rare variants. ORPs provide an important benefit over the alternative of simply adding higher sequencer coverage, since the detected mismatches between read pairs give an empirically derived sequencing error rate specific to each sequencer run. (DOC)

Figure S3 Phylogram constructed from P gene amino acid sequence. The tree with the highest log likelihood is shown. The percentage of trees in which the associated taxa clustered together is shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically as follows: when the number of common sites was <100 or less than one fourth of the total number of sites, the maximum parsimony method was used; otherwise the BIONJ method with an MCL distance matrix was used. The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. The analysis involved 44 amino acid sequences. The coding data were translated assuming a standard genetic code table. All positions containing gaps and missing data were eliminated. There were a total of 262 positions in the final dataset. Evolutionary analyses were conducted in MEGA5. (DOC)

Figure S4 Phylogram constructed from M gene amino acid sequence. The evolutionary history was inferred by using the Maximum Likelihood method based on the JTT matrix-based model. The tree with the highest log likelihood (-631.6601) is shown. The percentage of trees in which the associated taxa clustered together is shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically by applying the Neighbor-Join and BioNJ algorithms to a matrix of pairwise distances estimated using a JTT model, and then selecting the topology with the superior log likelihood value. The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. The analysis involved 46 amino acid sequences. The coding data were translated assuming a standard genetic code table. All positions containing gaps and missing data were eliminated. There were a total of 202 positions in the final dataset. Evolutionary analyses were conducted in MEGA5. (DOC)

Figure S5 Phylogram constructed from G gene amino acid sequence. The evolutionary history was inferred by using the Maximum Likelihood method as described for Fig. S3. The tree with the highest log likelihood (-1510.8247) is shown. The analysis involved 44 amino acid sequences. The coding data were translated assuming a standard genetic code table. All positions containing gaps and missing data were eliminated. There were a total of 490 positions in the final dataset. Evolutionary analyses were conducted in MEGA5. (DOC)

Figure S6 Phylogram constructed from L gene amino acid sequence. The evolutionary history was inferred by using the Maximum Likelihood method as described for Fig. S3. The tree with the highest log likelihood (-3901.1361) is shown.
The analysis involved 38 amino acid sequences. The coding data were translated assuming a standard genetic code table. All positions with less than 95% site coverage were eliminated. There were a total of 1322 positions in the final dataset. Evolutionary analyses were conducted in MEGA5. (DOC)

Figure S7 Phylogram generated using nucleotide sequences from the noncoding regions. The evolutionary history was inferred by using the Maximum Likelihood method as described for Figure 1. The analysis involved 43 nucleotide sequences. All positions with less than 95% site coverage were eliminated. There were a total of 862 positions in the final dataset. Samples and regions are labeled as previously described. (DOC)

Figure S9 Genomic distribution of rabies variants found by Illumina sequencing. Not all rare variants are included in this graph; only those variants (N = 248) that occurred at above 1% in an individual sample, or that have a cumulative frequency of >1% over all samples, are displayed. The color bar on the right indicates the frequency of these rare variants. Variants occurring at above 1% in an individual sample appear as green to red pixels, and variants that occur at lower frequencies in an individual sample but occur in multiple samples, such that their cumulative frequency is >1%, appear as blue vertical streaks. (DOC)

Table S2 Sample metadata. Sample collection date and location (city or county) are listed. In many cases the street location was obtained for a sample, and this information was used for placement of the sample in Figure 2. (DOC)
Pilot Sequence Design for mmWave Cellular Systems With Relay Stations in the Presence of Blockage

Due to their short wavelength and weak diffraction ability, millimeter-wave (mmWave) signals are highly susceptible to blockage, which results in significant degradation of the received signal power. As a possible solution for overcoming the blockage problem in millimeter-wave communication systems, the deployment of a relay station (RS) has been considered in recent years. In this paper, we discuss the problems to be considered in a relay-assisted mmWave cellular system based on orthogonal frequency division multiplexing. We describe a frame structure and a pilot-based training method to achieve efficient RS selection during blockage. In addition, a method designed to overcome the inter-symbol interference problem caused by different symbol time offsets of pilot signals received from adjacent RSs in the relay-assisted mmWave cellular system is discussed. Then, we propose two different types of pilot sequences that allow a mobile station to distinguish among the pilot sources in multi-cell multi-relay environments: pilot signals based on the Zadoff-Chu sequence (PS1) and pilot signals based on the m-sequence (PS2). The correlation property of PS2 is derived and compared with that of PS1 and another sequence (the Gold sequence). Simulations are performed using a blockage model to verify the properties, constraints, and advantages and disadvantages of the proposed pilot sequences in RS-assisted mmWave cellular systems.

The deployment of RSs can also help address coverage problems in mmWave communication systems. When a mobile station (MS) is connected to an RS in the same cell and moves around within the cell, it does not require a handover procedure. The potential benefits of deploying RSs in mmWave networks have been studied [16]-[18]. Xie et al. [16] demonstrated that RSs can be effectively used in mmWave cellular networks to help alleviate blockages and provide line-of-sight (LoS) links when blockage occurs. With the assistance of RSs, more LoS links are expected, and the network signal-to-noise ratio (SNR) or signal-to-interference-plus-noise ratio (SINR) performance can be improved significantly. Lan et al. [17] proposed a deflection routing scheme to improve the effective throughput by sharing time slots for the direct path with the relay path in mmWave wireless personal area networks. Biswas et al. [18] investigated the coverage probability and transmission capacity of relay-assisted outdoor mmWave networks using stochastic geometry tools. Yang and Xiao [19] studied the impact of the beamwidth and self-interference coefficient on the maximum achievable rates of a two-hop relaying mmWave system. The basic concept of relay-assisted mmWave networks has been extended to either improve the performance or reduce the computational complexity [20]-[25]. Abbas and Hamdi [20] examined the impact of employing multiple RSs and larger arrays on the overall performance. Belbase et al. [21] proposed a two-way relay scheme to double the spectral efficiency by accomplishing bi-directional data exchange in two time slots, as opposed to a one-way relay scheme where bi-directional data exchange between two end users requires four time slots. Xue et al. [22] proposed a joint source and relay precoding design scheme for mmWave systems with multiple antennas; the rate maximization problem with per-antenna power constraints is solved while taking into account the computational complexity and the sparse characteristics of mmWave channels.
Jagyasi and Ubaidulla [23] proposed device-to-device (D2D) relaying and a low-complexity mmWave system architecture to alleviate the blockage problem in mmWave bands and improve the consistency of the user experience. Wu et al. [24] discussed two-hop D2D relaying for mmWave cellular networks when an infrastructure relay is not available; the coverage probability and spectral efficiency of relay-assisted mmWave cellular networks are derived when the D2D links are implemented in either uplink mmWave or uplink microwave bands. Deng et al. [25] proposed a low-complexity architecture design technique for relay-assisted mmWave communication systems to reduce the number of RF chains while mitigating the effect of residual loopback self-interference. Another important matter in relay-assisted mmWave networks is finding an optimal location for a fixed or mobile RS [26]-[28]. Sakarellos et al. [26] investigated the optimal placement of fixed radio relays in mmWave dual-hop networks when different types of relays are employed. Kong et al. [27] proposed a new method (AutoRelay) for autonomous mobile relays, such as drones and self-driving cars, to determine the optimal position accurately and quickly. Sanchez and Alonso [28] proposed a two-hop relay architecture using mobile relay technology for high-speed trains with long-term evolution (LTE) and mmWave bands.

Thus, RS deployment can be a possible solution for the blockage problem in mmWave cellular systems. However, to the best of our knowledge, studies on the design of a training sequence that allows an MS to find an optimal RS in relay-assisted mmWave cellular systems have not yet been reported. The first problem to be considered when designing a training sequence is the number of possible IDs to be generated in a relay-assisted mmWave cellular system. In this system, the base station (BS) or RS should forward data to an adjacent RS/BS with an LoS link to the MS whenever a blockage occurs between the BS/RS and MS; the adjacent RS/BS then forwards the data to the MS. Implementation of this concept requires the MS to monitor the channel conditions of adjacent RSs/BSs in case blockage occurs on the serving link. This, in turn, requires the RSs and BSs to periodically transmit training signals carrying their node IDs by sweeping their transmitter (Tx) beams. The source of the serving link could be either a BS or an RS. Because the MS needs to learn the channel conditions of adjacent RSs (or BSs) in a multi-cell environment, the training signals transmitted from the RSs and BSs have to contain information on their identity (ID) (cell ID and RS ID), unlike in traditional cellular systems, where repeaters/relays do not have their own IDs. In relay-assisted mmWave cellular systems, the MS needs to receive the data through the optimal (aligned) beam of the selected RS, unlike in a traditional cellular system. The training sequence has to provide a large number of different sequences, because the number of required training sequences increases in proportion to the product of the number of cells (BSs) and the number of RSs in a cell. In 5G NR, there are 1,008 different physical cell identities (PCIs) [9], [10]. Accordingly, the training signal should be capable of generating a large number of IDs and should have low correlation, to enable MSs to distinguish different sequences in multi-cell multi-relay environments. The training sequence in a relay-assisted mmWave cellular system based on OFDM can be transmitted in either preamble or pilot format.
In the preamble format, only a training sequence is transmitted, as in the synchronization signal block (SSB) in 5G NR [9]. However, in relay-assisted mmWave cellular systems, blockage on the serving link may occur at any time. Thus, the pilot format would be more effective, because the MS needs to monitor the channel conditions of adjacent RSs/BSs while data transmission takes place on the serving link. In addition, the processing time for RS selection and beam alignment is shorter when the pilot format is used, because channel monitoring can be performed using pilots in OFDM symbols. If the preamble format were used, the processing time would be much longer, because the period between preambles (SSBs) is much longer than the OFDM symbol period [10]. For example, if link re-establishment is performed using the preamble (SSB) defined in the 3GPP specifications, the processing time will be several hundred milliseconds. Note that the processing time is proportional to the preamble period, and the preamble period ranges from 5 ms to 160 ms depending on the channel condition. However, in untethered virtual reality (VR), latency exceeding 15 ms can cause motion sickness [6]. This 15 ms bound is much shorter than the time required for link re-establishment in the preamble-based approach.

However, the use of the pilot format causes the pilot signals received from adjacent RSs (or BSs) to experience different symbol time offsets (STOs) due to their different distances. Although the same subcarriers are assigned to the pilots of all RSs (or BSs) to reduce interference on the data subcarriers, different STOs may generate significant inter-symbol interference (ISI). Because the MS has to distinguish the sequences from the pilots, we need a method to overcome the ISI problem caused by multiple RSs with different STOs, in addition to the well-known ISI problem caused by a multipath channel [30]. Our approach to these problems starts with the design of a frame structure that enables an MS to monitor the channel condition using the pilot signals received from adjacent RSs (or BSs) in a relay-assisted mmWave cellular system. Next, we develop a method to overcome the ISI problem caused by pilots from multiple RSs in different locations. This leads us to propose two different types of pilot sequences that can generate a large number of IDs: PS1 and PS2. Here, PS1 and PS2 are pilot signals based on the Zadoff-Chu (ZC) sequence and the m-sequence, respectively [31]. The correlation property of PS2 is derived and compared with that of PS1 and the Gold sequence (GS). Simulations are performed to verify the properties, constraints, advantages, and disadvantages of the sequences.

The remainder of this paper is organized as follows. Section II describes a system model for RS-assisted mmWave cellular systems; the operational concept, frame structure, and synchronization problems for pilot-based RS-assisted mmWave cellular systems are discussed. Section III describes the two different types of sequences (PS1 and PS2) for RS-assisted mmWave cellular systems; the correlation property of PS2 is also derived and compared with that of PS1 and GS. Section IV presents an evaluation of the performance of the proposed pilot sequences using a simple model of a pilot-based mmWave cellular system with a one-hop relay. Conclusions are drawn in Section V.
II. SYSTEM MODEL

MmWave signals are highly sensitive to blockage effects compared with low-frequency radio frequency (RF) signals. The blockage can be caused by relatively static obstacles, such as buildings and mountainous terrain, or by mobile users, such as walking people and vehicles [1], [2]. Two blockage models were proposed by the 3GPP study group on mmWave channel models [6]: Model A, a stochastic model, and Model B, a model based on geometric channel and spatial properties [32], [33]. In this study, we used Model B for blockage modeling and simulation because it is geometry-based, which makes it easier to control the number of blockers and their distances.

Fig. 1 illustrates the operational concept of a pilot-based RS-assisted mmWave cellular system. The figure shows one BS and one RS for simplicity. It is assumed that a dedicated link is established between the BS and RS through wireless or wired backhaul. Although only one BS and one RS are shown, the concept can easily be extended to multi-cell multi-relay environments. In Phase 1, an LoS link is assumed to exist between the BS and MS, and the MS is served by the BS. The RS sweeps its transmit (Tx) beam to transmit pilot signals in case blockage occurs between the BS and MS. In this phase, the BS and RS play the roles of serving source and beam-sweeping source, respectively. The serving source transmits data and pilot signals simultaneously, whereas the beam-sweeping source transmits only pilot signals, in preparation for possible blockage. As shown in the figure, the pilot signals of the serving source and all beam-sweeping sources are allocated to the same subcarriers to avoid interference between the data and the pilots. In Phase 2, blockage occurs on the serving link between the BS and MS. The MS then starts receive (Rx) beam sweeping to find an optimal RS and the corresponding Tx/Rx beams using the pilot signals received from adjacent RSs/BSs. Comparing the signals received from adjacent BSs/RSs, the MS selects the link with the highest power. In Phase 3, the MS receives data from the selected RS with the corresponding Tx/Rx beam. In this phase, the RS and BS play the roles of serving source and beam-sweeping source, respectively. In Phase 4, blockage occurs on the serving link between the RS and MS. The MS starts Rx beam sweeping to find an optimal serving source (BS/RS) and the corresponding Tx/Rx beam. If the BS is selected as the optimal node with the corresponding Tx/Rx beam, the system returns to the scenario of Phase 1.

Fig. 2 illustrates the frame structure of the pilot-based RS-assisted mmWave cellular system depicted in Fig. 1. In this figure, the system is assumed to operate in time division duplexing (TDD) mode with one RF chain for all nodes (BS, RS, MS). The first and second frames (rows) show the signals transmitted from the BS to the MS and from the RS to the MS, respectively; the third and fourth frames (rows) show detailed versions of the first and second frames. In the first slot, the BS and RS have the roles of serving source and beam-sweeping source, respectively. In the second slot, the roles of the RS and BS are exchanged. The serving beam period and the beam-sweeping period take place exclusively and alternately in time. The serving beam period is composed of multiple downlink (DL) and uplink (UL) data transmission periods. The beam-sweeping period is composed of multiple pilot transmission periods and link setup periods.
During the pilot transmission period, the beam-sweeping sources transmit their pilot signals in different beam directions. In the link setup period, the link with the newly selected source is established.

While the MS receives data from the serving source (before blockage occurs), the MS is synchronized to this source. The MS not only receives pilot signals from the serving source but also from adjacent RSs/BSs for channel monitoring while it receives data from the serving source. However, because the MS is synchronized to the serving source, the pilot signals received from adjacent beam-sweeping sources are not synchronized to the MS. The pilot signals experience different STOs because of the different locations of the RSs/BSs. Thus, discontinuities may occur in the pilot signals received from beam-sweeping sources during the fast Fourier transform (FFT) window, which causes ISI. The ISI may degrade the performance of optimal node selection with the corresponding Tx/Rx beam, because the MS is synchronized to the serving source. To avoid this problem, the MS could perform node selection after synchronizing to all adjacent beam-sweeping sources. However, the synchronization process is computationally intensive and requires a significant amount of operational time.

This study proposes a simple yet effective method for pilot-based RS-assisted mmWave cellular systems to circumvent the synchronization problem that arises during channel monitoring of beam-sweeping sources. The discontinuity within the FFT window of the MS, which is synchronized to the serving source, is caused by a discontinuous phase on the symbol boundary and by the STOs among the received pilot signals. The STO effect cannot easily be compensated in a real environment because of the different propagation delays from different beam-sweeping sources. To solve the synchronization problem, two concepts are used when designing the frame structure and pilot signal. First, each beam-sweeping source maintains its Tx beam direction during the sub-period of DL data transmission in the serving source, so that the discontinuity caused by beam switching in beam-sweeping sources is avoided during the sub-period of DL data transmission. As illustrated in Fig. 2, the RS/BS pilot beams are maintained during the sub-period of BS/RS DL data transmission. The MS performs Rx beam switching while the beam-sweeping source maintains its Tx beam direction. Second, the pilot signals of the beam-sweeping sources are designed to have a continuous phase on the symbol boundary during the sub-period of DL data transmission. The cyclic prefix (CP) is normally used to avoid the ISI problem caused by a multipath channel [30]. However, even with the CP, a discontinuous phase may occur on the boundary of an OFDM symbol in a pilot-based RS-assisted mmWave cellular system. The continuous phase can be obtained by a cyclic shift of the OFDM symbol by the amount corresponding to the CP length. Then, although STOs exist in the pilot signals received from the beam-sweeping sources, the discontinuity does not appear within the FFT window of the MS. Fig. 3 depicts an example of time-domain pilot signals received from adjacent beam-sweeping sources with different STOs when the MS is synchronized to the serving source. In the normal mode, discontinuities can be observed within the FFT window, whereas with the proposed method the discontinuity does not occur. If the system were operated in the normal mode, the orthogonality among the subcarrier frequency components would also be destroyed, resulting in inter-channel interference (ICI).
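A small numerical experiment illustrates the continuous-phase idea: if each successive pilot OFDM symbol is the previous one cyclically shifted by the CP length, the concatenated pilot stream is a single periodic waveform, so an FFT window placed at an arbitrary STO captures a clean cyclic segment with no boundary discontinuity. The construction below is our reading of the method, not the authors' code, and the sizes are scaled down from the paper's 4096-point FFT and CP of 288 for readability.

```python
# Sketch: continuous-phase pilots via a per-symbol cyclic shift by the CP length.
import numpy as np

N, N_CP, RHO = 256, 18, 8                 # FFT size, CP length, pilot spacing
rng = np.random.default_rng(0)
X = np.zeros(N, dtype=complex)
X[::RHO] = np.exp(2j * np.pi * rng.random(N // RHO))   # unit-power pilot bins
body = np.fft.ifft(X)                     # time-domain pilot OFDM symbol

def pilot_stream(num_symbols, continuous):
    """Concatenate CP-extended pilot symbols; optionally apply the proposed
    per-symbol cyclic shift by the CP length."""
    blocks = []
    for m in range(num_symbols):
        b = np.roll(body, -m * N_CP) if continuous else body
        blocks.append(np.concatenate([b[-N_CP:], b]))      # prepend CP
    return np.concatenate(blocks)

for continuous in (False, True):
    rx = pilot_stream(4, continuous)
    sto = 100                              # FFT window straddles a symbol boundary
    Y = np.fft.fft(rx[sto:sto + N])
    off_pilot = np.sum(np.abs(Y) ** 2) - np.sum(np.abs(Y[::RHO]) ** 2)
    print(f"continuous={continuous}: off-pilot leakage = {off_pilot:.3e}")
```

In the discontinuous (normal-mode) case, the leakage lands on the data subcarriers as ICI, which is what Fig. 4 quantifies as an SIR loss; in the continuous case the leakage is numerically zero regardless of the window offset.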
The performance of DL data transmission can be significantly degraded by the effect of ICI. The proposed method can reduce the effect of ICI as well as ISI in pilot-based RS-assisted mmWave cellular systems. Fig. 4 depicts the signal-to-interference ratio (SIR) on the DL data subcarriers when the value of the STO varies. Here, the FFT size, pilot spacing, and CP length are set to 4096, 32, and 288, respectively. The figure shows that, in the normal mode (discontinuous phase), the SIR decreases significantly as the STO increases, due to the ICI effect. However, the SIR remains unchanged when the proposed method (continuous phase) is used.

Next, a system model for the proposed pilot-based mmWave system is described. As shown in Fig. 2, the frame consists of a data transmission period, a pilot transmission period, and a link setup period. In this study, we focus only on the pilot transmission period, because various conventional techniques can be used in the other periods. In the pilot transmission period, the pilot signal on the k-th subcarrier of the beam-sweeping source in an OFDM system is given by (1), where k, k_P, n, K_P, N, b, N_B, q, N_N, and N_F are the subcarrier index, pilot subcarrier index, pilot sequence index corresponding to k_P, pilot subcarrier set, sequence length, beam ID (BID), number of BIDs, node ID (NID), number of NIDs, and FFT size, respectively. Furthermore, c and N_C are the cell ID (CID) and the number of CIDs, respectively, and [s]_n denotes the n-th element of the pilot sequence vector s. The pilot signal of the serving source is also given by (1), except that the other subcarriers are used for data. In this study, it is assumed that there exist multiple BSs (cells) and multiple RSs in a cell. The RSs in the same cell have the same CID but different NIDs; NID 0 is allocated to the BS. Thus, a node can be either an RS or a BS. The signal received by the i-th Rx beam from the beam-sweeping source and the serving source is given in (2). In (2), h, l, and L_h denote the channel coefficient, channel tap index, and number of taps, respectively, and the superscript s denotes the serving source. η_Tx and η_Rx denote the Tx and Rx beamforming gains, respectively. In addition, σ_{c,q}, σ_S, and W denote the STO of the beam-sweeping source, the STO of the serving source, and the noise, respectively. In the RS beam-sweeping period, the signal in (2) is given by the pilot signals received from the beam-sweeping sources (q ≥ 1) and the serving source (BS: q = 0). In the BS beam-sweeping period, the MS is served by the RS; in this period, only BSs are considered as potential beam-sweeping sources, because this study is concerned only with one-hop relays. To select a target node, the MS performs correlation between the received signal and the pilot sequence as in (3), where the term e^{j2πmk/N_F} is multiplied to compensate for the effect of the STO in the frequency domain. When the MS is synchronized to the serving source, the STO value m, estimated by the conventional synchronization technique, approximates σ_S. When the received signal is multiplied by this term, the STO effect caused by the serving source is compensated for. However, the STOs caused by beam-sweeping sources generate discontinuous phases within the FFT window of the MS unless the proposed compensation method is used. Finally, the target node can be selected by determining the parameters that maximize the correlation function, as in (4), where ĉ, q̂, b̂, and î denote the estimated CID, NID, BID, and Rx beam index, respectively.
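Equations (3)-(4) amount to a matched-sequence search. The sketch below is a hedged illustration of that selection rule with the BID and Rx-beam dimensions omitted for brevity; `candidates` stands in for the PS1/PS2 generators of Section III.

```python
# Hedged sketch of the selection rule in (3)-(4): correlate the received pilot
# bins against every candidate (CID, NID) sequence and keep the argmax.
import numpy as np

def select_node(rx_pilot_bins, candidates, m, k_pilot, N_F):
    """rx_pilot_bins: received values on the pilot subcarriers k_pilot.
    candidates: dict mapping (cid, nid) -> reference pilot sequence.
    m: STO estimate from synchronization to the serving source."""
    derotate = np.exp(2j * np.pi * m * k_pilot / N_F)   # undo serving-source STO
    y = rx_pilot_bins * derotate
    # correlation of (3); the argmax of (4) over the candidate IDs
    return max(candidates, key=lambda ids: abs(np.vdot(candidates[ids], y)))
```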
Because the BID and Rx beam index can easily be identified from the transmission time, as in 5G NR, we develop a pilot design technique that enables us to estimate the CID and NID in a multi-cell multi-relay environment. This technique is discussed in Section III.

III. THE PROPOSED PILOT SEQUENCES FOR RS-ASSISTED mmWave CELLULAR SYSTEMS

In this section, we describe two different types of pilot sequences, PS1 and PS2, for an OFDM-based mmWave cellular system with one-hop relays. Specifically, PS1 and PS2 are pilot signals based on the ZC sequence and the m-sequence, respectively. Both of these sequences are widely used for preamble and pilot design owing to their low correlation. PS1 is generated by allocating the CID and NID to the parameters of the ZC sequence to provide a large number of IDs. PS2 can be considered a new sequence, based on the m-sequence, that provides a large number of IDs with low cross-correlation.

PS1 is generated by mapping the CID and NID to a root index and a cyclic shift of a prime-length ZC sequence, respectively, as in (5) and (6). Here, 0 < r_c < N, 0 ≤ v < N, and 0 ≤ q ≤ N/G. Z, r_c, and r̄_c are the ZC sequence, the root index of the ZC sequence corresponding to CID c, and the modular inverse of r_c, respectively. G is a phase-rotation parameter used to distinguish sequences with different NIDs q. The other parameters are defined as in (1). The pilot sequence vector s in (6) is allocated to the subcarriers as given in (1). The phase rotation with slope qG is converted into a cyclic shift with spacing r̄_c qG because of the property of the ZC sequence. In PS1, N_C and N_N become N − 1 and N/G, because the CID and NID are distinguished by the root index and the cyclic shift, respectively. However, because the STO has the effect of a linear phase rotation in the frequency domain, the STO can produce an ambiguity in NID detection. To avoid this situation, it is necessary to specify the parameter G such that its value is sufficiently large to cover the phase rotation caused by the STO. When G = 1, N_N is equal to N − 1; however, N_N decreases when G increases. The cross-correlation value of PS1 becomes zero if two nodes are in the same cell (different NIDs, same CID), and 1/√N if two nodes are in different cells (different CIDs), regardless of their NIDs.

The m-sequence has been widely used for preamble and pilot design owing to its good auto-correlation property. However, it has a limited ability to distinguish different sequences because of its poor cross-correlation property. The GS is often used for applications in multiple-access communication systems because it can provide a large set of sequences with enhanced cross-correlation properties. The GS is obtained by selecting preferred pairs of m-sequences and their combinations. Unlike that of the m-sequence (two-valued), the cross-correlation function of the GS is three-valued. PS2 is proposed to reduce the cross-correlation value further while maintaining a large set of sequences. PS2 is generated by multiplying two sequences, obtained from the DFT of one m-sequence, with different cyclic shifts, as in (7), where N = 2^n − 1. Here, p and P are the m-sequence and the DFT of the m-sequence, respectively. The pilot sequence vector s in (7) is allocated to the subcarriers as given in (1). Different cyclic shifts (d_0, d_1) are assigned to P, depending on the values of the CID and NID (c, q).
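To make the two constructions concrete, the sketch below generates PS1 and PS2 as we read (5)-(7): PS1 as a prime-length ZC sequence whose root carries the CID and whose phase slope qG carries the NID, and PS2 as the product of two differently shifted copies of the DFT of one bipolar m-sequence, s[v] = P[(v+d0) % N] · P*[(v+d1) % N]. The exact indexing and conjugation conventions are our assumptions, since the display equations are not reproduced here; the brute-force check at the end illustrates the low cross-correlation.

```python
# Hedged sketches of PS1 (ZC-based) and PS2 (m-sequence-based) generation.
import numpy as np

def zc(root, N):
    """Zadoff-Chu sequence of odd prime length N."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * root * n * (n + 1) / N)

def ps1(cid, nid, N=127, G=63):
    """Our reading of (5)-(6): root <- CID, phase slope nid*G <- NID."""
    n = np.arange(N)
    return zc(cid, N) * np.exp(2j * np.pi * nid * G * n / N)

def m_sequence(n=6):
    """Bipolar m-sequence of length 2^n - 1 (recurrence b[k+6] = b[k+5] XOR b[k],
    i.e. the primitive polynomial x^6 + x^5 + 1)."""
    state, bits = [1] * n, []
    for _ in range(2 ** n - 1):
        bits.append(state[-1])
        state = [state[0] ^ state[-1]] + state[:-1]
    return 1 - 2 * np.array(bits)          # map {0,1} -> {+1,-1}

N = 63
P = np.fft.fft(m_sequence())

def ps2(d0, d1):
    """Our reading of (7): product of two cyclic shifts of P and conj(P)."""
    return np.roll(P, -d0) * np.conj(np.roll(P, -d1))

# Brute-force the normalized cross-correlation of one PS2 sequence against
# all others; the analysis in the text predicts a small, bounded maximum.
ref = ps2(3, 10)
peak = abs(np.vdot(ref, ref))
worst = max(abs(np.vdot(ps2(d2, d3), ref)) / peak
            for d2 in range(N) for d3 in range(N)
            if d2 != d3 and (d2, d3) != (3, 10))
print(f"max normalized PS2 cross-correlation = {worst:.4f}")  # ~0.129 for N=63
```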
Because PS2 is obtained by multiplying two differently shifted copies of P and P* (the DFT of the m-sequence and its conjugate), it is not affected by the ambiguity problem in CID and NID detection. Because any values of (d_0, d_1) can be used as long as d_0 ≠ d_1, the number of available sequences in PS2 becomes (N − 1)N. The number of NIDs (N_N) mapped to each CID becomes (N − 1)N/N_C.

Next, the correlation property of PS2 is analyzed. The correlation function of PS2 is defined in (8). Ignoring c and q for notational convenience, the correlation function of PS2 can be rewritten as (9). If n_1 and n_3 are replaced with n_1 = n_0 + δ_0 and n_3 = n_2 + δ_1, respectively, (9) can be expressed as (10). Here, the case with δ_0 = 0 is removed, because the corresponding term is 0 when δ_0 is 0. The use of the "shift-and-add property" of the m-sequence [34] enables us to replace the term p_{n_0} p_{n_0+δ} by p_{n_0+D_δ}; this property states that the multiplication of an m-sequence and its own cyclic shift yields another cyclic shift of the same m-sequence. Here, D_δ denotes the amount of shift caused by multiplying an m-sequence by its shifted version with an offset δ ranging from 1 to N − 1. For every shift δ, there exists a unique integer D_δ such that this relationship holds. Then, the first factor in (10) can be expressed as (11). Furthermore, ϒ in (10) can be rewritten as (12). If n_3 is replaced with n_3 = n_2 + δ_1, (12) can be expressed as (13). Substituting (11) and (13) into (10) yields (14).

When both a and b are zero, |A| becomes N − 1. However, when a is not equal to b, |A|² in (14) can be expressed as (15). When (δ_0′ − δ_0) % N is equal to τ, ranging from 1 to N − 1, (15) is expressed as (16). Here, % represents the modulo operation. Note that N − τ is not considered as a value of δ_0, because D_{(δ_0+τ) % N} would become D_0 (out of range). For notational convenience, D_{δ_0+τ} is used for D_{(δ_0+τ) % N} in the following equations. To simplify (16) further, the following proposition and corollaries are used (Proof: see Appendix C).

Because the variable δ_0 in (16) ranges from 1 to N − 1, the range of D_{δ_0} is from 1 to N − 1. Furthermore, because τ is a constant integer ranging from 1 to N − 1, the range of (D_{δ_0} − D_{δ_0+τ}) % N is from 1 to N − 1. According to Proposition 1, (D_δ − D_{δ+τ}) % N takes a different value for each different δ_0. Note that the value N − τ is excluded from the range of δ_0 in the summation term, and (D_δ − D_{δ+τ}) % N cannot be N − τ (Corollary 2). Thus, the range of (D_δ − D_{δ+τ}) % N is from 1 to N − 1, excluding N − τ. The first term on the right-hand side becomes zero because a is an integer. The second term becomes one. The third term can be expressed as e^{j2πNa/N} e^{−j2πτa/N}, where the first factor becomes one for any integer value of a. Thus, the term in (23) can be simplified. Then, using (23), |A|² in (16) is given by (24). Using (10) and (24), the correlation property of PS2 is obtained as a three-valued function. Here, the first condition, "a = 0 & b = 0," corresponds to the case of the same (d_0, d_1) and (d_2, d_3), i.e., the same pilot sequence; the correlation between identical PS2 pilot sequences becomes (N² − 1)/N. The second condition, "a ≠ 0 & b ≠ 0 & a ≠ b," occurs when d_0 and d_1 are different from d_2 and d_3, respectively, and b is different from d_2 − d_0. The maximum cross-correlation of PS2 occurs when the pilot sequences have different (d_0, d_1) and (d_2, d_3), and is given by (N + 1)^{3/2}/N.
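The "shift-and-add property" invoked in this derivation is easy to verify numerically. The snippet below reuses m_sequence() from the previous sketch and confirms that the element-wise product of the bipolar m-sequence with a cyclic shift of itself is again a cyclic shift of the same sequence.

```python
# Numerical check of the shift-and-add property used above:
# p[n] * p[(n + delta) % N] == p[(n + D_delta) % N] for a unique D_delta.
import numpy as np

p = m_sequence()                     # from the PS1/PS2 sketch above
N = len(p)
for delta in (1, 5, 17):
    prod = p * np.roll(p, -delta)    # multiplication in {+1,-1} = XOR in {0,1}
    D = next(d for d in range(1, N) if np.array_equal(prod, np.roll(p, -d)))
    print(f"delta={delta}: D_delta = {D}")
```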
Otherwise, the cross-correlation of PS2 becomes (N + 1)/N. Fig. 5, which shows an example of the correlation function of PS2 when N = 63, compares the analytical solution in (21) and the simulation result as a function of the sequence index (condition). As can be seen in (21), the correlation function of PS2 is three-valued. The first condition corresponds to the case of identical pilot sequences; in this case (auto-correlation), a peak occurs at the sequence index, and its value becomes one after normalization by the maximum value. In PS2, the maximum cross-correlation value (0.1289) is obtained when the second condition is satisfied, and a small cross-correlation value (0.01612) is obtained when the third condition is satisfied. Because the analytical solution and the simulation result are almost identical, the lines in the figure are indistinguishable.

Fig. 6 compares the correlation functions of PS1, PS2, and GS for N = 63 and N = 127. Here, a cumulative distribution function (CDF) is used to compare the distribution of correlation values over all possible sequence indices. As can be seen in this figure, the maximum correlation value of PS2, (N + 1)^{3/2}/N, is smaller than the maximum cross-correlation value of GS, 2^{(log₂(N+1)+2)/2} + 1, and slightly larger than the maximum correlation value of PS1, 1/√N. These results are summarized in Table 1, which indicates that the analytical and simulation results are almost identical for all three sequences (PS1, PS2, and GS). The maximum correlation values of PS1 and PS2 are significantly smaller than that of GS when the same sequence length is used. In terms of the number of available sequences, PS2 and GS can provide (N − 1)N and (N + 1)N different sequences, respectively. PS2 can generate a slightly smaller number of sequences than GS; however, the numbers of available sequences in PS2 and GS become similar as the sequence length N increases. On the other hand, the number of available sequences in PS1 is significantly smaller than that of PS2 because of the ambiguity problem in NID detection, whereas PS2 does not experience the ambiguity problem when STOs are present. Moreover, GS cannot have sequence lengths whose primitive-polynomial degree is a multiple of 4, whereas there is no restriction on the length of PS1 and PS2. Thus, PS2 is suitable for relay-assisted cellular systems, which require a large number of IDs and low cross-correlation.

IV. SIMULATIONS

The performance of the proposed pilot sequences is evaluated using a simple model of a pilot-based mmWave cellular system with a one-hop relay. The 5G NR specification is used as the baseline model for transmission and reception [32]. The simulation parameters are summarized in Table 2. A uniform rectangular array (URA) of 16 antenna elements is used for the transmitter, and a uniform linear array (ULA) of eight elements is used for the receiver. UMi is used as the pathloss model, and Model B is used for blockage modeling. The performance in the beam-sweeping period and the data transmission period is evaluated using the frame structure in Fig. 2. Fig. 7 shows the signal strengths received from the BS and RS when the scenario (one BS and one RS) in Fig. 1 is applied. Here, the RS is assumed to be placed at a distance of 60 meters from the BS, and the RS is assumed to have a gain of 30 dB to overcome the pathloss of the BS-RS link. A single blockage is considered for the simulation, and the distance between the transmitter and the blockage is changed randomly.
7 show that the received signal power decreases as the distance between BS/RS and MS increases. When a blockage occurs between the BS(RS) and MS, 15−20 dB of power loss occurs in the BS(RS)-MS link. Fig. 1). The MS is assumed to be located 40 meters and 20 meters away from BS and RS, respectively. The pilot spacing (ρ) in the frequency domain is set to 32 subcarriers and the length of the pilot sequence is 127. Thus, the number of available pilot sequences for PS1 and PS2 is 127 × 126. It is also assumed that STO does not exist between the RS and MS. Here, two different scenarios are considered: ''one beamsweeping source'' and ''two beam-sweeping sources.'' In the case of one beam-sweeping source, one RS (Phase 2 in Fig. 1) and in the case of two beam-sweeping sources, two RSs, of which the signals have the same power, are assumed to exist. The detection probability is obtained by correlating the received signal with reference pilot sequences and finding the sequence index with the largest correlation value, as given in (3) and (4). The detection is declared to be ''successful'' when the detected sequence index is correct. Fig. 8 shows that the performance of PS1, PS2, and GS is similar when only one beam-sweeping source is used. However, when two beam-sweeping sources are used, they interfere with each other. Because the cross-correlation property improves in the order of GS, PS2, and PS1 ( Fig. 6 and Table 1), the probability of detection becomes higher in the same order (GS, PS2, and PS1). Although it is not shown here, the tendency of detection probability is similar when NID > 2. In Fig. 8, the detection probability and the number of available pilot sequences are obtained under the assumption that STO does not exist between the RS and MS. Fig. 9 shows the detection probability of PS1 in the presence of STO because PS1 experiences the ambiguity problem when STO exists. Note that the sequence length (127) divided by the number of NIDs (2) is 63.5. Thus, G is 63 when the number of NIDs is 2. Fig. 9 shows that the sequence detection is correct when σ = 0 or 32, but wrong when σ = 33, in the case that the number of NIDs 2 and ρ is 32. The ambiguity in NID detection occurs when σ is larger than the maximum tolerable STO. When the pilot spacing (ρ) and the number of NIDs are set to 16 (8) and 2, respectively, the maximum tolerable STO is ±65 (131). Fig. 9 indicates that the sequence detection fails when σ is larger than the maximum tolerable STO. The ambiguity problem decreases as the number of NIDs or pilot spacing is reduced. Thus, the number of NIDs in PS1 decreases significantly when the range of STO increases. For example, when the pilot spacing and the maximum STO are 32 and 32, respectively, the number of available NIDs is 2 in PS1. For the case of 15 NIDs, the detection probability approaches 0 when σ is 19. The maximum tolerable STO is 18 in this case. Fig. 10 shows the detection probabilities of PS2 and GS in the presence of STO for the same scenario of ''one beamsweeping source'' as in Fig. 9. The figure shows that PS2 and GS do not experience the ambiguity problem even with a large STO (σ = 2,000), corresponding to almost a half symbol. However, it can cause a discontinuous phase within the FFT window, resulting in ISI. Fig. 10 compares the performance of the proposed method (continuous phase) with that of the normal mode (discontinuous phase) when different STOs are used. 
Clearly, the performance degradation (discontinuous phase) caused by STO (σ = 2,000) can be compensated for by the proposed method (continuous phase), which obtains a gain of approximately 6 dB. In addition, the number of NIDs in PS1 and GS is not affected by the STO value. 11 shows the BER performance at the different phases in Fig. 2 when the proposed approach is successfully implemented. Here, it is also assumed that the MS is located 40 meters and 20 meters away from BS and RS, respectively. The best performance is obtained in Phase 1 where the MS is served by the BS. The performance in Phase 2 degrades by approximately 20 dB where a blockage occurs on the link between the BS and MS. In Phase 3, the gain is approximately 14 dB where the MS is served by the RS. The performance degradation in Phase 4 is approximately 15 dB where a blockage occurs on the link between the RS and MS. When the MS is again served by the BS (Phase 1), the gain is approximately 21 dB. The BER performance in Phase 1 is 6 dB higher than that in Phase 3 because the power received from the BS is 6 dB greater than that from the RS as shown in Fig. 7. To compare the proposed technique with the conventional technique, we consider the existing cellular system where an RS is regarded as another BS with different CID. In the existing cellular system, the RS will follow the conventional initialization procedure to reestablish the link whenever a blockage occurs. Initial synchronization is achieved using the SSB defined in 3GPP specifications [35], [36]. The SSB is repeated after specific time (ranges from 5ms to 160ms depending on channel condition). When a blockage occurs, the sweeping source will transmit four symbols of SSB during the SSB period. As defined in NR specification [32], the BS (or RS) can transmit the SSB on multiple transmit beams and the MS receives the signal using one beam. Here, we consider the best scenario for the conventional technique, where the SSB is transmitted simultaneously from all Tx beams. This procedure is repeated for all Rx beams in the MS. Thus, the time required for reestablishing the link in the conventional technique will be O(T SSB × N Rx ). Here, N Tx , N Rx , T SSB and T sym represent the number of Tx beams, number of Rx beams, SSB duration, and symbol duration. However, in the proposed technique, the link is reestablished using the received pilot signals, designed for RSs in mmWave cellular systems. Since the pilot signals are transmitted continuously from the sweeping source in the proposed technique, the required time for cell and beam search will be O(N Tx × T sym × N Rx ). Fig. 12 compares the time required for cell and beam search in conventional and proposed techniques. The parameters in Table 2 are used for simulation in Fig. 12. Here, it is assumed that a blockage is occurred at T SSB /2. From this figure, it can be seen that the time required for cell and beam search increases linearly for the conventional technique while it remains constant (N Tx × N Rx × T sym = 16 × 8 × 8.334µs = 1.0667ms) for the proposed technique. Thus, the proposed technique can significantly reduce the time for link reestablishment when a blockage occurs. On the other hand, the computational complexity increases as the number of IDs increases, because the MS performs correlation operation for all possible IDs of adjacent BSs and RSs. V. CONCLUSION This study proposes the operational concept and framework of a pilot-based RS-assisted mmWave cellular system to alleviate the blockage problem. 
Two different types of pilot sequences (PS1 and PS2), which can generate a large number of IDs with a low correlation, are proposed to allow MSs to distinguish the pilot sources in multi-cell multi-relay environments. PS1 was shown to have the smallest correlation and highest detection probability when the pilot signals transmitted from adjacent RSs/BSs arrive at the MS with small STOs. However, the detection probability of PS1 may decrease significantly when large STOs exist, due to the ambiguity in node detection. PS2 was proposed to increase the number of distinguishable IDs in the presence of STOs. PS2 with continuous phase was shown to experience no ambiguity problem with respect to node detection even with a large STO. Although pilot sequences were designed for mmWave cellular systems with one-hop relays in this study, the sequences could be used for any pilot-based cellular system that requires a large number of IDs with a low correlation. APPENDIXES APPENDIX A: PROOF OF PROPOSITION 1 From the ''shift-and-add property'' of the m-sequence, another m-sequence can be obtained by multiplying an m-sequence by its shifted version with an offset δ as follows: $p_{n-D_{\delta+\tau}}\, p_{n-D_{\delta+\tau}+\delta} = p_{n+D_{\delta}-D_{\delta+\tau}}$. (A.1) If it is assumed that $(D_{\delta} - D_{\delta+\tau})\,\%N$ can have the same value for two different values of δ (f and g), then $p_{n+D_g-D_{g+\tau}}$ and $p_{n+D_f-D_{f+\tau}}$ must be the same and the following relationship must hold (''shift-and-add property'' of the m-sequence), where $D_{f+\tau}$ is $D_{g+\tau}+{}$ , and f is g + θ. If $D_{g+\theta}$ (= $D_f$) is $D_{g+\theta+\tau}$, (19) is true. However, $D_{g+\theta}$ cannot be $D_{g+\theta+\tau}$ within the range of τ from 1 to N − 1, because $D_{\delta}$ has a different value for all shifts δ (''shift-and-add property'' of the m-sequence). Therefore, the assumption and (A.3) cannot be true, and the proposition is correct.
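As an illustration of the ''shift-and-add property'' on which Proposition 1 rests, the following sketch verifies numerically that, for a length-63 m-sequence, every shift δ yields a unique D_δ with p_n p_{n+δ} = p_{n+D_δ}, and that the resulting D_δ form a permutation of 1, ..., N − 1. The LFSR taps are an arbitrary primitive choice, not taken from the paper.

```python
# Hedged numerical check of the "shift-and-add" property behind Proposition 1:
# for every shift delta there is a unique D_delta with p[n]*p[n+delta] = p[n+D_delta],
# and delta -> D_delta is a permutation of 1..N-1. The LFSR taps are an assumption.
import numpy as np

def m_sequence(degree=6, taps=(6, 1)):
    state, bits = [1] * degree, []
    for _ in range(2**degree - 1):
        bits.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return 1.0 - 2.0 * np.array(bits)

p = m_sequence()
N = len(p)                                     # 63
D = {}
for delta in range(1, N):
    product = p * np.roll(p, -delta)           # element-wise p[n] * p[n + delta]
    shifts = [d for d in range(N) if np.array_equal(product, np.roll(p, -d))]
    assert len(shifts) == 1                    # the product is again a single cyclic shift of p
    D[delta] = shifts[0]

# Proposition 1 relies on D_delta taking a different value for every delta and never 0,
# i.e. the map delta -> D_delta is a permutation of 1..N-1.
print(len(set(D.values())) == N - 1, 0 not in D.values())
```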
9,678.2
2020-01-01T00:00:00.000
[ "Computer Science", "Business" ]
Limited association between disinfectant use and either antibiotic or disinfectant susceptibility of Escherichia coli in both poultry and pig husbandry Background Farm disinfectants are widely used in primary production, but questions have been raised if their use can select for antimicrobial resistance. The present study examined the use of disinfectants in poultry and pig husbandry and its contribution to the antibiotic and disinfectant susceptibility of Escherichia coli (E. coli) strains obtained after cleaning and disinfection. On those field isolates antibiotic susceptibility was monitored and susceptibility to commonly used active components of farm disinfectants (i.e. glutaraldehyde, benzalkoniumchloride, formaldehyde, and a formulation of peracetic acid and hydrogen peroxide) was tested. Results This study showed a high resistance prevalence (> 50%) for ampicillin, sulfamethoxazole, trimethoprim and tetracycline for both production animal categories, while for ciprofloxacin only a high resistance prevalence was found in broiler houses. Disinfectant susceptibility results were homogenously distributed within a very small concentration range. Furthermore, all E. coli strains were susceptible to in-use concentrations of formaldehyde, benzalkoniumchloride and a formulation of peracetic acid and hydrogen peroxide, indicating that the practical use of disinfectants did not select for disinfectant resistance. Moreover, the results showed no indications for the selection of antibiotic resistant bacteria through the use of disinfectants in agricultural environments. Conclusion Our study suggests that the proper use of disinfectants in agricultural environments does not promote antibiotic resistance nor reduce E. coli disinfectant susceptibility. Electronic supplementary material The online version of this article (10.1186/s12917-019-2044-0) contains supplementary material, which is available to authorized users. Background Biocidal products are frequently used chemicals with the aim to inactivate microorganisms [1] harmful to human or animal health. Biocides used for veterinary hygiene purposes are applied to disinfect materials and surfaces associated with the housing or transportation of animals. They play a crucial role in preventing and controlling the transmission of infections within and between herds, which is an important aspect of on-farm biosecurity. Despite the increasing use of disinfectants, bacteria seem to remain susceptible to these disinfection products when used correctly. Their in-use concentrations are normally far above the minimum inhibitory concentration (MIC) of wildtype isolates [2], as opposed to antibiotics for which MICs are generally closer to concentrations used in practice. Furthermore, as disinfectants generally contain more than one type of active component each with a different antimicrobial mode of action [1] and as they have no specific microbial target, the development of resistance at the level of in-use concentrations is thought to be highly unlikely [3,4]. However, in practice, disinfectants can be found at lower concentrations due to underdosing, or due to residual organic debris as a result of insufficient cleaning, or due to dilution by remaining rinsing water. Under such conditions, bacteria are exposed to subinhibitory disinfectant concentrations, which could lead to a selection of strains with a reduced susceptibility to disinfectants [5]. 
Moreover, concerns have been raised about a possible selection of antibiotic resistant bacteria through the use of disinfectants. The emergence of reduced susceptibility of bacteria to antimicrobials (disinfectants and antibiotics) induced by disinfectants has been demonstrated in vitro. Laboratory-based adaptation experiments have shown that step-wise exposure of initially susceptible bacteria to subinhibitory concentrations of benzalkoniumchloride, chlorhexidine, triclosan and some commercial disinfectants may lead to decreased susceptibility to either antibiotics or disinfectants [6][7][8][9]. Recent studies investigated the disinfectant susceptibility of bacteria isolated from live-stock and its environment [10][11][12][13][14] or evaluated the correlation [2,15] or association [16] between antibiotic resistance and a decreased susceptibility to disinfectants. However, in marked contrast to the in vitro reports, no evidence that the use of disinfectants selects for antimicrobial resistance under practical conditions was found. Furthermore, there are only few studies on the susceptibility of bacteria isolated from livestock environments after cleaning and disinfection and most studies on disinfectant susceptibility examined minimum inhibitory concentrations (MICs) but did not evaluate the lethal effects of the disinfectants by determining the minimum bactericidal concentration (MBC). Therefore, the current study aimed at filling these gaps by examining the use of disinfectants in poultry and pig husbandry and its contribution to the antibiotic and disinfectant susceptibility of Escherichia coli (E. coli) isolates. Biosecurity The scores of the different categories of the biocheck scoring system are listed in Table 1. The average external and internal biosecurity scores for broiler farms were 66.9 (range 54.0-78.0) and 61.0 (range 40.0-80.0), respectively and for pig farms 69.0 (range 57.0-87.0) and 65.9 (range 46.0-88.0), respectively. Cleaning and disinfection practices Descriptive results of the different cleaning and disinfection protocols carried out at the 25 broiler farms and the 21 pig nursery units are listed in Table 2. Results showed that the most complete cleaning protocol, consisting of dry cleaning followed by soaking (with water), cleaning with a cleaning product and rinsing of the cleaning product is more applied at the broiler houses compared to the pig nursery units. The greatest variation in disinfection protocols was seen in broiler houses. For the pig nursery units, disinfection was always applied by the farmer with 1 disinfectant Antibiotic susceptibility All these E. coli isolates were tested for their susceptibility to 14 antibiotics. Their antibiotic resistance prevalence is shown in Fig. 1. Disinfectant susceptibility Selected isolates Antibiotic resistance prevalences of the selected isolates for disinfectant susceptibility testing are available in Fig. 2. Antibiotic resistance profiles of the E. coli strains isolated from the same farm differed. Additional file 1 shows this in more detail. MIC and MBC results Results of the MICs and MBCs of the selected 57 broiler and 61 pig E. coli field isolates for the tested disinfectants are given in Fig. 3 and Fig. 4, respectively. For benzalkoniumchloride, MICs of 0.027 g/L and 0.013-0.027 g/L were found for E. coli isolates isolated from broiler houses and pig farms, respectively. The MBCs were 0.027-0.053 g/L for isolates from broiler houses and ranged from 0.013 to 0.053 g/mL for isolates from pig nursery units. 
The MICs and MBCs for glutaraldehyde ranged between 1.25 and 2.5 mL/L for isolates of both sectors. For formaldehyde, a MIC of 0.046-0.093 mL/L was found for isolates from broiler houses while MICs for isolates from pig nursery units ranged from 0.046-0.185 mL/L. The MBC was 0.093 mL/L and between 0.046-0.185 mL/L for isolates from broiler houses and nursery units, respectively. The MICs and MBCs for D50 were between 1.25-5 mL/L and 1.25-2.5 mL/L for isolates of broiler and pig farms, respectively. Most of the MICs and MBCs were the same, demonstrating the bactericidal effect of the active components at the lowest concentration that inhibited growth. Evaluation of MIC and MBC results After visual examination of the MIC and MBC histograms for both animal species, it was not possible to set a cut-off value separating the E. coli field isolates into a disinfectant-susceptible and -resistant population as there was no bi-modal distribution. Association between disinfectant use and antibiotic resistance prevalence In broiler production, significant negative associations were found between the use of peracetic acid and hydrogen peroxide and ampicillin, ciprofloxacin and tetracycline resistance (Table 3). No significant associations were found for the other active components and antibiotics. In pig production, no significant associations between the use of active disinfectant components and antibiotic resistance were found. Association between disinfectant use and disinfectant susceptibility All E. coli isolates showed a similar susceptibility to the active components (formaldehyde, benzalkoniumchloride, glutaraldehyde and formulation of peracetic acid and hydrogen peroxide), hence no indications for disinfectant resistance were found and no statistical analysis could be performed. Biosecurity and cleaning and disinfection practices Results of the overall biosecurity at the sampled broiler farms were in line with those of previous Biocheck.UGent Table 3 Odds ratios (OR) of significant associations (P-value) between the use of active components of disinfectants and antibiotic resistance in broiler production Used [17]. Results for the overall biosecurity at the sampled pig farms, were slightly better (average biosecurity level of 68 versus 61) due to better external and internal biosecurity scores. One of the most important sub-categories of internal biosecurity, i.e. to reduce the risk of pathogen spreading within herds, is the cleaning and disinfection (C&D) score. In the current study, the latter score was comparable to the average Belgian C&D score for both animal categories (52 vs. 56 for broiler farms and 54 vs. 48 for pig farms) indicating that the sampled farms are representative for the average Belgian farm. More importantly, these results indicate that substantial improvements at the level of internal biosecurity and more specifically in cleaning and disinfection, can still be made. The most frequently used active components of disinfectants in both animal species are a combination of QACs and glutaraldehyde, while formaldehyde and a combination of peracetic acid and hydrogen peroxide were also commonly used. This is consistent with a recent study of our group by Maertens et al. (2018) [18] on C&D in Belgian poultry production and supports our choices of active components tested for their susceptibility. Antibiotic susceptibility The E. 
coli field isolates from the sampled broiler farms showed very high resistance for ampicillin, sulfamethoxazole, ciprofloxacin, trimethoprim and tetracycline, which is in line with the report by CODA-CERVA for E. coli isolates from Belgian broilers in 2015 [19]. The common use of the corresponding antibiotic classes (penicillins, sulfonamides, fluoroquinolones and tetracyclines) in broiler production in Belgium [20], is in line with these high resistance levels. In the CODA-CERVA report [19], very low resistance to ceftazidime and cefotaxime (4.6%) were found, again corroborating our findings. The E. coli field isolates from the sampled pig nursery units, showed very high resistance to sulfamethoxazole, trimethoprim, ampicillin and tetracycline. In general, penicillins, the combination of sulphonamides with trimethoprim and tetracyclines are the most commonly used classes of antibiotics in pigs [21] which are strongly correlated to the resistance level [22]. Slightly lower resistance levels to ampicillin, sulfamethoxazole and tetracycline were found in E. coli from Belgian pigs in 2015 [19]. Disinfectant susceptibility Overall, the MICs and MBCs of the susceptibility tests did not indicate disinfectant resistance as these values showed a homogeneous distribution and no remarkable differences in either parameters were found between the isolates, which is in agreement with Oosterik In other studies, MICs were found for benzalkoniumchloride of either 32 mg/L for 52.6% of the E. coli isolated from retail meats [23] or between 8 and 32 mg/L in avian pathogenic E. coli [11]. [11] or 3250 mg/L [24]. In general, the relative bactericidal order of the active disinfectant components (benzalkoniumchloride > formaldehyde > glutaraldehyde) is also similar to that reported in both the latter studies [11,24]. Small variations between results of susceptibility studies exist which can be attributed to the difference in bacteriological methods (broth dilution vs. agar dilution), media (TSB vs. MHB) and plate material (polypropylene vs. polystyrene) [25]. Therefore, standardisation of the MIC and MBC determination for disinfectants is needed to be able to survey these susceptibilities. In addition, it would be interesting to collate data from worldwide sources in a public database allowing to identify the distribution and set cut-off values. When comparing the MICs and MBCs with in-use concentrations of the respective active components in veterinary disinfection products (e.g. in Virocid ® or CID20 ® ), it was found that the MIC and MBC values for benzalkoniumchloride and formaldehyde were considerably lower assuming that the recommended concentrations of veterinary disinfection products are high enough to reduce the bacterial flora with 5 log colony forming units (CFU). In contrast, the MICs and MBCs for glutaraldehyde were much higher than the glutaraldehyde concentration used in veterinary disinfection products (e.g. in Virocid ® or CID20 ® ). For the latter active disinfectant component, the use of a nutrient-rich medium like TSB in the MIC and MBC assays could be the reason for the high MICs and MBCs due to the reaction of glutaraldehyde with constituents of the growth medium [26]. Moreover, in the latter cross-sectional study glutaraldehyde was never used independently and is as far as we know always used in combination with QACs (e.g. benzalkoniumchloride) which has a synergistic biocidal effect (Maris, 1995). 
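As a small illustration of how such susceptibility data can be summarised and compared against recommended concentrations, the sketch below computes MIC50/MIC90 on a two-fold dilution scale and checks the highest observed MIC against an in-use concentration. All numbers are placeholders for illustration, not the raw data of this study.

```python
# Hedged sketch: summarise a MIC distribution on a two-fold dilution scale
# (MIC50/MIC90) and compare the highest observed MIC with an in-use concentration.
# The values below are illustrative placeholders, not the study's raw data.
import numpy as np

mics_bkc = np.array([0.013] * 20 + [0.027] * 90 + [0.053] * 8)   # g/L, hypothetical
in_use_bkc = 1.0                                                  # g/L, hypothetical

def mic_percentile(mics, q):
    """MIC value at or below which q% of the isolates are inhibited."""
    return np.sort(mics)[int(np.ceil(q / 100 * len(mics))) - 1]

print("MIC50:", mic_percentile(mics_bkc, 50), "g/L")
print("MIC90:", mic_percentile(mics_bkc, 90), "g/L")
print("highest MIC vs in-use:", mics_bkc.max(), "<", in_use_bkc,
      "-> all isolates inhibited at the in-use concentration")
```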
Finally, a commercially product (D50 ® ), being a formulation of peracetic acid and hydrogen peroxide, was also tested in the current study. Our research group already demonstrated a MBC to D50 of 1% (10 mL/L) for Enterobacteriaceae isolates, although E. coli was not included in this previous study [12]. The MIC and MBC results for D50 in the current study were lower (between 1.25 to 5 mL/L) compared to the results of Luyckx et al. (2017). As this formulation of peracetic acid and hydrogen peroxide is a ready-touse disinfectant for veterinary disinfection purposes, it can be concluded that the recommended concentration of 0.5% (5 mL/L) is just sufficient to kill our field isolates. With the exception of the commercial disinfectant D50, single active components of disinfectants were used because the knowledge in case of reduced susceptibility to active components is the basis for understanding reduced susceptibility to commercial disinfection products, which are in most cases combinations of active components. However, none of the field isolates survived in-use concentrations of formaldehyde, benzalkoniumchloride and formulation of peracetic acid and hydrogen peroxide, which indicates that the proper use of disinfectants under practical conditions gives no indications for the selection for disinfectant resistance. Association between disinfectant use and antibiotic resistance prevalence Previously, in vitro studies have shown an increase in antibiotic MICs after repeated sub-culturing of bacteria in subinhibitory concentrations of commercial disinfectants [7,9] or active components [6]. Several other studies have found an association between decreased disinfectant susceptibility and antibiotic resistance. Still, these are results of non-standardized in vitro tests which do not provide information about the possible relation between disinfectant use and antibiotics resistance under practical conditions. Therefore, the effect of disinfectant use on antibiotic and disinfectant susceptibility of E. coli isolated from environmental samples after C&D was investigated in the current study. No significant positive associations were found between the use of active disinfectant components and antibiotic resistance. Remarkably, significant negative associations were found between the use of peracetic acid and hydrogen peroxide containing disinfectants and ampicillin, ciprofloxacin and tetracycline resistance in broiler production. These results suggest that the use of disinfectants containing this combination of active components would select for more susceptible E. coli bacteria. In literature, recent correlation studies performed with similar active components investigated associations between biocide susceptibility and antibiotic susceptibility; for peracetic acid and hydrogen peroxide containing disinfectants, no correlation between antibiotic resistance and MICs for peracetic acid and hydrogen peroxide containing products [27] or even a negative correlation between the susceptibility to hydrogen peroxide and antibiotic resistance to bramycin and aztreonam has been found [15] , which is in line with our results. Nonetheless, a biological explanation for these observations is lacking. Furthermore, only 2 out of 25 broiler farms used a peracetic acid and hydrogen peroxide containing disinfectant. Therefore, future research on a larger number of farms and with a greater diversity in disinfection applications is warranted to further investigate these associations. Conclusions As the E. 
coli field isolates showed a comparable antibiotic resistance profile with previous antibiotic resistance studies on fecal E. coli and because the disinfectant susceptibility results were homogenously distributed, it can be concluded that the E. coli strains found after C&D did not survive disinfection due to resistance but were still present due to inadequate C&D. Furthermore, all E. coli field isolates from broiler houses and pig nursery units were susceptible to in-use concentrations of formaldehyde, benzalkoniumchloride and formulation of peracetic acid and hydrogen peroxide, indicating that the proper use of disinfectants under practical conditions did not select for disinfectant resistance. Finally, the results of this study showed that there are no indications for the selection of antibiotic resistant bacteria through the use of disinfectants in agricultural environments. Selection of farms Belgian broiler and pig farms were randomly selected from the Belgian Identification and Registration (I&R) database by generating a list of random numbers via Excel which were linked to the farm list. The only selection criterion for broiler farms was that the flock contained at least 10, 000 animals to be representative for the average practice situation. For pig farms the selection criteria were 'farrowto-finish' or 'feeder-to-finish' types, and required the presence of piglets, sows and fattening pigs. A total of ca. 100 and 120 randomly selected broiler and pig farms respectively were invited by e-mail to participate. About a week later farmers were contacted by telephone and were asked whether they were willing to participate. Twenty-five broiler houses (flock size between 13,500 and 50,900 chicks) and 21 pig farms (pig nursery units consisted of 54 to 936 piglets) were visited once between March 2015 and July 2016. During these visits samples were taken and the farmer was interviewed face-to-face using a standardized questionnaire. Questionnaire design The questionnaire consisted of open and closed questions and covered several aspects regarding flock and herd characteristics, biosecurity, cleaning and disinfection practices and antimicrobial consumption. Completion time for the questionnaire took about one and a half hour. Collection of flock and herd data For broilers, data were collected regarding flock size, flock slaughter age and flock slaughter weight, as well as the yearly average flock size, average number of flocks and average slaughter weight. Questions for the sampled pig nursery units concerned the number of weaner pigs, age and weight when entering the nursery units, and age and weight at relocation to the fattening unit. The questionnaires developed for this study are provided in additional files (see additional files 2 and 3). Quantification of biosecurity status Evaluation of the biosecurity status in the broiler farms and pig herds was obtained using a previously defined questionnaire Biocheck.Ugent® available as an online tool: http://www.biocheck.ugent.be/biocheck.php (Biocheck.Ugent poultry: version 2.1; Biocheck.Ugent pigs: version 2.0). After putting the data into the Biocheck.Ugent tool, the external and internal biosecurity scores and their appropriate sub-categories were calculated and summarized into a report. The overall score was calculated as the mean of the external and internal biosecurity score. 
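The random farm selection described above can be sketched as follows; the registry fields and layout are hypothetical, since the structure of the I&R database is not described in the text.

```python
# Minimal sketch of the farm selection step: draw random farms from a registry and
# keep only those meeting the inclusion criteria. Field names are hypothetical.
import random

def select_farms(registry, n_invite, seed=2015):
    rng = random.Random(seed)
    eligible = [f for f in registry
                if (f["type"] == "broiler" and f["flock_size"] >= 10_000)
                or (f["type"] == "pig" and f["system"] in {"farrow-to-finish", "feeder-to-finish"})]
    rng.shuffle(eligible)
    return eligible[:n_invite]

registry = [{"type": "broiler", "flock_size": 13_500},
            {"type": "broiler", "flock_size": 8_000},
            {"type": "pig", "system": "farrow-to-finish"}]
print(select_farms(registry, n_invite=2))
```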
Cleaning and disinfection practices Questions regarding the applied cleaning and disinfection protocol were also asked and listed in additional files (see Additional files 2 and 3). For every sampled poultry or pig farm, the used disinfectants were recorded and the presence or absence of active components were listed into a Microsoft Excel spreadsheet (Microsoft, 2016) via a binary system. These active components of disinfectants were quaternary ammonium compounds (QACs), glutaraldehyde (GA), formaldehyde (F), peracetic acid (PA), hydrogen peroxide (H 2 O 2 ) and other components (e.g. chlorine and potassium peroxymonosulfate). Quaternary ammonium compounds and glutaraldehyde (QACs-GA) and hydrogen peroxide and peracetic acid (PA-H 2 O 2 ) were listed together as these active components are generally combined. Quantification of antibiotic use Data on antibiotic use for group treatments at the sampled animal houses were also obtained via prescriptions and order forms. For each group treatment, the product name, the amount of administration and the age (days) and weight (kg) of the treated animals were recorded. Quantification of drug use was done by determining the treatment incidence (TI 100 ) defined as the number of treatment days per 100 days or the % of treatment days [28]. The following formula was used to calculate the TI 100 per production round: In this equation, the Defined Daily Dose (DDD) is the nationally determined average maintenance dose per day and per kg animal of a specific antibiotic, the total animal amount is calculated as the number of animals multiplied by the average weight of the animals at the moment of treatment and the 'number of days at risk' is the duration of the production period considered. The Long Acting factor (LA factor) is used for long acting products and takes a longer duration of action into account [29]. [30] showing the highest percentage of swab samples positive for E. coli after cleaning and disinfection at 12 sampling locations), resulting in 48 swab samples per broiler house. A surface of 625 cm 2 was swabbed whenever possible. Since the surface of the drinking cups was smaller than 625 cm 2 , five drinking cups were sampled with the same sponge stick. For pig nursery units, in total four pens were sampled. At each pen, six different sampling locations were swabbed: floor, concrete wall, synthetic wall, feeding trough, drinking nipples and pipes, resulting in 24 environmental swab samples per pig nursery unit. A surface of 625 cm 2 was swabbed whenever possible. Since the pens of a pig nursery unit contains a drinking unit ranging from 1 to 10 nipples, a maximum of 2 nipples per pen was swabbed whenever possible and analysed as one sample. Sampling of pig nursery units was also based on previous work from our group by Luyckx et al. (2016) [31]. Detection and isolation of Escherichia coli After sampling, swabs were transported to the lab in a cool box with ice packs. Upon their arrival in the lab (± 2 h after sampling), 10 mL of Buffered Peptone Water (BPW, Oxoid, CM0509, Basingstoke, Hampshire, England) was immediately added to each sample, homogenized by a Masticator (IUL instruments, S.A., Barcelona, Spain) and incubated for 24 h at 37°C for enrichment of E. coli. After incubation, 10 μL of the enriched BPW fraction was plated on Rapid'E. coli 2 agar plates (Biorad, 356-4024, Marnes-la-Coquettes, France) and incubated at 44°C for 24 h. From positive Rapid'E. 
coli 2 plates purified isolates were obtained and stored at − 80°C on brain heart infusion (BHI, Oxoid, CM1032) supplemented with 15% (v/v) glycerol. To check the inoculum concentration and purity, 10 μL from the positive control well was transferred in 10 mL demineralized water and thoroughly mixed prior to transferring 100 μL of the inoculum to a PCA-plate, spread with a Drigalski spatula and incubation at 37°C. Escherichia coli antibiotic resistance profile For each isolate and each antimicrobial substance, the MIC was read and converted in binary qualitative values (wild type, further referred to as susceptible (S) and non-wild type further referred to as resistant (R)) based on the epidemiological cut-off values (ECOFF) (R: MIC > ECOFF, S: MIC ≤ ECOFF) defined by EUCAST (https://mic.eucast.org/Eucast2/). For azithromycin no ECOFF was available in the EUCAST-database so the cut-off 16 mg/L used by EFSA [32] was applied. Disinfectant susceptibility testing Isolate and disinfectant selection For each sampled poultry house and pig nursery unit three (if available) E. coli isolates from distinct sampling locations and with the highest number of antimicrobial resistances were selected in order to study the possible decreased disinfectant susceptibility in the more antibiotic resistant population. A total of 57 poultry and 61 pig isolates were examined. Based on the results of the questionnaire and on research from our group by Maertens et al. (2018) [18], active components most frequently occurring in disinfectants used in the sampled poultry houses and pig nursery units were selected, being: alkyldimethylbenzylammoniumchloride (BKC, > 95%, Sigma Aldrich) which is a QAC, formaldehyde (F, 35% vol/vol in H 2 O, Sigma Aldrich), glutaraldehyde (GA, 50% w/v in H 2 O, Sigma Aldrich) and a chemically stable formulation of peracetic acid (PA, 55 g/L) and hydrogen peroxide (H 2 O 2 , 220 g/L) (D50®, CID LINES, Ieper, Belgium) as H 2 O 2 rapidly degrades into water and oxygen and PA can decompose to acetic acid and oxygen [1]. Inoculum preparation The selected isolates were cultured on PCA at 37°C for 24 h. Per agar plate, one colony was picked and used to inoculate 10 mL of Tryptone Soya Broth (TSB, Oxoid, CM0129) and grown at 37°C for 16 h to obtain fresh liquid cultures. Subsequently, liquid cultures were centrifuged at 5000 g for 10 min and the supernatant was discarded. The remaining pellet was resuspended in 10 mL Ringers solution (Oxoid, BR0052). Next, inocula were diluted with Ringer solution to an optical density at 600 nm (OD600) corresponding with a viable count of 1-5 × 10 8 CFU/mL. To control the inoculum concentration, enumerations on PCA were carried out by using a spiral plater (Eddy Jet, IUL instruments, S.A., Barcelona, Spain). Reproducibility of the data To check the reproducibility and repeatability of the assay, eight isolates were tested in triplicate, on two different occasions. From then on, each isolate was tested only once. Minimum inhibitory concentration (MIC) The MICs of each active component (BKC, F and GA) or given formulation (D50) for the selected isolates were determined with a broth microdilution method based on the method described by Knapp et al. (2015) [33]. A 96-well microtiter plate with U-shaped wells (Novolab, A19652) was filled with 50 μL TSB containing twofold dilutions of the active component or formulation. 
Fifty microliters of the field isolates (1-5 × 10 8 CFU bacterial /mL) were added to the TSB in the microtiter plate, resulting in a total volume of 100 μL. Final concentration ranges were as follows: 0.213-0.007 g/L BKC, 1.480-0.046 mL/L F, 20-0.625 mL/L GA and 20-0.125 mL/L D50. As a positive control, 50 μL of each bacterial suspension was added to 50 μL TSB without disinfectant. To check for possible contamination, wells without bacterial suspension and disinfectant served as blank. After inoculation, plates were incubated for 24 h in a shaking incubator (100 rpm) at 37°C. After incubation, the MICs were read. The MIC was defined as the lowest concentration of active components or formulation where no growth was visually observed. In every experiment the E. coli reference strains for antibiotic susceptibility (ATCC 25922) and disinfectant susceptibility (ATCC 10536) were used as controls. Minimum bactericidal concentration (MBC) After determining the MIC, 20 μL of the cell suspension in the microtiter plate was transferred to a new 96-well round-bottom microtiter plate filled with 180 μL DE broth for 5 min. Subsequently, 12.5 μL of each well was spotted on PCA-plates. Plates were incubated at 37°C for 24 h and the MBC was determined. The MBC was defined as the lowest concentration where no visible growth on the agar plate was observed (~5 log CFU reduction). Data analysis For both animal categories the antibiotic resistance prevalence and the accompanying 95% confidence interval was calculated for each antibiotic based on the standard error of the binomial distribution in Microsoft Excel (Microsoft, 2016). The association between active components used (absent = 0, present = 1) during disinfection and antibiotic resistance at each farm was tested by means of binary logistic regression analysis taking the corresponding antibiotic use (TI100) into account as co-variable. First, the independent variables ('use of QACs-GA', 'use of F', 'use of PA-H 2 O 2 ' and 'use of other active components') were tested univariable for all antibiotics (n = 13 by combining sulfamethoxazole and trimethoprim resistance). Those variables with univariable P-values of < 0.20 were retained for further analysis in a multivariable model. Subsequently, with the retained variables, a multivariable logistic regression model was constructed using the stepwise backward elimination procedure starting with the global model and gradually excluding all non-significant factors. Multivariate binary logistic regression models were used for each antibiotic. As multiple models were tested to evaluate the effect of the different active components on the different types of antibiotic resistance a bonferroni correction for multiple testing was performed. P-values ≤0.0038 (after Bonferroni correction) were considered as significant. All statistics were performed using SPSS Statistics 25.0 (IBM Corporation, Armonk, NY).
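The association analysis described above can be sketched as a two-step procedure: univariable screening of each active component (retaining those with P < 0.20), followed by backward elimination with the antibiotic use (TI100) kept as a co-variable and a Bonferroni-corrected threshold of 0.0038. The data frame, column names and simulated values below are placeholders, not the study data.

```python
# Hedged sketch of the screening and backward-elimination logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "use_QAC_GA": rng.integers(0, 2, n),
    "use_F": rng.integers(0, 2, n),
    "use_PA_H2O2": rng.integers(0, 2, n),
    "TI100": rng.gamma(2.0, 5.0, n),          # antibiotic use, forced co-variable
})
df["ampicillin_R"] = rng.integers(0, 2, n)    # placeholder resistance outcome

def fit(outcome, predictors):
    X = sm.add_constant(df[predictors])
    return sm.Logit(df[outcome], X).fit(disp=0)

# 1) univariable screening, keeping TI100 in every model
candidates = [c for c in ["use_QAC_GA", "use_F", "use_PA_H2O2"]
              if fit("ampicillin_R", [c, "TI100"]).pvalues[c] < 0.20]

# 2) backward elimination on the retained components
kept = list(candidates)
while kept:
    model = fit("ampicillin_R", kept + ["TI100"])
    worst = max(kept, key=lambda c: model.pvalues[c])
    if model.pvalues[worst] <= 0.0038:        # Bonferroni-corrected threshold
        break
    kept.remove(worst)

print("components retained after screening:", candidates)
print("significant after backward elimination:", kept)
```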
6,342
2019-09-02T00:00:00.000
[ "Biology", "Medicine", "Agricultural And Food Sciences" ]
Effects of vacuum annealing on the electron mobility of epitaxial La-doped BaSnO3 films Wide bandgap (Eg ∼ 3.1 eV) La-doped BaSnO3 (LBSO) has attracted increasing attention as one of the transparent oxide semiconductors since its bulk single crystal shows a high carrier mobility (∼320 cm2 V−1 s−1) with a high carrier concentration (∼1020 cm−3). For this reason, many researchers have fabricated LBSO epitaxial films thus far, but the obtainable carrier mobility is substantially low compared to that of single crystals due to the formation of the lattice/structural defects. Here we report that the mobility suppression in LBSO films can be lifted by a simple vacuum annealing process. The oxygen vacancies generated from vacuum annealing reduced the thermal stability of LBSO films on MgO substrates, which increased their carrier concentrations and lateral grain sizes at elevated temperatures. As a result, the carrier mobilities were greatly improved, which does not occur after heat treatment in air. We report a factorial design experiment for the vacuum annealing of LBSO films on MgO substrates and discuss the implications of the results. Our findings expand our current knowledge on the point defect formation in epitaxial LBSO films and show that vacuum annealing is a powerful tool for enhancing the mobility values of LBSO films. © 2018 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.5054154 Transparent oxide semiconductors (TOSs) are promising candidates for various future electronic devices such as transistors, solar cells, and display panels.1,2 For such applications, high electron mobility (μ) is essential, and this has been a great disadvantage for TOS materials since their mobility values are low compared to classical semiconductors. In this regard, perovskite La-doped BaSnO3 (LBSO) is gaining significant interest since its bulk single crystal exhibits a wide bandgap (∼3.1 eV) and a very high mobility of 320 cm2 V−1 s−1,3,4 which is comparable to that of doped single crystal Si (∼350 cm2 V−1 s−1).5 For this reason, there have been many attempts to utilize LBSO in thin films transistors. However, crystalline defects prevent the μ in LBSO films from reaching the single crystal value, and many studies were devoted to improving the crystal quality of epitaxial LBSO films.6–12 For example, since misfit dislocations occur at the film/substrate interface due to the lattice mismatch, buffer layers are commonly used to reduce the dislocations.13,14 To completely eliminate the film/substrate mismatch, Lee et al. grew LBSO films on single crystal BaSnO3 substrate.15 In addition, in our recent study, we fabricated LBSO films under an ozone atmosphere to reduce the amount of point defects.16 Unfortunately, while these approaches were successful in improving the mobility values in LBSO films (up to ∼120 cm2 V−1 s−1), they can significantly increase the fabrication cost, which may be a crucial issue in mass production systems at industrial scales. In large scale production facilities, designing a clever post treating process is often more economical than improving the quality of as-deposited samples. In this regard, very interesting experimental results were released in 2015 and 2018. 
In one study, N2 environment at 1000 ◦C was used to create oxygen vacancies in the LBSO films on SrTiO3 substrates, and the μ increased from 41 cm2 V−1 s−1 to 78 cm2 V−1 s−1.17 In the other study (same research group), oxygen vacancies were generated in the LBSO films on SrTiO3 substrates using H2 forming gas at 950 ◦C, which further improved the μ up to 122 cm2 V−1 s−1.18 According to these studies, oxygen vacancies can neutralize the negative charges at threading dislocations [19][20][21] and induce lateral grain growth at elevated temperatures. As removing oxygen ions (O 2− ) near threading dislocations decreases their thermal stability, oxygen vacancy doping creates a very strong driving force for lateral grain growths at high temperatures, which significantly increases the free propagation length of the carrier electrons. These results suggest that post treating of LBSO films can be just as effective as modifying the synthesis methods for improving the as-deposited crystal quality of LBSO films. In undoped BaSnO 3 films, vacuum annealing is commonly used to create oxygen vacancies and induce mobile charge carriers. 22,23 Since vacuum annealing process is much simpler than creating N 2 or H 2 forming gas environment, it can be an alternative method for inducing oxygen vacancy assisted grain growths in LBSO films. In addition, since the deposition of oxide takes place in a vacuum chamber, the film growth and post annealing can be combined into one process. There is one study that examined the effect of vacuum annealing on the electron transport properties of LBSO films, 24 but the reported µ values are too low (<4 cm 2 V −1 s −1 ) to validate vacuum annealing as an effective method for improving µ. For optimizing the post annealing process, it is important to understand point defect formation in LBSO films because the µ-enhancement in LBSO films begins from generating oxygen vacancies.
In stoichiometric BaSnO 3 , the oxidation states of its constituents are Ba 2+ = [Xe], Sn 4+ = [Kr] 4d 10 , and O 2− = [Ne]. Since two of the constituents (Ba 2+ , O 2− ) exhibit the same orbitals with inert gases (Xe, Ne), stoichiometric BaSnO 3 crystals are believed to be thermodynamically stable, 25 and stoichiometric BaSnO 3 films do not intrinsically conduct electricity as all electrons form firmly bound states. The formation energy of oxygen vacancy in BaSnO 3 is high and can only be lowered by reducing the chemical potential of oxygen, 26 which can be achieved by lowering oxygen pressure during the film growth 27,28 or vacuum annealing at high temperatures. 22,23 Therefore, as-deposited BaSnO 3 films do not have sufficient oxygen vacancies to conduct electricity unless they are intentionally created. By contrast, oxygen vacancies are much more common in LBSO films even if sufficient oxygen is provided during the film growth. 18 In one of our previous studies, Sn 2+ states were detected from LBSO films fabricated under 10 Pa of O 2 , which implies the presence of oxygen deficiency. 16,27 These results suggest that the La-dopants may be promoting oxygen vacancy formations in BaSnO 3 films. However, the relationship between La-dopants and oxygen vacancy in BaSnO 3 has not been investigated in detail. In this study, we studied the effect of La-dopants on the formation of oxygen vacancy in LBSO films and investigated the feasibility of enhancing the µ of LBSO films using vacuum annealing. According to our results, vacuum annealing significantly increases the µ of LBSO films. We also found that La-dopants increased the oxygen vacancy vs. lattice oxygen (V O /L O ) ratio in as-deposited LBSO films and affected the vacuum annealing effect. The results of this study expand our current knowledge on the point defect formation in epitaxial LBSO films and show that vacuum annealing is a simple and effective method for enhancing the electron mobility of LBSO films. To confirm that vacuum annealing can induce grain growth in LBSO films, a 2% LBSO film (∼64 nm) was prepared and cut it into two pieces. We annealed one piece in air and the other piece in vacuum (<10 −2 Pa) at 750 • C for 1 h. Then, we measured the lateral grain sizes of the films before and after the heat treatments using the RSM (Fig. 1). While the (204) diffraction spot of the air annealed film was almost the same with that of the as-deposited film, the (204) diffraction spot of the vacuum annealed was two times more intense compared to the other two films (as-deposited and air-annealed). The lateral grain sizes (D) of the as-deposited, air annealed, and vacuum annealed LBSO films were 9.1 nm, 10.2 nm, and 22.3 nm, respectively. The small grain size change after air annealing is not surprising since the film was deposited at 750 • C, and no significant change in the microstructure was expected. On the other hand, the vacuum annealing substantially increased the lateral grain size, which is consistent with the H 2 forming gas experiment. 18 This shows that vacuum annealing can indeed be an alternative method for triggering oxygen vacancy assisted grain growth in LBSO films. In order to find the optimum vacuum annealing temperature, several 2% LBSO films with similar thicknesses (∼41 nm) and electrical properties were fabricated (supplementary Table S1), and they were annealed in vacuum (<10 −2 Pa) at different temperatures ranging from 650 • C to 800 • C for 30 min. 
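The lateral grain sizes quoted in this work are extracted from the width of the RSM diffraction spots. One common way to turn a peak width into a coherence length is D ≈ 2π/Δq, sketched below on synthetic peaks whose widths were chosen to land near the reported 9.1 nm and 22.3 nm; the exact extraction procedure used by the authors is not stated, so both the relation and the numbers are assumptions.

```python
# Hedged sketch: estimate a lateral coherence (grain) size from the in-plane width
# of an RSM diffraction spot via D ~ 2*pi / FWHM(qx). Synthetic peaks for illustration.
import numpy as np

def lateral_size_from_rsm(qx, intensity):
    """Return D in nm from the FWHM of a peak along qx (qx in 1/nm)."""
    half = intensity.max() / 2.0
    above = qx[intensity >= half]
    fwhm = above.max() - above.min()           # crude FWHM of the qx cut
    return 2.0 * np.pi / fwhm

qx = np.linspace(-1.0, 1.0, 2001)              # 1/nm, synthetic cut through a (204)-like spot
as_dep = np.exp(-0.5 * (qx / 0.30) ** 2)       # broader spot -> smaller grains
vac_ann = np.exp(-0.5 * (qx / 0.12) ** 2)      # sharper spot -> larger grains
print("as-deposited-like  :", round(lateral_size_from_rsm(qx, as_dep), 1), "nm")
print("vacuum-annealed-like:", round(lateral_size_from_rsm(qx, vac_ann), 1), "nm")
```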
With increasing annealing temperature, the D of the films increased gradually from ∼5 nm to ∼17 nm [Fig. 2], and the carrier concentration n also increased as the film was annealed in the vacuum [Fig. 2]. (Figure 1 caption: The as-deposited film was cut into two pieces. One was annealed in air, while the other was annealed in vacuum. Almost no change in the RSM was observed from air-annealing. On the other hand, a significant increase in the lateral grain size and peak intensity can be noticed from the vacuum annealed film. Unfortunately, electrical properties of the annealed samples could not be measured since they were cut from a 1 cm 2 square sample, and our probe station can only measure 1 cm 2 square coupons.) Once the films became thicker than ∼120 nm, both the as-deposited and vacuum annealed mobility values saturated at ∼84 cm 2 V −1 s −1 and ∼100 cm 2 V −1 s −1 , respectively. Both as-deposited and vacuum annealed D exhibited strong thickness dependence (supplementary Fig. S2a). We believe the grain growth is hindered by the lattice strain, which decreases with increasing thickness. 29 The highest mobility observed was 101.6 cm 2 V −1 s −1 from the vacuum annealed 117 nm LBSO film (as deposited: ∼74 cm 2 V −1 s −1 , supplementary Fig. S2d), which is comparable to that observed in LBSO films with buffer layers. This confirms that vacuum annealing is an effective method for enhancing the mobility of LBSO films. To find the effect of La-dopants on the vacuum annealing process, we annealed LBSO films (average thickness: 43 nm) with varying [La 3+ ] at 725 • C in vacuum for 30 min. The carrier concentration n increased with [La 3+ ], while the vacuum annealed films exhibited higher n compared to the as-deposited films [ Fig. 3(a)]. The 0.1% LBSO film, which did not conduct electricity, also became electrically conductive. The absolute value of thermopower (S) decreased with [La 3+ ], which reflects the relationship between n and [La 3+ ] [ Fig. 3(b)]. 30 The µ Hall of the annealed LBSO films was dramatically improved compared with the as-deposited LBSO films [ Fig. 3(c)], but the lateral grain size enhancement exhibited a strong doping dependence. At low doping levels (≤0.55%), the grain sizes did not increase much after vacuum annealing. It is important to note that mobility improvements were observed from 0.1% to 0.55% LBSO films despite the small changes in their grain sizes. In epitaxial films, threading dislocations often exhibit negative charges and generate energy barriers (mobility edge). [19][20][21] Therefore, mobilities in epitaxial films can depend on the Fermi energy (E F ), which increases with the carrier concentration. Since vacuum annealing increases both n and D, the µ Hall improvements observed from 0.1% to 0.55% are mainly attributed to the E F shift from additional charge carriers because they did not show significant grain size changes. This also implies that mobility improvements observed at other doping levels could have been affected by the increase in n (i.e., higher E F ). To confirm the effect of E F shift on µ Hall , we vacuum annealed a 2% LBSO (45.7 nm) film from 100 • C to 800 • C in sequence in steps of 100 • C (i.e., vac. anneal 100 • C → 200 • C →· · · → 800 • C) to generate additional charge carriers while minimizing the grain growth. During the sequence annealing, the carrier concentration of the film gradually increased, but its electron mobility remained almost unchanged (supplementary Fig. S5). The drastic drop in µ Hall at 800 • C is likely from structural damage due to the high V O generation rate.
The sequence annealing shows that the E F at n ∼ 0.5 × 10 20 cm −3 exceeds the mobility edge of the LBSO films on MgO substrates, and the vacuum annealed mobilities are not strongly affected by the Fermi energy shift if n > 0.5 × 10 20 cm −3 . This number is consistent with the H 2 forming gas experiment, where two post-treated LBSO films with the same carrier concentration of 1.1 × 10 20 cm −3 exhibited different mobility values due to structural differences. 18 According to our results, µ at low doping levels (0.1% and 0.55%) is dominated by the Fermi level. If n > 0.5 × 10 20 cm −3 , the Fermi level seems to be above the mobility edge, and µ is mainly affected by electron scatterings (supplementary Fig. S5). The sources of electron scattering are point defects (impurity, V O ) and grain boundary (threading dislocation). The vacuum annealing effects observed from 2% to 5% LBSO films suggest that grain boundary scattering dominates up to D ∼ 12 nm and point defect scattering dominates for D > 12 nm. To examine the oxygen states in the films, we performed the X-ray photoelectron spectroscopy (XPS) measurement of the LBSO films [ Fig. 4(a): as-deposited, Fig. 4(b): vacuum annealed]. In the case of oxygen in perovskite oxides, the lattice oxygen peak (L O ) is located at ∼529 eV. If oxygen vacancies are present, this peak shifts to ∼531 eV (V O ). 18,31,32 Another oxygen peak around 532 ∼ 533 eV (A O ) can emerge from chemically adsorbed oxygen from surface contamination by organic molecules. The source of chemically adsorbed oxygen is unknown, but we believe it is related to the status of the vacuum chambers (annealing, XPS) or storing conditions. For the XPS peak fitting, a convolution between Gaussian (70%) and Lorentzian (30%) was used. For each LBSO film, the full width at half maximum (FWHM) was constrained to be the same for all 3 oxygen peaks. The V O peak energies of the LBSO films increased after vacuum annealing, especially for films with higher [La 3+ ] [ Fig. 4(c)]. The V O /L O area ratio from the XPS of the as-deposited LBSO films increased gradually when the [La 3+ ] exceeded 1% (supplementary Table S2). Upon vacuum annealing, this ratio increased further for all films except for the 7% LBSO film. The largest change in V O /L O was observed from the 2% doped LBSO film. Since the solubility limit of La in LBSO was reported to be ∼5%, the oxygen vacancy reduction in 7% LBSO is likely attributed to the formation of La 2 Sn 2 O 7 , 33 which could be observed from the ceramic targets used to deposit the films (supplementary Fig. S6). These results show that La-dopants in epitaxial LBSO films affect not only the oxygen stability but also the thermal stability of the film. The XPS results show that vacuum annealing increases oxygen deficiency in the LBSO films, and oxygen vacancies in oxides normally provide additional conduction electrons. However, associating all changes in n to additional V O from vacuum annealing does not adequately explain our results. For example, according to the thickness dependence, the carrier concentration enhancement was greatly reduced with increasing thickness [Fig. 2(c)] although the oxygen vacancy generated from vacuum annealing would have been similar as the La 3+ doping levels were the same unless the chemical potential of oxygen depends on the lattice strain. In addition, in the [La 3+ ] dependence, the changes in n [ Fig. 3(a)] do not match the changes in V O /L O [Fig. 4(c)].
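The V O /L O ratios entering this comparison come from the constrained O 1s fits described above (three components near 529, 531 and 532–533 eV, a 70%/30% Gaussian–Lorentzian line shape, and a shared FWHM). A sketch of such a fit is given below; the spectrum is synthetic and the implementation details are assumptions, not the authors' analysis code.

```python
# Hedged sketch of the O 1s decomposition: fit three pseudo-Voigt components with a
# shared FWHM and report the V_O/L_O area ratio. The spectrum here is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, center, fwhm, area, eta=0.3):
    sigma = fwhm / 2.3548
    gauss = np.exp(-0.5 * ((x - center) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    lorentz = (fwhm / (2 * np.pi)) / ((x - center) ** 2 + (fwhm / 2) ** 2)
    return area * ((1 - eta) * gauss + eta * lorentz)

def o1s_model(x, a_lo, a_vo, a_ao, c_lo, c_vo, c_ao, fwhm):
    return (pseudo_voigt(x, c_lo, fwhm, a_lo) +
            pseudo_voigt(x, c_vo, fwhm, a_vo) +
            pseudo_voigt(x, c_ao, fwhm, a_ao))

# synthetic O 1s spectrum for illustration (binding energy in eV)
x = np.linspace(526, 536, 500)
y = o1s_model(x, 1.0, 0.35, 0.15, 529.0, 531.0, 532.6, 1.4)
y = y + np.random.default_rng(1).normal(0, 0.005, x.size)

p0 = [1.0, 0.3, 0.1, 529.0, 531.0, 532.5, 1.5]
popt, _ = curve_fit(o1s_model, x, y, p0=p0)
a_lo, a_vo = popt[0], popt[1]
print(f"V_O / L_O area ratio = {a_vo / a_lo:.2f}")
```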
The implications of these phenomena can be very interesting as they suggest strain-dependent oxygen stability or oxygen vacancies not generating additional charge carriers. However, no firm conclusions can be drawn at this moment since the vacuum annealing effect on the activation of the [La 3+ ] dopant is unclear. Although we demonstrated that the mobility boost in the optimally doped films (2%) after vacuum annealing is strongly related to the lateral grain sizes, we would like to note that there were a couple of unusual instances. In supplementary Figs. S2(a) and S2(d), the µ Hall of the vacuum annealed 280 nm film is slightly smaller than that of the vacuum annealed 117 nm film although their grain sizes are both >12 nm and their electron densities are similar (2.2 × 10 20 cm −3 and 2.1 × 10 20 cm −3 ). This is another indicator that suggests strain-dependent oxygen vacancy generation, but this hypothesis requires more experimental evidence. Furthermore, the LBSO films vacuum annealed at 650 • C and 700 • C exhibit different mobility values despite having similar grain sizes and electron densities (Fig. 2). We believe these phenomena are related to the combination of all processes induced by vacuum annealing: increase in the carrier concentration, increase in the oxygen vacancy, and lateral grain growth. While these data make our study imperfect, we believe these results are still valuable as they emphasize the necessity and importance of understanding oxygen vacancy in LBSO films. Finally, the role of La-dopants in the oxygen vacancy formation mechanism in the as-deposited LBSO films is also vague [ Fig. 4(c)]. In this regard, we believe that the role of threading dislocation in point defect formation is important. For example, in the case of an unintentionally V O doped BaSnO 3−δ single crystal, vacuum annealing reduces the carrier concentration and therefore reduces the oxygen vacancy level. 34 This contradicts the behavior of epitaxial BaSnO 3 films, where vacuum annealing increases the carrier concentration. 22,23 Since the main structural difference between single crystals and epitaxial films is the presence of threading dislocations, it is plausible to think that they can promote point defect formation in epitaxial films. In the context of this research, since impurities often segregate between grains separated by dislocations, 35 one possibility is the segregation of La-dopants at threading dislocations, which is plausible since La 3+ ions can compensate the missing cationic charges at threading dislocations. [19][20][21] This scenario also explains the low dopant carrier activation rate observed in epitaxial LBSO films [ Fig. 2(a)]. In this case, La 3+ vacant sites in the grain interior may lose adjacent O 2− ions due to the lack of bonding electrons, but more work is required to confirm the role of threading dislocations as well as the activation of the [La 3+ ] dopants. Furthermore, it will be very interesting to re-anneal the vacuum annealed LBSO films in ambient air to inject oxygen back and fill the oxygen vacancies. Unfortunately, although MgO has excellent vacuum stability, its thermal stability in air is poor, [36][37][38] and such experiments could not be considered in this study. However, annihilation of oxygen vacancies in the vacuum annealed films could potentially reduce defect scattering and further enhance the electron mobility.
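The carrier concentrations and Hall mobilities quoted throughout this work follow from standard Hall-effect and van der Pauw relations. A minimal sketch of that bookkeeping is given below with hypothetical measurement values (chosen to land roughly near n ≈ 2 × 10 20 cm −3 and µ ≈ 100 cm 2 V −1 s −1 for a 117 nm film); this is textbook arithmetic, not the authors' analysis code.

```python
# Minimal sketch: sheet carrier density and Hall mobility from Hall/van der Pauw data.
# All measurement values are hypothetical placeholders.
E = 1.602176634e-19        # elementary charge (C)

def hall_analysis(sheet_resistance, hall_voltage, current, b_field, thickness_nm):
    """Return carrier concentration (cm^-3) and Hall mobility (cm^2 V^-1 s^-1)."""
    n_sheet = current * b_field / (E * abs(hall_voltage))        # m^-2
    t = thickness_nm * 1e-9
    n_bulk = n_sheet / t                                         # m^-3
    mobility = 1.0 / (E * n_sheet * sheet_resistance)            # m^2 V^-1 s^-1
    return n_bulk * 1e-6, mobility * 1e4                         # convert to cm units

n, mu = hall_analysis(sheet_resistance=27.0,    # ohm/sq, hypothetical
                      hall_voltage=1.3e-5,      # V, hypothetical
                      current=1.0e-4,           # A
                      b_field=0.5,              # T
                      thickness_nm=117.0)
print(f"n = {n:.2e} cm^-3, mu_Hall = {mu:.1f} cm^2/Vs")
```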
In summary, we examined the effect of vacuum annealing on the electron transport properties of epitaxial LBSO films on (001) MgO substrates. Lateral grain sizes and carrier concentrations of the LBSO films substantially increased after vacuum annealing, whereas they remained almost unchanged after air annealing. We also found that the oxygen vacancy to lattice oxygen (V O /L O ) ratio in the XPS O 1s spectra increases with vacuum annealing, showing that oxygen vacancy generation indeed provides the driving force for this process. The results of this study clearly show that vacuum annealing improves the electron mobility of LBSO films, where the mechanism varies from a Fermi energy shift to oxygen-vacancy-assisted grain growth depending on the doping level. Unfortunately, we were not able to explain all observed phenomena in detail, but our results do highlight the necessity for more studies on the thermodynamic processes involved in vacuum annealing of epitaxial LBSO films. The vacuum annealing approach was very effective for films with small thicknesses. It is therefore a promising method for making LBSO film transistors, since low thicknesses are desired for reducing the power consumption. We believe these results will be useful for designing low-cost fabrication methods for high-mobility LBSO films or can be used to improve the carrier mobility of other perovskite stannates such as SrSnO 3 . See supplementary material for detailed XPS characteristics and RSM patterns of the LBSO films.
5,121.2
2019-02-01T00:00:00.000
[ "Materials Science" ]
The mass of a Lifshitz black hole It is well known that massive 3D gravity admits solutions that describe Lifshitz black holes as those considered in non-relativistic holography. However, the determination of the mass of such black holes remained unclear, as many different and mutually discrepant results were reported in the literature. Here, by using a robust method that permits tackling the problem in the strong field regime, we determine the correct mass of the Lifshitz black hole of the higher-derivative massive gravity and compare it with other results obtained by different methods. Positivity of the mass spectrum demands an odd normalization of the gravity action. In spite of this fact, the result turns out to be consistent with computations inspired by holography. Introduction The holographic description of d-dimensional strongly correlated, non-relativistic systems with anisotropic scale invariance and no Galilean symmetry was studied long ago [1]. This consists of a geometrical realization that involves a special type of static (d + 1)-dimensional spacetimes, known as Lifshitz metrics. These read as in (1), with t ∈ R, r ∈ R >0 , and dx 2 being the flat metric on R d−1 ; here, we will consider d = 2, so x ∈ R. The parameter z ∈ R is the so-called dynamical exponent, and ℓ is a length scale associated to the spacetime curvature. Despite having finite scalar curvature invariants, the spacetimes (1) with z ≠ 0, 1 are singular; they are geodesically incomplete for timelike geodesics ending at r = 0. For z = 1, in contrast, the metric (1) is locally equivalent to AdS 3 spacetime, and the case z = 0 corresponds to the space product R × AdS 2 . For generic z, spacetimes (1) enjoy the scale invariance t → e^{zσ} t, r → e^{−σ} r, x → e^{σ} x, with σ being an arbitrary constant. This scaling symmetry, together with the translations in t and x, generates the full isometry group. The cases z = 0 and z = 1 are of course special, having 4 and 6 Killing vectors and generating the groups R × SL(2, R) and SL(2, R) × SL(2, R), respectively. For arbitrary z, the Killing vectors are those generating the transformations above, and they generate the nilpotent isometry algebra of the space. The geometric configuration that would holographically describe 2-dimensional Lifshitz-type systems with dynamical exponent z at finite temperature is given by 3-dimensional black holes that asymptote (1) at large r. This motivates the search for sensible models that admit such black holes as exact solutions. This is actually a hard problem due to the validity of Birkhoff-type theorems in a large variety of systems, precluding the existence of static black hole configurations of the type required. This is the reason why the construction of asymptotically Lifshitz black holes typically involves the introduction of exotic matter content or non-minimal couplings to the gravity sector. However, it turns out that, in 3 dimensions, there exists a remarkably simple model admitting Lifshitz black holes. This is given by the massive deformation of 3-dimensional Einstein theory with no additional fields. It was shown in [2] that, if one considers the parity-even massive 3D gravity proposed in [3], a static Lifshitz black hole solution with dynamical exponent z = 3 can be analytically constructed. While other models admitting Lifshitz black holes are known in 3 dimensions, these either include additional fields [4,5] or exotic gravity field equations [6]. This makes simple instances of Lifshitz black holes scarce.
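For concreteness, the Lifshitz metrics denoted (1) above are conventionally written as follows; the paper's own equation is not reproduced in this text, so this is the standard textbook form, chosen to be consistent with the scaling symmetry t → e^{zσ}t, r → e^{−σ}r, x → e^{σ}x quoted above:

```latex
% Standard Lifshitz metric in d+1 dimensions (here d = 2, so dx^2 is one-dimensional);
% z is the dynamical exponent and \ell the curvature scale.
ds^{2} \;=\; -\,\frac{r^{2z}}{\ell^{2z}}\,dt^{2}
        \;+\; \frac{\ell^{2}}{r^{2}}\,dr^{2}
        \;+\; \frac{r^{2}}{\ell^{2}}\,dx^{2}.
```

For z = 1 this reduces to AdS 3 in Poincaré-like coordinates, in agreement with the statement above.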
An example of this is massive gravity itself, where it has been proven [7] that such static black holes only exist for z = 1 and z = 3. This is why the solution of [2] is particularly interesting. Simple scaling arguments show that the mass of the z = 3 Lifshitz black hole of 3D massive gravity -see (14) below- takes the form (5), where η is a dimensionless coefficient, G is the Newton constant, r + is the horizon radius, L is the length of the segment in which x takes values, and ℓ is the length scale that appears in (1), which relates to the scalar curvature of the black hole as in (6). It is usual to consider the black hole solution with the coordinate x ≡ ϕℓ being periodic with period 2πℓ. This of course breaks the scaling symmetry, making the isometry group R × SO(2) even asymptotically. Here, we will consider ϕ ∈ [0, 2π], namely L = 2πℓ. In the literature, different authors, using different methods to compute the conserved charges, have reported different values for this mass. Part of the difficulty is that the asymptotic behavior of Lifshitz spacetimes differs from that of maximally symmetric spaces; this means that not all the machinery we have at hand when dealing with asymptotically maximally symmetric spacetimes can actually be successfully applied to the case of Lifshitz spacetimes. This led people to consider many different methods, with different degrees of success. In [8], for example, the author considered the Wald formula to compute the entropy and inferred the mass from the first law of black hole mechanics, having found (5) with η = −1/4. In [9], in contrast, the authors considered a method involving dimensional reduction and found η = 1/16. In [10], the value η = −1/4 was found by defining a holographic stress-tensor and computing the quasi-local energy. In [11], the authors adapted the Abbott-Deser-Tekin (ADT) approach [22] to spaces with non-constant curvature and found η = 7/8. In [12], the authors made a very interesting analysis of the Lifshitz black hole thermodynamics and showed that it was consistent with |η| = 1/4. The value η = +1/4 was found in [13] considering another adaptation of ADT. Here, by considering a robust method that dispenses with the analysis of the large-radius asymptotics and permits dealing with the problem in the strong field regime, we will show that the correct value for the mass of the z = 3 Lifshitz black hole of the massive 3-dimensional gravity is (5) with η = −1/4. In particular, this implies that the mass of the black hole is negative for positive G and, therefore, as usual in massive 3D gravity, one needs to consider the wrong sign of the Newton constant in order to make sense out of the Lifshitz background. Massive 3D gravity Let us begin by reviewing the 3-dimensional massive gravity theory and its solutions. The action of the theory is given in (7). This theory exhibits two local degrees of freedom, organized in a way that there is a massive spin-2 mode of mass m. At the linearized level, and around maximally symmetric spaces, the theory coincides with the spin-2 Fierz-Pauli theory [3]. This implies that action (7) describes a ghost-free theory. At the full non-linear level, the field equations take the form (8)-(9). In the infinite mass limit, m 2 → ∞, where the local degrees of freedom decouple, the theory reduces to 3-dimensional Einstein gravity. Being a quadratic-curvature theory, for generic values of λ and m the field equations (8)-(9) may admit two maximally symmetric solutions. That is to say, generically there exist two values of the effective cosmological constant; these are given in (10), assuming m 2 ≥ λ.
This means that the theory has two natural vacua, which can be either Minkowski or (A)dS spaces, depending on the range of parameters. The effective cosmological constants (10) give the curvature radius of the solution, ℓ = 1/√(−Λ ± ), with ℓ 2 > 0 for AdS 3 ; this condition can equivalently be expressed in terms of the couplings. For Λ ± < 0, the theory admits asymptotically AdS 3 solutions, including Bañados-Teitelboim-Zanelli (BTZ) black holes [14] and other interesting solutions [15,16]. The theory also admits solutions (1) for arbitrary z provided the coupling constants take specific values, which in particular demands λℓ 2 < 0. Lifshitz black hole A remarkable surprise occurs at z = 3, where the theory admits an extra static black hole solution [2]. This happens on a curve in the parameter space. On this curve, the black hole solution (14) exists, with t ∈ R and r ∈ R >0 . We consider ϕ periodic with period 2π. r + is an integration constant that represents the horizon location, and ℓ is fixed in terms of the couplings. Metric (14) is not locally conformally flat, so it is neither a solution of Einstein theory nor of conformal gravity. Furthermore, it is not a solution of the parity-odd Topologically Massive Gravity model. It has isometry group R × SO(2), generated by the Killing vectors ∂ t , ∂ ϕ . The spacetime described by (14) exhibits a regular event horizon at r = r + , provided r + > 0. This horizon shields a curvature singularity that exists at r = 0; there, the Ricci scalar invariant (6) together with other invariants like R µν R µν diverges. When r + = 0, metric (14) reduces to the Lifshitz space (1) with z = 3. For generic values of r + , the metric still asymptotes to the Lifshitz space (1) with z = 3 at large r, meaning that it is asymptotically, locally invariant under the rescaling t → e 3σ t, r → e −σ r, ϕ → e σ ϕ. Actually, the solution also exhibits such a scaling symmetry at finite r provided, in addition to rescaling the coordinates, one also rescales the parameter as r + → e −σ r + . This leaves the black hole metric invariant. On the one hand, this is consistent with the fact that all the curvature invariants of the Lifshitz black hole depend only on the ratio r_+^2 /r^2. On the other hand, this provides us with an argument to anticipate the functional dependence of the mass, this being given in (5). We will compute the mass explicitly below. Conserved charges Boundary charges in a d-dimensional theory of gravity, as well as in a d-dimensional gauge theory, are usually understood as integrals of (d − 2)-form potentials of the free theory, obtained by linearizing the solution around an appropriate background configuration. These conserved (d − 2)-forms are in correspondence with the so-called reducibility parameters of the background geometry. In [18], a closed (d − 2)-form for the fully interacting theory has been constructed. It admits a closed expression in terms of a one-parameter family of solutions to the fully interacting theory admitting one such reducibility parameter. Here, we will consider the method of [17,18] to compute the charges. This method is fully constructive and robust, and it can be easily adapted to the massive deformation of gravity theory in 3 dimensions. Applying it in the deep bulk region, we will compute the mass of the Lifshitz black hole for the fully interacting theory. The functional variation of the conserved charge associated to the Killing vector ξ is given by (15), where δg µν = h µν is a perturbation around a solution g µν , and k µν is the surface 1-form potential.
In the case of massive 3D gravity, this surface form receives three different contributions, the first one coming from the Einstein-Hilbert term. The other two contributions come from the higher-derivative terms in the action (7); they are given in [19] and take the schematic form k^{µν}_{(0,2)} = ∇² k^{µν}_{(0,1)} + · · ·, with h_{µν} = δg_{µν}, δR = −R_{αβ} h^{αβ} + ∇_α ∇_β h^{αβ} − ∇² h, and h = h^µ_µ. As said, we will address the computation of the charges in the region of the space where the theory is fully interacting. To do so, we find it convenient to take the phase space of metrics in their near-horizon form. We will consider the near horizon boundary conditions studied in [20]; namely, near the horizon we consider the metric in the form (20), where v ∈ R, ρ ≥ 0, and ϕ ∈ [0, 2π]. The metric functions are of the form (21), where the ellipses stand for functions of v and ϕ that vanish at least as fast as O(ρ³) near the surface ρ = 0 where the horizon is located. The notation is such that g^{(n)}_{µν} are the ρ-independent functions that accompany the order O(ρⁿ) terms in the power expansion. In the expressions above, the expansion coefficient of g_{ϕϕ} and the functions g^{(2)}_{vv}, g^{(2)}_{vϕ}, and g^{(2)}_{vρ} are arbitrary functions of the coordinate ϕ, while κ = −(1/2) g^{(0)}_{vv} corresponds to the surface gravity at the horizon and thus is constant. We have also fixed g^{(1)}_{vρ} = 0, and we could have even set the gauge g_{vρ} = 1 together with g_{ρρ} = 0. As a first check that this way of computing the charges actually works, let us illustrate the calculation considering the BTZ black hole. We evaluate (15) for the Killing vector ξ = ∂ v and realize the functional variation by varying the parameter r + ; that is, we perform r + → r + + δr + . This induces a variation of the near horizon form of the BTZ metric, g µν → g µν + δg µν , and after integrating we find the mass (23), which is actually the correct result for the mass of the BTZ black hole in the massive gravity theory. In addition, in order to check this method, we can try to follow the same steps to compute the mass of the generalization of the BTZ black hole that, for massive gravity theory, was found in [15,16]; see Eqs. (24)-(25) in the latter reference. This black hole, which only exists when 2m²ℓ² = −1, has non-constant curvature, is asymptotically AdS 3 in a way that is weaker than the standard Brown-Henneaux boundary conditions, and presents two horizons; let us denote by r ± the locations of the horizons and by δr ± their independent variations. This yields (24), which after integration gives (25). This actually coincides with the correct value of the mass; see Eq. (8) in [24]; see also Eq. (12) in [23], cf. Eq. (49) therein. In the particular case r − = −r + the solution reduces to the static BTZ black hole, and in that case (25) reduces to (23) for 2m²ℓ² = −1. This indicates that the method of computing the mass from the near horizon charges is working perfectly, even in the case of black holes with non-constant curvature. At this point, one might wonder why this near horizon computation is giving the correct value of the mass and not, as in [20], the product between the Hawking temperature and the Bekenstein-Hawking entropy, cf. [21,23]. The answer is that, while the near horizon boundary conditions considered here are exactly the same as in [20], the way in which we implement the functional variation here is different: here, we do not consider variations in the space of metrics that keep the horizon temperature constant, but rather arbitrary variations in a one- or two-parameter family. In other words, δg in (15) here generically yields δg^{(0)}_{vv} ≠ 0.
As a result, we correctly reproduce the black hole mass from the near horizon computation, with the appropriate numerical factor. In the case of the BTZ black hole, the same result (23) can be obtained by resorting to the ADT method, which amounts to considering linearized solutions around the AdS 3 vacuum in the asymptotic, near-boundary region. However, in the case of the z = 3 Lifshitz black hole, the method that resorts to the linearization of the metric in the large-r region does not lead to the correct result for the mass. The reason why this happens has been explained in [19]. In that case, the computation yields a different value, M̃. We confirm this output, which is not the correct result for the z = 3 Lifshitz black hole. The correct value for the mass of the latter can be obtained as we did above for the case of the z = 1 solutions. However, this first requires putting the solution (14) in the near horizon form (20)-(21). To achieve this, we define new coordinates (27). We observe that ρ = 0 at the horizon r = r + , and that the required behavior holds for small ρ. The change of variable (27) suffices to put metric (14) in the form (20)-(21), with coefficients that in particular yield the surface gravity κ = r_+³/ℓ⁴. Now, we are ready to evaluate (15) for the Killing vector ξ = ∂ v and realize the functional variation by varying the parameter r + . This yields the metric variation g µν → g µν + δg µν and, finally, we obtain the correct result for the mass; that is, η = −1/4. The factor L/(2πℓ) in this expression comes from the integration over the coordinate x = ϕℓ, and it equals 1 when ϕ has period 2π. Conclusions In summary, we conclude that the mass, the entropy and the temperature of the z = 3 black hole solution are given by the expressions in (32). While the entropy can be computed by the Wald formula, the temperature follows from the standard geometrical methods. These quantities satisfy the first principle dM = T dS and a Smarr-type formula M = (1/4) T S. Notice that, despite being a solution of a higher-curvature theory, the Lifshitz black hole happens to satisfy the area law S ∝ 2πr + /G, though with a special factor. Both the mass and the entropy turn out to be negative, so the change G → −G is needed to make sense of the theory around this background. One may wonder what happens in the case of stationary, non-static black holes. In the case of asymptotically AdS 3 rotating black holes, a near-horizon computation in massive 3D gravity was done in [23]. In the case of the rotating version of (14), such a solution actually exists [25] and can be analytically constructed by an improper boost acting on the static metric; however, the resulting spacetime happens not to be asymptotically Lifshitz. Before concluding, it would be interesting to compare our result with those of the literature and to explain the differences. As said, in [8] the author found η = −1/4, in agreement with our (32); see Eq. (2.23) in [8]. The value η = −1/4 was also found in [10]; see Eq. (5.70) therein. In order to compare with [10] it is necessary to consider that our convention for the sign of the Einstein-Hilbert piece in the gravity action corresponds to σ = +1 in that paper; besides, they consider conventions with the opposite sign for m²; this is also consistent with (23). In Ref. [9], the authors find a different value, η = 1/16; see Eq. (27) therein. Another different value appears in [11], where η = 7/8 is obtained; see Eq. (25) therein. In [12], the authors found |η| = 1/4, see Eq.
(37) therein, which is actually consistent with our result as they consider the opposite overall sign of the gravity action. Our result turns out to be consistent with holography. One of the reasons is that it agrees with the result obtained by computing the quasi-local energy with the boundary stress-tensor [10]. While in the case of bulk theories whose gravity sector is described by the Einstein-Hilbert action such a computation follows straightforwardly from the holographic renormalization recipe, in the case of higher-derivative theories such as massive 3D gravity the definition of a holographic stress-tensor requires additional prescriptions to define the variational principle and, consequently, to write down the counterterms. This introduces a certain degree of ambiguity in the calculation. Therefore, the fact that our computation reproduces the results of [10] can be regarded as further support for the definition of the quasi-local stress-tensor proposed therein. Another reason why our result is compatible with holography is that it agrees with the mass spectrum that leads to reproducing the entropy of the Lifshitz black hole from the generalized Cardy formula computation [12], which follows from considering the generalization of the modular invariance of the partition function of the dual theory to arbitrary values of z. This points in the direction of a microscopic derivation of the Lifshitz black hole entropy.
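For reference, one explicit set of expressions consistent with everything quoted above (η = −1/4, the surface gravity κ = r_+³/ℓ⁴, the first law and the Smarr-type relation) is the following. This is our own reconstruction for the case L = 2πℓ, assuming T = κ/(2π) and the normalization M = η r_+⁴/(Gℓ⁴); it should not be read as a transcription of the paper's Eq. (32):

```latex
% Reconstructed z = 3 Lifshitz black hole thermodynamics (L = 2\pi\ell, T = \kappa/2\pi assumed):
M \;=\; -\,\frac{r_{+}^{4}}{4\,G\,\ell^{4}}\,, \qquad
T \;=\; \frac{r_{+}^{3}}{2\pi\,\ell^{4}}\,, \qquad
S \;=\; -\,\frac{2\pi\,r_{+}}{G}\,,
\qquad\Longrightarrow\qquad
dM \;=\; T\,dS\,, \quad M \;=\; \tfrac{1}{4}\,T\,S\,.
```

Both M and S come out negative for G > 0, which is the feature that forces the "wrong" sign of the Newton constant discussed above.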
4,736.2
2021-05-24T00:00:00.000
[ "Physics" ]
Effect of the Surface Roughness on the Shear Strength of Granular Materials in Ring Shear Tests: Surface roughness plays an important role in estimating the shear strength of granular materials. A series of ring shear tests with different surface roughnesses (i.e., smooth and rough surfaces) was performed. A large-sized ring shear device, which is applicable for fine- and coarse-grained sediments, was developed to examine the shear strength of large particle sizes (i.e., commercial gravels with a mean grain size of 6 mm). In terms of surface roughness, the drainage- and shear-velocity-dependent shear strengths of the granular materials were examined. In this study, different shear velocities of 0.1, 0.5, and 1 mm/s were applied under drained and undrained conditions. The test results clearly show that shear stress is affected by drainage, shear velocity, and surface roughness. In particular, a typical strain-hardening behavior is exhibited regardless of the drainage and shear velocity condition. The measured shear strength obtained from both drained and undrained conditions increased with increasing shear velocity. All tests showed larger fragmentation using rough surfaces compared to the smooth surfaces of the device. The grain crushing was significant during shearing, even when normal stress was not applied. For a given shear velocity, surface roughness is an important feature in determining the shear strength of granular materials. Introduction The shear strength of granular materials is a very important parameter in engineering practice when examining the frictional and viscous characteristics of geomaterials [1,2]. Granular materials have complicated shear deformation characteristics involving solid- and liquid-like behaviors [3,4]. A previous study found that granular materials have much more complex elastoplastic and viscous behaviors than materials with finer particles [5]. For this reason, surface roughness has often been taken into account in examining soil-structure interactions in geotechnical engineering practices for piles, foundations, retaining walls, tunnels, embankments, and earth reinforcement [6][7][8][9][10], because the soil-structure interaction is significantly influenced by the material's properties, shape, roughness, and loading conditions (i.e., monotonic and cyclic loading). Numerous studies have focused on the frictional and viscous behaviors of loose and dense sands as representative granular matter [6,11]. According to Hu and Pu [8], for a sandy size particle (0.075-2 mm), elastic perfect-plastic behavior is dominant for a smooth interface, while strain localization is dominant for a rough interface. The importance of surface roughness for various grain sizes has also been evaluated and presented by numerous researchers, because inter-particle friction can be affected by the Young's modulus, particle shape, and surface roughness of the materials [11][12][13]. The reason for this is that natural materials have a wide spectrum of surface features in nature; it has generally been observed that inter-particle friction increases with surface roughness. According to Shahrour and Rezaie [11], the friction angle depends on both sand density and interface roughness. In order to investigate the effect of roughness on the frictional behavior of particle-particle and soil-structure interfaces, different types of tests have been used under monotonic and cyclic loading: the simple shear test [14,15], the direct shear test [8,9], and the ring shear test [16,17].
The effect of surface roughness on the mechanical behavior of granular materials is also crucial for natural disaster prediction and prevention, such as for landslides and debris flows. In landslide areas, the mechanical properties that may lead to rock avalanches with rapid, massive, and dynamic movements of fragmented rocks are evaluated via the surface roughness determined by in situ and laboratory tests [9,18]. In geology, joints and weathering are related to fragmentation [19]. The shear strength, permeability, and compressibility of granular materials are also greatly affected by the fragmentation phenomenon when they are subjected to changes in stress. An empirical method to estimate the travel distance of fragmented rocks in the Wenchuan earthquake area was presented by Zhan et al. [1]. Highly mobile mass movements can be caused by the soft base effect [20], which is related to the increase of fine-grained sediment during a landslide motion. In general, the surface roughness of granular materials may result in particle breakage, also called grain crushing. In the shear stress (or force) and strain relationship, three stages of particle breakage under compression can be identified: (i) local rearrangements and sliding, (ii) fragmentation by abrasion, and (iii) fragmentation by fracture [21]; it is assumed that grain fragmentation can be characterized by the progressive propagation of a transversal crack (including compression, abrasion, and fracture) inside the grain. In general, the breakage potential of a soil particle increases with its size [22]. Using a large-sized ring shear apparatus, particle breakage can be measured by comparing the grain size distribution curves before and after shearing [23,24]. However, the mechanical behavior due to particle breakage under different drainage and shear velocity conditions is not fully understood. In assessing landslide mobility, the mechanical deformation from failure to post-failure may affect the shear zone formation, which depends on the potential for grain crushing to occur. Thus, in a traditional laboratory test condition using a smooth surface for a given loading, the underestimation of shear stress appears to be inevitable. The objective of this study was to examine the effect of surface roughness on the shear strength of gravels, as a granular matter, in the ring shear test. First, the characteristics of shear stress are examined as a function of the drainage and shear velocity. Second, the surface roughness effect under drained and undrained conditions is compared for different shear velocities. Third, the determined shear strength and shear velocity relationships are discussed. Finally, the grain crushing effect is highlighted using a grain size distribution analysis of the gravels before and after the tests. However, the roughness of the grain itself, cyclic behavior, and the strain rate effect based on the grain size were considered to be beyond the scope of this study. Materials Commercial aquarium gravel with grain sizes of 5-10 mm was used, because it may result in higher fragmentation than finer material. The mean grain size of the rounded particles was approximately 6 mm (Figure 1). The dry densities were 1.839 and 1.725 g/cm 3 . To represent an unbroken loose state of gravel, a total of 4500 g contained in a ring shear box with a hollow cylinder volume of 2580 cm 3 was considered in all tests. Grain size analyses were conducted for unbroken and broken gravel particles before and after the tests, respectively.
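As a quick sanity check of the sample preparation figures quoted above, the bulk density implied by the stated mass and box volume can be compared with the quoted dry densities. A minimal sketch follows (the numbers are taken from the text; the comparison itself is our own illustration):

```python
# Bulk density implied by 4500 g of gravel in the 2580 cm^3 hollow-cylinder volume.
mass_g = 4500.0        # total gravel mass placed in the ring shear box (from the text)
volume_cm3 = 2580.0    # hollow-cylinder volume of the shear box (from the text)

bulk_density = mass_g / volume_cm3
print(f"placed bulk density: {bulk_density:.2f} g/cm^3")   # ~1.74 g/cm^3
print("quoted dry densities: 1.725 and 1.839 g/cm^3")      # the placed value lies in between
```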
Ring Shear Test A ring shear apparatus is commonly used to examine the residual shear strength of soils based on a shear stress-shear strain relationship. It is a well-known special instrument with an artificial slip surface that allows for measurements of the shear stress when forces act on a sample [25][26][27]. One of the main advantages of the ring shear test is the unlimited shear deformation developed in a soil sample. There is a constant contact area during the test, so extra stress concentration with large shear deformation does not occur [8]. To investigate the undrained shear strength of soils, a large-sized ring shear apparatus was developed by various researchers [17,[25][26][27][28][29]. This ring shear device is easy to assemble and has simple drainage and shear velocity control systems (Figure 1). The device incorporates one of the largest ring shear boxes, with a maximum outer diameter of 250 mm. It is applicable for both fine- and coarse-grained sediments. The samples were contained in the ring shear box, which is also called the ring shear cell [30]. Vertical loading can be applied to the samples through three piston chambers (Figure 1a,b). However, normal stress-dependent frictional strength and shear zone formation were beyond the scope of this study. After mounting, the drained or undrained condition is selected using the open/close valve in the upper plate. The shear velocity can be varied from 0.01 to 100 mm/s. In this study, three shear velocities were selected: 0.1, 0.5, and 1 mm/s, with a particular emphasis placed on surface roughness. These velocities were selected because the grain crushing of gravel is much higher (excessive) when the shear velocity is higher than 1 mm/s in ring shear tests [30]. However, the effect of surface roughness on the shear stress in the shear zone is not clear, because the inside of the ring shear box cannot be observed during the test. Normal stress was not applied during the tests regardless of the drainage condition; that is, the upper ring plate just contacts the upper part of the gravel material being tested without applying any load, because the frictional behavior is strongly influenced by the applied normal stress [8,11], which was not the focus of this study. The grain size and surface roughness also have a significant influence on the shear resistance [14]. To minimize the normal stress effect on the granular materials, a constant normal stress is applied during a test (i.e., 0 kPa). However, the gravel may resist the dilative force during shearing when the normal stress is applied to maintain a zero vertical loading value. Thus, the materials tested are continuously subjected to a contractive force, which could be negligible within the relatively low shear velocities imposed (e.g., 0.1-1 mm/s). During the tests, the shear stresses (i.e., the torques) were measured by two arms on the sides of the ring shear box. The variations in vertical displacement and normal stress were also measured. The testing procedures were identical to those detailed by Sassa et al. [25,26]. Surface Roughness in the Ring Shear Box The annular-like shape of the ring shear box has an inner diameter of 110 mm, an outer diameter of 250 mm, and a height of 75 mm (Figure 2). The width between the inner and outer sides of the annular-shaped shear box is 70 mm. The height of the materials in the ring shear box can be varied from 70 to 80 mm, but a fixed height of 75 mm was applied in the tests. In this context, a wide range of large particle sizes (e.g., 0.075-10 mm) can be accommodated within the box. To minimize the slip effect, surface roughness is required for many types of instruments when investigating an engineering problem. In some tests, a thick sand cover with a rough surface texture was used for clayey to sandy soils. In this study, two types of ring shear boxes with smooth (classic) and rough (serrated) surfaces were made and placed inside the inner and outer sidewalls of the ring shear box to examine the frictional and fragmentation effects during shearing. No roughness was considered for the classic ring shear box except at the top and bottom. In the upper and lower plates, a combination of porous stone and 12 saw-like surfaces was used. The rough surfaces in the second ring shear box were located inside the inner and outer sides of the hollow cylinder box (Figure 2b-d). The dimensions of the rough surfaces were fixed. The thickness, width, and height were 5 mm, 3 mm, and 22 mm, respectively, for the lower ring, and 5 mm, 3 mm, and 26 mm, respectively, for the upper ring. In total, there were 32 inner rough surfaces and 72 outer rough surfaces. Because of the dimensions of the rough surfaces, large particle sizes were well sheared at the inner and outer perimeters of the shear box. In particular, the roughness was considered capable of directly affecting the gravelly soil tested (mean grain size of 6 mm). Results The test results indicated the shear characteristics of gravels, the minimization of the wall slip due to the rough surfaces in the inner and outer hollow cylinder box, the measured shear stress as a function of the shear velocity, and the grain crushing effect with respect to surface roughness at different drainage conditions and shear velocities. Shear Characteristics of Gravel in the Ring Shear Tests Ring shear tests were initially performed using a classic smooth surface for the aquarium gravel with a normal stress of 0 kPa and a velocity of 0.5 mm/s. The mechanical properties of materials can often be described by their shear stress and shear strain (or displacement) relationship in response to the applied load. Figure 3 shows that in the shear stress-shear time relationship, the particles exhibited a typical strain-hardening behavior, regardless of the drainage condition, which implies that shear stress increases with increasing shear strain. This may be considered a shear resistance (i.e., yield strength) occurring at the beginning of plastic behavior based on the rheology. There was no contractive or dilative behavior apparent during shearing. Because the normal stress remained constant during shearing, very little decrease in the height of the sample was observed (the variation was less than 0.1 mm). According to Terzaghi et al. [31], typical dense sand and over-consolidated clays are dilative, while loose sands and normally consolidated clays are contractive. There was little effect based on the drainage condition. At the end of the test, as shown in Figure 3, the maximum shear strength was approximately 20 kPa. The minor variation in normal stresses under both drained and undrained conditions was caused by particle rearrangement, interlocking, and fragmentation along the artificial sliding plane during the ring shear test (Figure 3c,f). In the drained condition, after 1200 s, there was a slight decrease in shear stress, which may occur because the strain localization due to grain crushing was more significant in the drained condition; the large particles in the drained condition could freely move from the upper part to the middle part (artificial sliding plane) within the ring shear box. Similar results were reported by Sassa et al. [26]. The pore water pressure is also an important factor in determining the shear strength in the undrained condition, but in this study, this value was not a major factor because it strongly depends on the drainage and shear velocity. Thus, the measurement of pore water pressure was not considered. Effect of the Surface Roughness on the Shear Stress Figure 4 presents a comparison of the shear strength obtained from the smooth and rough surfaces with respect to the drainage condition (drained or undrained) and shear velocity (0.1, 0.5, and 1 mm/s). The shear stress-time relationship was considered because the total displacements of the materials tested were not exactly the same for all test conditions, as they may have been different for a given shear velocity. As previously described, the tests were performed over a limited range of shear velocities because rapid shearing may have resulted in an unnecessary reduction in the shear strength of granular materials at higher velocities [32]. From the test results, the shear stress and shear deformation characteristics were very similar for both drainage conditions. The effect of surface roughness on shear stress is crucial when considering a relatively high shear velocity (i.e., > 0.1 mm/s). At 0.1 mm/s, the shear stress measured from the rough surfaces was slightly larger than from the smooth surfaces; however, at the end of the test, both values were similar at their closest point (20 kPa for the drained condition and 17 kPa for the undrained condition). At 0.5 mm/s, the difference due to surface roughness was significant (Figure 4b,e). Compared to the results at 0.1 mm/s, the shear stress measured from the smooth surface only slightly increased (21-22 kPa), but the shear stress measured from the rough surface greatly increased (33-34 kPa). At the final stage of shearing (at 2200 s), the stress difference between smooth and rough surfaces was almost constant (12 kPa). For the shear velocity of 1 mm/s (Figure 4c,f), there was little difference in the stress-time relationships. The difference was small until the shearing time reached approximately 500 s; when shearing was continuously applied, dilatancy was prominent in the rough surfaces. In the drained condition, a larger difference in shear stress was observed when a larger shear velocity was applied, but the difference was small in the undrained condition. These results may be attributed to the presence of shear localization and water in the shear zone.
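The shear stresses discussed above are derived from the torque measured by the two arms on the shear box. A minimal sketch of the usual torque-to-stress conversion for an annular specimen is given below, assuming a uniformly distributed shear stress over the ring; the radii follow from the box dimensions quoted earlier, while the torque value is purely illustrative and is not a measured value from this study.

```python
import math

R_INNER = 0.055   # m, inner radius of the annular shear box (110 mm inner diameter)
R_OUTER = 0.125   # m, outer radius of the annular shear box (250 mm outer diameter)

def shear_stress_from_torque(torque_nm: float) -> float:
    """Average shear stress tau = 3*T / (2*pi*(R_o^3 - R_i^3)),
    assuming a uniform stress distribution over the annular shear plane."""
    return 3.0 * torque_nm / (2.0 * math.pi * (R_OUTER**3 - R_INNER**3))

example_torque = 75.0  # N*m, illustrative value only
print(f"tau = {shear_stress_from_torque(example_torque) / 1000.0:.1f} kPa")  # ~20 kPa
```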
Compared to the results at 0.1 mm/s, the shear stress measured from the smooth surface only slightly increased (21-22 kPa), but the shear stress measured from the rough surface greatly increased (33)(34). At the final stage of shearing (at 2200 s), the stress difference between smooth and rough surfaces was almost constant (12 kPa). For the shear velocity of 1 mm/s (Figure 4c,f), there was little difference in the stress-time relationships. The difference was small until the shearing time reached approximately 500 s; when shearing was continuously applied, dilatancy was prominent in the rough surfaces. In the drained condition, a larger difference in shear stress was observed when a larger shear velocity was applied, but the difference was small in the undrained condition. These results may be attributed to the presence of shear localization and water in the shear zone. There was no direct observation possible for the soil-structure interaction in ring shear tests using the developed apparatus. However, from the test results, it may be deduced that, in terms of the grain fragmentation, local rearrangement and sliding features were dominant for the shear velocity of 0.1 mm/s, while local abrasion and global fracture inside the grain were dominant for shear velocities of 0.5 and 1 mm/s. In brief, from the test results, the surface roughness was of significance, and this proves that the wall slip phenomenon was minimized. However, it is not easy to estimate the fragmentation effect based on shear velocity. Peak Shear Strength as a Function of Shear Velocity From the shear stress and shear displacement relationships, the peak and residual shear strengths could be determined. The maximum and minimum shear strengths were clearly determined from the strain-softening behaviors of materials such as dense sands or over-consolidated clays. According to Toyota et al. [33], there appears to be no shear rate effect for low and high plasticity clays when relatively low shear rates ranging from 10 −6 to 10 −3 mm/s are imposed on the slip surface. In this study, the strain-hardening behavior was dominant for a given drainage and shear velocity; thus, only the peak shear strength (i.e., the final shear stress) could be determined. For convenience, the value at the end of each test was selected. In this context, the ultimate shear strength and residual shear strength could not be determined. For a dense sandy soil-steel interface, Saberi et There was no direct observation possible for the soil-structure interaction in ring shear tests using the developed apparatus. However, from the test results, it may be deduced that, in terms of the grain fragmentation, local rearrangement and sliding features were dominant for the shear velocity of 0.1 mm/s, while local abrasion and global fracture inside the grain were dominant for shear velocities of 0.5 and 1 mm/s. In brief, from the test results, the surface roughness was of significance, and this proves that the wall slip phenomenon was minimized. However, it is not easy to estimate the fragmentation effect based on shear velocity. Peak Shear Strength as a Function of Shear Velocity From the shear stress and shear displacement relationships, the peak and residual shear strengths could be determined. The maximum and minimum shear strengths were clearly determined from the strain-softening behaviors of materials such as dense sands or over-consolidated clays. According to Toyota et al. 
[33], there appears to be no shear rate effect for low and high plasticity clays when relatively low shear rates ranging from 10 −6 to 10 −3 mm/s are imposed on the slip surface. In this study, the strain-hardening behavior was dominant for a given drainage and shear velocity; thus, only the peak shear strength (i.e., the final shear stress) could be determined. For convenience, the value at the end of each test was selected. In this context, the ultimate shear strength and residual shear strength could not be determined. For a dense sandy soil-steel interface, Saberi et al. [6] demonstrated that as the roughness increases, the shear strength and friction at the peak increase, but those at the residual state essentially remain unchanged. The latter may be characterized by the thickness of the crushed grain band and the fine content in a shear zone. For gravel, the shear zone thickness can be affected by a relatively high shear speed (e.g., 300 cm/s, as tested by Sassa et al. [26]) and cyclic motion with surface roughness [10]. Zhang and Zhang [10] demonstrated that a large shear deformation can result in a remarkable thickness of the crushing band near the soil-structure interface for gravelly soils with a mean grain size of 7 mm. In our study, this thickness of the crushing band increased up to a maximum of 24 mm depending on the shear applied, but it reached a stable state if a large amount of shear deformation occurs. In general, the shear resistance increases with increasing shear velocity [34,35]. Figure 5 shows the dependence of the peak shear strength on the shear velocity. Regardless of the drainage condition, the values were similar at the same roughness. At the lowest shear velocity, i.e., 0.1 mm/s, the difference varied from 0.3 kPa to 1.5 kPa. However, at the medium shear velocity, i.e., 0.5 mm/s, the difference varied from 11.7 kPa to 14.6 kPa. Compared to the lowest shear velocity, the change in shear resistance increased 10-fold. At the highest shear velocity, i.e., 1 mm/s, the difference varied from 5.2 kPa to 14.7 kPa. The difference in average values of shear strength (∆v) obtained from smooth and rough surfaces for each shear velocity were 1, 12, and 10 mm/s, for 0.1, 0.5, and 1 mm/s, respectively. The largest difference was found at the shear velocity of 0.5 mm/s, meaning that this shear velocity may induce an abrasion and fracture-dominant process not present in the 0.1 mm/s shear condition. Interestingly, in the drained condition, the shear strength at 1 mm/s was almost identical to that at 0.5 mm/s. However, in the undrained condition, the difference may have decreased by as much as 5 kPa. Hence, in the undrained condition, the water in the shear zone played an important role in shearing when measuring the torque in the ring shear test. al. [6] demonstrated that as the roughness increases, the shear strength and friction at the peak increase, but those at the residual state essentially remain unchanged. The latter may be characterized by the thickness of the crushed grain band and the fine content in a shear zone. For gravel, the shear zone thickness can be affected by a relatively high shear speed (e.g., 300 cm/s, as tested by Sassa et al. [26]) and cyclic motion with surface roughness [10]. Zhang and Zhang [10] demonstrated that a large shear deformation can result in a remarkable thickness of the crushing band near the soil-structure interface for gravelly soils with a mean grain size of 7 mm. 
In our study, this thickness of the crushing band increased up to a maximum of 24 mm depending on the shear applied, but it reached a stable state if a large amount of shear deformation occurs. In general, the shear resistance increases with increasing shear velocity [34,35]. Figure 5 shows the dependence of the peak shear strength on the shear velocity. Regardless of the drainage condition, the values were similar at the same roughness. At the lowest shear velocity, i.e., 0.1 mm/s, the difference varied from 0.3 kPa to 1.5 kPa. However, at the medium shear velocity, i.e., 0.5 mm/s, the difference varied from 11.7 kPa to 14.6 kPa. Compared to the lowest shear velocity, the change in shear resistance increased 10-fold. At the highest shear velocity, i.e., 1 mm/s, the difference varied from 5.2 kPa to 14.7 kPa. The difference in average values of shear strength (∆v) obtained from smooth and rough surfaces for each shear velocity were 1, 12, and 10 mm/s, for 0.1, 0.5, and 1 mm/s, respectively. The largest difference was found at the shear velocity of 0.5 mm/s, meaning that this shear velocity may induce an abrasion and fracture-dominant process not present in the 0.1 mm/s shear condition. Interestingly, in the drained condition, the shear strength at 1 mm/s was almost identical to that at 0.5 mm/s. However, in the undrained condition, the difference may have decreased by as much as 5 kPa. Hence, in the undrained condition, the water in the shear zone played an important role in shearing when measuring the torque in the ring shear test. Grain Crushing Effect with Surface Roughness in the Ring Shear Test Particle breakage has a significant effect on soil behavior [22,36]. According to Hardin [22], grain crushing is affected by many factors, e.g., particle shape, state of effective stress, void ratio, particle hardness, and presence of water. A relative breakage potential can be determined using a grain Grain Crushing Effect with Surface Roughness in the Ring Shear Test Particle breakage has a significant effect on soil behavior [22,36]. According to Hardin [22], grain crushing is affected by many factors, e.g., particle shape, state of effective stress, void ratio, particle hardness, and presence of water. A relative breakage potential can be determined using a grain distribution analysis, but this was not considered in this paper. In addition, particle compaction in the ring shear test may result in potential grain crushing and strain localization [30,37]. A grain size analysis was performed for the aquarium gravel using eight sieves before and after the tests. Figure 6 presents the grain size distribution curves with respect to surface roughness depending on the drainage condition and shear velocities of 0.1, 0.5, and 1 mm/s. As shown in Figure 6, under identical drainage conditions (Figure 6a-f) the grain crushing gradually increased with increasing shear velocity. In each curve, the area created before and after the test could be considered as a fragmentation effect through the artificial sliding plane. From the mechanical viewpoint, the roughness may cause increased friction in both drainage conditions. All tests showed large fragmentation using the rough surfaces compared to the smooth surface, although there was little difference between drained and undrained conditions. For clayey and sandy soils, it may have been easier to locate the shear surface in the test. 
However, it was much more difficult to visually compare the differences in shear zones obtained from drained and undrained conditions for granular materials because of the invisible slip surface in the ring shear test without using a transparent ring shear box [26]. For the gravel tested, the thickness of the shear zone (h s ) was estimated as 3-6 mm, based on the concept of relative roughness, assuming that h s = (5 to 10)·D 50 [14]. In landslides, a great volume of fine-grained sediments from the entrainment process can affect landslide transformation. For example, Zhang et al. [20] reported that "the fine soil layer carried the slidebody moving fast on the ground and decreased the integrity of the slidebody." Excess pore water pressure and the soft base effect may combine to create a possible mechanism for the transformation from slide to debris flow in landslides. Appl. Sci. 2019, 9, x FOR PEER REVIEW 8 of 10 roughness may cause increased friction in both drainage conditions. All tests showed large fragmentation using the rough surfaces compared to the smooth surface, although there was little difference between drained and undrained conditions. For clayey and sandy soils, it may have been easier to locate the shear surface in the test. However, it was much more difficult to visually compare the differences in shear zones obtained from drained and undrained conditions for granular materials because of the invisible slip surface in the ring shear test without using a transparent ring shear box [26]. For the gravel tested, the thickness of the shear zone (hs) was estimated as 3-6 mm, based on the concept of relative roughness, assuming that hs = (5 to 10)•D50 [14]. In landslides, a great volume of fine-grained sediments from the entrainment process can affect landslide transformation. For example, Zhang et al. [20] reported that "the fine soil layer carried the slidebody moving fast on the ground and decreased the integrity of the slidebody." Excess pore water pressure and the soft base effect may combine to create a possible mechanism for the transformation from slide to debris flow in landslides. Conclusions The effect of surface roughness on the shear strength of gravels using ring shear tests was examined. The shear characteristics measured from smooth and rough surfaces in a newly developed ring shear box were compared. The reduced wall friction using a smooth surface box may result in low shear strength estimates. Rough surfaces in test devices may result in overestimated shear strengths of geomaterials (clay to gravel-size), but they may also provide significant results if a slip occurs between two non-homogenous materials for the determination of the residual shear strength of gravelly soils. From the test results, the shear strength of granular materials depended on the drainage condition and shear velocity. The effect of surface roughness on shear strength, although small, could not be considered negligible with respect to minimization of the wall slip effect. The gravel exhibited a typical strain-hardening behavior regardless of the drainage and shear velocity when the gravel was firmly covered by the upper plate of the test device without applying normal Conclusions The effect of surface roughness on the shear strength of gravels using ring shear tests was examined. The shear characteristics measured from smooth and rough surfaces in a newly developed ring shear box were compared. The reduced wall friction using a smooth surface box may result in low shear strength estimates. 
Rough surfaces in test devices may result in overestimated shear strengths of geomaterials (clay to gravel-size), but they may also provide significant results if a slip occurs between two non-homogeneous materials for the determination of the residual shear strength of gravelly soils. From the test results, the shear strength of granular materials depended on the drainage condition and shear velocity. The effect of surface roughness on shear strength, although small, could not be considered negligible with respect to minimization of the wall slip effect. The gravel exhibited a typical strain-hardening behavior regardless of the drainage and shear velocity when the gravel was firmly covered by the upper plate of the test device without applying normal stress (0 kPa). The peak shear strength obtained from drained and undrained conditions increased with increasing shear velocity, in which case it may be assumed that the material exhibited plastic behavior; however, a large difference was caused by surface roughness when the shear velocity was increased. The grain crushing effect was not negligible and was in fact dominant for both smooth and rough surfaces. A higher degree of grain crushing was observed for rough surfaces in the ring shear test. In a landslide transition, the reduction in shear strength with respect to the variation in pore water pressure and the soft base (i.e., finer materials forming at the bottom of the slidebody) caused by shearing should be examined. In addition, the development of the shear zone, contraction, and dilatancy phenomena, which depend on the normal stress and shear velocity, should be examined.
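As a small numerical illustration of the relative-roughness rule of thumb quoted above, hs = (5 to 10)·D50, a minimal sketch follows; the D50 value used in the example is purely hypothetical, since the grain-size distribution is not reported in this excerpt.

def shear_zone_band(d50_mm):
    """Return the (5-10)*D50 shear-zone thickness band, in mm."""
    return 5.0 * d50_mm, 10.0 * d50_mm

def consistent_with_band(hs_mm, d50_mm):
    """Check whether a measured shear-zone thickness falls inside the band."""
    lo, hi = shear_zone_band(d50_mm)
    return lo <= hs_mm <= hi

if __name__ == "__main__":
    d50 = 0.6                       # hypothetical median grain size in mm
    lo, hi = shear_zone_band(d50)
    print(f"expected shear zone: {lo:.1f}-{hi:.1f} mm")
    print("hs = 4 mm consistent:", consistent_with_band(4.0, d50))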
8,992.2
2019-07-25T00:00:00.000
[ "Geology", "Materials Science" ]
The influence of magnetic field and cathode dimensions on plasma characteristics in hollow cathode system The effects of a cylindrical hollow cathode, the working pressure and a magnetic field on the spatial glow distribution and on the characteristics of the plasma produced by dc discharge in argon gas were investigated experimentally by image analysis of the plume within the plasma. It was found that the emission intensity appears as a periodic structure with many peaks between the electrodes. Increasing the pressure increases the number of intensity peaks, which finally merge into a continuous form at high pressure, especially when a magnetic field is applied; i.e. the plasma is more stable in the presence of a magnetic field. The study of the plasma emission intensity showed that the intensity has a maximum value at a pressure of 1.07 mbar and decreases at higher pressures. Introduction Various structures appear in the glowing column produced by a dc discharge, depending on the gas pressure. The phenomena within the tube can be separated into two distinct groups. The cathode glow, the negative glow, and the Crookes and Faraday dark spaces belong to the cathode; they are affected strongly by pressure but do not depend markedly on tube length. The positive column, also known as the 'plasma', is elongated by increasing the length of the tube. It becomes striated because electrons are absorbed in the process of ionizing and otherwise imparting energy to the gas molecules, and newly produced secondary electrons require acceleration before they can cause ionization in their turn. The interval between striations depends on the pressure, and the visible effect is often indistinct. Striations tend to be more obvious in polyatomic gases (O2, N2, H2O, etc.) than in monatomic gases such as Ar [1]. The hollow cathode discharge (HCD) was first described by Paschen in 1916, and since then its fundamental characteristics have been studied extensively [2]. The HCD, a specialized type of glow discharge, has been the subject of investigations by physicists (in particular) and chemists for over half a century. Hollow cathode discharges are capable of generating dense plasmas and have been used for the development of high-rate, low-pressure, high-efficiency processing machines. The geometry of a HCD promotes oscillations of hot electrons inside the cathode, thereby enhancing ionization, ion bombardment of the inner walls and other subsequent processes. At the same power, the hollow cathode exhibits a plasma density one to two orders of magnitude higher than that of conventional planar electrodes [3]. In a hollow cathode configuration, the sputtering and the discharge plasma are concentrated into the hollow cathode cavity (typical conditions are about 0.1-10 mbar pressure, 200-500 V and 10-100 mA), which results in more effective ionization in the negative glow. Therefore, hollow cathode discharges can yield more ions than other types of sources and could benefit from reconsideration as an ion source. The intense plasma conditions that can be generated in a hollow cathode motivate further investigation: although very high current densities can be obtained at minimal discharge voltages, providing a more effective analytical excitation environment than the more open planar geometry, the hollow cathode discharge is still only infrequently used as an ion source [4].
In recent years, hollow-cathode discharges in the E/n range above 500 Td (E is the electric field strength and n the gas number density) have gained rising attention in many fields of research. They have been used as sources of high-intensity electron beams, which can carry currents of several kA, as emitters of X-rays covering a wide spectrum, and as ion-beam sources [5]. Several hundred literature reports may be found concerning various aspects of the HCD. Despite this, many analytical chemists today would consider the HCD merely a sharp line source for atomic absorption spectrometry. While this is certainly its most important present application, the HCD has a long history as a spectrochemical emission source allowing direct excitation and analysis of samples [6]. Fig. 1 shows a schematic of the hollow cathode electrodes (made of aluminium) that were designed to be used in the dc discharge system. The cathodes have a cylindrical external shape with an outer diameter of 22 mm and a length of 45 mm, while the inner diameter is 10 mm and the depth is 19 mm. The anode electrode (also made of aluminium) has a disk shape with a thickness of 2 cm and a diameter of 10 mm. The separation between the electrodes is 5.5 cm. The vacuum chamber of the system (shown in Fig. 2) was made of a cylindrical stainless steel tube. The two ends of the chamber were closed with Pyrex windows held by two stainless steel flanges, with a small quartz window fixed in the centre that allows the generated plasma to be observed. Two smaller pipes are connected at the middle of the chamber: one is connected to the pumping system, while the other is used to deliver the argon gas. Both electrodes are fixed with Teflon to prevent any electrical connection with the chamber walls. Experimental set up The chamber was evacuated by a two-stage rotary pump, CIT-ALCATEL Annecy (made in France), to a base pressure of 1×10^-2 mbar. An Edwards Pirani gauge (made in England) was used to measure the pressure of the chamber from atmospheric pressure down to the base pressure of the vacuum system. A dc voltage of 4 kV was applied to the electrodes to generate the discharge in argon gas between the two electrodes. A small disc-shaped permanent magnet was placed behind the hollow cathode electrode to confine the plasma; the maximum value of the magnetic field from the permanent magnet is 30 G. The magnetic field strength distribution (B) was measured using a Tesla meter (Magnetfeldmessgerät) from Phywe (made in Germany). The plasma characteristics were investigated using emission spectroscopy (Thorlabs Compact CCD 100 M spectrometer) in the wavelength range from 320 nm to 740 nm. Fig. 3 shows the effect of increasing the argon pressure on the glow discharge regions between the two electrodes of the hollow cathode system, using an applied dc voltage of about 4 kV at different working pressures (0.27, 0.53, 0.67, 0.80, 1.07 and 1.33 mbar) without a magnetic field. It can be seen from this figure that when the pressure increases the cathode regions (cathode fall) are compressed and the negative glow becomes a thin layer of intense luminosity, while the positive column and the anode fall increase.
This change in the glow discharge structure with increasing pressure can be explained as follows: since the mean free path of electrons is inversely proportional to the gas pressure, the distance an electron must travel before it has produced enough ionization to sustain the glow is also inversely proportional to the pressure. The thickness of the cathode dark region therefore decreases as the pressure increases (i.e. the cathode fall is compressed). Consequently, the negative glow region becomes a thin layer of intense luminosity, while the positive column region and the anode fall increase. Fig. 3: Effect of Ar gas pressure on the glow discharge structure of the hollow cathode system without magnetic field. On the other hand, the presence of electric and magnetic fields together in the magnetron configuration traps the electrons close to the surface of the hollow cathode (in the region of strong electric field). Electrons follow helical paths around the magnetic field lines and so undergo more ionizing collisions with gaseous neutrals near the hollow cathode. This also means that the plasma can be sustained at a lower pressure. Fig. 4 shows the effect of the magnetic field on the dc glow discharge regions of the hollow cathode system at different pressures. The results indicate that the presence of the magnetic field causes the cathode regions (cathode fall) to be compressed and the negative glow to become a thin layer of intense luminosity, while the anode fall and the positive column increase. This compression of the cathode fall can be explained as follows: the transverse magnetic field bends the paths of most electrons that have a relatively high speed normal to the cathode surface and enables them to produce the ionization necessary to maintain the discharge while moving a shorter distance along the axis in the cathode dark region. Thus, the length of the cathode dark region is reduced. The increase of the positive column length is caused by the transverse magnetic field constraining the diffusion of charged particles perpendicular to its direction. Fig. 4: Influence of Ar gas pressure on the glow discharge region in the hollow cathode system in the presence of magnetic field. The emission intensity of the discharge regions in this system was analyzed using the ImageJ software and converted to 3D representations in which the third dimension represents the brightness in the image. These representations were used later to find the intensity behaviour along the central line. Fig. 5 shows the 3D image of the spatial emission intensity distribution obtained by image analysis of Fig. 2 for the discharge glow of the cylindrical hollow cathode system at different working pressures (0.27, 0.53, 0.67, 0.80, 1.07 and 1.33 mbar). Several features can be observed in Fig. 5: the maximum intensity occurs in the region near the cathode surface (where the electric field is strong); the plasma discharge regions are compressed with increasing gas pressure; the number of visible discharge regions increases with increasing gas pressure; and the glow discharge intensity in the region between the two electrodes is non-homogeneous, increasing in the regions near the surfaces of the two electrodes.
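As a rough illustration of this kind of image-based intensity extraction, a minimal sketch is given below; the study itself used ImageJ, and the file name and smoothing window here are purely illustrative assumptions.

import numpy as np
from PIL import Image

def central_line_profile(path, smooth=5):
    """Load a discharge photograph, convert it to grayscale, and return the
    brightness profile along the horizontal line through the image centre."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)   # 2D brightness map
    profile = img[img.shape[0] // 2, :]                             # central row
    if smooth > 1:                                                  # simple moving average
        kernel = np.ones(smooth) / smooth
        profile = np.convolve(profile, kernel, mode="same")
    return profile

# Example (hypothetical file): intensity along the axis between the electrodes;
# the peaks could then be counted, e.g. with scipy.signal.find_peaks(profile).
# profile = central_line_profile("discharge_0p27mbar.png")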
From the 3D glow representations, the glow distribution profiles along the central line were deduced using the ImageJ software for the three cathode designs in two cases (with and without magnetic field) at different working pressures, in order to compare the different working conditions; the largest variation appeared on the line of symmetry (between the electrode centres). Figs. 6 and 7 show the variation of the intensity distribution between the two electrodes of the hollow cathode system with increasing gas pressure in the absence and presence of the magnetic field, respectively. One can observe from Fig. 6 that the emission intensity increases with decreasing gas pressure. This behaviour can be explained as follows: increasing the gas pressure increases the inelastic collisions of electrons with Ar atoms, so the electron temperature decreases, and this causes the light emission intensity to decrease with increasing gas pressure. This result is in agreement with those observed in references [7,8]. On the other hand, the results of Fig. 7 show that, in the presence of the magnetic field, the emission intensity exhibits the inverse behaviour to that in the absence of the magnetic field (i.e. the emission intensity increases with increasing gas pressure). This behaviour may be explained by the magnetic field increasing the plasma confinement, which increases the electron inelastic collisions with argon atoms and causes the electron temperature to decrease. This is due to the characteristic of the hollow cathode at low pressures of producing more energetic excitation [9]. Fig. 8 shows the influence of the magnetic field on the glow distribution between the two electrodes at a constant pressure of 0.27 mbar; this pressure was selected because the change is most obvious there. The results show that the emitted light intensity is reduced in the presence of the magnetic field and that the glow discharge regions are shifted from their original positions. These behaviours may be due to the reduction of the electron mean free path in the presence of the magnetic field. Fig. 9 shows the variation of the plasma emission intensity as a function of the working pressure in the two cases, with and without magnetic field. The emission intensity has a maximum value at 1.07 mbar. The magnetic field increases the emission intensity of the plasma at low pressure (less than 1 mbar), while it decreases the emission intensity for pressures greater than 1 mbar. Conclusions The spatial glow distribution in the plasma produced by dc discharge in argon gas in the hollow cathode system is affected by the working pressure and by the presence of the magnetron. It was found that the emission intensity appears as a periodic structure and that the number of intensity peaks increases with increasing pressure, finally merging into a continuous form at high pressure, especially when the magnetic field is present; the plasma is more stable in the presence of the magnetic field. The emission intensity distribution has a maximum value at 1.07 mbar and decreases at higher pressures.
2,896.6
2018-09-10T00:00:00.000
[ "Physics" ]
The Chebyshev center as an alternative to the analytic center in the feasibility pump As a heuristic for obtaining feasible points of mixed integer linear problems, the feasibility pump (FP) generates two sequences of points: one of feasible solutions for the relaxed linear problem; and another of integer points obtained by rounding the linear solutions. In a previous work, the present authors proposed a variant of FP, named analytic center FP, which obtains integer solutions by rounding points in the segment between the linear solution and the analytic center of the polyhedron of the relaxed problem. This work introduces a new FP variant that replaces the analytic center with the Chebyshev center. Two of the benefits of using the Chebyshev center are: (i) it requires the solution of a linear optimization problem (unlike the analytic center, which involves a convex nonlinear optimization problem for its exact solution); and (ii) it is invariant to redundant constraints (unlike the analytic center, which may not be well centered within the polyhedron for problems with highly rank-deficient matrices). The computational results obtained with a set of more than 200 MIPLIB2003 and MIPLIB2010 instances show that the Chebyshev center FP is competitive and can serve as an alternative to other FP variants. Given the mixed-integer linear problem (MILP) (1) min {c^T x : Ax = b, x ≥ 0, x_j integer for all j ∈ I}, where A ∈ ℝ^{m×n}, b ∈ ℝ^m, c ∈ ℝ^n, I ⊆ N = {1, …, n} and P = {x ∈ ℝ^n : Ax = b, x ≥ 0} (that is, P is the feasible region of the linear relaxation of (1)), finding a feasible point of (1) is a challenging (NP-hard) problem. Many heuristics have been developed for obtaining feasible (hopefully good) solutions of (1). In this paper, we focus on the feasibility pump (FP) [5,12], which has proven to be a successful heuristic, not only for linear problems but also for nonconvex nonlinear problems [4,7,9]. Briefly, FP alternates between two sequences of points: one of feasible solutions for the linear relaxation of (1), and another of integer points, that hopefully converge to a feasible integer solution. The integer point is obtained by applying some rounding procedure to the feasible solution of the linear relaxation. Several strides have been made to further develop the original FP. In [1], the authors take the objective function of the MILP into account at each iteration of the algorithm in order to find better quality solutions. This approach was named objective FP. In [13], a new improved rounding scheme based on constraint propagation was introduced. Interior-point methods were applied to primal heuristics in [3] (an approach named analytic center FP or AC-FP) and [14] (resulting in the analytic center feasibility method or ACFM). Although both AC-FP and ACFM used the analytic center of P, they are significantly different. In particular, AC-FP (which is briefly outlined in Sect. 3.1) relies on FP and it computes only one analytic center, while ACFM is based on a cutting plane method and it computes an analytic center at each iteration of the algorithm. AC-FP explores non-integer points in the segment where the feasible point of the linear relaxation of (1) joins the analytic center. The motivation behind using the analytic center lies in the fact that rounding an interior point increases the chances of finding a feasible integer solution. AC-FP was proven in [3] to be a successful heuristic, namely by improving the standard FP in several tested MILP instances. In [6] the authors extended the AC-FP idea by enhancing the rounding procedure.
However, in both practice and theory, using the analytic center has two downsides. First, computing the exact analytic center of P means solving the convex nonlinear optimization problem min {−∑_{i=1}^n ln x_i : Ax = b}. An approximate solution to this problem was suggested in [3] by applying a path-following (or barrier) interior-point algorithm to the problem min {0 : Ax = b, x ≥ 0}. This excludes using the simplex method for computing the analytic center. The second drawback is that, theoretically, a large number of redundant constraints (in problems with highly rank-deficient matrices A) may change the location of the analytic center [10]. In this paper, we consider an alternative to the analytic center, named the Chebyshev center. Using the Chebyshev center within FP overcomes the two above drawbacks: the center of P is not affected by redundant constraints in A (this is clearly shown below in Sect. 3.2); and the center can be computed using either the simplex or barrier algorithms. As this work shows, the Chebyshev center FP (CC-FP), for some instances, provides better solutions than the objective FP and AC-FP variants. The paper is organized as follows. Section 2 outlines the FP heuristic. Section 3 describes the use of a generic center point (analytic or Chebyshev center) in the FP heuristic, while Sect. 3.2 focuses on the computation of the Chebyshev center. Section 4 presents extensive computational results on a subset of MIPLIB2003 [2] and MIPLIB2010 [11] instances, in order to compare the objective FP, AC-FP and CC-FP, as well as to show the effectiveness of the CC-FP approach. Finally, we present some closing remarks in Sect. 5. The feasibility pump heuristic The original feasibility pump heuristic [12] works iteratively with two points: one (x*) is feasible for the continuous relaxation of (1), although it is possibly integer infeasible, and the other (x̃) is integral but might not be in P. The point x* is set to the optimal solution of the linear programming relaxation of (1), while x̃ is obtained by rounding x* to the closest integer point, x̃_j = [x*_j] for j ∈ I, where [⋅] represents scalar rounding to the nearest integer. Note that the continuous variables x_j, j ∉ I, do not play any role. At each iteration of the FP method, x* is updated by minimizing the following linear optimization problem: min {Δ(x, x̃) : Ax = b, x ≥ 0}, where Δ(x, x̃) is the distance between x and x̃ using the L1 norm, Δ(x, x̃) = ∑_{j∈I} |x_j − x̃_j|. Figure 1 illustrates one iteration of the FP method, where t represents the iteration number, and the final point x̃^{t+1} is integer feasible. FP ends when the distance Δ(x, x̃) is 0 (meaning we have obtained an integer feasible solution) or when a predefined termination criterion is reached. One of the main drawbacks of the FP heuristic is the possibility of visiting integer points already visited in previous iterations, thereby causing a cycle. To avoid this, a restart procedure is proposed in [12].
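For concreteness, a minimal sketch of one FP iteration (rounding followed by the L1-norm projection LP) is given below. It is not the authors' implementation (which builds on the objective FP code and CPLEX); scipy, the function name and the auxiliary variables d_j used to linearise the absolute values are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

def fp_iteration(A, b, x_star, int_idx):
    """One feasibility-pump iteration: round x* on the integer indices, then
    project back onto P = {x : Ax = b, x >= 0} minimising the L1 distance."""
    n = A.shape[1]
    x_round = x_star.copy()
    x_round[int_idx] = np.round(x_star[int_idx])        # integer point x~

    k = len(int_idx)
    # Decision variables z = (x, d), with d_j >= |x_j - x~_j| for j in int_idx.
    c = np.concatenate([np.zeros(n), np.ones(k)])       # minimise sum of d
    A_eq = np.hstack([A, np.zeros((A.shape[0], k))])
    A_ub = np.zeros((2 * k, n + k))
    b_ub = np.zeros(2 * k)
    for r, j in enumerate(int_idx):
        # d_j >= x_j - x~_j   and   d_j >= x~_j - x_j
        A_ub[r, j], A_ub[r, n + r], b_ub[r] = 1.0, -1.0, x_round[j]
        A_ub[k + r, j], A_ub[k + r, n + r], b_ub[k + r] = -1.0, -1.0, -x_round[j]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (n + k), method="highs")
    return x_round, res.x[:n], res.fun                  # x~, new x*, distance

Iterating these two steps until the distance reaches zero (with anti-cycling restarts) reproduces the basic scheme described above.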
The FP implementation has three stages. In the first stage, the method considers only the set of binary variables by relaxing the integrality conditions on the general integer variables. In the second stage, FP takes all integer variables into account and uses the best point x̃ obtained at Stage 1 as a starting point. Both stages terminate as soon as a feasible solution is found or when some termination criterion is reached (e.g., the best Δ(x, x̃) is not updated during a certain number of iterations, or the maximum number of iterations is reached, just to name two). The last stage (Stage 3) starts when FP cannot find a feasible solution to (1) within the established time limit. In this stage, a commercial solver is applied to (1) (CPLEX 12.7 is used in this work), for which the best point obtained from Stage 2 is used as a starting point. Stage 3 stops as soon as a feasible solution is found. An outline of the FP algorithm is shown in Fig. 2, and further details can be found in [5,12]. (Fig. 1: Graphical representation of the FP method. Fig. 2: The feasibility pump heuristic, original version [5,12].) Despite the successful results obtained by the original FP heuristic for finding feasible solutions of MILPs in a short computational time, using the objective function of (1) only at the beginning of the procedure often leads to a rather poor solution. To avoid this, a modified FP heuristic called objective FP was proposed in [1], which considers a convex combination of Δ(x, x̃) and the objective function of (1). The idea is to focus the search for feasible solutions near the region of high-quality points. The modified objective function Δ_α(x, x̃) is defined as Δ_α(x, x̃) = (1 − α) Δ(x, x̃) + α (‖Δ‖/‖c‖) c^T x, where ‖⋅‖ is the Euclidean norm of a vector, and Δ is the objective function vector of Δ(x, x̃) (i.e., of dimension equal to the number of either binary variables at Stage 1, or both integer and binary variables at Stage 2). The weight α is reduced at every iteration. When α = 0, the original FP heuristic is obtained. Note that the objective FP algorithm is nearly identical to the original FP algorithm in Fig. 2; it simply replaces Δ(x, x̃) with Δ_{α_t}(x, x̃) in line 5 and adds the proper initialization and updating of α. Further details can be found in [1]. Using a center point in the feasibility pump Let x̂ ∈ ℝ^n be an interior point of polyhedron P, that is, Ax̂ = b and x̂ > 0 (strictly positive components). All the points in the segment x̂x* are feasible, since they are a convex combination of two feasible points, x* and x̂, and are therefore candidates to be rounded. In addition, all the points except x* in the segment x̂x* are interior, thus increasing the chances for the rounded point to be feasible for P. Of all interior points x̂, those that are "well centered" inside P (let them be the center points) are the best choices. The above generic approach is named the center point feasibility pump (CP-FP) in this work.
At each iteration CP-FP considers a set of points x(λ) on the segment joining x* and the center point, with x(0) = x*, and rounds each of them; if some rounded point is feasible, then a feasible integer solution is found and the procedure is stopped. Otherwise, CP-FP continues, taking as the new integral point the one that is closest to P among all the rounded points x(λ) (the ℓ∞ distance between P and the rounded point is used to measure closeness). If more than one rounded point is feasible for P, CP-FP selects the one closest to x* (which probably has a better objective value). Figure 3 illustrates the behaviour of CP-FP: while the standard objective FP would provide the infeasible yellow point, CP-FP could deliver the feasible green point. (Fig. 3: Illustration of CP-FP.) An outline of the algorithm is shown in Fig. 4. (Fig. 4: The center point feasibility pump heuristic, CP-FP [3].) Note that if x(0) = x* is selected at each iteration, CP-FP behaves exactly as the objective FP. Further details are given in [3]. There are several ways to get the center point. One option is the analytic center of the polyhedron, which was used in [3] with promising results. The main drawback of the analytic center is that redundant constraints can push it near the boundary of the polyhedron [10], as is shown in Sect. 3.2. To overcome this issue, we suggest in this paper using the Chebyshev center. Both center points are briefly outlined below. The analytic center The analytic center of P is defined as the point x that minimizes the primal potential function −∑_{i=1}^n ln x_i, i.e., the solution of (6) min {−∑_{i=1}^n ln x_i : Ax = b}. Constraints x > 0 can be avoided, since the domain of ln is the positive numbers, and then (6) is an equality constrained strictly convex optimization problem. It is easily seen that x is also the solution of max {∏_{i=1}^n x_i : Ax = b}; that is, the analytic center attempts to maximize the distance to the hyperplanes x_i = 0, i = 1, …, n, and it is thus expected to be well centered in the interior of P. Note that the analytic center is not a topological property of a polytope, and it depends on how P is defined through Ax = b [15]. The analytic center solves the KKT conditions of (6), which can be recast as a primal-dual system in (x, y, s), with y ∈ ℝ^m and s ∈ ℝ^n being, respectively, the Lagrange multipliers of Ax = b and an auxiliary vector (associated with x > 0). Alternatively, we can make use of an available highly efficient implementation, in which we compute the analytic center by applying a primal-dual path-following interior-point algorithm to the barrier problem of the linear relaxation of (1) after setting c = 0, that is, (8) min {c^T x − μ ∑_{i=1}^n ln x_i : Ax = b}, where μ is a positive parameter (the parameter of the barrier) that tends to zero. The arc of solutions of the barrier problem for every μ > 0 is named the central path. The central path converges to the analytic center of the optimal set of a linear optimization problem. When c = 0 (as in (8)) the central path converges to the analytic center of the feasible set P [15]. The use of the analytic center in the feasibility pump, introduced in [3], was named the analytic center FP (AC-FP).
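As a small, self-contained sketch of problem (6) (not the interior-point implementation used in the paper, which runs a barrier algorithm on (8) with CPLEX; cvxpy is assumed here purely for illustration):

import cvxpy as cp
import numpy as np

def analytic_center(A, b):
    """Analytic center of P = {x : Ax = b, x > 0}: minimize -sum(log(x)) s.t. Ax = b."""
    n = A.shape[1]
    x = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(-cp.sum(cp.log(x))), [A @ x == b])
    problem.solve()                 # solved as an exponential-cone program
    return x.value

# Tiny example: the segment {(x1, x2) : x1 + x2 = 1, x >= 0}; the center is (0.5, 0.5).
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(analytic_center(A, b))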
The Chebyshev center Given a convex polyhedron Q described by linear inequalities a_i^T x ≤ b_i, i = 1, …, m, the Chebyshev center is the center of the largest inscribed Euclidean ball in Q. A Euclidean ball of center x ∈ ℝ^n and radius r is the set of all points of distance less than or equal to r from x, i.e., B(x, r) = {x + u : ‖u‖_2 ≤ r}. The optimization problem that finds the Chebyshev center is [8] (10) max {r : a_i^T (x + u) ≤ b_i for all u with ‖u‖_2 ≤ r, i = 1, …, m}. Since a_i^T u ≤ ‖a_i‖_2 ‖u‖_2 ≤ ‖a_i‖_2 r, we can then write (10) as the following linear optimization problem: max {r : a_i^T x + ‖a_i‖_2 r ≤ b_i, i = 1, …, m}. For the polyhedron P of the linear relaxation of (1), the Chebyshev center is defined only in terms of the inequalities x ≥ 0 and is restricted to Ax = b, which results in the following problem: max {r : Ax = b, x_i ≥ r, i = 1, …, n}. The Chebyshev center does not change in the presence of redundant constraints, and it is then always well located in a central position inside the polyhedron, thus making it an effective choice for CP-FP. On the other hand, the analytic center can be pushed out near the boundary of P by redundant constraints. Figures 5 and 6 illustrate this situation with a convex polyhedron Q described by a set of linear inequalities. Fig. 5 shows the analytic (x_ac) and the Chebyshev (x_cc) centers of Q, as well as the largest inscribed Euclidean ball centered at x_cc. Let us consider an alternative representation of the polyhedron, which is obtained by adding two redundant constraints. Figure 6 shows the analytic (x′_ac) and the Chebyshev (x′_cc) centers of this representation: the Chebyshev center is unaffected, while x′_ac has been pushed out towards the boundary opposite to the redundant constraints. The FP variant based on Chebyshev centers introduced in this work is named Chebyshev center FP (CC-FP). Implementation and instances Both AC-FP and CC-FP are implemented in C++ using the base code of the objective FP, which is freely available from https://site.unibo.it/operations-research/en/research/library-of-codes-and-instances-1. The optimization solver CPLEX (version 12.7) is used to solve the linear optimization subproblems. All the runs are carried out on a Fujitsu Primergy RX2540 M1 4X server with two 2.6 GHz Intel Xeon E5-2690v3 CPUs (48 cores) and 192 Gigabytes of RAM, under a GNU/Linux operating system (openSuse 13.2), without exploitation of the multithreading capabilities. A one-hour time limit is imposed on all the runs. AC-FP, CC-FP and objective FP are tested on a subset of MIPLIB2003 [2] and MIPLIB2010 [11] instances, whose dimensions are shown in Tables 1, 2 and 3. The columns "rows", "cols", "nnz", "int", "bin" and "con" provide respectively the numbers of constraints, variables, nonzeros, general integer variables, binary variables and continuous variables of the instances. The column "objective" shows the optimal objective function. Results We first analyze the results for the subset of MIPLIB2003 instances. Table 4 presents the results obtained. For AC-FP and CC-FP we report the total CPU time spent on stages 0 to 3 ("tFP"); the time for computing the analytic/Chebyshev center ("tAC/tCC"); the stage in which the feasible point is found ("stage"); and the gap between the feasible and the optimal solution ("gap%"). For the objective FP, we report columns "gap%", "stage" and "tFP" with the same meaning as before. The primary goal of this preliminary study is to assess the benefits, if any, of using the Chebyshev center as an alternative to the analytic center. We start by comparing AC-FP with CC-FP. Looking at Table 4, we see 15 instances where AC-FP fails
(i.e., it requires stage 3). In five of those 15 instances (33.3%), CC-FP finds a feasible solution ("aflow40b", "harp2", "nsrand-ipx", "protfold" and "tr12-30"). In contrast, CC-FP fails in 12 instances. In two of those instances (16.7%), AC-FP finds a feasible solution ("ds" and "nw04"). Finally, in 14 of the 35 instances (40%) where both methods find a feasible solution, CC-FP obtains a solution with a lower gap than AC-FP. In another eight instances CC-FP obtains the same gap as AC-FP. Given that the total computational time is really low in both methods (less than one minute on average), CC-FP proves to be a good alternative to AC-FP. Next, we focus on comparing the objective FP heuristic with CP-FP (choosing the best option between AC-FP and CC-FP). Table 5 presents the results obtained, with the column "tCP" showing the time needed for computing either AC or CC. In 39 instances, both CP-FP and objective FP find a feasible solution. In six instances ("cap6000", "danoint", "mkc", "msc98-ip", "roll3000" and "swath"), representing 15.4% of the cases, CP-FP improves the quality of the feasible solution achieved. Furthermore, when the objective FP fails in nine instances ("arki001", "atlanta-ip", "glass4", "mzzv11", "p2756", "protfold", "roll3000", "swath" and "timtab2"), CP-FP finds a feasible solution in three of them ("protfold", "roll3000" and "swath"). It is noteworthy that CC-FP efficiently solves the instance "protfold" when both AC-FP and objective FP fail. We also observe that in the six instances where all methods fail ("arki001", "atlanta-ip", "glass4", "mzzv11", "p2756" and "timtab2"), CP-FP obtains a better feasible solution in two of them ("atlanta-ip" and "timtab2"); and in another two ("mzzv11" and "p2756") it provides the same solution as objective FP.
Second, we provide a similar comparison between AC-FP, CC-FP and objective FP for a subset of MIPLIB2010 instances. The original subset contains 215 instances, but 38 of them are removed because either (i) none of the methods find a feasible solution within the one hour time limit; or (ii) they exhaust the available memory. Tables 6 and 7 show the results obtained with the remaining 177 instances. Comparing AC-FP against CC-FP, we note that in four of the 70 instances (5.7%) where AC-FP fails, CC-FP finds a feasible solution. On the other hand, in seven of the 73 instances (9.6%) where CC-FP fails, AC-FP is able to find a feasible solution. It is worth noting that CC-FP gives a higher quality solution in 36 of the 100 instances (36%) in which both methods successfully find a feasible point. In another 37 instances, AC-FP and CC-FP find points with the same objective function. Therefore, in a total of 73 out of 100 instances (73%) CC-FP obtains an equal or better result than AC-FP. These results show that the Chebyshev center can be a good alternative to the analytic center for FP variants. Finally, Tables 8 and 9 show results comparing objective FP with the best option between CC-FP and AC-FP (named CP-FP in these tables). From Tables 8 and 9 we can state that: (i) in 19 of the instances where objective FP fails, CP-FP finds a feasible solution; and (ii) CP-FP obtains a better solution than objective FP in 26% of the cases where all methods successfully end (and in 5.4% of the cases, the solutions have the same objective function). Table 10 summarizes the overall results for all the MIPLIB 2003 and 2010 instances, comparing AC-FP vs CC-FP in subtable (a), and the best between AC-FP and CC-FP (referred to as CP-FP) vs objective FP in subtable (b). The first two rows of each subtable provide, for each method, the percentage of successfully solved instances (i.e., a feasible solution is obtained by the heuristic before stage 3) and failures (i.e., stage 3 is reached). Looking at subtable (a) we notice that both AC-FP and CC-FP solve the same number of instances, although the particular set of instances solved by each method is different. In subtable (b) we see that CP-FP (either AC or CC) solves 4% more instances than objective FP. The last two rows of subtable (a) show that CC-FP provides a solution with a better gap than AC-FP in 3.5% more instances; in 19.65% of the instances both AC-FP and CC-FP report a solution with the same gap. This information is also given in the last two rows of subtable (b) comparing CP-FP vs objective FP: it is seen that objective FP provides better gaps than CP-FP in many more cases. However, CP-FP is able to compute a solution in 26% of the instances that are not solved by objective FP. Conclusions We propose using the Chebyshev center as an alternative center point to the analytic center in the successful FP heuristic. Our extensive computational results show that the CC-FP variant is competitive in some instances. Furthermore, we have also shown that, in theory, the Chebyshev center might
provide important benefits when the MILP problem has many redundant constraints. Although CP-FP does not always outperform objective FP, using a center point within FP has been shown to provide a competitive advantage in other FP variants that complement CP-FP, such as in [6]. In those cases, using CC instead of AC can provide better and faster feasible points. Developing a decision tool to choose a priori the best center point to use within FP could form a part of further work to be done in this field. Figure and table captions: Fig. 5: The Chebyshev (x_cc) and analytic (x_ac) centers of polyhedron Q represented by Qx ≤ b (without redundant constraints); the largest inscribed Euclidean ball centered at x_cc is also shown. Fig. 6: The Chebyshev (x′_cc) and analytic (x′_ac) centers of the same polyhedron after adding the two redundant constraints. Table 1: Characteristics of the subset of MILP instances from MIPLIB 2003. Table 2: Characteristics of the subset of MILP instances from MIPLIB 2010 (Part I). Table 3: Characteristics of the subset of MILP instances from MIPLIB 2010 (Part II). Table 4: Computational results using AC-FP, CC-FP and objective FP for a subset of MILP instances. Table 6: Computational results using AC-FP, CC-FP and objective FP for a subset of MILP instances from MIPLIB 2010 (Part I). Table 7: Computational results using AC-FP, CC-FP and objective FP for a subset of MILP instances from MIPLIB 2010 (Part II). Table 8: Computational results using the best option between AC-FP and CC-FP (CP-FP) against objective FP for a subset of MILP instances from MIPLIB 2010 (Part I). Table 9: Computational results using the best option between AC-FP and CC-FP (CP-FP) against objective FP for a subset of MILP instances from MIPLIB 2010 (Part II). Table 10: Summary tables for all MIPLIB 2003 and 2010 instances; subtable (a) compares AC-FP vs CC-FP, subtable (b) compares the best between AC-FP and CC-FP vs objective FP.
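A minimal sketch of the Chebyshev-center computation described in Sect. 3.2 is given below, for the reading in which only the bounds x ≥ 0 contribute inequalities and Ax = b is kept as an equality; scipy and the function name are assumptions made purely for illustration (the paper itself solves the LP with CPLEX).

import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Chebyshev center of P = {x : Ax = b, x >= 0}: maximize r s.t. Ax = b, x_i >= r."""
    m, n = A.shape
    # Decision variables z = (x, r); objective: maximize r  ->  minimize -r.
    c = np.zeros(n + 1); c[-1] = -1.0
    A_eq = np.hstack([A, np.zeros((m, 1))])
    # x_i >= r  <=>  -x_i + r <= 0, for every i.
    A_ub = np.hstack([-np.eye(n), np.ones((n, 1))])
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)], method="highs")
    return res.x[:n], res.x[-1]        # center and inscribed radius

# Tiny example: P = {x : x1 + x2 = 1, x >= 0}; the center is (0.5, 0.5) with r = 0.5.
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(chebyshev_center(A, b))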
5,546.8
2023-06-07T00:00:00.000
[ "Mathematics", "Computer Science" ]
Floating oil-covered debris from Deepwater Horizon: identification and application The discovery of oiled and non-oiled honeycomb material in the Gulf of Mexico surface waters and along coastal beaches shortly after the explosion of Deepwater Horizon sparked debate about its origin and the oil covering it. We show that the unknown pieces of oiled and non-oiled honeycomb material collected in the Gulf of Mexico were pieces of the riser pipe buoyancy module of Deepwater Horizon. Biomarker ratios confirmed that the oil had originated from the Macondo oil well and had undergone significant weathering. Using the National Oceanic and Atmospheric Administration's records of the oil spill trajectory at the sea surface, we show that the honeycomb material preceded the front edge of the uncertainty of the oil slick trajectory by several kilometers. We conclude that the observation of debris fields deriving from damaged marine materials may be incorporated into emergency response efforts and forecasting of coastal impacts during future offshore oil spills, and into ground truthing of predictive models. Introduction Following the explosion of Deepwater Horizon in the Gulf of Mexico on 20 April 2010 and the subsequent release of 170 million gallons of crude oil [1] from the Macondo well, pieces of 'honeycomb' material were spotted floating in coastal waters and coming ashore on Gulf coast beaches (figure 1). These buoyant materials preceded the arrival of the oil slick. Floating pieces were as large as 20 cm, while those found ashore were up to 3 m. Some were heavily oiled and sticky to the touch, whereas others were not. On closer inspection and dissection, the non-oiled honeycomb substance bore a uniform distribution of black spheres (∼1 cm in diameter) embedded in a white porous substrate. There has been some debate regarding the source of the honeycomb material. Some have suggested that it was biogenic carbonate that was damaged from the explosion [2]. Others suspected that it was foam from the riser pipe or part of a holding tank on Deepwater Horizon [2]. The goals of this study were threefold. First, we aimed to determine the source of the honeycomb material. Second, we wanted to determine whether the oil found on the coated material came from the Macondo well. Finally, we investigated the migration of the material for its value as a tracer for oil slick movement. Sample collection Oiled honeycomb material was collected by hand on four separate occasions. On 5 May 2010, two pieces were recovered approximately 50 km south of Dauphin Island, AL (29.77° N, −88.10° W) (figure 2(a)). At this distance from shore, there was a field of approximately 50 pieces of similar material interspersed with sargassum weed over a 10 km east-west line. Winds were light, and glassy sea conditions allowed a 'halo' of oil sheen to form around each honeycomb clump floating in the water. Surface water temperatures were ∼24 °C offshore. These two samples were placed in a bucket, and after about 1.5 h, we noticed the material was oozing a thick, oily material with a petroleum odor. On 7 May 2010, two additional pieces of material were recovered from the sea surface approximately 40 km south of Dauphin Island, AL (29.89° N, −88.21° W) (figure 2(b)). These samples were also collected in a noticeable accumulation of sargassum weed. The pieces were covered with small patches of oil and located above the tidal zone. The time of arrival of the debris on the islands is not documented.
Samples were collected and sent to WHOI for analysis. Bulk property analysis of non-oiled honeycomb material The carbon, hydrogen, and nitrogen content of the white porous material and the black spherical coating was measured by Midwest Microlabs (Indianapolis, IN). The density was measured by a technique modified from Kolb and Kolb (1991) and recently used by Morét-Ferguson to determine the density of plastic materials in surface waters of the Atlantic Ocean [3]. The bulk densities of individual honeycomb pieces vary depending on the quantity and distribution of intact hollow black spheres within the material, as these add significant buoyancy. To test for the presence of carbonates, drops of concentrated hydrochloric acid were dripped onto fragments of the honeycomb material. Solvent extraction of materials The oiled and non-oiled pieces of honeycomb material were extracted with dichloromethane/methanol (90:10) and spiked with an internal standard, n-hexadecane-d34. The extracts were stored until analysis by gas chromatography with flame ionization detection (GC-FID) and comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID). Analysis of total petroleum hydrocarbons (TPHs) The GC-FID system was a Hewlett-Packard 5890 Series II gas chromatograph with an FID. Approximately 25 mg of material was spiked with 10 µg of octyl ether (recovery standard). Samples (0.5 µl) were injected cool-on-column and separated on a 100% dimethyl polysiloxane capillary column (Restek Rtx-1, 30 m length, 0.25 mm I.D., 0.25 µm film thickness) with H2 as the carrier gas at a constant flow of 5 ml min−1. The GC oven was programmed from 45 °C (5 min hold) and ramped at 6 °C min−1 to 315 °C and then at 20 °C min−1 to 320 °C (30 min hold). Using standard baseline subtraction techniques, several regions of the chromatograms were integrated, representing n-alkane carbon numbers C10-C25, C25-C45 and C45+ [4]. Total petroleum hydrocarbons (TPHs) were quantified by integrating the total area of the FID signal and using response factors determined from n-alkane standards [5]. Individual n-alkanes (n-C10 to n-C40) and the methyl-branched isoprenoid alkanes, pristane and phytane, were measured. Laboratory blanks were free of petroleum hydrocarbons. Biomarker analysis The solvent extracts for biomarkers were analyzed on a GC×GC-FID system equipped with a dual stage cryogenic modulator (Leco, Saint Joseph, MI) installed in an Agilent 7890A gas chromatograph configured with a 7683 series split/splitless auto-injector, two capillary gas chromatography columns, and an FID. Refer to [6,7] for a more complete discussion of this technique. 2.6. Analysis of polycyclic aromatic hydrocarbons (PAHs) and high temperature simulated distillation (HTSD) PAHs were measured by GC with mass spectrometry (GC-MS) by Alpha Analytical (Mansfield, MA) using a modified Environmental Protection Agency method 8270 that targets both parent and alkylated PAHs [8]. HTSD was performed by Triton Analytics Corporation (Houston, TX). Results and discussion The physical characteristics of the honeycomb material provided invaluable clues on its source. Efforts to remove small pieces for analysis were hindered by the hardness of the material. Since some hypothesized that the white material was coral or another biogenic carbonate structure, we first dripped concentrated hydrochloric acid on the honeycomb material. There was no evolution of carbon dioxide or evidence of bubbling.
Elemental analysis (by mass) of the non-oiled material was 64, 8.1 and 0.3% (213:27:1) (white porous material) and 52, 3.9 and 2.3% (22.6:1.7:1) (black material) for carbon, hydrogen and nitrogen, respectively. These ratios are much larger than those of marine biogenic origin [9,10]. The bulk densities of the non-oiled and oiled material were 0.57 and 0.97 g ml−1, respectively. The solvent extract of the non-oiled material was colorless and contained no detectable petroleum hydrocarbons (figure 3). These results indicate that the non-oiled honeycomb pieces were from a hard, buoyant, engineered, non-carbonate material and contained no detectable petroleum hydrocarbons. During a field study in the Chandeleur Islands (29.97° N, −88.83° W), part of the Breton National Wildlife Refuge, on 6 April 2011, we observed large pieces (1-3 m in length) of the honeycomb material covered with white fiberglass sheathing. There was a visual match between the unknown honeycomb material and this engineered marine material. The manufacturing company and serial number, 'Cuming Corporation 75-1059', was legible on one of the pieces of debris. The material was identified as part of a 1000-foot service depth riser pipe buoyancy module that was manufactured for the R&B Falcon (Transocean) Deepwater Horizon. By extension, we conclude the unknown pieces of non-oiled and oiled honeycomb material found floating in the Gulf of Mexico were from one of the riser pipe buoyancy modules of Deepwater Horizon. To determine the source of the oil found on the honeycomb material, we compared the biomarker ratios of the hopanes and steranes to the Macondo well oil [11] (table 1). Biomarker ratios confirmed the Macondo well as the source oil. To highlight the fidelity and similarity of biomarker ratios of other oiled samples, we included ratios from samples collected over a one-year period that were 1 to 230 km from the location of the Deepwater Horizon disaster (table 1). After determining that the oil on the honeycomb material was from the Macondo well, we compared changes in the abundance and distribution of compounds in the honeycomb oil and the Macondo well oil (figure 3). We analyzed oil found on several samples and all showed similar profiles. Here, we will highlight the analysis of one sample. The oil on the honeycomb material had lost the most volatile and water-soluble petroleum hydrocarbons relative to the Macondo well oil (figure 3). We then compared our results to the analysis of the HTSD of the Macondo well oil. The latter is a gas chromatographic method used to define the boiling distribution of the GC-amenable fraction [12]. For Macondo well oil, we found that, on a non-polar capillary column, 25, 50, 75 and 85% of the mass of the whole oil elutes before the n-C11, n-C18, n-C30 and n-C40 alkanes, respectively (figure 3(c)). Based on these results, we estimate a loss of slightly more than 25% of the initial petroleum hydrocarbons from the honeycomb oil due to evaporation and other processes, within 15 days after the explosion. Based on the n-C18/phytane ratio, an indicator for early biodegradation, and the abundance of other n-alkanes, there was no evidence of biodegradation [13]. PAH analysis showed a rapid loss of lower molecular weight PAHs, consistent with numerous other studies.
To provide a more quantitative assessment of the weathering of the PAHs on the oiled material, we normalized the concentration of each PAH to the concentration of the recalcitrant biomarker 17α(H),21β(H)-hopane [8,14], which acts as an internal standard. Briefly, naphthalene, fluorene, phenanthrene and chrysene in the oiled material were depleted relative to Macondo well oil by 98, 72, 43 and 0%, respectively, highlighting the greater susceptibility of smaller two-ring PAHs to weathering compared with larger five-ring PAHs, which is consistent with other oil spills [5,8]. The measured densities of 0.57 and 0.97 g ml−1 for the un-oiled and oiled honeycomb, respectively, provided additional information that these materials floated at the sea surface. Hence, we suspected that the protruding profile of the buoyant material enabled it to traverse the sea surface more rapidly than floating oil, thereby traveling in advance of the oil slick. Floating objects at sea move according to both currents and winds, and the contribution of each factor is described using the leeway or windage. The wind leeway for fresh oil is found to be between 3.0 and 3.3% [16]. Most oil spill models use a range of wind leeway that may vary with environmental conditions and as the oil weathers [17]. By contrast, floating objects can have a wind leeway as high as 6% or more [15]. Therefore oil spill models may not accurately track such objects. Even a small deviation in leeway can, over time, result in significant differences in surface tracks because of typical wind fields. During the ongoing Deepwater Horizon spill and response effort, the National Oceanic and Atmospheric Administration (NOAA) produced daily trajectory forecasts mapping the potential surface locations of spilled oil (figure 2). The forecasts were created using the General NOAA Operational Modeling Environment (GNOME) model that utilized currents, wind velocity and overflight data, as well as satellite imagery, to calculate an estimated oil slick trajectory together with an associated uncertainty [17]. The leeway value used in the response modeling varied from 0 to 4%, using a random uniform distribution that was reinitialized every 15 min [17]. Based on the trajectory forecasts for 5 and 7 May 2010, the predicted outer bounds of the oil slick were approximately 10 km behind the 50 pieces of oiled debris found offshore of Dauphin Island, AL (figure 2). Hence, if all other conditions were the same, the leeway for the honeycomb material was greater than that used by NOAA for the oil slick (0-4%). However, observations revealed that slicks did not appear until two days after the explosion, which would suggest that there was a smaller difference between the forecasts and the location of the debris due to a leeway closer to the upper estimate of 4%. We did not attempt to constrain a quantitative value for the leeway of the honeycomb material given its widely varying shapes, degrees of oiling, as well as the timing of oil surfacing. Nevertheless, we conclude that observations of floating, oiled debris may be interpreted as a harbinger of the oil trajectory, providing advanced warning to coastlines or other ecologically sensitive areas. Such information may be incorporated for oil spill emergency response, slick trajectory forecasting efforts, and quality control on past and future models.
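The hopane normalisation used above reduces to a simple ratio of ratios; a minimal sketch follows, with placeholder concentrations that are purely illustrative and not measured values from the study.

def percent_depletion(pah_sample, hopane_sample, pah_source, hopane_source):
    """Depletion of a PAH relative to the conservative biomarker hopane:
    100 * (1 - (PAH/hopane in the weathered sample) / (PAH/hopane in the source oil))."""
    return 100.0 * (1.0 - (pah_sample / hopane_sample) / (pah_source / hopane_source))

# Placeholder concentrations (e.g. in ug per g of oil), purely illustrative:
print(percent_depletion(pah_sample=2.0, hopane_sample=50.0,
                        pah_source=100.0, hopane_source=50.0))   # -> 98.0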
Summary After an analysis of the unknown honeycomb material and the subsequent oil found on samples, we are able to identify this material as pieces of the riser pipe buoyancy module of Deepwater Horizon coated with a layer of Macondo well oil. Within 15 days, there was significant weathering of the low molecular weight petroleum components (25% loss of petroleum hydrocarbons when compared with the Macondo well oil) with no significant biodegradation of the crude oil hydrocarbons. Large quantities of the highly buoyant honeycomb material were observed several kilometers outside of the uncertainty of NOAA's oil trajectory paths. These results provide insights into the fate of debris fields deriving from damaged marine materials and should be incorporated into emergency response efforts and forecasting of coastal impacts during future offshore oil spills.
3,252.2
2012-03-01T00:00:00.000
[ "Geology" ]
H\"older Continuity of Harmonic Quasiconformal Mappings We prove that for harmonic quasiconformal mappings $\alpha$-H\"older continuity on the boundary implies $\alpha$-H\"older continuity of the map itself. Our result holds for the class of uniformly perfect bounded domains; in fact, we can allow that a portion of the boundary is thin in the sense of capacity. The problem for general bounded domains remains open. Introduction The following theorem is the main result in [8]. The exponent $\beta$ is the best possible, as the example of a radial quasiconformal map $f(x) = |x|^{\alpha-1}x$, $0 < \alpha < 1$, of $B^n$ onto itself shows (see [11], p. 49). Also, the assumption of boundedness is essential. Indeed, one can consider $g(x) = |x|^{a}x$, $|x| \ge 1$, where $a > 0$. Then $g$ is quasiconformal in $D = \mathbb{R}^n \setminus B^n$ (see [11], p. 49), it is the identity on $\partial D$ and hence Lipschitz continuous on $\partial D$. However, $|g(te_1) - g(e_1)| \asymp t^{a+1}$ as $t \to \infty$, and therefore $g$ is not globally Lipschitz continuous on $D$. This paper deals with the following question, suggested by P. Koskela: is it possible to replace $\beta$ with $\alpha$ if we assume, in addition to quasiconformality, that $f$ is harmonic? In the special case $D = B^n$ this was proved, for arbitrary moduli of continuity $\omega(\delta)$, in [2]. Our main result is that the answer is positive if $\partial D$ is a uniformly perfect set (cf. [6]). In fact, we prove a more general result, including domains having a thin, in the sense of capacity, portion of the boundary. However, this generality is in a sense illusory, because any harmonic and quasiconformal (briefly hqc) mapping extends harmonically and quasiconformally across such a portion of the boundary. Nevertheless, in the case of smooth boundaries much better regularity up to the boundary can be deduced, see [7]; related results for harmonic functions were obtained in [1]. We denote by $B(x, r)$ and $S(x, r)$ the open ball, respectively sphere, in $\mathbb{R}^n$ with center $x$ and radius $r > 0$. We adopt the basic notation, terminology and definitions related to quasiconformal maps from [11]. A condenser is a pair $(K, U)$, where $K$ is a non-empty compact subset of an open set $U \subset \mathbb{R}^n$. The capacity of the condenser $(K, U)$ is defined as $\mathrm{cap}(K, U) = \inf \int_{\mathbb{R}^n} |\nabla u|^n \, dV$, where the infimum is taken over all continuous real-valued $u \in ACL^n(\mathbb{R}^n)$ such that $u(x) = 1$ for $x \in K$ and $u(x) = 0$ for $x \in \mathbb{R}^n \setminus U$. In fact, one can replace the $ACL^n$ condition with Lipschitz continuity in this definition. We note that, for a compact $K \subset \mathbb{R}^n$ and open bounded sets $U_1$ and $U_2$ containing $K$, we have: $\mathrm{cap}(K, U_1) = 0$ iff $\mathrm{cap}(K, U_2) = 0$; therefore the notion of a compact set of zero capacity is well defined (see [12], Remarks 7.13) and we can write $\mathrm{cap}(K) = 0$ in this situation. For the notion of the modulus $M(\Gamma)$ of a family $\Gamma$ of curves in $\mathbb{R}^n$ we refer to [11] and [12]. These two notions are related: by results of [5] and [13], the capacity of a condenser $(K,U)$ equals the modulus of the family of curves connecting $K$ to $\partial U$ within $U \setminus K$, where $\Delta(E, F; G)$ denotes the family of curves connecting $E$ to $F$ within $G$; see [11] or [12] for details. In addition to this notion of capacity, related to quasiconformal mappings, we need the Wiener capacity, related to harmonic functions. For a compact $K \subset \mathbb{R}^n$, $n \ge 3$, it is defined as the infimum of $\int_{\mathbb{R}^n} |\nabla u|^2 \, dV$ over all Lipschitz continuous compactly supported functions $u$ on $\mathbb{R}^n$ such that $u = 1$ on $K$. Let us note that every compact $K \subset \mathbb{R}^n$ which has capacity zero has Wiener capacity zero. Indeed, choose an open ball $B_R = B(0, R) \supset K$. Since $n \ge 2$ we have, by the H\"older inequality, $\int_{\mathbb{R}^n} |\nabla u|^2 \, dV \le |B_R|^{1-2/n} \left( \int_{\mathbb{R}^n} |\nabla u|^n \, dV \right)^{2/n}$ for any Lipschitz continuous $u$ vanishing outside $U$, and our claim follows immediately from the definitions.
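For readability, the H\"older step just used can be spelled out explicitly (with exponents $p = n/2$ and $q = n/(n-2)$ for $n \ge 3$; the case $n = 2$ is immediate), using that $u$ vanishes outside $U \subset B_R$:
\[
\int_{\mathbb{R}^n} |\nabla u|^2 \, dV
  = \int_{B_R} 1 \cdot |\nabla u|^2 \, dV
  \le \Big( \int_{B_R} 1 \, dV \Big)^{1-2/n}
      \Big( \int_{B_R} |\nabla u|^{n} \, dV \Big)^{2/n}
  = |B_R|^{1-2/n} \Big( \int_{\mathbb{R}^n} |\nabla u|^{n} \, dV \Big)^{2/n}.
\]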
A compact set $K \subset \mathbb{R}^n$, consisting of at least two points, is $\alpha$-uniformly perfect ($\alpha > 0$) if there is no ring $R$ separating $K$ (i.e. such that both components of $\mathbb{R}^n \setminus R$ intersect $K$) such that $\mathrm{mod}(R) > \alpha$. We say that a compact $K \subset \mathbb{R}^n$ is uniformly perfect if it is $\alpha$-uniformly perfect for some $\alpha > 0$. We denote the $\alpha$-dimensional Hausdorff measure of a set $F \subset \mathbb{R}^n$ by $\Lambda_\alpha(F)$. The main result In this section $D$ denotes a bounded domain in $\mathbb{R}^n$, $n \ge 3$. Let $\Gamma_0 = \{x \in \partial D : \mathrm{cap}(B(x, \epsilon) \cap \partial D) = 0 \text{ for some } \epsilon > 0\}$, and $\Gamma_1 = \partial D \setminus \Gamma_0$. Using this notation we can state our main result. If $\Gamma_0$ is empty we obtain the following Corollary 2.2. If $f : D \to \mathbb{R}^n$ is continuous on $\overline{D}$, H\"older continuous with exponent $\alpha$, $0 < \alpha \le 1$, on $\partial D$, harmonic and quasiconformal in $D$, and if $\partial D$ is uniformly perfect, then $f$ is H\"older continuous with exponent $\alpha$ on $D$. The first step in proving Theorem 2.1 is reduction to the case $\Gamma_0 = \emptyset$. In fact, we show that the existence of a hqc extension of $f$ across $\Gamma_0$ follows from well known results. Let $D' = D \cup \Gamma_0$. Then $D'$ is an open set in $\mathbb{R}^n$, $\Gamma_0$ is a closed subset of $D'$ and $\partial D' = \Gamma_1$. Since $\Gamma_0$ is a countable union of compact subsets $K_j$ of capacity zero and hence of Wiener capacity zero, we conclude that $\Gamma_0$ has Wiener capacity zero. Hence, by a classical result (see [4]), there is a (unique) extension $G : D' \to \mathbb{R}^n$ of $f$ which is harmonic in $D'$. Obviously, $F = G$ is a harmonic quasiconformal extension of $f$ to $D'$ which has the same quasiconformality constant as $f$. In effect, we reduced the proof of Theorem 2.1 to the proof of Corollary 2.2. We begin the proof of Corollary 2.2 with the following lemma. Proof. Fix $y \in D$ as above and $z \in \partial D$ such that $|y - z| = d \equiv r$. Clearly $\mathrm{diam}(\partial D) = \mathrm{diam}(D) > 2r$. Set $F_1 = B(z, r) \cap \partial D$, $F_2 = B(z, r) \cap B(y, d/2)$ and $F_3 = S(z, 2r)$. Let $\Gamma_{i,j} = \Delta(F_i, F_j; \mathbb{R}^n)$ for $i, j = 1, 2, 3$. By [6, Thm 4.1(3)] there exists a constant $a = a(E, n) > 0$ such that, while by standard estimates [11, 7.5] there exists $b = b(n) > 0$ such that. Next, by [12, Cor 5.41] there exists $m = m(E, n) > 0$ such that. In conclusion, from the above lemma, our assumption and Lemma 8 in [8], we conclude that there is a constant $M$, depending on $m$, $n$, $K(f)$, $C$ and $\alpha$ only, such that $|f(x) - f(y)| \le M |x - y|^{\alpha}$. However, an argument presented in [8] shows that the above estimate holds for $y \in D$, $x \in \partial D$ without any further conditions, but with a possibly different constant: (2.5) $|f(x) - f(y)| \le M' |x - y|^{\alpha}$, $y \in D$, $x \in \partial D$. The following lemma was proved in [3] for real valued functions, but the proof relies on the maximum principle, which holds also for vector valued harmonic functions; hence the lemma holds for harmonic mappings as well. for $0 < r \le r_0$.
Measurement of D-meson production versus multiplicity in p–Pb collisions at √sNN = 5.02 TeV

The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC is reported. D0, D+ and D∗+ mesons are reconstructed via their hadronic decay channels in the centre-of-mass rapidity range −0.96 < ycms < 0.04 and transverse momentum interval 1 < pT < 24 GeV/c. The multiplicity dependence of D-meson production is examined both by comparing yields in p–Pb collisions in different event classes, selected based on the multiplicity of produced particles or the zero-degree energy, with those in pp collisions scaled by the number of binary nucleon-nucleon collisions (nuclear modification factor), and by evaluating the per-event yields in p–Pb collisions in different multiplicity intervals normalised to the multiplicity-integrated ones (relative yields). The nuclear modification factors for D0, D+ and D∗+ are consistent with one another. The D-meson nuclear modification factors as a function of the zero-degree energy are consistent with unity within uncertainties in the measured pT regions and event classes. The relative D-meson yields, calculated in various pT intervals, increase as a function of the charged-particle multiplicity. The results are compared with the equivalent pp measurements at √s = 7 TeV as well as with EPOS 3 calculations.

Introduction

In high-energy hadronic collisions, heavy quarks (charm and beauty) are produced in hard parton scattering processes. Due to their large masses, their production cross sections can be calculated in the framework of perturbative Quantum Chromodynamics (pQCD) down to low transverse momenta. The differential cross section for heavy-flavour hadron production in nucleon-nucleon collisions can be calculated in the factorisation approach by the convolution of the parton densities in the incoming nucleons, the short-distance partonic cross section of heavy-quark production, and the fragmentation function that describes the transition of the heavy quark into a heavy-flavour hadron [1]. Thus, heavy-flavour production is sensitive to the gluon and the possible heavy-quark content in the nucleon and provides constraints on the parton distribution functions (PDFs) in the proton and in the nucleus [2,3]. Measurements of heavy-flavour hadron production in hadronic collisions provide tests of pQCD and constitute a crucial baseline for the study of heavy-flavour production in heavy-ion collisions [4,5]. A suppression of heavy-flavour yields is observed in heavy-ion collisions at high transverse momentum (p T ), and is interpreted as being due to the formation of a Quark-Gluon Plasma (QGP). Recently, the study of heavy-flavour production as a function of the multiplicity of charged particles produced in the collision has attracted growing interest. Such measurements probe the interplay between hard and soft mechanisms in particle production. At LHC energies, the multiplicity dependence of heavy-flavour production is likely to be affected by the larger amount of gluon radiation associated with short-distance production processes, as well as by the contribution of Multiple-Parton Interactions (MPI) [23][24][25]. It has also been argued that, due to the spatial distribution of partons in the transverse plane, the probability for MPI to occur in a pp collision increases towards smaller impact parameters [26][27][28]. 
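The factorisation picture described above can be written schematically as follows; this is a generic collinear-factorisation expression shown only to make the three ingredients explicit, and the notation is illustrative rather than taken from ref. [1].

```latex
% Schematic collinear factorisation for the production of a charm hadron H_c in a
% nucleon-nucleon collision: parton densities (f_i, f_j), hard partonic cross section,
% and fragmentation function D_{c -> H_c}.
\[
  \frac{\mathrm{d}\sigma^{H_c}}{\mathrm{d}p_{\mathrm{T}}}
  = \sum_{i,j} f_i(x_1, \mu_F^2)\,\otimes\, f_j(x_2, \mu_F^2)\,\otimes\,
    \frac{\mathrm{d}\hat{\sigma}_{ij \to c\bar{c}}}{\mathrm{d}p_{\mathrm{T}}}(x_1, x_2, \mu_F^2, \mu_R^2)
    \,\otimes\, D_{c \to H_c}(z, \mu_F^2) ,
\]
% where x_1, x_2 are the parton momentum fractions, mu_F and mu_R the factorisation and
% renormalisation scales, and z the fraction of the charm-quark momentum carried by the hadron.
```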
This effect might be further enhanced by quantum-mechanical fluctuations of gluon densities at small Bjorken-x [29]. The measurements of prompt D mesons, inclusive and non-prompt J/ψ in pp collisions at √ s = 7 TeV [30, 31], and of the three Υ states in pp collisions at √ s = 2.76 TeV [32], provide evidence for a similar increase of open and hidden heavy-flavour yields as a function of charged-particle multiplicity. These results suggest that the enhancement probably originates in short-distance production processes, and is not influenced by hadronisation mechanisms. The enhancement is quantitatively described by calculations including MPI contributions, namely percolation model estimates [33,34], the EPOS 3 event generator [35,36] and PYTHIA 8.157 calculations [37]. In proton-nucleus collisions, several so-called 'Cold Nuclear Matter' (CNM) effects occur due to the presence of a nucleus in the colliding system, and, possibly, to the large density of produced particles. These CNM effects can affect the production of heavy-flavour hadrons at all the stages of their formation. In particular, the PDFs of nucleons bound in nuclei are modified with respect to those of free nucleons. This modification of the PDFs in the nucleus can be described by phenomenological parameterisations (nuclear PDFs, or nPDFs) [38][39][40]. Alternatively, when the production process is dominated by gluons at low Bjorken-x, the nucleus can be described by the Colour-Glass Condensate (CGC) effective theory as a coherent and saturated gluonic system [41][42][43][44]. The kinematics of the partons in the initial state can be affected by multiple scatterings (transverse momentum broadening, or k T broadening) [45][46][47] or by gluon radiation (energy loss) [48] before the heavy-quark pair is produced. Gluon radiation may also occur after the heavy-quark pair JHEP08(2016)078 is formed [49]. Other measurements in p-Pb collisions at √ s NN = 5.02 TeV, e.g. those of angular correlations between charged particles [50][51][52][53], of ψ(2S) suppression [54] and of the relative yields of the three Υ states [32], indicate that final-state effects also play an important role. The measured charm production cross section in minimum-bias p-Pb collisions at √ s NN = 5.02 TeV [55] is consistent within uncertainties with that in pp collisions at the same energy scaled by the atomic mass number of the Pb nucleus. The nuclear modification factor was also found to be consistent with calculations considering EPS09 nPDFs [38], CGC, or transverse momentum broadening and initial-state energy loss. The influence of cold nuclear matter effects on multiplicity-integrated D-meson production in p-Pb collisions is smaller than the measurement uncertainties. Additional insight into CNM effects can be obtained by measuring the heavy-flavour hadron yields as a function of the multiplicity of charged particles produced in the p-Pb collision. The aim of these studies is to explore the dependence of heavy-flavour production on the collision geometry and on the density of final-state particles. Indeed, it is expected that the multiplicity of produced particles depends on the number of nucleons overlapping in the collision region, and therefore on the geometry of the collision (i.e. on the collision centrality). 
Most of the aforementioned models of CNM effects consider a dependence on the collision geometry, usually expressed through the impact parameter of the collision, the number of participant nucleons (N part ), or the number of nucleon-nucleon collisions (N coll ). In general, CNM effects are expected to be more pronounced in central collisions, i.e. those having a small impact parameter. Some of the parameterisations of the nPDFs have studied the influence of the local nucleon density [56][57][58][59]. The spatially dependent EPS09 and EKS98 nPDF sets, EPS09s and EKS98s, are formulated as a function of the nuclear thickness [56]. The leading twist nuclear shadowing calculation [60] assumes the Glauber-Gribov approach of the collision geometry and predicts the dependence of the nPDF on the collision impact parameter. The estimates of the initial-state k T broadening due to multiple soft collisions also consider a dependence on the collision impact parameter [46,47]. Initial-state parton energy loss is also expected to evolve with the collision geometry as a consequence of the different nuclear density, though detailed calculations including this effect are not yet available. Finally, if final-state effects were to affect heavy-flavour production in p-Pb collisions, their influence would also vary with the density of produced particles. In this paper, we report the p T -differential measurements of D 0 , D + and D * + production as a function of multiplicity in p-Pb collisions at √ s NN = 5.02 TeV. The experimental setup and the data sample are described in section 2. The determination of the multiplicity and the estimation of the collision centrality and of the number of nucleon-nucleon collisions are discussed in section 3. The D-meson reconstruction strategy is explained in section 4. The results are reported in the form of the D-meson nuclear modification factor in different centrality classes (section 5), and the relative D-meson yields as a function of the relative charged-particle multiplicity at central and backward rapidity (section 6). JHEP08(2016)078 2 Experimental apparatus and data sample The ALICE apparatus is described in detail in [61] and its performance in [62]. It is composed of a series of detectors in the central barrel for tracking and particle identification; the Muon Spectrometer in the forward direction for muon tracking and identification; and a further set of detectors at forward rapidity for triggering and event characterisation. The central barrel detectors are located inside a large solenoid magnet that provides a 0.5 T field parallel to the beam direction, which corresponds to the z-axis of the ALICE coordinate system. In this section, the detectors used for the D-meson analysis are briefly described. The Inner Tracking System (ITS), the Time Projection Chamber (TPC) and the Time Of Flight detector (TOF) allow the reconstruction and identification of charged particles in the central pseudorapidity region. The V0 detector, composed of two scintillator arrays located in the forward and backward pseudorapidity regions, is used for online event triggering and multiplicity determination. The Zero Degree Calorimeters (ZDC) are used for event selection and to estimate the collision centrality via the zero-degree energy. The ITS is composed of six cylindrical layers of silicon detectors, located at radii between 3.9 cm (about 1 cm from the beam vacuum tube) and 43.0 cm. 
The two innermost layers, which respectively cover |η| < 2.0 and |η| < 1.4, comprise the Silicon Pixel Detectors (SPD); the two intermediate layers, within |η| < 0.9, consist of Silicon Drift Detectors (SDD); and the two outer layers, also covering |η| < 0.9, consist of double-sided Silicon Strip Detectors (SSD). The low material budget, high spatial resolution, and position of the detector setup surrounding the beam vacuum tube and close to the interaction point allow it to provide a measurement of the charged-particle impact parameter in the transverse plane (d 0 ), i.e. the distance of closest approach between the track and the primary vertex along rφ, with a resolution better than 75 µm for transverse momenta p T > 1 GeV/c [63]. The TPC is a large cylindrical drift detector, extending from 85 cm to 247 cm in the radial direction and covering the range −250 < z < +250 cm along the beam axis [64]. It provides charged-particle trajectory reconstruction with up to 159 space points per track in the pseudorapidity range |η| < 0.9 and in the full azimuth. The primary interaction vertex position and covariance matrix are determined from tracks reconstructed from hits in the TPC and the ITS via a χ 2 analytic minimisation method. The TOF detector is equipped with Multi-gap Resistive Plate Chambers (MRPCs) [62]. It is placed at radii between 377 cm and 399 cm, and has the same pseudorapidity and azimuthal coverage as the TPC. The TOF measures the flight times of charged particles from the interaction point to the detector with an overall resolution of about 85 ps. For events with the 20% lowest multiplicities, the resolution decreases to about 120 ps due to a worse start-time (collision-time) resolution. The start-time of the event is determined by combining the time estimated using the particle arrival times at the TOF and the time measured by the T0 detector, an array of Cherenkov counters located at +350 cm and −70 cm along the beamline. Particle identification (PID) is performed by comparing the measurement of the specific energy deposition dE/dx in the TPC and the time-of-flight information from the TOF with the respective expected values for each mass hypothesis. The V0 detector consists of two arrays of scintillator tiles covering the pseudorapidity regions −3.7 < η < −1.7 (V0C) and 2.8 < η < 5.1 (V0A) [65]. The data sample analysed -4 -JHEP08(2016)078 in this paper was collected with a minimum-bias interaction trigger requiring at least one hit in both V0A and V0C counters coincident with the arrival time of the proton and lead bunches. The ZDC is composed of two sets of neutron (ZNA and ZNC) and proton (ZPA and ZPC) calorimeters positioned on either side of the interaction point at z = ±112.5 m. Contamination from beam-background interactions was removed via offline selections based on the timing information provided by the V0 and the ZNA. The signals registered by the SPD and V0 detectors were used to determine the event charged-particle multiplicity; the SPD, V0 and ZDC detectors were also exploited to classify the events in centrality classes, as will be described in section 3. The data sample used in this paper was recorded in January 2013, during the p-Pb LHC run. Protons with an energy of 4 TeV were collided with Pb ions with an energy of 1.58 TeV per nucleon, resulting in collisions at a centre-of-mass energy per nucleon pair, √ s NN , of 5.02 TeV. 
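The particle-identification comparison described earlier in this section (measured TPC dE/dx and TOF time-of-flight versus the expectations for each mass hypothesis) is commonly implemented as an "n-sigma" compatibility requirement. The sketch below is only a generic illustration of that logic, not the actual ALICE selection; the 3-sigma threshold and the abstract response inputs are assumptions.

```python
# Generic n-sigma particle-identification compatibility check (illustrative only;
# the detector-response parameterisations and the cut value are assumptions).

def n_sigma(measured, expected, resolution):
    """Deviation of a measured signal from the expectation for a given mass hypothesis."""
    return (measured - expected) / resolution

def is_compatible_kaon(dedx_meas, dedx_exp_K, dedx_res,
                       tof_meas, tof_exp_K, tof_res,
                       n_cut=3.0):
    """Accept a track as a kaon candidate if both TPC and TOF agree within n_cut sigma.
    Tracks without a TOF signal would simply skip the TOF requirement (not shown here)."""
    ok_tpc = abs(n_sigma(dedx_meas, dedx_exp_K, dedx_res)) < n_cut
    ok_tof = abs(n_sigma(tof_meas, tof_exp_K, tof_res)) < n_cut
    return ok_tpc and ok_tof
```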
With this beam configuration, the centre-of-mass system moves with a rapidity of ∆y cms = 0.465 in the direction of the proton beam, due to the different energies per nucleon of the proton and the lead beams. In the case of the D-meson analyses presented here, performed in the laboratory reference interval |y lab | < 0.5, this leads to a shifted centre-of-mass rapidity coverage of −0.96 < y cms < 0.04. In the following, we will use the notation η and y lab to refer to the pseudorapidity and rapidity values in the laboratory reference frame, and η cms and y cms for the values evaluated in the centre-of-mass reference frame. A total of 10 8 minimum-bias triggered events, corresponding to an integrated luminosity of L int = 48.6 ± 1.6 µb −1 , passed the selection criteria and were analysed. Multiplicity determination The production of D mesons in p-Pb collisions has been studied as a function of chargedparticle multiplicity using two different observables. One observable is the p T -differential nuclear modification factor, which is defined as the ratio of the p T -differential yields measured in p-Pb collisions in centrality intervals to those in pp collisions, scaled by the number of binary nucleon-nucleon collisions. The centrality intervals were defined using three different estimators based on the multiplicity in the SPD and V0A detectors and the energy deposited in the zero-degree neutron calorimeter in the Pb-going side (ZNA). The procedure used to determine the number of binary nucleonnucleon collisions for each event class is described in section 3.1 and [66]. The other observable, referred to as the relative yield, is defined as the ratio of the per-event D-meson yields in p-Pb collisions in different multiplicity intervals normalised to the multiplicity-integrated yields. Details on the evaluation of the charged-particle multiplicity are discussed in section 3.2. In this analysis, the values of multiplicity measured in two different pseudorapidity intervals, namely at mid-rapidity with the SPD and at large rapidity in the Pb-going direction with the V0A, were considered. Centrality estimators and T pPb determination A centrality-dependent measurement of the nuclear modification factor requires the p-Pb data sample to be sliced into classes according to an experimental observable related to -5 -JHEP08(2016)078 the collision centrality, as well as a determination of the average nuclear overlap function T pPb , which is proportional to the number of nucleon-nucleon collisions N coll , for each centrality class. The minimum-bias p-Pb data sample was divided into four centrality classes by exploiting the information from: (i) V0A, the amplitude of the signal measured by the V0 scintillator array located in the Pb-going side, covering 2.8 < η < 5.1, which is proportional to the number of charged particles produced in this pseudorapidity interval; (ii) CL1, the number of clusters in the outer layer of the SPD, covering |η| < 1.4, which is proportional to the number of charged particles at mid-rapidity; and (iii) ZNA, the energy deposited in the Zero Degree Neutron Calorimeter positioned in the Pb-going side by the slow nucleons produced in the interaction by nuclear de-excitation processes, or knocked out by wounded nucleons. The multiplicity of these neutrons is expected to grow monotonically with the number of binary collisions, N coll . Centrality classes were defined as percentiles of the visible cross section, which was measured to be (2.09 ± 0.07) b [67]. 
For the centrality classes defined using the CL1 and V0A multiplicities, a Glauber Monte Carlo was used to calculate the relevant geometrical quantities, namely the average numbers of participant nucleons N Glauber part and of binary collisions N Glauber coll , and the average nuclear overlap function T Glauber pPb [66]. For the case where the ZNA information was used, the values of N part , N coll and T pPb were obtained using the so-called hybrid method [66]. In this approach, the determination of T pPb in a given ZNA-energy class relies on the assumption that the charged-particle multiplicity measured at mid-rapidity (−1 < η cms < 0) scales with the number of participant nucleons, N part :

⟨T pPb ⟩ = (⟨N part ⟩ − 1)/σ NN , with ⟨N part ⟩ = N MB part × ⟨dN ch /dη⟩ class / ⟨dN ch /dη⟩ MB , (3.1)

where N MB part = 7.9 is the average number of participants in minimum-bias collisions and σ NN = (70 ± 5) mb is the interpolated inelastic nucleon-nucleon cross section at √ s NN = 5.02 TeV [66]. The values of T pPb obtained with the three estimators in the four multiplicity (zero-degree energy) classes used for the analysis are reported in table 1. It was demonstrated by the studies of charged-particle production reported in [66] that, when centrality classes are defined in p-Pb collisions, some biases are present. Firstly, there is a multiplicity selection bias due to the large multiplicity fluctuations for p-Pb interactions at a given impact parameter, which are comparable in magnitude to the full dynamic range of the minimum-bias multiplicity distribution. In addition, there is a jet-veto bias due to the contribution to the overall multiplicity from particles arising from the fragmentation of partons produced in hard-scattering processes. This causes low- (high-) multiplicity p-Pb collisions to correspond to a lower (higher) number of hard scatterings per nucleon-nucleon collision. Furthermore, a purely geometrical bias was suspected to affect peripheral collisions for all centrality estimators, due to the fact that the mean impact parameter between the proton and each nucleon of the Pb nucleus, calculated from a Monte Carlo Glauber simulation, rises significantly for N part < 6, thus reducing the average number of multi-parton interactions for peripheral collisions.

Table 1. T pPb values in p-Pb collisions at √ s NN = 5.02 TeV obtained with a Glauber-model based approach for V0A and CL1, and from the hybrid method for ZNA, as described in [66].

These biases cause the nuclear modification factor of charged particles to differ from unity in the centrality classes even in the absence of nuclear effects. These biases decrease with increasing rapidity separation between the centrality estimator and the region where the nuclear modification factor is measured. A strong selection bias is observed for the CL1 estimator, due to the full overlap with the tracking region; it is reduced with the V0A estimator. By contrast, the selection based on the energy deposited in the ZNA is expected to be free from the biases related to the event selection, and is only affected by the geometrical bias. For these reasons, the results based on the ZNA selection, which is the least biased [66], provide insight into possible centrality-dependent nuclear effects on charm production in p-Pb collisions. 
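As a quick numerical illustration of the hybrid method in eq. (3.1), the short sketch below propagates an assumed class-to-minimum-bias multiplicity ratio to ⟨N part ⟩ and ⟨T pPb ⟩ using the minimum-bias inputs quoted above (N MB part = 7.9, σ NN = 70 mb); the example ratios are illustrative values, not the measured ones.

```python
# Hybrid-method estimate of <T_pPb> for a ZNA-selected event class (illustrative sketch).
# The class-to-minimum-bias multiplicity ratios below are made-up example values;
# the minimum-bias inputs are those quoted in the text.

N_PART_MB = 7.9      # average number of participants in minimum-bias p-Pb collisions
SIGMA_NN = 70.0      # inelastic nucleon-nucleon cross section at 5.02 TeV, in mb

def t_pPb_hybrid(mult_ratio):
    """<T_pPb> (in mb^-1) for a class whose mid-rapidity dN_ch/deta is mult_ratio times the MB average."""
    n_part = N_PART_MB * mult_ratio      # eq. (3.1): N_part scales with dN_ch/deta
    n_coll = n_part - 1.0                # in p-Pb collisions, N_coll = N_part - 1
    return n_coll / SIGMA_NN

if __name__ == "__main__":
    for ratio in (0.4, 1.0, 2.0):        # peripheral-like, average, central-like (examples)
        print(f"mult/MB = {ratio:.1f} -> <T_pPb> = {t_pPb_hybrid(ratio):.3f} mb^-1")
```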
Moreover, the measurements of the D-meson nuclear modification factor in centrality intervals defined with the three estimators described above offer the possibility to study these biases based on heavy-flavour production, which, due to the large mass of the charm quarks, is expected to scale with the number of binary collisions over the whole p T range, provided that cold nuclear matter effects are negligible. This is in contrast to the charged-particle yield, where a scaling with N coll is expected to occur only in the high-p T region. Relative event multiplicity determination The charged-particle multiplicity, N ch , was estimated at mid-rapidity by measuring the number of tracklets, N tracklets , reconstructed in the SPD. A tracklet is defined as a track segment that joins a pair of space points on the two SPD layers and is aligned with the reconstructed primary vertex. N tracklets was counted within |η| < 1.0. The pseudorapidity acceptance of the SPD depends on the position of the interaction vertex along the beam line z vtx , both due to the asymmetry of the collision system and the limited coverage of the detector. In addition, the overall SPD acceptance varies as a function of time due to a varying number of active channels. A data-driven correction was applied to the N tracklets distributions on an event-by-event basis to account for these two effects. This was done by renormalising the N tracklets distributions to the overall minimum with a Poissonian smearing to account for the fluctuations. Multiplicity classes were then defined based on the percentiles of analysed events in each N tracklets range. JHEP08(2016)078 The conversion of N tracklets to N ch was performed using minimum-bias Monte Carlo simulations. The distribution of the measured N tracklets as a function of the number of generated "physical primaries" (N ch ) in the simulation was considered for this purpose. Physical primaries are defined as prompt particles produced in the collision and their decay products, excluding those from weak decays of strange particles. The proportionality factor was evaluated from a linear fit to the distribution, and was then applied to the mean N tracklets in each interval to give the estimated N ch values. These values were then divided by the width of the considered η range, ∆η = 2, to give an estimated dN ch /dη. The uncertainty of the N tracklets to N ch conversion was estimated by testing its deviation from linearity. A linear fit to the distribution was performed in each multiplicity interval to evaluate the possible changing slope of the distribution between intervals. From these fits, a series of scaling factors were obtained and compared to the multiplicity-integrated one, resulting in a 5% uncertainty. The results are given as a function of the relative charged-particle multiplicity, (dN ch /dη)/ dN ch /dη , where dN ch /dη = 17.64 ± 0.01 (stat.) ± 0.68 (syst.) was measured by ALICE for inelastic p-Pb collisions at √ s NN = 5.02 TeV with at least one charged particle within |η| < 1.0 [68]. The N tracklets ranges considered in this analysis, and the corresponding relative multiplicity values, are given in table 2. The production of D mesons was also studied as a function of charged-particle multiplicity in the region 2.8 < η < 5.1, as measured with the signal amplitude in the V0A detector, N V0A , reported in units of the minimum-ionising-particle charge. 
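The N tracklets -to-N ch conversion described above (mean tracklet count per interval, proportionality factor from a linear fit in simulation, division by Δη = 2, and normalisation to the inelastic average ⟨dN ch /dη⟩ = 17.64) amounts to the simple arithmetic sketched below; the proportionality factor and the mean tracklet count used in the example are placeholders, not the values obtained in the analysis.

```python
# Relative charged-particle multiplicity from the mean SPD tracklet count in an interval.
# Assumptions: 'alpha' (N_ch per tracklet, from the linear fit to simulation) and
# 'mean_tracklets' are illustrative placeholder numbers.

DELTA_ETA = 2.0                 # width of the |eta| < 1.0 interval
MEAN_DNCHDETA_INEL = 17.64      # ALICE measurement for inelastic p-Pb at 5.02 TeV [68]

def relative_multiplicity(mean_tracklets, alpha):
    """(dN_ch/deta) / <dN_ch/deta> for a multiplicity interval."""
    n_ch = alpha * mean_tracklets          # tracklets -> physical primaries
    dnch_deta = n_ch / DELTA_ETA           # normalise to the eta range
    return dnch_deta / MEAN_DNCHDETA_INEL

print(relative_multiplicity(mean_tracklets=60.0, alpha=1.2))   # example numbers only
```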
This estimator allows the multiplicity and the D-meson yields to be evaluated in two different pseudorapidity intervals (backward and central η), avoiding possible auto-correlations. The average N V0A depends on z vtx , due to the varying distance between the primary vertex and the detector array. This effect was corrected with the same method used for the N tracklets case, leading to an overall average N V0A of 82.7. In this case, the results are considered as a function of the V0A multiplicity relative to the mean multiplicity in the same rapidity region, rather than performing a conversion to dN ch /dη. The N V0A intervals considered, and the corresponding relative multiplicity intervals, are reported in table 3. It should be noted that the analyses performed as a function of centrality examine the events in samples populated by 20% of the analysed events (40% for the most peripheral events, see table 1), whereas those performed as a function of charged-particle multiplicity explore events from low to extremely high multiplicities, corresponding to about 60% and 5% of the analysed events, respectively (see tables 2 and 3). For the latter analyses, the event classes were defined to study the D-meson yield at extreme multiplicities. D meson reconstruction The D 0 , D + , and D * + mesons were reconstructed via their hadronic decay channels D 0 → K − π + (with a branching ratio, BR, of 3.88±0.05%), D + → K − π + π + (BR of 9.13±0.19%), and D * + → D 0 π + (BR of 67.7 ± 0.05%) followed by D 0 → K − π + , and their corresponding charge conjugates [69]. The D 0 and D + weak decays, with mean proper decay lengths (cτ ) of about 123 and 312 µm, respectively, were selected from reconstructed secondary Table 2. Summary of the multiplicity intervals at central rapidity used for the analyses. The number of reconstructed tracklets N tracklets , the average charged-particle multiplicity dN ch /dη (uncertainty of 5% not quoted), and the relative charged-particle multiplicity (dN ch /dη)/ dN ch /dη (uncertainty of 6.3% not quoted) are listed (see section 6.1 for the uncertainties description). The number of events analysed for the D 0 -meson analysis is also reported for each multiplicity range. Table 3. Summary of the multiplicity intervals at backward rapidity used for the analyses. The V0A signal N V0A intervals and the relative multiplicity (N V0A )/ N V0A (uncertainty of 5% not quoted) are listed (see section 6.1 for the uncertainties description). The number of events analysed for the D 0 -meson analysis is also reported for each multiplicity range. vertices separated by a few hundred microns from the interaction point. The D * + meson decays strongly at the primary vertex, and the decay topology of the produced D 0 was reconstructed along with a soft pion originating at the primary vertex. Events were selected by requiring a primary vertex within ±10 cm from the centre of the detector along the beamline. An algorithm to detect multiple interaction vertices was used to reduce the pile-up contribution. D 0 and D + candidates were defined using pairs or triplets of tracks with the proper charge sign combination, within the fiducial acceptance |η| < 0.8 and with transverse momentum p T > 0.3 GeV/c. Only good quality tracks were considered in the combinatorics by requiring selection criteria as described in [19,20,55]. The selection of tracks with |η| < 0.8 reduces the D-meson acceptance, which drops steeply to zero for |y lab | > 0.5 at low p T and for |y lab | > 0.8 at p T > 5 GeV/c. 
Therefore, a p T -dependent fiducial acceptance region was defined, as reported in [19,20,55]. The selection strategy of the D-meson decay topology was based on the displacement of the decay tracks from the interaction vertex, the separation between the secondary and primary vertices, and the pointing angle, defined as the angle between the reconstructed -9 -JHEP08(2016)078 D-meson momentum and its flight line (the vector between the primary and the secondary vertices). The cuts on the selection variables were chosen in order to obtain a large statistical significance of the D-meson signals, as well as an as large as possible selection efficiency. Therefore, the cut values depend on the D-meson p T and species. In the case of the analysis of the relative yields as a function of multiplicity, the same selections were used in all multiplicity intervals in order to minimise the effect of the efficiency corrections on the ratio of the yields in the multiplicity intervals to the multiplicity-integrated ones. On the other hand, for the analysis of the nuclear modification factor in different centrality classes, the cut values were optimised in each centrality class. Particle identification criteria were applied on the decay tracks, based on the TPC and TOF detector responses, in order to obtain a further reduction of the combinatorial background as explained in [19,20,55]. The raw D-meson yields, both multiplicity-integrated and in each multiplicity or centrality class, were extracted in the considered p T intervals by means of a fit to the invariant mass (M ) distributions of the selected candidates (for the D * + meson the mass difference distributions ∆M = M (Kππ) − M (Kπ) were used). The fit function is the sum of a Gaussian to describe the signal and a function describing the background shape, which is an exponential for D 0 and D + and a threshold function multiplied by an exponential where M π is the pion mass and a and b are free parameters) for the D * + . The centroids and the widths of the Gaussian functions were found to be in agreement with the world average D-meson masses and the values obtained in simulations, respectively, in all multiplicity, centrality and p T intervals. In particular, the widths of the Gaussian functions are independent of multiplicity (or centrality) and increase with increasing D-meson p T . In the relative yield analysis, in order to reduce the effect of the statistical fluctuations, the fits were performed by fixing the Gaussian centroids to the world average D-meson masses, and the widths to the values obtained from a fit to the invariant mass distribution in minimum-bias events, where the signal statistical significance is larger. Figure 1 shows the D 0 and D + invariant mass, and D * + mass difference distributions in the 2 < p T < 4 GeV/c, 4 < p T < 6 GeV/c, 6 < p T < 8 GeV/c intervals, respectively, for the 0-20% and 60-100% centrality classes defined with the ZNA estimator. The fits to the invariant mass distributions were repeated under different conditions and the raw yields were extracted by using alternative methods in order to determine the systematic uncertainties related to the extraction of the raw D-meson counts. The fits were performed by varying the invariant mass ranges and bin widths of the histograms, and considering different functions to describe the background, namely parabolic or linear functions. 
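The fit model described above is a Gaussian signal on top of an exponential background for D0 and D+, and, for the D∗+ mass difference, a Gaussian on top of a threshold function multiplied by an exponential. A minimal sketch of such fit functions is given below; the explicit threshold form used here, a·sqrt(ΔM − Mπ)·exp(b(ΔM − Mπ)), is a common choice and is an assumption, since the text does not spell it out.

```python
# Minimal sketch of the invariant-mass fit models (illustrative; the exact D*+ threshold
# form and any parameter values are assumptions).
import numpy as np

M_PI = 0.13957  # charged-pion mass in GeV/c^2

def gaussian(x, norm, mean, sigma):
    return norm * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def d0_dplus_model(m, s_norm, mean, sigma, b_norm, slope):
    """Gaussian signal + exponential background for the D0 / D+ invariant mass."""
    return gaussian(m, s_norm, mean, sigma) + b_norm * np.exp(slope * m)

def dstar_model(dm, s_norm, mean, sigma, a, b):
    """Gaussian signal + threshold*exponential background for the D*+ mass difference."""
    threshold = np.where(dm > M_PI,
                         a * np.sqrt(np.clip(dm - M_PI, 0.0, None)) * np.exp(b * (dm - M_PI)),
                         0.0)
    return gaussian(dm, s_norm, mean, sigma) + threshold
```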
The raw yields were also obtained by counting the entries of the histograms within a 3σ interval centred on the peak position, after the subtraction of the background estimated from a fit to the side bands, far away from the D-meson peaks. The raw counts of D mesons extracted in each p T and multiplicity interval were corrected for the acceptance and the reconstruction and selection efficiency. The correction factor for each D-meson species was obtained by using Monte Carlo simulations. Events containing a cc or bb pair were generated by using the PYTHIA v6.4.21 event generator [70] with the Perugia-0 tune [71] and adding an underlying event generated with -10 -JHEP08(2016)078 candidates and of the mass difference for D * + candidates (right column) in two centrality classes defined with the ZNA estimator: 0-20% and 60-100%. The red lines in each plot represent the fit to the background, and the blue lines represent the sum of signal and background. One p T interval is shown for each meson species: 2 < p T < 4 GeV/c for D 0 , 4 < p T < 6 GeV/c for D + , and 6 < p T < 8 GeV/c for D * + . HIJING v.1.36 [72]. Detailed descriptions of the detector response, the geometry of the apparatus and the conditions of the luminous region were included in the simulation. The generated D-meson p T distribution was tuned in order to reproduce the FONLL [16] spectrum at √ s = 5.02 TeV. The reconstruction and selection efficiency depends on the multiplicity of charged particles produced in the collision, since the primary vertex resolution and the resolution on the topological selection variables improve at high multiplicity. The generated events were weighted on the basis of their charged-particle multiplicity in order to match the multiplicity distribution observed in the data. The reconstruction and selection efficiency depends on the D-meson species and on p T . For prompt D 0 mesons it is about 1-2% in the 1 < p T < 2 GeV/c interval, where the selection criteria are more stringent due to the higher combinatorial background, and it increases to 20% in 12 < p T < 24 GeV/c. The efficiency for D mesons from B decays is higher because the decay vertices of feed-down D mesons are more displaced from the primary vertex and they are more efficiently selected by the topological selections. The efficiencies are slightly larger at high multiplicity, by about 4-10%. The D-meson raw yields have two components: the prompt D-meson contribution (produced in the charm quark fragmentation, either directly or through strong decays of excited open charm states) and the feed-down contribution originating from B-meson decays. The yield of D mesons from B decays was subtracted from the raw counts by applying a correction factor, f prompt , which represents the fraction of promptly produced -11 -JHEP08(2016)078 D mesons. The f prompt factor was evaluated using the B-hadron production cross section obtained from the FONLL pQCD calculation [16][17][18], the B → D + X kinematics from the EvtGen package [73], and the acceptance times efficiency for D mesons from B decays obtained from the Monte Carlo simulations [19]. The value of f prompt depends on the nuclear modification factor, R feed-down pPb , of the feed-down D mesons. This quantity is related to the nuclear modification of beauty production, which has not been measured in the p T interval of these analyses. 
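The role of the assumed feed-down nuclear modification factor can be illustrated with a simplified version of this correction: if the expected raw feed-down contribution scales linearly with the assumed R feed-down pPb, then f prompt varies as sketched below. All numbers are placeholders, and the actual f prompt evaluation uses the FONLL cross sections, the EvtGen decay kinematics and the full efficiencies as described above.

```python
# Simplified illustration of how the prompt fraction depends on the assumed nuclear
# modification of feed-down D mesons (placeholder numbers; not the analysis procedure).

def f_prompt(expected_feeddown_raw, r_feeddown_pPb, n_raw_total):
    """expected_feeddown_raw: feed-down raw yield expected for R_feeddown_pPb = 1."""
    n_feeddown = expected_feeddown_raw * r_feeddown_pPb
    return 1.0 - n_feeddown / n_raw_total

for r in (0.9, 1.0, 1.3):   # illustrative values of R_feeddown relative to R_prompt
    print(r, f_prompt(expected_feeddown_raw=800.0, r_feeddown_pPb=r, n_raw_total=10_000.0))
```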
Therefore, the nuclear modification factor of feed-down D mesons was assumed to be equal to that of prompt D mesons, R feed-down pPb = R prompt pPb , and a systematic uncertainty was assigned considering the variation 0.9 < R feed-down pPb /R prompt pPb < 1.3. These assumptions were based on the study of the possible modification of B-hadron production due to the modification of the PDFs in the nucleus through either CGC or pQCD calculations with the EPS09 parameterisation of the nPDFs [38,41].

Nuclear modification factor as a function of centrality

The nuclear modification factor of prompt D 0 , D + and D * + mesons was studied as a function of p T using the three different centrality estimators introduced in section 3.1, based on different measurements of the centrality in terms of multiplicity (CL1 and V0A estimators) or zero-degree energy (ZNA estimator). For each estimator, the analysis of D-meson production was carried out in four event classes, and the nuclear modification factor was calculated as

Q pPb (p T ) = (dN D /dp T ) cent pPb / [ ⟨T pPb ⟩ · (dσ D /dp T ) pp ] , (5.1)

where (dN D /dp T ) cent pPb is the yield of prompt D mesons in p-Pb collisions in a given centrality class, (dσ D /dp T ) pp is the cross section of prompt D mesons in pp collisions at the same √ s, and ⟨T pPb ⟩ is the average nuclear overlap function in the given centrality class, which was estimated with the Glauber-model approach for the CL1 and V0A estimators (T Glauber pPb ) and with the hybrid method for the ZNA estimator (T mult pPb ) (see section 3.1). In contrast to the multiplicity-integrated R pPb = (dσ D /dp T ) pPb / [A · (dσ D /dp T ) pp ], Q pPb is influenced by potential biases in the centrality estimation that are not related to nuclear effects, as explained in section 3.1. Hence, Q pPb may be different from unity even in the absence of nuclear effects, in particular if measured with respect to the CL1 and V0A estimators. Complementary to this, the measurement of Q pPb with the ZNA estimator allows the least biased estimation of the possible centrality-dependent modification of the p T -differential yields in p-Pb collisions with respect to the binary-scaled yields in pp collisions. The cross sections of prompt D-meson production in pp collisions at √ s = 5.02 TeV were obtained by a pQCD-based energy scaling of the p T -differential cross sections measured at √ s = 7 TeV, with the scaling factor evaluated as the ratio of the FONLL [16][17][18] calculations at 5.02 and 7 TeV [74]. The scaling procedure was validated by comparing the D-meson p T -differential cross sections at 2.76 TeV with the 7 TeV data scaled down to 2.76 TeV [20]. In the case of D 0 mesons, some refinements were considered for the lowest and highest p T intervals. For 1 < p T < 2 GeV/c, where the D 0 cross section was measured at both 7 and 2.76 TeV [4,19], both measurements were scaled to 5.02 TeV and averaged using the inverse square of their relative statistical and systematic uncertainties as weights. Since the ALICE measurements of the D 0 cross section in pp data are limited to p T < 16 GeV/c, the estimate for 16 < p T < 24 GeV/c was determined by extrapolating the 7 TeV cross section to higher p T using the FONLL p T -differential spectrum normalised to the measurement in 5 < p T < 16 GeV/c, and scaling it down to 5.02 TeV. The raw numbers of D mesons in each p T and centrality interval were extracted and corrected by the acceptance and efficiency obtained from Monte Carlo simulations, as described in section 4. 
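A minimal numerical sketch of eq. (5.1) is given below; the yield, overlap function and pp reference values are placeholders chosen only to show the arithmetic and how the normalisation enters.

```python
# Q_pPb in a centrality class from the p-Pb yield, the pp reference cross section and <T_pPb>
# (illustrative placeholder numbers; see eq. (5.1)).

def q_pPb(dN_dpt_pPb_cent, t_pPb_mb_inv, dsigma_dpt_pp_mb):
    """dN/dpT per event in the class, <T_pPb> in mb^-1, pp cross section in mb/(GeV/c)."""
    return dN_dpt_pPb_cent / (t_pPb_mb_inv * dsigma_dpt_pp_mb)

# Example: yield 5e-3 per event per (GeV/c), <T_pPb> = 0.16 mb^-1,
# pp reference 0.03 mb/(GeV/c)  ->  Q_pPb of about 1.04
print(q_pPb(5e-3, 0.16, 0.03))
```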
The feed-down from B-hadron decays was subtracted from the extracted yields by calculating f prompt in each centrality class independently, as described in section 4. Systematic uncertainties The systematic uncertainties (yield extraction, reconstruction and selection efficiency determination and feed-down subtraction) do not depend on the estimator used to define the centrality classes. A mild dependence of the uncertainty on the multiplicity that populates the different centrality classes was observed, resulting in slightly larger uncertainties in the event class with the lowest multiplicity. The systematic uncertainty of the yield extraction procedure was estimated by varying the fit conditions and by using the bin counting method as introduced in section 4. It is about 3-4% at intermediate p T (2 < p T < 6 GeV/c) and increases to 8-10% at p T < 2 GeV/c and p T > 6 GeV/c. For the D 0 meson, the yield extraction systematic uncertainty includes the contribution to the raw yield of signal candidates reconstructed by assigning the wrong mass to the final state hadrons (about 3-4% for all p T intervals) [55]. The influence of the tracking efficiency was estimated by varying the track selection criteria. The corresponding uncertainty was found to be about 3% per track, resulting in a total uncertainty of 6% (9%) for a two-(three-)particle decay. The uncertainty due to the Dmeson candidate selection criteria was evaluated by varying the topological selections used. It was estimated to be 10% for the interval 1 < p T < 2 GeV/c and 5% for p T > 2 GeV/c. The effect of the generated D-meson p T shape used to compute the efficiency was estimated by comparing the efficiency values obtained with the PYTHIA and the FONLL p T spectra. A systematic uncertainty of 2-3% was applied only in the interval 1 < p T < 2 GeV/c due to this. The uncertainty due to the multiplicity dependence of the reconstruction and selection efficiency was evaluated changing the weight functions used to reproduce the measured charged-particle multiplicity in the simulations. The multiplicity weights were determined by the ratio of the distribution of the number of tracklets within |η| < 1 in data and Monte Carlo. The weights were computed for: (i) all events selected in the analysis, (ii) events with a D-meson candidate within approximately ±10σ of the invariant mass peak, and (iii) events with a D-meson candidate in the ±3σ invariant mass region. A deviation of about 10% is observed for D mesons at low p T . For high-p T D mesons (p T > 12 GeV/c), the weights have a smaller effect on the efficiency determination, introducing a difference of only 4%. JHEP08(2016)078 The analysis was repeated without applying the particle identification selections to the D-meson decay hadrons. The corrected yields were consistent, within statistical fluctuations, with those calculated considering particle identification selections. Therefore, no corresponding uncertainty was assigned. The systematic uncertainty due to the subtraction of feed-down D mesons from B decays was estimated by considering the FONLL uncertainties on the normalisation and factorisation scales and using a second subtraction method based on the ratio of FONLL calculations for D-and B-meson cross sections [19]. The magnitude of this systematic uncertainty depends on the meson species and on the p T interval considered in the measurement, since it is related to the topological selections applied in each analysis. 
As explained in section 4, a variation of the feed-down D-meson nuclear modification factor was also taken into account as part of the systematics. The quadratic sum of the two contributions to the Q pPb was found to range from a few percent up to 30%. The denominator of the Q pPb has an uncertainty on the T pPb , which is reported in table 1, and an uncertainty on the pp reference. The latter has a contribution coming from the 7 TeV measurement (ranging from 15% up to 25%) and one from the scaling factor ranging from +17% −4% at p T = 1 GeV/c to ±3% for p T > 8 GeV/c. The uncertainty on the energy scaling factor was estimated by varying the calculation parameters as described in [74]. A larger uncertainty for D 0 in 16 < p T < 24 GeV/c was quantified due to the extrapolation procedure explained above; in that case the uncertainty is +17.5% −4% . The global Q pPb uncertainties were determined by adding the pp and p-Pb uncertainties in quadrature, except for the branching ratio uncertainty, which cancels out in the ratio, and the feed-down contribution, which partially cancels out. Results The nuclear modification factors of D 0 , D + and D * + mesons were calculated according to eq. (5.1) in four centrality classes (0-20%, 20-40%, 40-60% and 60-100%) defined with the ZNA estimator, and applying the hybrid method to obtain the T pPb in each class. Figure 2 illustrates these results for 0-20% and 40-60% centrality classes. The Q pPb of the three D-meson species were found to be consistent with one another within the statistical and systematic uncertainties for each p T and centrality class considered. Therefore, the average of the D 0 , D + and D * + meson results was evaluated in each centrality class considering the inverse square of the relative statistical uncertainties as weights. The systematic uncertainties on the averages were computed considering the tracking efficiency, the B feed-down subtraction and the scaling of the pp reference as correlated uncertainty sources among the three mesons. The averages of the D 0 , D + and D * + p T -differential nuclear modification factors in different centrality classes obtained with the ZNA estimator are presented in figure 3 and table 4. The D-meson Q pPb results in the different centrality classes are consistent with unity within the uncertainties in the measurement p T interval. Typical values of the Q pPb uncertainties are of 7% (stat.) and 16% (syst.) for 2 < p T < 4 GeV/c. It should be noted that with this centrality estimator no bias is expected due to the event selection, and only a small bias in peripheral events, due to the geometrical bias in the determination of the number of hard scatterings, was observed in the studies with charged particles [66]. Therefore, with the least biased centrality estimator, the D-meson Q pPb results are consistent within statistical and systematic uncertainties with binary collision scaling of the yield in pp collisions, independent of the geometry of the collision. Q pPb with CL1 and V 0A estimators As explained in section 3.1, the D 0 , D + and D * + Q pPb were also calculated with the CL1 and V0A estimators in four centrality classes to study the centrality selection biases based on heavy-flavour production from low to high p T . The Q pPb results for the three D-meson species were found to be consistent with one another within the statistical and systematic uncertainties for each p T and centrality class considered. 
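The combination procedure described above (inverse-square statistical weights for the species average, with independent systematic contributions added in quadrature and correlated sources treated jointly) reduces to a few lines of arithmetic; the sketch below illustrates it with placeholder inputs.

```python
# Species average with inverse-square statistical weights, and a quadrature sum of
# independent relative uncertainties (illustrative; all inputs are placeholders).
import math

def weighted_average(values, stat_uncs):
    w = [1.0 / s**2 for s in stat_uncs]
    avg = sum(v * wi for v, wi in zip(values, w)) / sum(w)
    stat = math.sqrt(1.0 / sum(w))
    return avg, stat

def quadrature(*rel_uncs):
    return math.sqrt(sum(u**2 for u in rel_uncs))

# Example: D0, D+ and D*+ Q_pPb values with their statistical uncertainties (made up),
# plus uncorrelated relative systematics combined in quadrature.
avg, stat = weighted_average([0.95, 1.05, 1.00], [0.07, 0.10, 0.12])
syst = quadrature(0.06, 0.09, 0.03)   # e.g. yield extraction, pp reference, T_pPb (examples)
print(avg, stat, syst)
```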
Therefore, the averages of the D 0 , D + and D * + meson results and the systematic uncertainties were evaluated as explained before. The averages of the p T -differential D 0 , D + and D * + nuclear modification factors in different centrality classes with CL1 and V0A estimators are presented in figure 4 (see also tables 5 and 6). The centrality estimation from the CL1 multiplicity suffers from a large bias introduced by multiplicity fluctuations in the central rapidity region caused by fluctuations of the number of hard scatterings per nucleon collision, which affect the T pPb determination [66]. The Q CL1 pPb results show an ordering from low (60-100%) to high (0-20%) multiplicity, with a difference larger than a factor of two between the most central and most peripheral classes, induced by the bias on the centrality estimator. The V0A estimator classifies the events as a function of the multiplicity in the backward rapidity region. The rapidity gap with respect to the central rapidity D-meson analyses . Average D 0 , D + and D * + meson Q pPb as a function of centrality with the CL1, the V0A and the ZNA estimators for (a) 2 < p T < 4 GeV/c and (b) 8 < p T < 12 GeV/c. The average D-meson Q pPb in 8 < p T < 12 GeV/c is compared with the charged-particle Q pPb calculated for p T > 10 GeV/c [66]. The vertical error bars and the empty boxes represent, respectively, the statistical and systematic uncertainties on the D-meson results. The filled boxes at Q pPb = 1 indicate the correlated systematic uncertainties: the grey-filled box represents the uncertainty on the pp reference and the p-Pb analysis PID and track selection uncertainties, common to all estimators for a given p T interval; the red-filled box represents the correlated systematic uncertainty on N coll determination for the ZNA energy estimator. similar qualitative behaviour to the Q CL1 pPb ones, with a smaller difference between centrality classes. This is consistent with the expectation of a smaller bias when there is a rapidity gap between the regions where the centrality and the D-meson yield are studied. Comparison with charged-particle Q pPb The average D-meson Q pPb results obtained with the three estimators, for 2 < p T < 4 GeV/c and 8 < p T < 12 GeV/c, are displayed as a function of centrality in figure 5. The D-meson Q pPb for 8 < p T < 12 GeV/c is compared with the analogous measurement for charged hadrons with p T > 10 GeV/c [66]. In this transverse momentum region also the production of charged hadrons is expected to scale with the number of binary nucleon-nucleon collisions [66]. The measured trends of charged-particle Q pPb at high p T in all the CL1 and V0A centrality classes were found to be reasonably described by an incoherent superposition of N coll pp collisions generated with PYTHIA, after defining the event centrality from the charged-particle multiplicity in the rapidity region covered by each estimator in the same way as in data (|η| < 1.4 for CL1, 2.8 < η < 5.1 for V0A) [66]. The Q pPb results for D mesons and charged hadrons with p T > 10 GeV/c show a similar trend as a function of centrality and estimator due to the bias in the centrality determination, as observed in [66] based on high-p T particle production in the light flavour -17 - JHEP08(2016)078 sector. The results presented in this paper allow these studies to be extended into the charm sector and down to low p T . 
6 Relative yields as a function of multiplicity

D 0 , D + and D * + meson yields were also studied as a function of the charged-particle multiplicity in two pseudorapidity intervals, see section 3.2. The D-meson yields were evaluated for various multiplicity and p T intervals, and the results are reported in terms of corrected per-event yields normalised to the multiplicity-integrated values,

(dN D /dy) j / ⟨dN D /dy⟩ = [ N j raw D / ( ε j prompt D · N j events ) ] / [ N raw D / ( ε prompt D · N MB events / ε MB trigger ) ] , (6.1)

evaluated in each p T interval, where the index j identifies the multiplicity interval, N j raw D is the raw yield extracted from the fit to the invariant mass distribution in each multiplicity interval, ε j prompt D represents the reconstruction and selection efficiency for prompt D mesons, and N j events is the number of events analysed in each multiplicity interval. The efficiencies were estimated with Monte Carlo simulations (see section 4). Equation (6.1) holds under the assumption that the relative contribution to the raw D-meson yield due to the feed-down from beauty-hadron decays does not depend on the multiplicity of the event, and therefore cancels out in the ratio to the multiplicity-integrated values. This assumption is justified by the beauty production measurements as a function of multiplicity in pp collisions, and also by PYTHIA simulations [31]. The acceptance correction, defined as the fraction of D mesons within a given rapidity and p T interval that decay into pairs or triplets of particles within the detector coverage, cancels out in this ratio. The number of events used for the normalisation of the multiplicity-integrated yield must be corrected for the fraction of non-single-diffractive events that are not accepted by the minimum-bias trigger condition, expressed as N MB events / ε MB trigger , with ε MB trigger = (96.4 ± 3.1)% [67]. It was verified with PYTHIA 6.4.21 Monte Carlo simulations that the minimum-bias trigger is 100% efficient for D mesons in the kinematic range of the measurement, meaning that the number of D mesons in the minimum-bias triggered events is the same as in the sample of non-single-diffractive events.

6.1 Systematic uncertainties

In this section the systematic uncertainties estimated for the D-meson measurements as a function of N tracklets and as a function of the N V0A multiplicity are outlined. The most significant source of systematic uncertainty is the one related to the signal extraction procedure. The raw D-meson yields were obtained by fixing the position of the Gaussian signal peak to the world averages of the D-meson masses, and the widths to the values obtained from the fit to the multiplicity-integrated invariant mass distributions. To estimate the yield extraction uncertainty the fit parameters were varied as described in section 4. In addition to the variations listed in section 4, the fits were also performed allowing the position and the width of the Gaussian terms to remain free in the individual multiplicity intervals. The yield extraction uncertainty was estimated based on the stability of the ratio of the raw yields N j raw D / N raw D , where the same raw-yield extraction method was used in the multiplicity interval j and for the multiplicity-integrated result. The magnitude of this uncertainty depends on p T and meson species. The contribution of the yield extraction procedure to the systematic uncertainties varied between 4% and 10%. The influence of the D-meson selections, due to the PID and the topological selections, was examined and found to have no significant effect on the final result, since they enter equally into the numerator and denominator of eq. (6.1). 
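The bookkeeping of eq. (6.1) is shown below as a short sketch; all numbers are placeholders, and the trigger-efficiency correction enters only in the multiplicity-integrated denominator, as described above.

```python
# Relative per-event D-meson yield in a multiplicity interval, following eq. (6.1)
# (illustrative sketch; all input numbers are placeholders).

EFF_MB_TRIGGER = 0.964   # minimum-bias trigger efficiency for non-single-diffractive events [67]

def per_event_yield(n_raw, eff_prompt, n_events):
    return n_raw / (eff_prompt * n_events)

def relative_yield(n_raw_j, eff_j, n_events_j, n_raw_int, eff_int, n_events_mb):
    num = per_event_yield(n_raw_j, eff_j, n_events_j)
    den = per_event_yield(n_raw_int, eff_int, n_events_mb / EFF_MB_TRIGGER)
    return num / den

# Example placeholders: a high-multiplicity interval containing 5% of the events
# but a larger share of the signal.
print(relative_yield(n_raw_j=1.5e3, eff_j=0.11, n_events_j=5e6,
                     n_raw_int=1.0e4, eff_int=0.10, n_events_mb=1.0e8))
```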
As mentioned in section 4, the contribution of feed-down from B decays to the raw yield was estimated based on FONLL calculations [18]. In this case, it was assumed that the fraction of D mesons that are not from feed-down decays, f prompt , remains constant as a function of multiplicity, causing it to cancel out in the numerator and denominator of the ratio in eq. (6.1). The feed-down contribution was therefore not explicitly subtracted from the final result. A systematic uncertainty related to this hypothesis was assigned by assuming that the fraction f j B / ⟨f B ⟩, where f B = 1 − f prompt , increases linearly from 1/2 to 2 from the lowest to the highest multiplicity intervals. The resulting uncertainty depends on multiplicity, p T and meson species, and ranges from +4/−0% to +10/−0% at low multiplicity and from +0/−4% to +0/−20% at high multiplicity. In the analyses as a function of N tracklets , the relative average values of N j tracklets / ⟨N tracklets ⟩ for each interval were corrected to give relative (dN ch /dη) j / ⟨dN ch /dη⟩ values, as described in section 3.2. The systematic uncertainty due to this correction was estimated in the simulations based on the resolution and the linearity of the correlation between the number of tracklets, N tracklets , and the number of generated charged primary particles, N ch . The deviation from linearity was found to contribute roughly 5% to the uncertainty on the relative multiplicity. Finally, the uncertainty on the dN ch /dη measured in inelastic p-Pb collisions in [68] was considered; this contributed an uncertainty of approximately 4%. The total systematic uncertainty on the relative charged-particle density per N tracklets interval was found to be 6.3%. In the analyses as a function of N V0A , the measurements are reported as a function of the relative multiplicity N V0A / ⟨N V0A ⟩. The uncertainty on the mean multiplicity values, ⟨N V0A ⟩, was determined by comparing the mean and median values of the distributions. It was found to be below 5% for each multiplicity interval, and about 30% for the multiplicity-integrated value.

6.2 Results

The relative D-meson yields were calculated for each p T and multiplicity interval according to eq. (6.1). The results are reported as a function of the relative charged-particle multiplicity at both backward and central rapidity. It is worth noting that the smaller number of reconstructed D mesons in the lowest and highest p T intervals (1) limited the number of multiplicity intervals of the measurement for those p T intervals.

(1) The number of reconstructed D mesons in the lowest and highest p T intervals is smaller than in the other p T intervals. At low p T , the strategy employed to cope with the low signal-to-background ratio was to apply tight topological selections, decreasing the selection efficiency and consequently the number of reconstructed D mesons. At high p T , the small number of candidates is the consequence of the steeply falling D-meson p T spectra.

Figure 6. Relative D 0 , D + and D * + meson yields for two selected p T intervals as a function of charged-particle multiplicity at central rapidity; panel (b) shows D mesons with 4 < p T < 8 GeV/c. The relative yields are presented in the top panels with their statistical (vertical bars) and systematic (empty boxes) uncertainties, apart from the feed-down fraction uncertainty, which is drawn separately in the bottom panels. The position of the points on the abscissa is the average value of (dN ch /dη) / ⟨dN ch /dη⟩. For D + and D * + mesons the points are shifted horizontally by 1.5% to improve the visibility. The diagonal (dashed) line is also shown to guide the eye. 
The relative D 0 , D + and D * + yields were measured in five p T intervals from 1 to 24 GeV/c as a function of the charged-particle multiplicity at mid-rapidity. Figure 6 presents the measurements for selected p T intervals with their statistical (vertical bars) and systematic (boxes) uncertainties, apart from the feed-down fraction uncertainty, which is drawn separately in the bottom panels. The position of the points on the abscissa is the average value of (dN ch /dη) / ⟨dN ch /dη⟩, but for some meson species they are shifted horizontally by 1.5% to improve the visibility. The relative yields of the three D-meson species are consistent with one another in all p T intervals within uncertainties. The average of the relative D 0 , D + and D * + yields was evaluated considering the inverse square of their relative statistical uncertainties as weights. The yield extraction uncertainties were treated as uncorrelated systematic uncertainties, while the feed-down subtraction uncertainties were considered as correlated uncertainty sources. Figure 7a presents the average D-meson yields for each p T interval. The results are reported in table 7. The p T evolution of the yields was examined using the results in the 2 < p T < 4 GeV/c interval as reference and by computing the ratio between the average relative D-meson yields in the various p T intervals and those in 2 < p T < 4 GeV/c. The results are shown in figure 7b.

Figure 7. Average of relative D 0 , D + and D * + yields as a function of the relative charged-particle multiplicity at central rapidity. (a) Average of relative D-meson yields in p T intervals. (b) Ratio of the average relative yields in all p T intervals with respect to that of the 2 < p T < 4 GeV/c interval. The results are presented in the top panels with their statistical (vertical bars) and systematic (boxes) uncertainties, apart from the feed-down fraction uncertainty, which is drawn separately in the bottom panels. The position of the points on the abscissa is the average value of (dN ch /dη) / ⟨dN ch /dη⟩. For some p T intervals the points are shifted horizontally by 1.5% to improve the visibility. The dashed lines are also shown to guide the eye, a diagonal on (a) and a constant on (b).

The yield increase is independent of transverse momentum within the uncertainties of the measurement. The D-meson yields show a faster-than-linear increase with the charged-particle multiplicity at central rapidity. The yield increase is approximately a factor of 7 for multiplicities of 4.2 times ⟨dN ch /dη⟩. These results are compared with the equivalent measurements in pp collisions, as well as with model calculations, in section 6.2.1. The measurement of the relative D 0 , D + and D * + yields was also performed as a function of the relative charged-particle multiplicity at large rapidity in the Pb-going direction, thus introducing an η gap between the regions where the D mesons and the multiplicity are measured. The charge collected by the V0A detector, N V0A , was considered as a multiplicity estimator (see section 3.2). Simulations have shown that the collected charge is proportional to the charged-particle multiplicity in the measured η range, 2.8 < η < 5.1. 
The relative D-meson yields measured in pT and NV0A intervals are reported as a function of the relative multiplicity in the V0A detector, NV0A / ⟨NV0A⟩. The D0, D+ and D*+ yields are consistent with one another in all the measurement intervals, within uncertainties. The average D-meson yield was calculated with the same procedure used for the results as a function of charged-particle multiplicity at mid-rapidity, and is reported as a function of the relative charged-particle multiplicity at backward rapidity. The yield increase is consistent with a linear growth as a function of multiplicity. The results as a function of V0A multiplicity indicate that the per-event D-meson yield increases as a function of multiplicity, regardless of the η range in which the multiplicity is measured. This remains the case even when the charged-particle yield is measured in a different η interval from the D mesons, which originate from the fragmentation of charm quarks produced in hard partonic scattering processes. One notable effect to consider when comparing the trends of D-meson production as a function of multiplicity at central and large rapidity is that the charged-particle multiplicity was observed to scale differently with the number of nucleons involved in the p-A interaction depending on η [66,75]. In particular, at central rapidity the charged-particle multiplicity is found to scale with the number of participant nucleons, Npart, while at large rapidities in the Pb-going direction (i.e. in the V0A acceptance) it scales with the number of participants of the Pb nucleus, which is equal to Npart − 1 = Ncoll in p-Pb collisions. It was verified that the results of the D-meson yields as a function of multiplicity are consistent with those of the QpPb analysis (see section 5). In the QpPb analysis, D-meson production is studied by dividing the events into centrality classes equally populated by 20% of the events, whereas in this section we examine events with extremely high multiplicity (see tables 2 and 3). Events with low (high) multiplicity correspond to interactions with a smaller (larger) number of hard scatterings per nucleon-nucleon collision, as well as to negative (positive) multiplicity fluctuations, which affect event classification and influence both measurements.

Comparison of p-Pb data with pp results and models

The relative D-meson yield (average of D0, D+ and D*+) as a function of charged-particle multiplicity at central rapidity in p-Pb collisions at √sNN = 5.02 TeV is compared with the corresponding pp measurements at √s = 7 TeV for 2 < pT < 4 GeV/c in figure 9a. A similar relative increase of charmed-meson yield with charged-particle multiplicity is observed in pp and p-Pb collisions. Note that the multiplicity is measured for both pp and p-Pb collisions in the same pseudorapidity range in the laboratory system, which corresponds to different ranges in the centre-of-mass frame for the two collision systems, due to the asymmetry of the beam energies in the p-Pb case. The increasing yield in pp data can be described by calculations taking into account the contribution of Multiple-Parton Interactions (MPI) [23][24][25], by the influence of the interactions between colour sources in the percolation model [33,34], or by the effect of the initial conditions of the collision followed by a hydrodynamic evolution computed with the EPOS 3 event generator [35,36], where the individual scatterings are identified with parton ladders.
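Whether a yield increase of this kind is linear or faster than linear can be quantified, for instance, by fitting the relative yield versus relative multiplicity with a power law y = a·x^b and checking whether the exponent b is compatible with 1. The sketch below uses synthetic points purely to demonstrate the procedure; the values are not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

# Synthetic demonstration points (relative multiplicity, relative yield, uncertainty);
# these are NOT measured values.
x = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.4, 1.0, 2.6, 4.5, 6.8])
ey = 0.1 * y + 0.05

popt, pcov = curve_fit(power_law, x, y, sigma=ey, absolute_sigma=True, p0=[1.0, 1.0])
b, eb = popt[1], np.sqrt(pcov[1, 1])
print(f"exponent b = {b:.2f} +/- {eb:.2f}")   # b ~ 1 -> linear, b > 1 -> faster than linear
```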
In p-Pb collisions, the multiplicity dependence of heavy-flavour production is also affected by the presence of multiple binary nucleon-nucleon interactions, and the initial conditions of the collision are modified due to CNM effects. Charmed-meson yields in pp and p-Pb collisions as a function of the relative multiplicity at large rapidity are compared in figure 9b for 2 < pT < 4 GeV/c. The multiplicity in p-Pb collisions is measured in 2.8 < η < 5.1 in the Pb-going direction, whereas in pp data the multiplicities at backward (2.8 < η < 5.1) and forward (−3.7 < η < −1.7) pseudorapidity were summed together. The D-meson yields increase faster in pp than in p-Pb collisions as a function of the relative multiplicity at backward rapidity. The different pseudorapidity intervals of the multiplicity measurement may contribute to this observation. In addition, measurements in p-Pb collisions differ from those in pp interactions because the initial conditions of the collision are affected by the presence of the Pb nucleus, and because there are multiple binary nucleon-nucleon interactions per p-Pb collision.

Figures 10 and 11 present comparisons of the D-meson results and EPOS 3.116 model estimates. The EPOS 3 event generator [35,36] imposes the same theoretical framework for various colliding systems: pp, p-A and A-A. The initial conditions are generated using the "Parton-based Gribov-Regge" formalism [35] of multiple scatterings. Each individual scattering is identified with a parton ladder, composed of a pQCD hard process with initial- and final-state radiation. The non-linear effects of parton evolution are treated by introducing a saturation scale below which those effects become important. With these initial conditions, a 3D+1 viscous hydrodynamical evolution is applied to the core of the collision [36]. The measurements agree with the EPOS 3 model calculations within uncertainties. The results at high multiplicity are better reproduced by the calculation including a viscous hydrodynamical evolution of the collision, which predicts a faster-than-linear increase of the charmed-meson yield with multiplicity at central rapidity. The same calculation evaluates an approximately linear increase of the charmed-meson yield with the multiplicity measured at backward rapidity, due to the reduced influence of flow on charged particles produced at large rapidity.

Figure 9. Average relative D-meson yields in |ylab| < 0.5 as a function of (a) the relative charged-particle multiplicity at mid-rapidity, |η| < 1.0, and (b) at backward rapidity, 2.8 < η < 5.1 (including also −3.7 < η < −1.7 in pp data), for 2 < pT < 4 GeV/c. The relative yields are presented in the top panels with their statistical (vertical bars) and systematic (boxes) uncertainties, apart from the uncertainty on the B feed-down fraction, which is drawn separately in the bottom panels. The positions of the points on the abscissa are the average values of (dNch/dη) / ⟨dNch/dη⟩ or NV0A / ⟨NV0A⟩. A diagonal (dashed) line is also shown to guide the eye.

Summary

The production of D0, D+ and D*+ mesons as a function of multiplicity in p-Pb collisions at √sNN = 5.02 TeV, measured with the ALICE detector, has been reported. D mesons were reconstructed in their hadronic decays in different transverse momentum intervals within 1 < pT < 24 GeV/c, in the centre-of-mass rapidity range −0.96 < ycms < 0.04.
The multiplicity dependence of D-meson production was studied both by comparing the yields in p-Pb collisions for various centrality classes with those of binary-scaled pp collisions at the same centre-of-mass energy via the nuclear modification factor, and by evaluating the relative yields in multiplicity intervals with respect to the multiplicity-integrated ones. The pT-differential nuclear modification factor, QpPb, of the D mesons was evaluated with three centrality estimators according to the multiplicity measured in different pseudorapidity intervals: CL1 in |η| < 1.4, V0A in 2.8 < η < 5.1 in the Pb-going direction, and the energy of slow neutrons detected by the ZNA calorimeter at very large rapidity. For each estimator, the events were classified in four classes corresponding to percentiles of the cross section: 0-20%, 20-40%, 40-60% and 60-100%. The QpPb values of the three D-meson species fluctuate around unity and are consistent in the measured pT and centrality intervals within uncertainties. The results with the CL1 estimator suggest an ordering from higher (> 1) to lower (< 1) QpPb values from the 0-20% to the 60-100% centrality class. This disparity is reduced when QpPb is calculated using the V0A estimator, and vanishes when it is determined with the ZNA estimator (QpPb ≈ 1). These effects are understood to be due to the biases in the centrality determination in p-Pb collisions based on measurements of multiplicity. The ZNA estimator is the least affected by these sources of bias, and the QpPb results obtained with this estimator indicate that there is no evidence of a centrality dependence of the D-meson production in p-Pb collisions with respect to that of pp collisions at the same centre-of-mass energy in the measured pT interval within the uncertainties.

(Caption fragments for figures 10 and 11: calculations [35,36] are also shown. The coloured lines represent the calculation curves, whereas the shaded bands represent their statistical uncertainties at given values of (dNch/dη) / ⟨dNch/dη⟩ or NV0A / ⟨NV0A⟩. A diagonal (dashed) line is also shown to guide the eye.)

The D-meson yields were also studied in p-Pb collisions as a function of the relative charged-particle multiplicity at mid-rapidity, |η| < 1.0, and at large rapidity, 2.8 < η < 5.1, in the Pb-going direction. The relative yields, i.e. the yields in a given multiplicity interval divided by the multiplicity-integrated ones, were calculated differentially in transverse momentum. In contrast to QpPb, which examines particle production in samples of 20% of the analysed events, this observable explores events from low to extremely high multiplicities, corresponding to only 5% (1%) of the analysed events in p-Pb (pp) collisions. The measurements of the relative yields for D0, D+ and D*+ mesons are consistent within the uncertainties. The D-meson yields increase with charged-particle multiplicity, and the increase is independent of pT within the measurement uncertainties. The yield increases with a faster-than-linear trend as a function of the charged-particle multiplicity at mid-rapidity. This behaviour is similar to that of the corresponding measurements in pp collisions at √s = 7 TeV.
Possible interpretations include short-distance gluon radiation, contributions from Multiple-Parton Interactions, the influence of initial conditions followed by a hydrodynamic expansion (EPOS 3 event generator), or the percolation model scenario. In addition, the contribution from multiple binary nucleon-nucleon collisions must be considered in p-Pb collisions. By contrast, the increase of the charmed-meson yields as a function of charged-particle multiplicity at large rapidity in the Pb-going direction is consistent with a linear growth as a function of multiplicity.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
16,072
2016-01-01T00:00:00.000
[ "Physics" ]
Improved Corrosion Resistance Behaviour of AlSi10Mg Alloy due to Selective Laser Melting : The corrosion behaviour of AlSi10Mg alloy produced by selective laser melting (SLM) under two different atmospheres, namely argon and nitrogen, was compared to that of AlSi10Mg alloy that had been cast. The present study demonstrates the systematic electrochemical behaviour of selective-laser-melted (SLMed) AlSi10Mg. Potentiodynamic polarisation and electrochemical impedance spectroscopy (EIS) were used to investigate the electrochemical behaviour, illustrating the degrading features of SLMed AlSi10Mg alloy in 0.1 M NaCl solution. The corrosion resistance of AlSi10Mg produced using selective laser melting was found to be 2–3 times greater than that of AlSi10Mg that had been cast. The degradation behaviour was also explained by SEM analysis of the corroded samples of SLMed AlSi10Mg and as-cast AlSi10Mg alloy. It may be deduced that the better corrosion resistance of AlSi10Mg produced through selective laser melting is due to the fast cooling rate associated with the solidification of AlSi10Mg alloy fabricated through selective laser melting, compared with the slow cooling rate associated with the solidification of AlSi10Mg produced by casting. Introduction Additive manufacturing has emerged as a promising technique to manufacture complex structures using metallic, ceramic and polymeric materials [1].Selective laser melting (a type of additive manufacturing) is an advanced manufacturing technique wherein the production of a part is built layer-by-layer by rastering a high-powered laser as directed by a computer-aided design (CAD) model [2].The corrosion of additively manufactured alloy is a very important aspect of applications, e.g., in marine industries.It is important to look at the effect of microstructural features such as porosity, grain structures, dislocation networks, residual stress, solute segregation and surface roughness on the corrosion behaviour of additively manufactured alloys [3].AlSi10Mg alloy [4][5][6][7][8][9][10] and AlSi12 alloy [11] are the most-utilised Al alloys for research on the corrosion of additively manufactured Al alloys.Si-containing aluminium alloys are frequently used due to their castability, low shrinkage and relatively low melting point [12][13][14][15][16]. Hypoeutectic AlSi10Mg alloys are used in automotive, aerospace and marine industries due to their high specific strength, low density, low thermal diffusion coefficient and low cost of reclamation [17]. 
AlSi10Mg has a microstructure that is made up of an α-Al phase, eutectic Si particles and secondary phases such as Mg 2 Si (β-phase).According to earlier investigations, the mechanical properties of AlSi10Mg alloy are degraded by eutectic Si particles with coarse and acicular morphology [18][19][20].As a result, it is crucial to alter the microstructure of AlSi10Mg alloy in order to enhance its mechanical properties.A more refined microstructure and uniform distribution of eutectic Si particles may be achieved by quick solidification at a cooling rate of 10 6 • C/s (in case of selective laser melting) [21].However, the cooling rate during casting is about 100 • C/s [22], which is less than the cooling rate of 10 6 o C/s in the case of additive manufacturing.Therefore, selective laser melting can be an alternative approach to achieving a cooling rate as fast as 10 6 • C/s for the refined microstructure and uniform distribution of eutectic Si particles [23].Corrosion properties play a significant role in the industrial application of additively manufactured materials [24].Literature indicates that SLMed aluminum alloys demonstrate better corrosion resistance than that of as-cast aluminum alloys [25].According to several studies, the corrosion properties of additively manufactured samples were improved due to the homogenous microstructure and the lack of iron-based intermetallics [4,7,24], whereas other studies revealed a decline in the corrosion performances due to the reduced protection of the passive layer [9,24,26].Rafieazad et al examined how friction stir processing affected the microstructure and the electrochemical stability of L-PBF AlSi10Mg in aerated 3.5 wt.% NaCl electrolyte [27].The positive shift of the pitting potential and decrease in the corrosion rate and corrosion current density were evidence of improved corrosion performance of the additively manufactured sample utilising friction stir processing [27].Damborenea et al. performed a corrosion study on additively manufactured AlSi10Mg and found similar corrosion resistance compared to as-cast AlSi10Mg [28].Zakay et al. showed an improvement in corrosion resistance after heat treatment at 200 • C for 2 h of SLMed AlSi10Mg alloy due to a relieving of residual stresses and fine Si particles within the aluminum matrix [29].Girelli et al. showed that T6 heat treatment on AlSi10Mg additively manufactured alloy is beneficial for corrosion resistance due to the formation of homogenised microstructure [30], while Kubacki et al. [31] and Gu et al. [16] found a decrease in corrosion resistance in heat-treated SLMed AlSi10Mg alloys.Cabrini et al. showed that building direction has no effect on additively manufactured AlSi10Mg alloy [32]. Very limited research has been conducted on how alloys made by the additive manufacturing process behave when exposed to corrosion [33].Recent research on the corrosion behaviour of SLMed AlSi10Mg alloys has been quite limited [4][5][6][7][8][9][10]33]. The current study used potentiodynamic polarisation and electrochemical impedance spectroscopy to the study the corrosion behaviour of SLMed AlSi10Mg compared to as-cast AlSi10Mg.The effects of the environment and build direction on the corrosion resistance of additively manufactured AlSi10Mg alloy were also revealed by this investigation.It was hypothesised in this study that SLMed AlSi10Mg will provide better corrosion resistance due to its fine grain morphology compared to as-cast AlSi10Mg alloy. 
Materials and Experimental Procedures

Hypoeutectic AlSi10Mg alloy powder was used to create additively manufactured samples using an EOS M280 DMLS machine with the following parameters: laser power 370 W, scan speed 1300 mm/s, hatch spacing 0.19 mm and layer thickness 30 µm, as indicated in reference [34] (Figure 1a). The AlSi10Mg specimens were printed in small rectangular blocks of the following sizes: 10 mm (x) × 8 mm (y) × 100 mm (z) and 100 mm (x) × 8 mm (y) × 10 mm (z). Vertically built samples are those with dimensions of 10 mm × 8 mm × 100 mm, while horizontally built samples are those with dimensions of 100 mm × 8 mm × 10 mm. The samples were built layer-by-layer on a substrate that had been preheated to 300 °C. The first layer was built along the X direction and subsequent layer directions were rotated by 67° after each scan (Figure 1b). The procedure was carried out in a protective environment of shielding gases, argon and nitrogen, while maintaining the same flow rate, pressure and other processing parameters. The SLMed AlSi10Mg alloy samples were made vertically and horizontally. These four SLM conditions were designated as follows: (i) under argon horizontally built, (ii) under argon vertically built, (iii) under nitrogen horizontally built and (iv) under nitrogen vertically built. The as-cast samples of coupon size 25 mm × 25 mm × 6 mm were received from JiangyinMaideli Advanced Materials Co. Ltd., Jiangsu, China. The chemical composition of the as-cast AlSi10Mg alloy is shown in Table 1.

Microstructural Analysis

The optical microstructure of the samples was characterised using a Carl Zeiss optical microscope (Zeiss, Jena, Germany). The sample surfaces were ground using SiC sheets in the following grit sizes: 320, 800, 1200, 2000 and 2500. This was undertaken in order to characterise the samples using the optical microscope and SEM/EDS. The samples were then polished using diamond particles sized 3 µm and 1 µm. The samples were etched using Kalling's reagent (5 mL of CuCl2 + 100 mL of hydrochloric acid and 100 mL of ethanol) following diamond polishing to observe the microstructural features.

X-ray Diffraction for Phase Identification

Phase identification was performed on the SLMed alloy under the four conditions and the as-cast AlSi10Mg alloy using a Bruker D8 Discover AXS powder X-ray diffractometer. The diffraction peak analysis was carried out with the aid of X'Pert HighScore Plus software.

Electrochemical Characterisation

The electrochemical tests in 0.1 M NaCl were undertaken to evaluate the corrosion resistance of the samples. A three-electrode cell was used for all of the electrochemical testing (the sample with an exposed area of 0.385 cm² acted as the working electrode, the platinum wire worked as the counter electrode, and the saturated calomel electrode worked as the reference electrode). Ecorr vs. time graphs were created after immersion in 0.1 M NaCl for 30 min to determine the stabilised open-circuit potential (OCP). SLMed AlSi10Mg and as-cast AlSi10Mg samples were polished with emery paper of up to 2500 grade, cleaned with acetone and allowed to air dry.

Potentiodynamic Polarisation Tests

The cathodic and anodic plots were created by sweeping the potential on either side of the OCP at a scan rate of 0.5 mV/s for the SLMed and as-cast AlSi10Mg alloy samples. The sweeping of the potential was carried out over the range of −300/+300 mV with respect to the OCP for the SLMed and as-cast AlSi10Mg samples.

Electrochemical Impedance Spectroscopy (EIS)

SLMed and as-cast AlSi10Mg samples were subjected to electrochemical impedance spectroscopy (EIS) in 0.1 M NaCl solution using a GAMRY Reference 600+ potentiostat and an electrochemical cell with three electrodes. The Ecorr vs. time plots were created after immersion in 0.1 M NaCl solution for 30 min to ascertain the stabilised open-circuit potential (OCP). For the purpose of the EIS studies, the condition was deemed stable when the OCP fluctuation remained within 10 mV during a period of 1000 s. EIS tests were performed by applying a sinusoidal signal at Ecorr with a perturbation potential of 10 mV. Gamry Instruments Framework software (version 7.07) was used to measure the impedance response at frequencies ranging from 1 MHz to 10 mHz, capturing 10 points per decade of frequency. These frequencies were selected so that they reached the asymptotic limits where the imaginary impedance tends to zero at the lowest and highest frequencies of the employed frequency range.

Post Corrosion Morphology

The post-corrosion morphology of the SLMed AlSi10Mg alloy under different conditions and the as-cast AlSi10Mg alloy, after a 30 min immersion in 0.1 M NaCl followed by the potentiodynamic polarisation test and removal of the corrosion products, was inspected using SEM to study the localised corrosion response. A scanning electron microscope (SEM) with energy-dispersive spectroscopy was used to capture the SEM images.
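For context, corrosion current densities of the kind quoted later in the results are commonly obtained from such polarisation curves by Tafel extrapolation and can be converted into a penetration rate with Faraday's law. The sketch below is a generic illustration with assumed Tafel-region limits and approximate constants for aluminium; it is not the authors' fitting procedure, and the printed numbers are placeholders.

```python
import numpy as np

def corrosion_rate_mm_per_year(i_corr_uA_cm2, equiv_weight_g=8.99, density_g_cm3=2.70):
    """Faraday's-law conversion of corrosion current density to penetration rate.
    Default equivalent weight and density are approximate values for aluminium (assumed)."""
    return 3.27e-3 * i_corr_uA_cm2 * equiv_weight_g / density_g_cm3

def tafel_i_corr(eta, log_i, fit_window=(0.05, 0.15)):
    """Estimate i_corr by fitting the linear (Tafel) region of one polarisation branch
    and extrapolating back to zero overpotential.
    eta: overpotential relative to E_corr (V); log_i: log10 of |current density|."""
    eta = np.asarray(eta, dtype=float)
    log_i = np.asarray(log_i, dtype=float)
    sel = (np.abs(eta) >= fit_window[0]) & (np.abs(eta) <= fit_window[1])
    slope, intercept = np.polyfit(eta[sel], log_i[sel], 1)   # straight line in E-log|i| space
    return 10.0 ** intercept                                  # log10(i) at eta = 0

# A 2-3 times lower i_corr translates directly into a 2-3 times lower corrosion rate:
print(corrosion_rate_mm_per_year(1.0), corrosion_rate_mm_per_year(0.4))
```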
Microstructural Analysis

The selective laser-melted AlSi10Mg alloy was characterised using optical microscopy to reveal the structure produced by the layer-by-layer process. The microstructures of the as-cast AlSi10Mg alloy and the SLMed AlSi10Mg alloy in four conditions are shown in Figures 2 and 3, respectively. Figure 2 shows an optical micrograph of as-cast AlSi10Mg showing the Al matrix embedded with Si and MnFe4Al12Si2 intermetallic and some porosity. The MnFe4Al12Si2 intermetallic was also confirmed through XRD analysis. In the horizontally built sample (Figure 3a,c), the top surface (XY plane) showed the scan track features (the width of the scan tracks is about 150-200 µm). These tracks are signature features of samples manufactured by selective laser melting. The top surface (XY plane) also showed that the scan path features were at an angle equal to 67°. The two Z planes (Figure 3b,d) showed a similar meso-structure of semi-circular melt pool layers (thickness of 30-100 µm). The meso-structures on the corresponding planes in the horizontally built samples were similar to those of the vertically built samples. The difference between the horizontally and vertically built samples was that the scanning surface was smaller in the case of the vertically built sample.

Figure 4a,b show the SEM micrographs of the as-cast AlSi10Mg alloy at 1800× and 30,000× magnification. Figure 5a-d show the SEM micrographs of the SLMed AlSi10Mg alloy in four conditions at 500× magnification. For the SLMed counterparts, the microstructure consisted of overlapped melt pools (Figure 5a-d), which is due to the progressive laser rastering that causes melting and solidification of successive layers of material. Within the melt pools, a very fine dendritic structure of the α-Al matrix bounded by the eutectic Si phase developed. Figure 6a-d show the SEM micrographs of the SLMed AlSi10Mg alloy in four conditions at 10,000× magnification. The extremely high cooling rates involved in selective laser melting caused a strong microstructural refinement. The additively manufactured (as-built) AlSi10Mg parts consisted of a fine network of Si particles inside the aluminum matrix, as shown in Figure 6a-d. Microstructural refinement may therefore be the cause of the improvement of the corrosion resistance of the as-built SLMed AlSi10Mg specimens when compared with their cast counterparts. The microstructure refinement can be observed in the 10,000× SEM micrographs (Figure 6a-d).

X-ray Diffraction for Phase Identification

An XRD analysis was undertaken in order to clearly distinguish between the phases present in the SLMed AlSi10Mg alloy and the as-cast AlSi10Mg alloy. Figure 7a shows the 2θ peaks of each of the four conditions of the SLMed AlSi10Mg samples as well as the as-cast AlSi10Mg. A zoom of the XRD plot for as-cast AlSi10Mg showing MnFe4Al12Si2 is depicted in Figure 7b. The SLMed AlSi10Mg alloy exhibited Al and Si peaks in the XRD plots in all four conditions, namely under nitrogen vertically built, under nitrogen horizontally built, under argon vertically built and under argon horizontally built. The as-cast AlSi10Mg alloy also exhibited Al, Si and MnFe4Al12Si2 peaks in the XRD plot. The varied intensities were connected to the inherent epitaxial solidification characteristics of the SLMed alloy.
Electrochemical Characterisation

The corrosion potential (Ecorr) of the SLMed AlSi10Mg alloys (different conditions) was approximately 120-150 mV higher than that of the as-cast AlSi10Mg alloy (Figure 8). Since Ecorr is a measure of corrosion susceptibility, a positive shift in Ecorr suggests that the samples made by the additive manufacturing process have better corrosion resistance than the as-cast samples. The corrosion current densities (icorr) of the SLMed AlSi10Mg alloys were found to be 2-3 times lower than that of the as-cast AlSi10Mg alloy, indicating a 2-3-fold improvement in corrosion resistance. A modest rise in the applied potential caused a fast increase in the anodic current, demonstrating pitting corrosion characteristics, which was more prominent for the as-cast AlSi10Mg alloy sample than for the additively manufactured AlSi10Mg alloy sample, as shown in the anodic branch of the Tafel plots in Figure 8. The values of Epit − Ecorr for the SLMed AlSi10Mg (under Ar, horizontally built) and as-cast samples were found to be approximately 173 mV and 243 mV, respectively, indicating a narrow passive region above the corrosion potential. Similar behaviour was observed for the other conditions of the SLMed AlSi10Mg alloys. Therefore, for definite conclusions regarding the improvement in corrosion resistance, EIS studies were performed in addition to potentiodynamic polarisation.

A simulation of the experimental EIS data was performed to quantitatively evaluate the characteristic parameters, such as capacitance, charge transfer resistance and pore resistance, using an appropriate equivalent electrical circuit (EEC) and a speculative corrosion mechanism. An appropriate EEC with two time constants and a Warburg element is shown in Figure 12. The time constant related to the corrosion products/solution interface is in the high-frequency range, while the time constant related to the metal/solution interface is in the lower-frequency zone. The interconnectedness of the corrosion products on the metal surface in contact with the solution led to the selection of the EEC shown in Figure 12. In the EEC, Rs stands for the solution resistance; Qf and Rf are the constant phase element (CPE) and the pore resistance in parallel combination, representing the corrosion products; Cdl is the electrical double-layer capacitance; Rc is the charge transfer resistance; and W is the Warburg element. The CPE behaviour is attributed mostly to electrode porosity, roughness and distributed surface reactivity. The impedance analysis was conducted using the Gamry Echem Analyst package for Windows. Table 2 lists the characteristic EEC parameters. The low values of the χ² goodness of fit suggest that the proposed EEC fits the experimental data well. The charge transfer resistance is the resistance to moving an electron out of a molecule in an electrolytic solution and onto a molecule in an anode. The magnitude of the Warburg diffusion resistance, which represents the diffusion of the corrosive solution, is the same under all conditions. Table 2 shows that the corrosion resistance of the selective laser melted (SLMed) AlSi10Mg alloys, taken as the sum of the pore resistance (Rf) and the resistance offered by the metal/electrolyte interface (Rc), was 9.57 × 10³ Ω cm², 12.1 × 10³ Ω cm², 15.9 × 10³ Ω cm² and 10.3 × 10³ Ω cm² for the under Ar horizontally built, under Ar vertically built, under N2 horizontally built and under N2 vertically built samples, respectively, while that of the as-cast AlSi10Mg alloy was 4.74 × 10³ Ω cm². The data in Table 2 support the assertion that the selective laser melted (SLMed) AlSi10Mg alloys in various conditions provide a 2-3 times improvement in corrosion resistance, and this improvement is supported by the Bode impedance graphs in Figure 10. The low chi-squared values denote an acceptable accuracy of the EEC parameters determined by the EIS data simulation. The close fit of the simulated EIS data with the experimental data of the as-cast and selective laser melted AlSi10Mg under the Ar, horizontally built condition and under the N2, vertically built condition is shown in Figure 13a-c. The experimental data fit well with the simulated data in the frequency range 1,000,000 Hz to 0.01 Hz, as shown in Figure 13. The validity of the employed EEC (Figure 12) and of the corrosion mechanism for the as-cast and selective laser melted AlSi10Mg alloys was confirmed by the associated low chi-squared values and the low errors in the EEC parameters.
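To make the circuit description above concrete, the impedance of a two-time-constant EEC with a CPE, a double-layer capacitance and a Warburg element can be computed as below. The nesting used here, Rs in series with Qf in parallel with [Rf in series with (Cdl in parallel with Rc + W)], is one plausible reading of the description; the exact topology is defined by the authors' Figure 12, and all parameter values in the example are placeholders rather than fitted values from Table 2.

```python
import numpy as np

def z_eec(freq_hz, Rs, Qf, nf, Rf, Cdl, Rc, sigma_w):
    """Impedance of an assumed two-time-constant equivalent circuit:
    Rs + [ CPE(Qf, nf) || ( Rf + ( Cdl || ( Rc + Warburg ) ) ) ]."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_cpe = 1.0 / (Qf * (1j * w) ** nf)                        # constant phase element (corrosion products)
    z_warburg = sigma_w * (1 - 1j) / np.sqrt(w)                # semi-infinite Warburg diffusion
    z_inner = 1.0 / (1j * w * Cdl + 1.0 / (Rc + z_warburg))    # Cdl || (Rc + W), metal/solution interface
    z_outer = 1.0 / (1.0 / z_cpe + 1.0 / (Rf + z_inner))       # Qf || (Rf + inner branch)
    return Rs + z_outer

# Placeholder parameters (order-of-magnitude only), swept over the frequency range used in the study:
f = np.logspace(6, -2, 81)        # 1 MHz down to 10 mHz, 10 points per decade
z = z_eec(f, Rs=50, Qf=1e-5, nf=0.9, Rf=5e3, Cdl=2e-5, Rc=8e3, sigma_w=200)
print(abs(z[-1]))                 # |Z| at the lowest frequency, a broad measure of corrosion resistance
```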
Post Corrosion Morphology

Figure 14a-d show the corrosion morphology of the SLMed AlSi10Mg alloy under different conditions and of the as-cast AlSi10Mg alloy after a 30 min immersion in 0.1 M NaCl followed by a potentiodynamic polarisation test and after removing the corrosion products. There was more general corrosion for the as-cast alloy, whereas pitting corrosion was more pronounced for the SLMed AlSi10Mg alloy.

The corrosion behaviour of the additively manufactured alloys differed from that of the as-cast alloys due to specific conditions associated with the AM process, such as layer-to-layer solidification associated with small melt pools and higher cooling rates. From a microstructure point of view, the AM alloys showed fine structures compared with the cast alloys (which showed large, needle-shaped silicon precipitates in the aluminum matrix), while the fine silicon precipitates formed a three-dimensional network at the grain boundaries and enclosed the aluminum matrix. These connected silicon precipitates, which are present near or at the melt pool boundary (as shown in Figure 3a-d), decelerated the corrosion, leading to a lower corrosion rate compared with that of the as-cast AlSi10Mg alloy. The microstructure has a great influence on the corrosion resistance of the SLMed AlSi10Mg alloy, and by tailoring the microstructure it is possible to increase the corrosion resistance of this alloy [35]. The EIS data analysis showed a higher (Rf + Rc) value for the SLMed AlSi10Mg alloys compared with the as-cast alloy (as shown in Table 2), which indicates a less defective, more compact and more protective oxide layer on the AM samples and therefore a lower corrosion rate. The potentiodynamic results also showed a lower current density (by 2-3 times) for SLMed AlSi10Mg compared with the as-cast AlSi10Mg alloy, which indicates an improved corrosion resistance for the SLMed AlSi10Mg alloy. A positive shift in Ecorr suggests that the samples made by the selective laser melting process have thermodynamically better corrosion protection than the as-cast samples. The build orientation (horizontal and vertical) and the environment (Ar and N2) did not show any effect on the corrosion resistance of the SLMed AlSi10Mg alloy. It may be safely said that the better corrosion resistance of AlSi10Mg manufactured by selective laser melting is due to
a fast cooling rate related to the solidification of AlSi10Mg alloy manufactured by selective laser melting, compared with the slow cooling rate related to the solidification of AlSi10Mg produced by casting. In the selective laser melting (SLM) process, the area irradiated by the laser beam is melted and rapidly solidified, forming solidification lines (laser scan tracks) with symmetrical features. Because of this unique rapid crystallisation, the subgrain structures typically observed inside these solidification lines can also have variable geometric symmetrical features, e.g., cellular, pentagonal or hexagonal cellular. Because of such distinctive microstructures, the SLMed AlSi10Mg alloy had a significantly improved corrosion resistance compared with the as-cast AlSi10Mg alloy.

Conclusions

The SLMed AlSi10Mg alloys demonstrated an improvement in corrosion resistance of approximately 2-3 times compared with the as-cast AlSi10Mg alloy. The corrosion resistance did not vary much with the working environment and build direction. Even though the improvement in corrosion resistance of SLMed AlSi10Mg is not very large, it is still significant to note that there was a 2-3-fold improvement compared with as-cast AlSi10Mg. Therefore, such additively manufactured samples can find potential applications where the components are subjected to corrosive conditions and where the additively manufactured component has an advantage over the as-cast component due to the various advantages of additive manufacturing, e.g., freedom of design.

Figure 1. (a) Selective laser melting process parameters: laser power 370 W, scanning speed 1300 mm/s, hatch spacing 0.19 mm and layer thickness 30 µm; (b) X, Y and Z directions for the sample and 67° rotation of the scanning direction.
Figure 2. Optical micrograph of as-cast AlSi10Mg alloy showing intermetallic, porosity and silicon particles within the aluminum matrix.
Figure 3. Optical micrographs of (a) horizontally built under Ar; (b) vertically built under Ar; (c) horizontally built under N2; (d) vertically built under N2 of the SLMed AlSi10Mg alloy. (Note: BD is an abbreviation for build direction.)
Figure 6. SEM micrographs of (a) horizontally built under Ar; (b) vertically built under Ar; (c) horizontally built under N2; (d) vertically built under N2 of the SLMed AlSi10Mg alloy at 10,000× magnification.
Figure 7. (a) XRD pattern for the SLMed AlSi10Mg alloys for all four conditions, namely under nitrogen vertically built, under nitrogen horizontally built, under argon vertically built and under argon horizontally built, and the as-cast AlSi10Mg alloy (Note: N2_V, N2_H, Ar_V and Ar_H represent SLMed AlSi10Mg under nitrogen vertically built, under nitrogen horizontally built, under argon vertically built and under argon horizontally built, respectively); (b) a zoom of the XRD pattern for as-cast AlSi10Mg showing MnFe4Al12Si2 peaks in the XRD plot.
Figure 8. Potentiodynamic polarisation curves of SLMed AlSi10Mg under different conditions and as-cast AlSi10Mg alloy in 0.1 M NaCl solution.

Electrochemical impedance spectroscopy (EIS) was used to examine the stability of the protective passive layer on the SLMed AlSi10Mg alloy and the as-cast AlSi10Mg alloy after 30 min of immersion in 0.1 M NaCl solution. The magnitude of the impedance at the lowest frequency in a Bode impedance plot and the diameter of the semicircle in a Nyquist plot are broad measures of the corrosion resistance. The Nyquist and Bode plots of SLMed AlSi10Mg under different conditions and the as-cast AlSi10Mg alloy (Figures 9 and 10) demonstrate that the corrosion resistance of the SLMed AlSi10Mg alloy was 2-3 times better. The Bode phase plots for the SLMed alloys under different conditions and as-cast AlSi10Mg after 30 min of immersion in 0.1 M NaCl solution are shown in Figure 11. The broad phase-angle troughs (minima, as the phase angle values are negative) suggest a merger of two time constants in all conditions. The two time constants are related to the corrosion products/solution interface and the metal/solution interface. The presence of two time constants is inferred from the broader nature of the phase angle plots, because when two narrower peaks related to two time constants merge in the phase angle plots, a broader peak forms in the phase angle plot.

Figure 9. Nyquist plots showing the corrosion resistance of as-cast and SLMed AlSi10Mg alloys under different conditions in 0.1 M NaCl solution.
Figure 10. Bode modulus plots for the as-cast and SLMed AlSi10Mg alloys under different conditions in 0.1 M NaCl solution.
Figure 11. Bode phase angle plots for the as-cast and SLMed AlSi10Mg alloys under different conditions in 0.1 M NaCl solution.
Figure 13. Curve fitting of experimental and simulated Bode plots for the following conditions: (a) as-cast AlSi10Mg alloy; (b) selective laser melted AlSi10Mg alloy under Ar, horizontally built condition; (c) selective laser melted AlSi10Mg alloy under N2, vertically built condition, after immersion in 0.1 M NaCl solution.
Figure 14. Post-corrosion morphology of the SLMed AlSi10Mg alloy under different conditions and as-cast AlSi10Mg alloy after 30 min immersion in 0.1 M NaCl followed by potentiodynamic polarisation test and after removing corrosion products: (a) under Ar, horizontally built condition; (b) under Ar, vertically built condition; (c) under N2, horizontally built condition; (d) under N2, vertically built condition; (e) as-cast AlSi10Mg alloy.
Table 1. Chemical composition of as-cast AlSi10Mg alloy.
Table 2. Quantitative analysis of EIS data using the model EEC as in Figure 12.
8,372
2023-01-18T00:00:00.000
[ "Materials Science" ]
Jak2-Independent Activation of Stat3 by Intracellular Angiotensin II in Human Mesangial Cells Ang II is shown to mediate the stimulatory effect of high glucose on TGF-b1 and extracellular matrix proteins in glomerular mesangial cells. Also inhibition of Ang II formation in cell media (extracellular) and lysates (intracellular) blocks high-glucose effects on TGF-b1 and matrix more effectively compared to inhibition of extracellular Ang II alone. To investigate whether intracellular Ang II can stimulate TGF-b1 and matrix independent of extracellular Ang II, cultured human mesangial cells were transfected with Ang II to increase intracellular Ang II levels and its effects on TGF-b1 and matrix proteins were determined. Prior to transfection, cells were treated with candesartan to block extracellular Ang II-induced responses via cell membrane AT1 receptors. Transfection of cells with Ang II resulted in increased levels of intracellular Ang II which was accompanied by increased production of TGF-b1, collagen IV, fibronectin, and cell proliferation as well. On further examination, intracellular Ang II was found to activate Stat3 transcription factor including increased Stat3 protein expression, tyrosine 705 phosphorylation, and DNA-binding activity. Treatment with AG-490, an inhibitor of Jak2, did not block intracellular Ang II-induced Stat3 phosphorylation at tyrosine 705 residue indicating a Jak2-independent mechanism used by intracellular Ang II for Stat3 phosphorylation. In contrast, extracellular Ang II-induced tyrosine 705 phosphorylation of Stat3 was inhibited by AG-490 confirming the presence of a Jak2-dependent pathway. These findings suggest that intracellular Ang II increases TGF-b1 and matrix in human mesangial cells and also activates Stat3 transcription factor without involvement of the extracellular Ang II signaling pathway. Introduction Kidney damage is one of the long-term complications of diabetes (diabetic nephropathy) which is characterized by excessive production of extracellular matrix by glomerular mesangial cells. Angiotensin II (Ang II), a growth-promoting hormone derived from the renin angiotensin system (RAS), is suggested to play an important role in transmitting high glucose effects on mesangial matrix [1]. Similar to glucose, Ang II increases matrix synthesis [2] and decreases matrix degradation [3] leading to matrix accumulation in mesangial cells. Both glucose and Ang II appear to involve transforming growth factor-beta 1 (TGF-b1) for their actions on mesangial matrix. Previous studies have reported that high glucose causes increase in TGF-b1 mRNA expression and protein in mesangial cells [4,5]. Also, Ang II is found to stimulate TGF-b1 secretion in rat mesangial cells as demonstrated by our previous studies [3]. Because these actions of Ang II are simi-lar to those of glucose, it is likely that Ang II may act as a downstream mediator of high-glucose effects on TGF-b1 and matrix in mesangial cells. It is now well established that high-glucose milieu in diabetes causes activation of the RAS, particularly Ang II [1]. Treatment with angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs) has proven beneficial in delaying the progression of renal damage in type 1 and type 2 diabetic patients [6][7][8] suggesting activation of the RAS due to hyperglycemia. An increased renal vasodilator response to ACE inhibition or Ang II blockade in diabetic patients [9] has been interpreted as evidence that the intrarenal RAS is activated in diabetes. 
In streptozotocin-(STZ-) induced rat model of diabetes (type 1), we found increased levels of Ang II and its precursor, angiotensinogen (Agt) in glomerular extracts indicating activation of the glomerular RAS [10]. Also in type 2 diabetic rats, blockade of Ang II activity by ACE inhibitors and ARBs ameliorated 2 Journal of Signal Transduction progression of proteinuria and preserved glomerular structure further supporting RAS activation in diabetes [11]. Previous studies from our laboratory have consistently shown that high glucose activates Ang II production in mesangial cells [3,12,13] primarily by increasing synthesis of Agt, the precursor of Ang II [12]. In addition, exposure of mesangial cells to high glucose resulted in increased levels of Ang II in the cell lysates (intracellular) which were noticeably higher compared to extracellular Ang II levels found in the cell media [14,15]. Further, our recent studies showed that inhibition of extracellular Ang II formation resulted in a partial block of high-glucose-induced increase in TGF-b1 and matrix, whereas suppression of both intracellular and extracellular Ang II formation by Agt knockdown produced a greater inhibition of TGF-b1 and matrix [15]. These findings led us to hypothesize that intracellular Ang II may contribute to the overall increase in TGF-b1 and mesangial matrix proteins under high-glucose condition. Therefore, the present study was designed to investigate whether intracellular Ang II can independently affect TGF-b1 and matrix in mesangial cells without involvement of the extracellular Ang II signaling pathway. Cultured human mesangial cells were transfected with Ang II to increase intracellular Ang II levels whereas candesartan was used to block activation of extracellular Ang II signaling via the cell membrane AT1 receptors. The findings of the present study suggest that intracellular Ang II can increase TGF-b1 and mesangial matrix and also activates Stat3 transcription factor independent of the extracellular Ang II signaling pathway. Chemicals. Angiotensin II was purchased from Sigma Chemicals (St. Louis, Mo) and angiotensin II conjugated with fluorescein from Invitrogen (Carlsbad, CA). AG-490 and Jak inhibitor I were obtained from Calbiochem (EMD Chemicals Inc., Gibbstown, NJ). SDS, acrylamide/Bis, nitrocellulose membrane, Tween-20, ammonium persulphate, TEMED, and protein assay reagents were purchased from Bio-Rad laboratories (Hercules, CA) and other reagents from Sigma Chemicals (St. Louis, MO). Antibodies to total Stat3, β-actin, and goat anti-rabbit IgG conjugated with horseradish peroxidase (HRP) were obtained from Cell Signaling Technology (Danvers, MA) and anti-Jak2 antibody from CHEMICON (EMD Millipore, Danvers, MA). The protein molecular weight marker was obtained from Amersham (GE Healthcare, Piscataway, NJ) and the chemiluminescence detection kit from Pierce (Thermo Fisher Scientific, Rockford, IL). Candesartan was obtained from AstraZeneca Pharmaceuticals (Wilmington, DE). Transfection of Cells with Ang II. To study the role of intracellular Ang II specifically, intracellular levels of Ang II were increased using a protein transfection reagent (Proteojuice, Novagen, WI). Briefly, Ang II was mixed with proteojuice as per instructions from the supplier (Novagen) and incubated for 20 min at room temperature followed by 1 : 10 dilution with MsGM free of serum and supplements. Mesangial cells were then incubated with this media for 20 minutes to 24 h depending upon the experimental protocol. 
To inhibit binding of any free Ang II present in the proteojuice mixture to cell membrane AT1 receptors, cells were pretreated with candesartan to block AT1 receptors. At termination of experiments, cell media were collected and cells were used for preparation of either total cell lysates (in RIPA buffer) or cytosol and nuclear fractions (Active Motif, CA). Samples were stored at −70 • C until analyzed. Measurement of Ang II Levels by ELISA. Ang II levels in cell media (extracellular) and lysates (intracellular) were measured by a competitive inhibition ELISA (Peninsula-Bachem, Belmont, CA) as described previously by us [14]. Briefly, standards or samples along with anti-Ang II antibody and biotinylated Ang II peptide were incubated in a 96well plate for 2 h followed by incubation with streptavidinconjugated horseradish peroxidase for 1 h at room temperature. The final reaction in the well was developed with 3,3 ,5,5 -tetramethyl benzidine (TMB) substrate, terminated with 2N HCl, and read at 450 nm using an ELISA reader. Ang II levels in the samples were calculated from an Ang II standard curve run with each assay. Measurement of Matrix Proteins and Cell Proliferation. Cell media were dialyzed, lyophilized, and reconstituted at a known protein concentration. TGF-b1 levels were measured by a sandwich ELISA which employs a primary capture antibody and the avidin-biotin peroxidase detection system (R&D Systems, Minneapolis, MN) [3]. Collagen IV and fibronectin levels in cell media were measured by ELISA using commercially available kits from Exocell (Philadelphia, PA) and CHEMICON (EMD Millipore Danvers, MA), respectively. For determination of cell proliferation, mesangial cells were seeded in 96-well plates 24-48 prior to the assay. Cells were transfected with Ang II using proteojuice and incubated at 37 • C in 5% CO 2 and 95% air for 48 h after which proliferation of cells was measured using a colorimetric method (Roche Applied Sciences, IN). 2.6. Study of Jak2/Stat3 Pathway 2.6.1. Protein Expression of Jak2 and Stat3. Total cells lysates from mesangial cells treated with exogenous Ang II or transfected with Ang II were prepared in RIPA buffer (Santa Cruz Biotechnology, CA) and analyzed for protein expression of Jak2 and Stat3 by Western blotting. Samples were electrophoresed on 8-10% acrylamide gel and proteins transferred to nitrocellulose membrane. Incubation with anti-Jak2 or anti-Stat3 antibodies was carried out overnight at 4 • C followed by washings and incubation with a HRP-conjugated secondary antibody. The same membranes were stripped, and protein expression for β-actin (protein loading control) was determined. Protein bands were detected using chemiluminescence substrate (Pierce-Thermo Scientific, Rockford, IL) and analyzed by image analysis (Image J Software, National Institute of Health, Bethesda, MD). Results are expressed as the ratio of Jak2/β-actin or Stat3/β-actin. 2.6.2. Phosphorylation of Stat3. The phosphorylation of Stat3 was determined using a cell-based assay (SABiosciences, Frederick, MD). Briefly, human mesangial cells were seeded into 96-well cell culture plates 24-48 hr prior to the assay. Cells were divided into two sets and treated with exogenous Ang II or transfected with Ang II for 20 minutes after which media were removed and cells were fixed with 4% formaldehyde/1x phosphate buffered saline (PBS) buffer. 
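The competitive ELISA described above reads sample Ang II concentrations off a standard curve run with each assay. As a purely illustrative sketch (the standard concentrations, absorbances, and the four-parameter logistic form are assumptions made here, not values from the paper), the back-calculation from OD450 to concentration could look like this:

```python
# A minimal sketch (not from the paper) of reading Ang II concentrations off an ELISA
# standard curve with a four-parameter logistic (4PL) fit.  In a competitive ELISA the
# absorbance falls as the analyte concentration rises.  All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response with no analyte, d: response at saturating analyte,
    # c: inflection point, b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standards: Ang II (ng/mL) versus OD450.
std_conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
std_od   = np.array([1.95, 1.80, 1.45, 1.00, 0.60, 0.35, 0.20])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[2.0, 1.0, 0.5, 0.1])
a, b, c, d = params

def od_to_conc(od):
    # Invert the fitted 4PL curve to estimate concentration from absorbance.
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.80))  # estimated Ang II concentration for a sample well
```

The same inversion applies to any OD450 reading that falls within the range spanned by the standards.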
After washes and blocking, one set of cells was incubated for 1 h at room temperature with phospho-Stat3 serine 727 or phospho-Stat3 tyrosine 705 antibodies to measure phosphorylated Stat3, and the other set of cells was incubated with a pan-Stat3 antibody to measure total Stat3. This was followed by incubation with a HRP-conjugated secondary antibody for 1 h at room temperature. The final reaction was developed with TMB and absorbance read at 450 nm using an ELISA plate reader. In each well, the antibody reaction was normalized to the relative cell number which was determined using a cell staining kit (SABiosciences). Results are expressed as the ratio of phospho-Stat3/total Stat3. DNA Binding Activity of Stat3. Nuclear extracts from mesangial cells were prepared and used for determination of Stat3-DNA binding activity (Clontech Laboratories, Inc., CA). In brief, nuclear extract samples were incubated in a 96well plate coated with oligonucleotides containing the consensus DNA binding sequences for Stat3 transcription factor. Stat3 present in the sample recognized and bound to the specific consensus DNA sequence and the resulting DNA-Stat3 complex was detected by incubating the samples with a primary anti-Stat3 antibody followed by secondary incubation with an HRP-conjugated antibody. The final reaction was developed with TMB and read at 450 nm in an ELISA plate reader. The absorbance readings (OD 450 ) represented binding activity of Stat3 transcription factor. Statistical Analysis. Data were analyzed by Student's ttest and analysis of variance (ANOVA) (Instat, Graph-Pad, San Diego, CA) followed by posttest comparisons between groups. A P < 0.05 was considered significant. Values are expressed as mean ± SEM, and "n" denotes number of experiments in each group. Transfection of Human Mesangial Cells with Ang II. First, the feasibility of transfecting primary human mesangial cells with Ang II to increase intracellular Ang II levels using proteojuice (Novagen, WI) was examined. Cells cultured in Labtek chamber slides were incubated with Ang II labeled with fluorescein (Ang II-FITC) mixed with proteojuice for 30 min and examined under epifluorescence microscope (Carl Zeiss MicroImaging Inc., NY). Figure 1 represents a sample picture from one such experiment. Cells transfected with Ang II-FITC showed presence of green fluorescence (b) compared to nontransfected cells (a). Also, cells pretreated with 100 μM candesartan (an Ang II receptor blocker) followed by transfection with Ang II-FITC showed green fluorescence (c) similar to that observed in transfected cells without candesartan treatment (b). These observations suggested that transfection of Ang II using proteojuice could deliver Ang II intracellularly and that Ang II delivery by this method is not affected by treatment with AT1 receptor blocker. To study specific effects of intracellular Ang II on mesangial cell functions, it was important to block the extracellular Ang II signaling pathway activated by binding of any free Ang II present in the proteojuice mixture to cell membrane AT1 receptors. For this purpose, candesartan was chosen because of its physical property of binding tightly to AT1 receptor which prevents receptor activation and internalization [16]. Therefore, in all further experiments, mesangial cells were pretreated with candesartan (100 μM) for 1 h and then transfected with Ang II (1 μM) using proteojuice transfection reagent. 
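The statistical treatment described in the Methods above (values as mean ± SEM, comparisons by Student's t-test and ANOVA, significance at P < 0.05) can be sketched as follows; the group values are hypothetical numbers used only to show the shape of the calculation, not data from this study:

```python
# Illustrative sketch of the analysis described above: results expressed as percent of
# the NG control, compared by t-test and one-way ANOVA, reported as mean +/- SEM.
import numpy as np
from scipy import stats

ng        = np.array([100.0,  97.0, 103.0, 101.0,  99.0])   # NG control (% of control)
t_ang_ii  = np.array([148.0, 139.0, 151.0, 142.0, 150.0])   # NG + t-Ang II (hypothetical)
ex_ang_ii = np.array([152.0, 146.0, 158.0, 149.0, 143.0])   # NG + ex-Ang II (hypothetical)

def mean_sem(x):
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

for name, grp in [("NG", ng), ("NG + t-Ang II", t_ang_ii), ("NG + ex-Ang II", ex_ang_ii)]:
    m, sem = mean_sem(grp)
    print(f"{name}: {m:.1f} +/- {sem:.1f} (n = {len(grp)})")

t_stat, p_ttest = stats.ttest_ind(t_ang_ii, ng)            # two-group comparison
f_stat, p_anova = stats.f_oneway(ng, t_ang_ii, ex_ang_ii)  # across all three groups
print(f"t-test NG vs t-Ang II: P = {p_ttest:.4f}; ANOVA: P = {p_anova:.4f}")
print("significant" if p_ttest < 0.05 else "not significant")
```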
Candesartan was found to have no effect on proteojuice delivery of Ang II into mesangial cells ( Figure 1). Ang II Delivery by Proteojuice Increases Intracellular Levels of Ang II. To determine optimum conditions for increasing intracellular Ang II levels by the proteojuice transfection method, mesangial cells were incubated with 10 mM glucose (NG) alone or NG containing proteojuice and 10 −7 -10 −5 M of Ang II (NG + t-Ang II) for 30 min followed by measurement of intracellular Ang II in cell lysates. Intracellular Ang II levels increased with increasing concentrations of Ang II in the proteojuice mixture showing ∼1.7-fold increase with 10 −7 M Ang II (Figure 2(a)). Also, increases of ∼5-fold and ∼10-fold in intracellular Ang II levels were observed with 10 −6 M and 10 −5 M Ang II, respectively (Figure 2(a)). In separate experiments, mesangial cells were incubated with NG or NG containing 10 −6 M Ang II and proteojuice (NG + t-Ang II) mixture for 15 min-2h. A 1.5-fold increase in Ang II levels in the cell lysates (intracellular) was observed after 15 min of incubation (Figure 2(b)). Further, intracellular Ang II levels were increased by ∼6-fold after 30 min, ∼8-fold after 60 min, and ∼9-fold after 120 min, (Figure 2(b)). These results showed a concentration-and time-dependent increase in intracellular Ang II levels in response to Ang II transfection in mesangial cells. Intracellular Ang II Increases TGF-b1, Collagen IV, Fibronectin and Cell Proliferation. Next, the effects of increased intracellular Ang II levels on TGF-b1 and matrix proteins such as collagen IV and fibronectin were determined. Mesangial cells were incubated with 5 mM glucose alone (NG; control group) or NG containing a mixture of Ang II and proteojuice (NG + t-Ang II; transfection group) for 24 h, and cell media were analyzed for TGF-b1, collagen IV, and fibronectin levels. In Ang II transfected cells (NG + t-Ang II), TGF-b1 levels were significantly increased compared to control cells (NG + t-Ang II: 147 ± 6% versus NG: Ang II-FITC + proteojuice + candesartan (c) Figure 1: Transfection of human mesangial cells with Ang II using proteojuice. Human mesangial cells incubated for 30 min with a mixture of proteojuice and Ang II-FITC showed green fluorescence (b) compared to cells that were incubated with Ang II-FITC alone (a). In addition, cells incubated with proteojuice + Ang II-FITC and 100 μM candesartan also showed green fluorescence (c) suggesting that treatment with Ang II receptor blocker does not interfere with intracellular delivery of Ang II by proteojuice. 100 ± 3%; Figure 3) suggesting increased secretion of TGF-b1 in response to elevated intracellular Ang II levels. In Ang II transfected cells, the increase in TGF-b1 was accompanied by increases in levels of collagen IV (NG + t-Ang II: 144 ± 18%) and fibronectin (NG + t-Ang II: 140 ± 14%) ( Figure 3). Additionally, increased intracellular Ang II levels in Ang II-transfected cells (NG + t-Ang II) stimulated cell proliferation compared to cells incubated in NG alone (NG + t-Ang II: 138 ± 4% versus NG: 100 ± 9%, P < 0.05, n = 5). Since Ang II-transfected cells were pretreated with candesartan, these effects of intracellular Ang II appear to be mediated by intracellular signaling mechanisms different from the extracellular Ang II signaling pathway which is activated via AT1 receptors present on the cell membrane. Intracellular Ang II Signaling: Effect on Stat3 Transcription Factor. 
Since Stat3 transcription factor plays a key role in Ang II-mediated growth effects in mesangial cells [17], we tested the effect of intracellular Ang II on Stat3. to activate Stat3 via AT1 receptors in these cells (17). After 24 h of treatment, experiments were terminated, and total cell lysates (RIPA buffer) were prepared. Protein expression of total Stat3 was determined by Western blotting. As shown in Figure 4(a), mesangial cells treated with exogenous Ang II or transfected with Ang II showed increased protein expression of Stat3 transcription factor. Densitometry analysis of Western blots revealed a significant increase in Stat3 protein expression in cells treated with exogenous Ang II which was blocked by candesartan ( Figure 4(b)). These observations are in agreement with earlier reports showing activation of Stat3 by exogenous Ang II via AT1 receptors [17]. In mesangial cells transfected with Ang II (NG + t-Ang II), a significant increase in Stat3 protein expression was observed in response to increased levels of intracellular Ang II (Figure 4(b)). Because cells in NG + t-Ang II group were treated with candesartan prior to transfection with Ang II, these results suggest that the effect of intracellular Ang II on Stat3 protein is not mediated by the AT1 receptor-linked extracellular signaling pathway. Stat3 Binding Activity. Since tyrosine 705 phosphorylation is required for Stat3 nuclear translocation and DNA binding, the effect of intracellular Ang II on Stat3 DNA binding activity was examined. Mesangial cells were incubated with 5 mM glucose (NG), NG containing 1 μM exogenous, or transfected with 1 μM Ang II for 24 h, and nuclear extracts were prepared and assayed for Stat3 DNA binding activity. A significant increase in Stat3 DNA binding activity was observed in mesangial cells treated with exogenous Ang II (NG + ex-Ang II) or transfected with Ang II (NG + t-Ang II) compared to NG control (NG + ex-Ang II: 128 ± 8%; NG + t-Ang II: 126 ± 3%; NG: 100 ± 11%; n = 4; P < 0.05 versus NG). Thus, these results showed that intracellular Ang II increased Stat3 phosphorylation (Tyr705) and DNA binding activity as well. Role of Jak2 in Intracellular Ang II-Induced Activation of Stat3. Jak2, a cytosolic tyrosine kinase, is shown to cause activation of the latent cytoplasmic transcription factor such as Stat3 in mesangial cells [17]. For this reason, the role of Jak2 in intracellular Ang II-induced activation of Stat3 was investigated by utilizing Jak2 inhibitors such as AG-490 and Jak inhibitor I. AG-490 was chosen because in mesangial cells, it has been shown to inhibit Ang II-induced collagen IV protein synthesis [18] and high-glucose-induced increase in TGF-b1 and fibronectin synthesis along with inhibition of Stat3 tyrosine phosphorylation [19]. Jak inhibitor I is a more selective inhibitor of Jaks with much less effects on other kinases (Calbiochem EMD Chemicals Inc., NJ). Effect of Jak2 Inhibition on Stat3 Protein Expression Effect of AG-490. Human mesangial cells were incubated with 5 mM glucose (NG; control) or NG containing 1 μM of exogenous Ang II (NG + ex-Ang II) or NG containing 1 μM Ang II mixed with proteojuice (NG + t-Ang II) for 24 h. In separate groups, cells were coincubated with exogenous Ang II or Ang II/proteojuice mixer and 10 μM AG-490 for 24 h. At termination of experiments, total cell lysates were prepared and analyzed for Jak2 and Stat3 protein expression by Western blotting. 
As shown in Figure 6(a), exogenous Ang II increased Jak2 as well as Stat3 protein expression, whereas intracellular Ang II increased Stat3 protein without any effect on Jak2. Densitometry analysis of Western blots showed a significant increase in Stat3 protein in cells treated with exogenous Ang II or transfected with Ang II compared to NG controls (Figure 6(b)). Treatment with AG-490 inhibited exogenous Ang II-induced increase in Stat3 protein but failed to block increase in Stat3 protein expression in Ang IItransfected cells (Figure 6(b)). These findings suggest that the effect of intracellular Ang II on Stat3 may be mediated via a Jak2-independent mechanism. Effect of Jak Inhibitor I. To study the effect of Jak inhibitor I on intracellular Ang II-induced increase in Stat3 protein, experiments were set up as described above for AG-490 except that 10 μM Jak inhibitor I was added to the media of cells incubated with 1 μM exogenous Ang II (NG + ex-Ang II or 1 μM Ang II mixed with proteojuice (NG + t-Ang II). After 24 h, experiments were terminated, and total cell lysates were prepared and analyzed for Jak2 and Stat3 protein expression by Western blotting. Both treatment with exogenous Ang II or transfection with Ang II increased Stat3 protein expression in mesangial cells (Figure 7(a)). Densitometry analysis also revealed an increase in Stat3 protein in cells treated with exogenous Ang II or transfected with Ang II (Figure 7(b)). Treatment with Jak2 inhibitor I failed to inhibit increase in Stat3 protein in either exogenous Ang II-treated or Ang IItransfected cells (Figure 7(b)). Also, there was no effect of Jak inhibitor I on Jak2 protein expression in cells treated with exogenous Ang II or transfected with Ang II (Figures 7(a) and 7(b)). Since Jak inhibitor I primarily targets Jak1, it is likely that it may not have any effects on Jak2 in human mesangial cells as suggested by these results. Effect of Jak Inhibition on Stat3 (Tyr705) Phosphorylation. In further experiments, the effect of AG-490 or Jak inhibitor I on intracellular Ang II-induced phosphorylation of Stat3 (Tyr705) was determined. Mesangial cells were incubated with 5 mM glucose (NG) or NG containing 1 μM exogenous Ang II (NG + ex-Ang II) or NG containing 1 μM Ang II mixed with proteojuice (NG + t-Ang II). Also, cells treated with exogenous Ang II or transfected with Ang II were incubated with 10 μM of either AG-490 or Jak inhibitor I. After 20 minutes of incubation, cells were fixed and assayed for phosphorylated Stat3 (Tyr705) and total Stat3. A significant increase in Stat3 (Tyr705) phosphorylation was observed in cells exposed to exogenous Ang II (ex-Ang II) or transfected with Ang II (NG + t-Ang II) compared to cells incubated in 5 mM glucose (NG) alone (NG: 0.64 ± 0.09; NG + ex-Ang II: 1.08 ± 0.12; NG + t-Ang II: 0.98 ± 0.06) (Figure 8). Treatment with AG-490 did not inhibit intracellular Ang II-induced phosphorylation of Stat3 (Tyr705) in Ang II-transfected cells, whereas exogenous Ang II-initiated Stat3 (Tyr705) phosphorylation was significantly reduced in the presence of AG-490 ( Figure 8). In contrast, there was no effect of Jak inhibitor I on Stat3 (Tyr 705) phosphorylation in either exogenous Ang II-treated or Ang II-transfected cells (Figure 8). These results suggest that intracellular Ang II may use a Jak2-independent mechanism for Stat3 phosphorylation (Tyr705) in contrast to a Jak2dependent mechanism employed by exogenous (extracellular) Ang II. 
Discussion The main objective of the present study was to determine whether intracellular Ang II could independently stimulate TGF-b1 and mesangial matrix without involvement of the extracellular Ang II signaling pathway. Cultured human mesangial cells were transfected with Ang II to increase intracellular Ang II levels, while the extracellular Ang II pathway was blocked by pretreatment of cells with candesartan, an Ang II receptor antagonist. Candesartan was chosen due to its physical property of tight binding to AT1 receptor which traps the receptor at the membrane [16] and prevents AT1 receptor-linked activation of the signaling pathway. Our results showed that transfection of mesangial cells with Ang II increased intracellular Ang II levels in a concentration-and time-dependent manner. Further, mesangial cells transfected with Ang II showed stimulation of TGF-b1, collagen IV, and fibronectin secretion in response to increased levels of intracellular Ang II. Also, mesangial cell proliferation was increased in transfected cells due to elevated levels of intracellular Ang II. Because these effects of intracellular Ang II were noted while cell membrane AT1 receptors were blocked by candesartan, our findings suggest that intracellular Ang II could initiate physiological responses without involving extracellular Ang II signaling pathways which are activated by the cell membrane AT1 receptors. Most of the known effects of Ang II are induced by extracellular Ang II via activation of AT1 receptors present on the cell membrane [20]. The binding of Ang II to AT1 receptor initiates many signaling events including activation (phosphorylation) of Jak tyrosine kinases and Stat family of latent cytoplasmic transcription factors [17]. Ang II also stimulates formation of Stat3 homo-and hetrodimers complexes that translocate to the nucleus and bind to specific DNA motifs resulting in activation of the early growth response gene [21]. Several studies by Marrero and associates reported that the phosphorylation of Jak2 and Stat3 by Ang II is critical for Ang II-mediated growth effects such as activation of TGF-b1, synthesis of matrix proteins, and cell proliferation [17]. In the present study, an increase in Stat3 protein expression was found in mesangial cells treated with exogenous (extracellular) Ang II as well as transfected with Ang II (intracellular Ang II). In further experiments, the increased intracellular Ang II levels in Ang II-transfected cells was found to cause a significant increase in phosphorylation of Stat3 at tyrosine 705 (Tyr705) but not at serine 727 (Ser727) residue. This was in contrast to exogenous Ang II which caused phosphorylation of Stat3 at both tyrosine (Tyr705) and serine (Ser727) residues. Interestingly, Ang II (exogenous) was also found to induce tyrosine and serine phosphorylation of Stat3 in a study using other cell systems [22]. The same study showed that Ang IIinduced phosphorylation of Stat3 at serine 727 is mediated by activation of extracellular regulated kinases 1 and 2 (ERK 1/2) [22]. In mesangial cells transfected with Ang II, we did not observe activation (phosphorylation) of ERK 1/2 in response to increased intracellular Ang II levels (data not shown) indicating that intracellular Ang II may not induce Stat3 phosphorylation at serine 727 residue. In Ang IItransfected cells, increased Stat3 phosphorylation (Tyr705) was accompanied by a significant increase in Stat3 DNA binding activity. 
Because cells transfected with Ang II were pretreated with candesartan, these findings suggest that intracellular Ang II causes tyrosine 705 phosphorylation of Stat3 independent of cell membrane AT1 receptors and promote Stat3 DNA binding activity which is important for activation of gene transcription. The role of Jak2 in extracellular Ang II-induced activation and translocation of Stat3 is well documented [17]. Studies in other cell systems have demonstrated that Ang II binding to AT1 receptor initiates a physical association between carboxyl terminal of AT1 receptor with Jak2, which is a critical event for activation of Jak2 kinase [23]. Indeed, in glomerular mesangial cells, exogenous (extracellular) Ang II is shown to activate Jak2 resulting in tyrosine phosphorylation and nuclear translocation of Stat3 [18]. In the present study, the role of Jak2 in intracellular Ang II-induced phosphorylation of Stat3 was investigated, and mesangial cells treated with exogenous Ang II were included as positive controls. Treatment with AG-490, an inhibitor of Jak2, was found to block Stat3 phosphorylation (Tyr705) in mesangial cells exposed to exogenous Ang II in agreement with earlier reports [18]. To our surprise, Jak inhibitor I failed to block phosphorylation of Stat3 (Tyr705) in response to treatment with exogenous Ang II. Whereas in mesangial cells, AG-490 is shown to inhibit the effect of exogenous Ang II on collagen IV synthesis [18] and high glucose on TGF-b1 synthesis and Stat3 tyrosine 705 phosphorylation [19], not much is known about the effects of Jak2 inhibitor I in these cells. Treatment with AG-490 in mesangial cells transfected with Ang II did not block phosphorylation of Stat3 (Tyr705) suggesting that a Jak2-independent mechanism may be involved in intracellular Ang II-induced tyrosine 705 phosphorylation of Stat3. Interestingly, in other cell system, Ang II-induced tyrosine 705 phosphorylation and nuclear translocation of Stat3 is also shown to be mediated by c-Src, a nonreceptor kinase [24]. However, the functional role of c-Src in intracellular Ang II-induced Stat3 phosphorylation (Tyr705) in human mesangial cells remains to be tested. At present, not much is known on the mechanisms by which intracellular Ang II can influence mesangial cell functions. Recently, it is proposed that intracellular Ang II is stored in endosomes and upon release into the cell cytoplasm may increase production of reactive oxygen species (ROS) by direct interaction with mitochondria [25]. Previous studies have also suggested that Ang II could exert intracellular effects by binding to its receptors present in various cytoplasmic organelles including the nucleus [26]. Indeed, intracellular Ang II is shown to cause calcium mobilization in renal proximal tubular cells [27] and cell proliferation in Chinese hamster ovary cells [28] independent of cell membrane AT1 receptors. Studies have also reported the existence of intracellular AT1 receptors in renal cortical nuclei [29] and in renal cortex and medulla of rat kidney [30]. Moreover, in isolated rat cortical nuclei, Ang II increased transcription of TGF-b1 mRNA by activation of nuclear AT1 receptors [31]; whether such mechanism operates in mesangial cells remains an open question. In summary, the present study showed that intracellular Ang II activates Stat3 via a Jak2-independent mechanism in contrast to extracellular Ang II-induced Stat3 activation which is mediated by Jak2. 
Since both pathways appear to converge on Stat3 using different routes, they could exert synergistic effects on activation of Stat3 transcription factor resulting in a greater stimulation of gene transcription of TGF-b1 and matrix proteins, especially under high-glucose condition when both intracellular and extracellular levels of Ang II are increased [14,15]. It is noteworthy that intracellular Ang II-initiated responses were observed in the presence of candesartan, thus suggesting that ARBs are unable to block the intracellular component of Ang II signaling. This might also explain why these agents (ARBs) commonly used in clinical practice fail to completely block the progression of diabetic nephropathy. It is not known yet whether intracellular Ang II receptors are structurally identical to the cell membrane AT1 receptors or belong to a subclass of AT1 receptors which participate in the intracellular Ang II and/or nuclear signaling. There is clearly a need for further understanding of the intracellular Ang II receptors and/or signaling mechanisms for more effective control of RAS activity in diabetes and for better treatment of diabetic nephropathy.
6,963.6
2011-09-12T00:00:00.000
[ "Biology", "Medicine" ]
Elementary exact calculations of degree growth and entropy for discrete equations Second-order discrete equations are studied over the field of rational functions C(z), where z is a variable not appearing in the equation. The exact degree of each iterate as a function of z can be calculated easily using the standard calculations that arise in singularity confinement analysis, even when the singularities are not confined. This produces elementary yet rigorous entropy calculations. Introduction We will consider second-order discrete equations such as
$$y_{n+1} + y_{n-1} = \frac{a_n + b_n y_n}{1 - y_n^2}, \qquad (1.1)$$
where (a_n) and (b_n) are as yet undetermined sequences in C. One of the first approaches to finding integrable cases of discrete equations such as (1.1) was the singularity confinement test of Grammaticos et al. [1], which has been used to identify many discrete Painlevé equations [2]. The main idea, based on an analogy with the famous Painlevé property for differential equations, is to study the behaviour of iterates after y_n takes a singular value (e.g. 1 or −1 in the case of equation (1.1)). Generically, infinitely many future iterates will be infinite, but for some special choices of (a_j) and (b_j), the singularity will be confined. Although it is well known that singularity confinement is not a sufficient condition for a discrete equation to be integrable (in particular, some equations with the property are known to exhibit chaotic behaviour), this property, appropriately interpreted in different contexts, is known to be necessary in order to ensure that several measures of complexity of a solution y_n grow slowly compared with solutions of generic equations. In this paper, we will show how one can use little more than the standard calculations one performs when looking for singularity confinement in order to calculate such a measure of complexity rigorously yet simply. We begin by illustrating a standard minimal analysis of equation (1.1) from the point of view of singularity confinement. In order to analyse the iterates beyond a singularity of equation (1.1), we consider that for a fixed integer n, y_{n-1} takes an arbitrary finite value, say k, and y_n = θ + ε, where θ is either 1 or −1 and ε is a small parameter. We then calculate the next few terms in the Laurent series in ε for the subsequent iterates. This gives
$$y_{n-1} = k, \qquad y_n = \theta + \varepsilon, \qquad \theta = \pm 1,$$
$$y_{n+1} = -\frac{a_n + \theta b_n}{2\theta}\,\varepsilon^{-1} + O(1), \qquad y_{n+2} = -\theta + \frac{2\theta b_{n+1} - \theta b_n - a_n}{a_n + \theta b_n}\,\varepsilon + O(\varepsilon^2) \qquad (1.2)$$
and
$$y_{n+3} = \frac{a_n + \theta b_n}{2\theta}\cdot\frac{a_{n+2} - a_n - \theta(b_{n+2} - 2b_{n+1} + b_n)}{\theta(2b_{n+1} - b_n) - a_n}\,\varepsilon^{-1} + O(1),$$
where we have assumed that a_n ≠ ±b_n and a_n ≠ ±(2b_{n+1} − b_n). In the limit ε → 0, we see that y_{n+1} = ∞ and y_{n+2} = −θ. Generically, y_{n+3} is also infinite unless
$$a_{n+2} - a_n = \theta(b_{n+2} - 2b_{n+1} + b_n). \qquad (1.3)$$
In order to confine all such singularities in this way, we demand that equation (1.3) holds for all n and for both choices θ = 1 and θ = −1. Hence (1.3) decouples into the pair of linear equations a_{n+2} − a_n = 0 and b_{n+2} − 2b_{n+1} + b_n = 0 and equation (1.1) becomes
$$y_{n+1} + y_{n-1} = \frac{\alpha + \beta(-1)^n + (\gamma n + \delta)y_n}{1 - y_n^2}, \qquad (1.4)$$
where α, β, γ and δ are constants. Equation (1.4) with γ ≠ 0 is known to have a continuum limit to the second Painlevé equation and is often referred to as dP II, usually in the special case β = 0. Equation (1.4) with β = 0 first appeared in the work of Periwal & Shevitz [3] on exactly solvable string theories. It is the compatibility condition for a related linear problem and it is known to be a reduction of an integrable lattice equation [4].
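The Laurent-series pattern in (1.2) is easy to check numerically: iterate equation (1.1) from y_{n-1} = k and y_n = θ + ε with a small ε and watch whether y_{n+3} blows up as ε shrinks. The sketch below uses arbitrary illustrative coefficient sequences (assumptions made here, not taken from the paper); the second choice satisfies (1.3) for all n and for both signs of θ, so the singularity is confined.

```python
# Quick numerical check of the singularity pattern in (1.2): start from y_{n-1} = k,
# y_n = theta + eps and iterate equation (1.1).  Coefficients are illustrative choices.
from fractions import Fraction

def iterate(a, b, y_prev, y_curr, steps):
    """Iterate y_{n+1} = (a_n + b_n*y_n)/(1 - y_n**2) - y_{n-1}, starting at n = 1."""
    ys = [y_prev, y_curr]
    for n in range(1, steps + 1):
        y_next = (a(n) + b(n) * y_curr) / (1 - y_curr ** 2) - y_prev
        y_prev, y_curr = y_curr, y_next
        ys.append(y_curr)
    return ys

eps, k, theta = Fraction(1, 10**6), Fraction(3), 1

# Generic (non-confining) coefficients: y_{n+3} grows like 1/eps, so the singularity persists.
a_gen = lambda n: Fraction(n + 2)
b_gen = lambda n: Fraction(2 * n + 1)
print([float(y) for y in iterate(a_gen, b_gen, k, theta + eps, 4)])

# Coefficients satisfying (1.3) for all n and both signs of theta
# (a_n constant, b_n linear in n, the beta = 0 case of (1.4)): y_{n+3} stays finite.
a_conf = lambda n: Fraction(2)
b_conf = lambda n: Fraction(3) * n + 1
print([float(y) for y in iterate(a_conf, b_conf, k, theta + eps, 4)])
```

Exact rational arithmetic is used so that the O(ε) and O(1/ε) behaviour is visible without floating-point noise.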
Despite the success of this method in identifying a large number of discrete integrable equations, it is well known that some non-integrable equations also possess the singularity confinement property. For example, Hietarinta & Viallet [5] considered the equation where a is a non-zero constant, which has the singularity confinement property, yet it exhibits chaotic behaviour. They suggested that the complexity of solutions as measured by algebraic entropy should be considered. By considering y 0 and y 1 as variables, each future iterate y n of an equation such as equation (1.1) is a rational function of y 0 and y 1 . The algebraic entropy is a measure of how fast the degree d n of y n as a rational function of y 0 and y 1 grows. Specifically, the algebraic entropy is given by Integrability is associated with zero algebraic entropy, which corresponds to polynomial, as opposed to exponential, growth in d n . Algebraic entropy is related to ideas of complexity growth discussed in Arnol'd [6], Veselov [ A practical method for calculating the algebraic entropy is to obtain a finite list of degrees d n and then determine a generating function, from which the algebraic entropy can be determined simply [5]. Bellon [9] showed that discrete equations giving rise to a foliation of phase space by invariant curves have zero algebraic entropy; however, this result cannot be used to deduce the algebraic entropy of the discrete Painlevé equations. Rigorous methods based on a detailed analysis of the regularization of the equation through a sequence of blow-ups have also been applied [10,11]. Methods based on estimating the degree of cancelling factors have also provided rigorous bounds on the degree growth [12]. Studies of the cancellation and factorization properties of iterates have also been used in [13] to calculate algebraic entropy. In this paper, we will consider y 0 and y 1 to be rational functions of an auxiliary parameter z and we will calculate the degree of all subsequent iterates y n as functions of z. Rational functions of a single complex variable are much easier to deal with than rational functions of more than one variable. In particular, we do not need to consider blow-ups or cancellations to keep track of degrees. We will show how, with essentially no modification, standard singularity confinement calculations such as the one above can be used directly to determine the degrees of iterates. To calculate the degree of y n , the only extra information required from the equation is an analysis of some other singular initial conditions, which is often trivial. This measure of complexity has also been used in [14,15] where lower bounds on the degrees of iterates were obtained to show that many equations had exponential growth of degrees. In this paper, we are able to calculate the degrees exactly. Studies of the images of straight-line initial conditions in projective space (corresponding to degree one initial conditions in our setting) have been used by Bellon & Viallet [8] and Viallet [16] to calculate degrees of iterates and algebraic entropy. In this paper, we emphasize the elementary (almost naive) calculations that are required to calculate the entropy rigorously and remark that these calculations are essentially the same ones that researchers have been doing in studying confinement. Another advantage of this approach is that it allows us to study one-parameter families of solutions with lower complexity than the general solution. 
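Before continuing, the practical recipe mentioned above — compute a finite list of degrees d_n and read the entropy off it — can be illustrated with a short sketch. The two degree sequences below are illustrative stand-ins (quadratic growth and growth like 2^n), not degrees computed from any particular equation.

```python
# Sketch of estimating the algebraic entropy E = lim (1/n) log d_n from a finite list
# of degrees.  The sequences here are illustrative, not computed from an equation.
import math

def entropy_estimates(degrees):
    """Two crude estimates of the entropy from a finite list d_1, d_2, ..., d_n."""
    ratio = math.log(degrees[-1] / degrees[-2])     # log d_n - log d_{n-1}
    direct = math.log(degrees[-1]) / len(degrees)   # roughly (1/n) log d_n
    return ratio, direct

d_poly = [n * n - n + 1 for n in range(1, 40)]   # quadratic growth: entropy 0
d_expo = [2 ** n - 1 for n in range(1, 40)]      # growth like 2**n: entropy log 2

print(entropy_estimates(d_poly))   # both estimates tend to 0 as the list lengthens
print(entropy_estimates(d_expo))   # both tend to log 2 ~ 0.693
```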
In this way, it can be used to look for integrable sub-cases of otherwise non-integrable equations or special solutions of integrable equations. It should be stressed that, although we are mostly considering the kind of calculations that appear in singularity confinement analysis, we do not require that the singularities be confined. These calculations merely provide the book-keeping for relating the various frequencies of certain singular values among nearby iterates. This is yet another instantiation of the observation that most rigorous methods to estimate the growth of some measure of the complexity of a discrete equation ultimately demand an analysis of the singularities of the equation in the spirit of singularity confinement. Motivated by earlier work of Okamoto on the space of initial conditions for the (differential) Painlevé equations, Sakai [17] obtained a large number of discrete equations of Painlevé type by considering dynamical systems on CP 2 blown-up at nine points (equivalently CP 1 × CP 1 blown-up at eight points). The spaces so obtained are the spaces of initial conditions for the equations. It is well known that singularity confinement has an interpretation in terms of the resolution of singularities of mappings via a sequence of blow-ups. In [10], Takenawa used the Picard group associated with this sequence of blow-ups to show rigorously that the discrete Painlevé equations arising from Sakai's construction have zero algebraic entropy (in fact the degree growth is quadratic). The degree of the nth iterate of a discrete equation relating three points can be shown itself to satisfy a recurrence with integer coefficients and a degree bounded in terms of the number of points that need to be blown-up to regularize the equation. So in principle one can determine a finite number of degrees to find this recurrence. However, quite a lot of work is needed to determine the number of blow-ups needed for a given equation. Also, iterating an equation to determine the degree when that degree grows exponentially is very difficult to do without a computer. Singularity analysis along the lines of standard singularity confinement calculations also plays a key role in both Diophantine integrability [18] and the Nevanlinna approach to discrete integrability [19] in concluding the precise forms of certain integrable equations. In particular, it is invaluable in determining the precise form of coefficients in non-autonomous equations. In both these settings, one can obtain quite strong estimates on the degrees of various rational functions of the dependent variables in integrable equations, as shown in [18,19]. However, in order to obtain the precise forms of equations, including the dependence on the independent variable, it has been shown in the examples considered in [20][21][22][23] that singularity confinement is a necessary condition for slow growth of the relevant measure of complexity. The measures of degree growth provided by Nevanlinna theory (the growth of the Nevanlinna characteristic), Diophantine integrability (growth of the height of solutions in a number field) and the growth of degrees as studied in this paper are discussed in [14] where a unifying theme is the use of singularity analysis to obtain lower bounds for complexity growth precise enough to detect exponential growth. In particular, the singularity confinement calculations in each setting are illustrated in detail to emphasize their similarities and differences. 
However, the analysis in this paper appears to be by far the simplest application of confinement to obtain a rigorous and precise measure of complexity. Exact calculations of degrees There are two equivalent characterizations of the degree of a rational function of a single complex variable z. Let R(z) = P(z)/Q(z), where P and Q are polynomials with no common factors. Then the degree of R is given by deg(R) = max{deg(P(z), deg(Q(z)}. However, for our purposes it is most practical to view R as a map from the extended complex plane CP 1 = C ∪ {∞} to itself. Let a be any number in the extended complex plane. Then the deg(R) is the number of pre-images of a in CP 1 counting multiplicities. For example, the degree of the rational function is five. The five pre-images of ∞ under R, listed according to multiplicity, are 0, 1, 1, ∞, ∞. (a) dP II In this section, we will use the calculation (1.2) to relate the number of pre-images of 1, −1 and ∞ of different iterates y n for dP II , equation (1.4). Suppose that y n (z) has a θ-point of multiplicity p at z = z 0 , where θ = ±1. Then y n (z) = θ + , where = (z − z 0 ) p f (z), where f is analytic at z 0 and f (z 0 ) = 0. Furthermore, assume that y n−1 takes some finite value k at z = z 0 . We assume that θ(α + β(−1) n ) + (γ n + δ) = 0, which is always true for sufficiently large n. As z tends to z 0 we have and Note that this is exactly the same calculation as (1.2) with a n = α + β(−1) n and b n = γ n + δ, apart from the 'o(1)' term in the expression for y n−1 , which plays no role in the calculation. We will assume that θ (α + β(−1) n ) + (γ n + δ) = 0 and γ n + 2γ + δ − θ(α + β(−1) n ) = 0 for all n ≥ 1. Note that these conditions are automatically satisfied for sufficiently large n, so by a translation in n, this condition can be satisfied if we provide initial conditions at a large value of n, rather than at n = 0. We see that, at any point z 0 where y n−1 and y n are both finite, then y n+1 (z 0 ) can only be infinite if y n (z 0 ) = ±1. Furthermore, the calculation (2.1) shows that in such a situation, the iterates Hence each iterate has a simple pole at z = ∞. In figure 1, each vertical line represents a copy of CP 1 , which is the domain of the corresponding y n indicated beneath it. The point at infinity is indicated at the top of the line and the '∞' indicates that y n has a simple pole there. As y 1 has degree one, it has a single 1-point (of multiplicity one). This gives rise to a simple pole of y 2 and a −1-point of y 3 . Similarly, there is a single −1-point of y 1 giving rise to a simple pole of y 2 and a 1-point of y 3 . Hence, there are exactly three (simple) poles of y 2 (including the pole at infinity) and so the degree of y 2 is three. As y 2 has degree three, it must have exactly three 1-points, counting multiplicities. In principle, this could be three simple 1-points or a 1-point of multiplicity three, etc. Now each such 1-point gives rise to the same number of infinities (i.e. poles) of y 3 , counting multiplicities. So the three 1-points of y 2 generate three infinities of y 3 and similarly the three −1-points of y 2 generate three infinities of y 3 . Together with the simple pole at z = ∞, we see that y 3 has seven infinities and hence it has degree seven. Therefore, y 3 has seven 1-points. One of these points comes from the −1 point of y 1 . So there are six 'new' 1-points. We introduce the notion N n to describe new 1-points in this context. 
Apart from the simple pole at z = ∞, y 4 has N 3 = 6 infinities generated by these 1-points and another N 3 = 6 infinities generated by the new −1-points of y 3 . Hence the degree of y 4 is 13. Note that, for n > 0, y n (z 0 ) can only equal one as part of a sequence 1, ∞, −1 or −1, ∞, 1. In the first case, we have called the 1-point 'new' as it is the beginning of the sequence. In the latter case, we call the 1-point 'old' as it is part of a sequence that began two steps earlier. The general case is illustrated in figure 2. We calculate the degree d n+1 of y n+1 by counting the pre-images of ∞. Now y n+1 has N n infinities generated from the new 1-points of y n and another N n from the new −1-points of y n . Together with the simple pole at z = ∞, we have This obviously corresponds to zero algebraic entropy. For more general initial conditions, the poles of y 0 and y 1 can give rise to a string of poles of bounded multiplicity at the corresponding points of future iterates. However, we are still led to an equation in which d n+1 − 2d n + d n−1 is a bounded function of n, giving growth that it at most quadratic in n. It is important to emphasize that this kind of reasoning in which we use pre-images of singular points to relate the degrees of different iterates does not rely explicitly on confinement, but it does use the kind of singularity analysis that one carries out in the context of studying singularity confinement. For example, if a n and b n are generic functions of n, then no singularity will be confined at any point. In this case, a pole of some iterate will arise at a point z 0 if the two previous iterates are both finite at z = z 0 if and only if the second value is θ = ±1. This gives rise to an infinite sequence of iterates of the form θ , ∞, −θ, ∞, θ, ∞, −θ, ∞, θ, ∞, . . .. If we start with the same initial conditions y 0 (z) = Az + B and y 1 (z) = Cz + D, then again every subsequent iterate will have a simple pole at z = ∞ and every pole in the finite plane must arise in a sequence of the form just described. So for n > 0, the only poles of y n+1 apart from the simple pole at infinity arise from each of the +1and −1-points of y n . In terms of degrees, there are 2d n such points, so the degrees satisfy d n+1 − 1 = 2(d n − 1), i.e. d n+1 = 2d n − 1, n ≥ 1. Using d 1 = 1, we have d n = 2 n − 1. Hence the entropy is log 2. For non-generic choices of the coefficients a n and b n , it is known that there are infinitely many opportunities to confine the singularities of equation (1.1) by choosing appropriate (a n ) and (b n ). Only those equations that confine at the earliest opportunity appear to be integrable and have zero algebraic entropy [24]. In [25], this phenomenon is called late as opposed to the infinitely late confinement just discussed. Knowing where each type of singularity confines (or knowing that it does not confine at all) is enough to calculate the degrees for given initial conditions. For special initial conditions, the degree growth of solutions of equation (1.4) can be slower than quadratic. In the simplest case, let us again take y 0 and y 1 to be degree one rational functions. Without loss of generality, we take y 0 (z) = z. In general, the simple pole of y 0 at z = ∞ and the simple 1-point and −1-point of y 1 will force y 2 to have exactly three simple poles and hence the degree of y 2 would be three. We could prevent the pole at z = ∞ of y 0 from producing a pole at z = ∞ of y 2 by insisting that y 1 is either −1 or 1 at z = ∞. 
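The degree counting just described predicts d_n = 1, 1, 3, 7, 13, ... for degree-one initial conditions, so that d_{n+1} − 2d_n + d_{n-1} = 2. This can be checked directly by iterating over rational functions of z, as in the following sketch; the parameter values and initial data are arbitrary generic choices, not taken from the paper.

```python
# Direct symbolic check of the degree counting above for equation (1.4): iterate with
# degree-one initial data in z and record deg y_n = max(deg numerator, deg denominator).
# The parameters and initial conditions are arbitrary generic choices.
import sympy as sp

z = sp.symbols('z')
alpha, beta, gamma, delta = 2, 0, 1, 3   # a_n = alpha + beta*(-1)**n, b_n = gamma*n + delta

def rational_degree(f):
    num, den = sp.fraction(sp.cancel(f))
    return max(sp.degree(num, z), sp.degree(den, z))

y_prev, y_curr = z, 2*z + 5              # generic degree-one initial conditions
degrees = [1, 1]
for n in range(1, 7):
    a_n = alpha + beta * (-1) ** n
    b_n = gamma * n + delta
    y_next = sp.cancel((a_n + b_n * y_curr) / (1 - y_curr**2) - y_prev)
    y_prev, y_curr = y_curr, y_next
    degrees.append(rational_degree(y_curr))

print(degrees)   # expected for generic data: 1, 1, 3, 7, 13, 21, 31, 43
```

The printed sequence grows quadratically (d_n = n^2 − n + 1 for this normalisation), which is the zero-entropy behaviour discussed above.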
If y 1 (∞) = −1 and y 2 (∞) is finite then as z → ∞. We can then force the degree of y 2 to be one by choosing y 1 to have a pole at z = 1 and y 1 (−1) to be finite. In a sense, we are choosing the −1 point of y 0 to be old and the 1-point to be new in the way described above. This uniquely specifies y 1 to be It is straightforward to verify that if γ = 2α then the solution y n of equation (1.4) with the initial conditions y 0 (z) = z and y 1 (z) given by (2.5) also solves the discrete Riccati equation As y n+1 is a Möbius transformation of y n , we see that the degree of all iterates is one, so there is no degree growth at all. Other special initial conditions produce solutions that can be expressed in terms of solutions of discrete linear equations. In fact, if for some choice of θ = ±1, we demand that a non-constant in z solution y n (z) of equation (1.1) only has singularities of the form θ, ∞, −θ, then we can find solutions governed by the discrete Riccati equation where a n and b n have the special form for some sequence f n . In this way, slow growth (or in this case, non-growth) of the degree of iterates singles out special integrable sub-classes of solutions of otherwise non-integrable equations. (b) An example of Hietarinta & Viallet Now we turn to the example of Hietarinta & Viallet [5], equation (1.5). The only way that an iterate can become infinite starting from finite initial values is if the previous iterate has a zero. To this end, suppose that y n (z) has a zero of multiplicity p at z = z 0 . Then, and To summarize, if y n−1 (z 0 ) and y n (z 0 ) are finite but y n+1 (z 0 ) is not, then y n has a zero of some multiplicity p at z 0 , y n+1 and y n+2 both have poles of multiplicity 2p at z 0 and y n+3 again has a zero of multiplicity p. Also, y n+4 is finite at z 0 . The fact that there are many more poles compared with zeros is the source of the positive entropy (and ultimately the non-integrability) of this equation. We again choose initial conditions y 0 = Az + B and y 1 = Cz + D. If AC(A − C) = 0, then all iterates will have a simple pole at z = ∞. We calculate the degree d n of y n with the aid of figure 3. Here, N n denotes the number of 'new' zeros of y n , i.e. those zeros at the beginning of a sequence of the form 0, ∞ 2 , ∞ 2 , 0. The only poles of y n+1 in the finite complex plane come from sequences that began from new zeros of y n and new zeros of y n−1 . Recalling that the poles have twice the multiplicity of these zeros, and including the simple pole at z = ∞, gives d n+1 = 2(N n + N n−1 ) + 1. Next we calculate the degree of y n+2 as the number of pre-images of 0. Each of the old zeros of y n+2 comes from a new zero of N n−2 . So Together with the initial conditions d 0 = d 1 = 1, we find It follows that the entropy is This value of the algebraic entropy of equation (1.5) was also obtained rigorously by Takenawa [10,11] after 14 blow-ups of CP 1 × CP 1 . (c) dP I Consider the equation (d) dP III We will study the integrable discrete equation dP III , which has the form where a + = a − and b + = b − are constants. Equation (2.9) was first identified in the seminal paper [2] by Ramani et al. Equation (2.9) has several routes into singularity. One kind of singularity arises when y n−1 is finite and y n is either b + or b − . Another kind of singular behaviour arises when y n is either a + q 2n or a − q 2n . This forces either y n−1 or y n+1 to be zero. Another route into singularity from finite values is when y n−1 vanishes. 
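Returning briefly to the Hietarinta & Viallet example of §(b) before continuing with dP III: its degree sequence can also be generated directly by iterating over rational functions of z. In the sketch below the equation is taken in the form y_{n+1} + y_{n-1} = y_n + a/y_n^2 (an assumption consistent with the zero/double-pole pattern described above), and the constant a and the initial data are arbitrary generic choices. The ratio of successive degrees gives a numerical estimate of the entropy, consistent with the value log((3 + √5)/2) ≈ 0.96 usually quoted for this equation.

```python
# Sketch for the Hietarinta-Viallet example of section (b): iterate the equation
# (assumed here as y_{n+1} + y_{n-1} = y_n + a/y_n**2) over rational functions of z
# and record the degrees.  The constant a and the initial data are generic choices.
import math
import sympy as sp

z = sp.symbols('z')
a = 3   # hypothetical non-zero constant

def rational_degree(f):
    num, den = sp.fraction(sp.cancel(f))
    return max(sp.degree(num, z), sp.degree(den, z))

y_prev, y_curr = z, 2*z + 1
degrees = [1, 1]
for n in range(4):
    y_next = sp.cancel(y_curr + a / y_curr**2 - y_prev)
    y_prev, y_curr = y_curr, y_next
    degrees.append(rational_degree(y_curr))

print(degrees)                              # expected for generic data: 1, 1, 3, 9, 25, 67
print(math.log(degrees[-1] / degrees[-2]))  # approaches log((3 + sqrt(5))/2) ~ 0.962
```

A few more iterates sharpen the entropy estimate, at the cost of rapidly growing polynomial degrees.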
For all sufficiently large n, a ± q 2n is neither b + nor b − . If y n−1 (z 0 ) =: k is non-zero and finite and y n has a b ± -point of multiplicity p at z 0 , then for generic k, y n+1 has a pole of multiplicity p at z 0 and y n+2 has a b ∓ -point of multiplicity p. The next iterate is finite and non-zero. Similarly, if y n−1 (z 0 ) =: k is non-zero and finite and y n has a a ± q 2n -point of multiplicity p at z 0 , then for generic k, y n+1 has a zero of multiplicity p at z 0 and y n+2 has a b ∓ q 2(n+1) -point of multiplicity p. The next iterate is finite and non-zero. In this way, both of these singular behaviours are confined. For more general coefficients, the singular values would give rise to further zeros or poles of y n+3 . Next, we consider the situation in which one of y n−1 or y n has either a zero or a pole at z 0 and the other is finite and not equal to any of the other singular values: 0, a ± q 2n or b ± . Generically, these singularities belong to an infinite sequence of the form . . . , 0, k 1 , ∞, k 2 , 0, k 3 , ∞, k 4 , . . ., where the k j s are finite and not equal to any of the other singular values. We now have enough information to calculate the degree of y n for given generic initial conditions. If y 0 and y 1 are generic rational functions, the singular values of one will not occur in the same locations as the singular points of the other. Furthermore, if y 0 and y 1 zero and pole of y 0 and the zero and pole of y 1 determine four special points. Given an iterate y n , exactly one of its poles will occur at one of these special points and exactly one zero will occur at another. Let N n be the number of new b + -points of y n , which is the same as the number of new b − -points as well as the number of new a + q 2n -points and the number of new a − q 2n -points. The poles of y n+1 come from the new b + -and b − -points of y n , apart from the single simple pole at one of the four special points. Hence, the degree d n+1 of y n+1 satisfies equation (2.2). Also, the b + -points of y n are either new or they come from half the poles of y n−1 that are not at one of the special points. This gives us equation (2.3). Imposing the initial condition d 0 = d 1 = 1 again gives us (2.4). For higher-degree generic initial conditions, the constant terms in equations (2.2) and (2.3) are replaced by bounded terms and the solution is seen still to grow like n 2 for large n. (e) Other equations In all examples that we have discussed so far, the entropy has been determined by considering the kind of singular behaviour that one considers in the traditional calculations used to determine singularity confinement. In these examples, there were also a finite number of points on the complex sphere where the initial conditions led to a different sequence of singularities but in the examples considered this contribution to the degree was small and so did not influence the entropy. This is not always the case. Consider the equation a kn y k n , (2.10) for some integer K ≥ 2, where a Kn = 0 for all n ≥ 0. While there are simpler ways of calculating the degrees of iterates for this equation, we will continue with the same kind of analysis that we have applied to previous examples in order to illustrate the importance of looking at all singularities. First, notice that it is not possible for an iterate to become infinite at some point z 0 if the previous two iterates were finite at z 0 . 
So if we choose to determine the degree by looking at the number of pre-images of ∞, we know that the location of the poles of any future iterate are the locations of the poles of the initial conditions y 0 (z) and y 1 (z). For example, suppose that y 1 has simple a simple pole at z 0 and y 0 either has a simple pole or a regular point at z 0 . Then y n has a pole of order K n at z 0 . In particular, if y 0 (z) and y 1 (z) are degree one polynomials, then the degree of y n (which we calculate using the only poles, which are at z = ∞) is also K n for n > 0 and the entropy is log K > 0. This example again shows that we can still easily calculate degrees of iterates when singularities are not confined. However, unlike equation (1.1) for generic coefficients a n and b n , the growth in degree is driven by a kind of periodic behaviour that does not usually play a role in traditional singularity confinement-type analysis. Entropies for general initial conditions In this paper, we have concentrated on determining the exact degree of y n for given initial conditions, usually of degree one. The degree growth for more general initial conditions can easily be calculated and, moreover, bounds on the growth for arbitrary initial conditions can be obtained. It is possible of course to choose very special initial conditions such that the degrees grow slower than the generic case or even decrease rather than increase. In many cases however there will be a finite number of special singular points determined by the initial conditions, e.g. the point at infinity in equations (1.4) and (2.8), where certain singularities propagate but whose overall contribution amounts to a bounded term in the linear equation describing the degrees. Conclusion In this paper, we have shown through several examples that the standard singularity analysis that one performs in determining whether an equation possesses the singularity confinement property is almost sufficient, not only to calculate the entropy of the solutions but to calculate the exact degree of the nth iterate for given rational-in-z initial conditions. The results are both rigorous and elementary. In a recent preprint, Ramani et al. [31] have built on ideas in this paper to develop an express method of integrability detection. They compare their method with their recently introduced deautonomization approach. They apply their method to many interesting examples for which they are able to calculate the entropy exactly without the precise knowledge of the degrees. The interpretation of the singularity analysis as a way of relating the multiplicities of various iterates at a point z 0 is closely related to the complex-analytic analysis used in the estimates of the Nevanlinna characteristic. This idea played a central role in [32] where lower bounds on the growth of the Nevanlinna characteristic of meromorphic solutions were obtained using Nevanlinna's second main theorem and an assumption about the relative frequency with which certain singularities occur. These assumptions were dropped in future works [20,33,34] and the precise forms of the discrete Painlevé equations within the classes considered were obtained under the assumption that there is a meromorphic solution of finite order growing faster than the coefficients. In both the Nevanlinna approach and in the approach of this paper, slow growth is associated with a comparable number of singular values appearing in a sequence of iterates. 
Non-confinement typically means that we can find many more of one of the singular values than of another. However, as the example of Hietarinta & Viallet (1.5) shows, this can happen even when a singularity is confined. The calculation (2.7) shows that there are twice as many poles (counting multiplicities) than zeros, which ultimately leads to exponential growth. Vojta's dictionary [35] related definitions and results in Nevanlinna theory to similar ideas in Diophantine approximation. The logarithmic height of a non-zero rational number a/b, where a and b are co-prime, is h(a/b) = log max{|a|, |b|}. Applying this to the suggestion in [19] that difference Painlevé equations should have sufficiently many finite-order meromorphic solutions prompted the definition in [18] that a discrete equation is Diophantine integrable if the logarithmic height of the nth iterate is bounded by a power of n. The initial papers [18,19] both only gave crude information about the form of low-growth (i.e. integrable) equations. This level of information was is in some sense comparable with the information one receives about the form of differential equations if one only considers the leading order behaviour of solutions in standard Painlevé analysis. More precise information comes from a detailed singularity analysis. In the context of height growth and Diophantine integrability, singularity calculations such as (1.2) can be reinterpreted as describing 'closeness' to certain values as measured by the different absolute values on Q, or more generally on a number field. The logarithmic height can be determined by knowledge of all absolute values. In this way, lower bounds on the height growth were determined in [22,23]. Connections between Nevanlinna theory, Diophantine integrability and the degree growth described in this paper are studied in [14] in analogues of the singularity confinement calculations are described in each setting for the same class of equations. Competing interests. I declare I have no competing interests. Funding. This work was partially supported by EPSRC grant number EP/K041266/1.
7,921.2
2017-05-01T00:00:00.000
[ "Physics" ]
Fast "coalescent" simulation Background The amount of genome-wide molecular data is increasing rapidly, as is interest in developing methods appropriate for such data. There is a consequent increasing need for methods that are able to efficiently simulate such data. In this paper we implement the sequentially Markovian coalescent algorithm described by McVean and Cardin and present a further modification to that algorithm which slightly improves the closeness of the approximation to the full coalescent model. The algorithm ignores a class of recombination events known to affect the behavior of the genealogy of the sample, but which do not appear to affect the behavior of generated samples to any substantial degree. Results We show that our software is able to simulate large chromosomal regions, such as those appropriate in a consideration of genome-wide data, in a way that is several orders of magnitude faster than existing coalescent algorithms. Conclusion This algorithm provides a useful resource for those needing to simulate large quantities of data for chromosomal-length regions using an approach that is much more efficient than traditional coalescent models. Background Given the increasing prevalence of genome-wide data, and the development of methodologies for the analysis of such data, there is an increasing need for tools that can simulate data appropriate for long, genomic regions. Two options suggest themselves: 1. Model and simulation: The traditional approach has been to use a model that is (a) thought to be a reasonable approximation to the evolutionary history for the organism of interest, and (b) easy to simulate. By far the most popular such model is the coalescent [1,2] However, use of the coalescent becomes less practical for long genomic regions. Existing data and perturbation: An alternate, newer approach is to take an existing data set and then perturb it in some fashion to produce "new data from old". A simple example of such an approach would be re-sampling. More specific examples can be found in [3,4]. The first approach has the advantage of being able to produce data that is not dependent on an existing data set. However, the model it uses will be, by definition, an approximation to the evolutionary processes that pro-duced the real data. The second approach, while being dependent on the presence of an initial data set, has the advantage that the evolutionary model underlying the unperturbed data is correct. We don't know how the data got there, but it is 'real' data, so it got there via the correct evolutionary history. However, the need to then perturb the initial data to produce new data sets adds noise to the evolutionary process and thereby results in data that is only an approximation to reality. Furthermore, the extent of the dependence of the new data sets on the initial data set is unclear, and it is therefore not obvious how typical such data might be of other, unobserved, real data. We believe both of these approaches have merit. In this paper we restrict ourselves to a discussion of the former approach, in which we use an evolutionary model to simulate new data sets. The use of the standard coalescent model becomes impractical as the length of the simulated region increases. However, the coalescent has been proven to be a powerful simulation tool in these contexts (e.g., [5]). Thus, in this paper we exploit an approximation to the full coalescent algorithm. 
This approximation, the sequentially Markov coalescent (SMC), was introduced by McVean and Cardin [6]. It is able to simulate significantly longer regions while maintaining the properties of short-range summary statistics. Since our particular interest is in the development of such algorithms as a tool for the testing of disease mapping methodologies, we pay close attention to the behavior of linkage disequilibrium (LD) in data simulated under the SMC model. The coalescent was introduced in [1]. It provided an elegant and efficient model for the evolution of a population of randomly mating, neutral, haploid individuals. As such it has become a very widely used tool. Over time, generalizations have been introduced to deal with the more obviously restrictive aspects of the original model. For example, recombination was introduced in [2]. Selection was introduced in [7,8]. Useful reviews are found in [9][10][11]. Our interest here centers on the use of the coalescent algorithm to simulate long chromosomal regions. When long regions are considered, and the recombination rate is therefore very high, the coalescent algorithm becomes somewhat problematic to use. Run-times become longer (see "Results") and memory requirements become greater. In a case in which two widely-separated regions were being considered, one might simulate these two regions independently, relying on the fact that the regions would be essentially unlinked. However, when one is studying a long, continuous region such a strategy becomes inappropriate since linkage disequilibrium is likely to be present along the entire region. (In a situation in which recombination hotspots were present, one might try to independently simulate regions between hotspots.) Rapid simulation of coalescent ancestries is central to estimation methods such as rejection algorithms, or to the use of simulation-studies as a test-bed for new methodologies. Thus we use a simple approximation to the coalescent in which the difficulties associated with simulating long chromosomal regions are mitigated. Hudson [2] introduced recombination into the coalescent model. Griffiths and Marjoram [12] then embedded this within the ancestral recombination graph (ARG), a more tractable description of the coalescent model in the presence of recombination. Shortly thereafter, Wiuf and Hein [13,14] introduced an alternate description of the ancestral process with recombination in which the sample is constructed by moving "along the chromosome". Their algorithm gains efficiency by ignoring a class of recombination events that do not affect the present day sample. In order to discuss this further, we review the concept of ancestral material. A chromosomal region in an individual is considered to be ancestral if it is eventually inherited by any of the sample of interest drawn from the present day population. Thus, individuals in previous generations are likely to contain chromosomal regions that are both ancestral and non-ancestral. [Figure 1: The various categories of recombination. Illustration of the different types of recombination; ancestral material is shown as solid red lines and non-ancestral material as red dotted lines; the location of each recombination is shown below and to the left of the event, and its type is indicated by a blue numeral above the event.] In essence there are five types of recombination events that occur on the full ARG: 1. Recombination in ancestral material; 2.
Recombination in non-ancestral material that has ancestral material to both sides; 3. Recombination in non-ancestral material that has ancestral material only to the left; 4. Recombination in non-ancestral material that has ancestral material only to the right; 5. Recombination in an individual that carries no ancestral material. We illustrate some of these events in Figure 1 [see Additional file 1]. Only the first two types of event actually impact the composition of the sample of interest. As the recombination parameter, ρ, increases, the number of recombinations in the ARG, which is of the order of ρ log(n) for a sample of size n (e.g., [12]), grows. A simulation of the full ARG would contain all such recombination events, and hence be highly inefficient. This is not, primarily, due to the large number of recombination events per se, but rather is caused by the growing size of the ARG, which makes increasing demands on computer memory. Simulating the ancestral recombination graph We begin by introducing some notation. Denote the length of chromosome being considered by the unit interval [0,1]. Let x ∈ [0,1] denote a point within the region of interest. McVean and Cardin's SMC method introduces an approximation to an elegant scheme introduced by Wiuf and Hein [13,14], which we describe fully in "Implementation". In summary, Wiuf and Hein's method moves from left-to-right along the chromosome. Starting with the tree appropriate for x = 0 they find the (exponentially distributed) distance along the chromosome to the next recombination event. They then pick a point uniformly at random on the graph constructed so far and introduce a recombination at that point. The left emerging line from that recombination follows the path of the existing line (indicated in green on Figure 2 [see Additional file 2]), but the right emerging line, which is the newly-introduced line, follows a new path (calculated from the usual coalescent prior and indicated in red on Figure 2). Once we have constructed the path for the new line we are left with a new graph that consists of the old graph plus this new line. This procedure is iterated until the end of the chromosome is reached. Note that the size of the graph increases as x increases. (For details see "Implementation".) There is a class of recombination events which occur to lines on the full ARG but which do not affect the properties of samples generated from that graph. These are recombinations which occur in regions which are non-ancestral for that line (i.e. are not passed on to the sample of interest at the bottom of the graph) and which do not have ancestral regions to both their left and right (corresponding to events of types 3, 4 and 5 in the list of the previous section). The Wiuf and Hein algorithm gains efficiency over an algorithm based on the full ARG by excluding recombinations of types 4 and 5. In particular, it excludes recombinations which occur in non-ancestral material and which only have ancestral material to their right on that line, but not those that have ancestral material only to their left. This has the curious feature of making the density of recombination events in the simulated graph increase as we move along the chromosomal region.
(However, it is important to note that these 'extra' recombination events occur in non-ancestral material and do not influence the composition of the final sample.) Since, for large ρ, the efficiency of algorithms that simulate (a subset of) the ARG is largely a function of the amount of memory required to store the graph, this makes the Wiuf and Hein algorithm more efficient than algorithms based on the full ARG. The popular ms algorithm of Hudson [15] also excludes recombinations in non-ancestral material that only have ancestral material to their left (i.e. of type 3). Thus, all events that do not affect the current sample are ignored in the ms algorithm, and it is therefore more efficient than the Wiuf and Hein algorithm. The novelty of the SMC scheme proposed by McVean and Cardin, is that before adding the new line corresponding to a recombination event they delete the old (existing) line for that recombination, (i.e. all parts of the old line between the point at which the recombination occurred, and the point at which the old line coalesced, are deleted). Thus, each graph we construct is in fact a tree, and knowledge of the deleted lines is lost rather than being stored within the total graph constructed so far (as in the algorithm of Wiuf and Hein). Consequently, the SMC algorithm explicitly disallows coalescence between lineages with no overlapping ancestral material. The motivation is to significantly increase algorithmic efficiency, due to lower memory requirements, while retaining most of the LD structure [6]. This is possible largely because the process by which trees are constructed as we move along the chromosome using the SMC is now Markovian. In our implementation of their algorithm we include a slight modification in which the old (existing) line for the recombination is deleted after (rather than before) the new line is added. Thus, in this modified version, which we refer to as SMC', the intuitive interpretation is that we only include recombination events that occur in ancestral material and ignore all events occurring in non-ancestral regions. Our motivation for doing this is as follows. The original specification of the SMC algorithm has the consequence of excluding a class of recombination events that occur in ancestral material but do not affect the pattern of LD in the data (because the two lines resulting from the recombination coalesce with each other before coalescing with any other line). We denote this class of recombinations by R. Thus, since all recombinations are forced to be other than type R, the rate at which recombinations of type not equal to R occurs will now be slightly higher than it would normally be under the full coalescent model (for a given recombination parameter ρ). This suggests that LD will decay slightly more quickly in the original, SMC, version of the algorithm (see "Results" for more details). Our FastCoal software implements both the SMC and SMC' versions of the algorithm. We give results to supplement those in [6] and demonstrate that the SMC/SMC' approximation is much more efficient than the coalescent for high ρ and produces data that is almost indistinguishable from that produced by an exact coalescent algorithm. The algorithm simulates an approximation to the true coalescent model. However, the degree of approximation is extremely close, at least in terms of patterns of LD. Furthermore, implementation of the algorithm results in software that is very significantly faster, and has much lower memory requirements, when ρ is large. 
In fact, the memory required by the algorithm is independent of ρ (in direct contrast to ms, for example). Note that by discarding the old line associated with each recombination, at any given moment the algorithm stores information for one coalescent tree rather than a more complicated and memory-intensive graph. This allows the SMC/SMC' algorithm to run efficiently for high ρ (when the size of the corresponding graph becomes very large). Other than that, the SMC/SMC' process and Wiuf & Hein algorithms are essentially the same. Thus it follows, in a manner directly analogous to that in [13,14], that T(x), the marginal genealogy at a particular location x, is still exactly described by the traditional coalescent process. See [6] for an extended discussion of the properties of the SMC algorithm and a derivation of theoretical results. We now consider the degree to which data produced by the approximation algorithm is similar to that produced by traditional coalescent algorithms and then demonstrate the relative computational efficiencies for a variety of parameter values. Wiuf and Hein's algorithm We start by summarizing Wiuf and Hein's method [13,14]. Their algorithm provides a way of constructing a subset of the ARG by moving 'along the chromosome', constructing the tree appropriate for each point on the chromosome and storing those trees within a graph that is a subset of the full ARG. Recall that x denotes a location in the interval being simulated. The algorithm proceeds as follows: 1. Set x = 0 and generate a coalescent tree for x. Denote this tree by G(x). Denote the length of the graph at x by L(x). 2. Generate the distance, y, to the next recombination event; y is exponentially distributed with rate ρL(x)/2. 3. Pick a point uniformly at random on the graph constructed so far and introduce a recombination event at that point. 4. The left emerging line from the recombination follows the path of the existing line, while the right emerging (new) line follows a new path, generated from the standard coalescent, until it coalesces with the existing graph. 5. Set x = x + y. Let G(x) denote the total graph constructed so far (i.e. G(x) contains all branches appropriate for any z < x). Set L(x) equal to the total length of G(x). 6. If x < 1 return to 2. The method of Wiuf and Hein simulates a substantial subsample of the full ARG. Thus, its burden on computer memory is also substantial, to the point of being intractable for long genomic regions. McVean and Cardin introduced the SMC algorithm, an approximation to the process of Wiuf and Hein. The SMC algorithm reduces the topology being simulated to a tree rather than a graph. We now introduce our variation on the SMC algorithm, which we refer to as SMC'. The SMC' algorithm The SMC' algorithm proceeds as follows: 1. Set x = 0 and generate a coalescent tree for x. Denote this tree by T(x). Denote the length of the tree at x by L(x). 2. Generate the distance, y, to the next recombination event; y is exponentially distributed with rate ρL(x)/2. 3. Pick a point uniformly at random on T(x) and introduce a recombination event at that point. 4. The new (right emerging) line follows a new path, generated from the standard coalescent, until it coalesces with the tree. 5. Delete the old (existing) line between the point at which the recombination occurred and the point at which that line coalesced. 6. Let T(x + y) denote the resulting tree and L(x + y) its total length. 7. If x + y < 1, set x = x + y and return to 2. We illustrate this algorithm in Figure 2 [see Additional file 2]. In the original SMC algorithm presented by McVean and Cardin steps 4 and 5 are conducted in reverse order. As we discussed earlier, this algorithm has the property that, at any point in time, the topology being considered is a tree rather than a graph. Furthermore, as discussed in [6], the algorithm is now Markovian as we move along the chromosome. As such it can be efficiently stored in memory, the amount of memory required being independent of the recombination rate. Behavior of algorithm - tree heights As noted in [6], exploiting the approximation described by the SMC algorithm (or the SMC' variation) implies that we are no longer simulating exact coalescent ancestries. For the reasons discussed above and in [6] it rather straightforwardly follows that the time to most recent common ancestor (TMRCA) for any point x ∈ [0,1] will have the same distribution as for the standard coalescent.
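To make the preceding description concrete, the following is a minimal illustrative sketch (in Python, and not the FastCoal implementation itself) of an SMC'-style left-to-right simulation for the simplest case of a sample of size n = 2, with time measured in coalescent units and the chromosome scaled to [0,1]. The exponential rate ρL(x)/2 for the distance to the next recombination and the re-coalescence rules for the detached lineage are assumptions consistent with the description above.

```python
# Minimal illustrative sketch of an SMC'-style simulation for a sample of size n = 2.
# Time is in coalescent units; the chromosome is the interval [0, 1].  The rate
# rho*L(x)/2 and the re-coalescence rules below are assumptions consistent with the
# textual description, not the FastCoal code itself.
import random


def simulate_tmrca_path(rho, rng=random):
    """Return a list of (segment_start, tmrca) pairs along the unit chromosome."""
    segments = []
    x = 0.0
    t = rng.expovariate(1.0)  # TMRCA of the first tree: a pair coalesces at rate 1
    while True:
        segments.append((x, t))
        L = 2.0 * t                             # total branch length of the current tree
        y = rng.expovariate(rho * L / 2.0)      # distance to the next recombination
        if x + y >= 1.0:
            return segments
        x += y
        u = rng.uniform(0.0, t)                 # recombination point, uniform on the tree
        # SMC': the detached lineage re-coalesces at rate 2 below the old root (two
        # possible partners) and at rate 1 above it (a single ancestral lineage).
        s = u + rng.expovariate(2.0)
        if s < t:
            if rng.random() < 0.5:
                continue                        # re-joined the old line: tree unchanged
            t = s                               # coalesced with the other sample lineage
        else:
            t += rng.expovariate(1.0)           # floated past the old root


def tmrca_at(segments, pos):
    """Marginal TMRCA at chromosomal position pos."""
    height = segments[0][1]
    for start, h in segments:
        if start > pos:
            break
        height = h
    return height


if __name__ == "__main__":
    rng = random.Random(1)
    reps = [tmrca_at(simulate_tmrca_path(10.0, rng), 0.5) for _ in range(5000)]
    print(sum(reps) / len(reps))                # close to 1, the standard coalescent mean
```

For any fixed position the marginal TMRCA returned by this sketch should have mean close to 1, consistent with the statement that the marginal genealogy is still described by the standard coalescent; the 'invisible' re-coalescence onto the old line is exactly the class of events (denoted R below) that the original SMC excludes.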
It is of some interest to consider the mean height of the ith tree moving (from left-to-right) along the chromosome. Results for ms, SMC and SMC' are shown in Table 1 for a sample of size n = 2 and ρ = 1 and 100. Note that the ith tree will not always exist. The mean heights presented for tree i in Tables 1 and 2 are conditional on the existence of the ith tree (for each i). (Thus, iterations for which the ith tree does not exist are not used for the calculation of the mean height of tree i.) There are two intuitions underlying the results shown in the table. Firstly, the results illustrate a subtlety first discussed in [13,14], in that the (i + 1)th tree is most likely to exist if the ith tree has a higher TMRCA than usual. Thus, conditioning on the existence of the ith tree leads to an increase in the mean height of that tree. Clearly, this conditioning does not apply to the first tree, or the last tree (since both always exist), and thus their mean height is unchanged. Furthermore, the extent of this effect decreases as the recombination rate increases (since the ith tree becomes more likely to exist as ρ increases). Secondly, there is a difference in behavior between the tree at x, for some position x, and the ith tree along the chromosome. The former has a lower expected height than the latter, since tall trees are likely to cover a shorter length of the chromosome. For n = 2 this is akin to size-biased sampling of exponential random variables. A little thought reveals that for high ρ and small i, the ith tree will exist with very high probability, the effect of the conditioning is therefore lost, and the expected tree height will be approximately 2 due to the size-biasing effect. The effect of the size-biasing is lessened at each end of the region. The results show that the SMC' algorithm appears to provide a closer approximation to the full coalescent model than does SMC. As one might expect a priori, the degree of difference decreases as the sample size increases. We illustrate this in Table 2, where results are shown for a sample size of 20. It is not clear how this difference in behavior might affect the properties of the data being simulated, but it suggests that the covariance of the tree heights at any two positions x and y along the region of interest will be highest under ms, with SMC' leading to a somewhat lower covariance and SMC leading to a further lowering of the covariance. (For further evidence of this effect see the results for LD below.) Behavior of algorithm - run times We compare the run times of our software with those of ms (Hudson 2002). We concentrate on parameter values that are appropriate for modelling the data that will come from future large-scale association studies. All simulations assume that θ = 4Nu = 10^-4 (where N is the effective population size and u is the mutation rate per base pair per generation) and ρ = 4Nr = 5 × 10^-4 (where r is the crossover rate per base pair per generation). The θ value gives a SNP density of one SNP with a minor allele frequency … [Table 3: Run-times. Average time per simulation, as a function of sample size n, based on 20 trials, assuming θ = 10^-4/bp and ρ = 5 × 10^-4/bp. Simulations were run on a 2.8 GHz Intel Xeon processor. Dashes correspond to simulations that could not be completed because they required too much (> 3 GB RAM) memory.] However, for larger regions, the new algorithm is much faster than ms. When the simulated region is larger than a few Mb, ms could not be run due to memory constraints.
We anticipate that roughly 32 GB of RAM and 2-6 days of computing time would be necessary to simulate data from a small chromosome (n = 4000 and 50 Mb of sequence) using the standard coalescent. In contrast, the corresponding simulations with the new algorithm take less than 2 minutes to run and use less than 200 MB of RAM. We note in passing that, as expected, the run time for our software is roughly proportional to the length of the sequence simulated. Run times for ms increase more than quadratically with respect to the simulated sequence length (results not shown). Behavior of algorithm - LD We also compared the behavior of LD in data simulated by SMC, SMC' and ms. In Figure 3 [see Additional file 3] we simulated 2 Mb of sequence from a sample size of n = 10 using 1,000 replicates. As before we assumed θ = 10^-4/bp and ρ = 5 × 10^-4/bp. In Table 4 we use n = 100. We illustrate the behavior of several simple summaries of LD: r^2 as a function of distance, the number of distinct haplotypes (H), the minimum number of inferred recombination events R_M (cf. [16]) and the fraction of sequence contained in haplotype blocks (cf. [17]). The means of these summaries are displayed in Table 4 and Figure 3. [Figure 3: Decay of r^2. This figure shows how r^2 decays as a function of distance for both the SMC and SMC' algorithms and for an exact coalescent model (simulated using ms); data were simulated for a 2 Mb region and a sample size of n = 20.] As measured, the algorithms produce nearly identical patterns of LD, although, somewhat surprisingly, SMC leads to a slightly lower value of R_M. We note that SMC' produces a slightly closer approximation to the full coalescent model than does SMC. This is true for all sample sizes, but we note that the degree of difference between the algorithms decreases as the sample size increases, and will, for many purposes, be insignificant. We simulated a range of other parameter values (including sample sizes ranging up to 2500) and considered several other measures of LD [18,19], including patterns of LD within triplets of sites. In all cases the broad conclusions were essentially the same (results not shown). We conclude that the SMC/SMC' algorithm produces simulated data that has LD properties that are virtually indistinguishable from those resulting from standard coalescent simulations. Discussion Hudson's ms algorithm is an excellent and widely used tool. As we enter an age in which genome-wide studies are becoming increasingly frequent, the ability to efficiently simulate long chromosomal regions becomes more important. Consequently, the need for an efficient alternative to ms arises. While we feel that ms should continue to be the algorithm of choice when computational demands do not prohibit its use, we feel the SMC/SMC' algorithm provides a useful alternative in this new paradigm. In particular, it will lead to significant improvements in efficiency for methods such as rejection algorithms or computationally intensive simulation studies. While it models an approximation to the full coalescent model, the degree of approximation appears to be very good. Software to implement the SMC algorithm is available as open-source, freely distributable C++ code. It can be used to generate data according to both the SMC and SMC' versions of the algorithm. The current implementation includes the possibility of allowing for changes in population size, in the form of an exponential growth model. This is dealt with in the standard way by altering the rates at which sequences coalesce (see [20], for example, for details). Clearly, there are a range of other complicating factors that one might wish to add to this code.
Our view is that variation in recombination or mutation rates is best handled via post-processing the data produced by the standard form of the algorithm. For example, if one generates data for given values of θ and ρ one can produce recombination hotspots by contracting a region by a factor of λ (where λ is the relative rate of recombination in the hotspot) followed by a consequent thinning of mutations (including each mutation in that region with probability 1/λ if one wishes to keep the mutation rate constant). See [21] for details. The authors encourage interested parties to submit functions to allow for such complications. We will maintain these in a central repository. Conclusion We have developed software (FastCoal) to implement the SMC algorithm of [6]. This algorithm approximates the standard coalescent process. We also introduce a modified version of the algorithm, SMC', which appears to produce a slightly closer approximation to the full coalescent model. The approximation makes the SMC/SMC' algorithm an appropriate choice for simulating long, chromosomal regions, for which existing algorithms become computationally intractable. We have shown that despite the fact that this method is an approximation to the exact coalescent model, it appears to produce data that is virtually indistinguishable from the exact model, at least in terms of patterns of pairwise LD and marginal TMRCAs. The behavior of LD is particularly relevant for genome-wide mapping studies, so we feel our results give convincing evidence that this software can be used to provide test data in a highly efficient manner when testing new genome-wide mapping methodologies. Availability and requirements The FastCoal software, written in C++, is available from PM at <EMAIL_ADDRESS> and runs on Windows platforms. Authors' contributions PM and JDW are responsible for development and testing of the methodology. PM wrote the paper and C++ code.
6,125
2006-03-15T00:00:00.000
[ "Biology", "Computer Science", "Mathematics" ]
Lifestyle behaviours or socioeconomic characteristics? Gender differences in covariates of BMI in Hungary Summary Objective Lifestyle behaviours are everyday activities that result from individual's values, knowledge, and norms shaped by broader cultural and socioeconomic context. These behaviours affect body weight as well as overall health and are influenced by a number of social characteristics. The aim of this paper was to examine the net effects of lifestyle behaviours and socioeconomic factors on body mass index (BMI), and how these differed by gender. Methods This study used the 2009/2010 Hungarian Time Use Survey combining behavioural records, background information, and measures of self‐reported health and weight. The sample (n = 7765) was representative for the Hungarian population. Multivariate linear OLS regression models were employed to analyse the net effects of lifestyle and sociodemographic variables. Results Daily behaviours were associated with BMI for women, but not for men, except for smoking. Meals frequency and duration of sleep had negative effects on female BMI, whereas duration of TV viewing had a positive effect. Occupational class was associated with male BMI, but not with female. The strong negative effect of smoking was significant for both genders. Conclusions Lifestyle behaviours were linked with female BMI, with socioeconomic characteristics impacting on male BMI. These results suggest that a gender‐specific approach may be appropriate to address obesity issues in the Hungarian population. Introduction An increased body mass index (BMI) is a known risk factor for developing cardiovascular disease and different cancers (1,2). The relationship between lifestyle behaviours and an individual's weight is well-established in social and health research (3). Health lifestyle theories argue that the propensity to adopt positive health behaviours is a result of the interplay between individual motivations and structural factors, such as gender or socioeconomic status (4,5). Lifestyle behaviours have been operationalized as daily activities resulting from individual values, orientations, knowledge, and norms defined by the broader cultural, social and economic context (5). An individual's life circumstances affect their possibilities or constraints to adopt certain lifestyle behaviours (4). Lifestyle behaviours in this theoretical framework are closely linked to sociological theories of symbolic distinction in which individual's choices regarding daily practices are regarded as determined by their social position (6). There is a substantial overlap between the variables recognized as key health lifestyle behaviours, that is dietary habits, physical activity, smoking, and drinking alcohol (4,7), and behaviours having a major impact on an individual's BMI, that is food choices and eating practices, physical activity, TV viewing, and sleep (8)(9)(10)(11). Health lifestyle theories argue that focusing on a single or a small subset of behaviours does not sufficiently reflect the diversity of the social forces behind them (5). Furthermore, different lifestyle behaviours are associated with one another (4), and some practices may facilitate or constrain the other (12,13). Among lifestyle behaviours dietary intake and eating behaviours, getting an adequate amount of sleep, being physically active, and managing stress were listed as 'key weight management behaviors' (10,122). For each of these groups, there is a set of indicators that may be generated using time-use diaries. 
Some information, such as dietary intake or stress management, is not available, but diaries allow collecting information on prevalence, frequency, duration, and timing of selected activities over the day, which provide relevant information for analysing lifestyle behaviours linked with obesity. This study included (i) frequency of eating, (ii) having breakfast, and (iii) duration of food preparation as lifestyle practices related to diet and eating. It also analysed (iv) duration of sleep, (v) time spent being physically active, (vi) time spent watching TV, (vii) smoking, and (viii) alcohol use as other possible covariates of BMI. Frequency of meals and having breakfast have both been inversely related to BMI (14)(15)(16)(17). Time spent on food preparation might be indicative of the quality of diet as food prepared at home was shown to have a better nutritional profile than food eaten out of home (9,18). The quality of an individual's diet may also be affected by the duration of sleep (11), as sleep deprivation has been shown to be associated with food preferences, total energy intake, and metabolic processes (19,20). Levels of daily physical activity have been inversely associated with individual weight, as well as with the risk of developing metabolic syndrome or diabetes (3,21,22). In contrast, leading a sedentary lifestyle is considered a risk factor for these diseases (21,22). Time spent on physical activity together with the duration of TV watching was shown to be better predictors of BMI in children than their diet (23). TV viewing has also been associated with less healthy eating practices and weaker control over food intake (24). Lastly, smoking is known as an appetite suppressor (12), with women especially citing this effect of smoking as the primary reason for not trying to quit (13). Lifestyle practices have been theorized to reflect sociodemographic differences (4)(5)(6). At the same time, there are substantial inequalities in BMI in developed countries, and the prevalence of obesity and overweight differs across population groups (25). People in lower social positions tend to have higher average BMI (26), though in many cases gender moderates this effect. The inverse association between socioeconomic status (SES) and BMI has been consistently reported for women, but not for men (27). Men and women have different attitudes to health lifestyle practices (4), and social norms regarding body weight differ by gender (26,28,29). Women are more likely to submit to class norms which impacts their attitude toward diet and physical activity (26)(27)(28). For women, particularly in higher SES categories, being 'thin' is highly desirable (29). They are also more likely to make lifestyle changes to maintain or achieve their desired weight, often experiencing greater social pressure to do so (26,28,29). To account for the effect of SES, I used standard indicators of individual's social status that is education, income, and occupation. These measures were shown to form different associations with individual's BMI as well as with health-related behaviours. In particular, an individual's educational attainment tends to be significantly and inversely related to BMI (25,26), with better educated individuals being more careful about their food choices. Education was also shown to be the main variable explaining class differences in eating patterns, including meal frequency (30). 
Occupational norms may dictate which body types are considered attractive or socially acceptable in a given work environment. Slimmer individuals tend to be favoured in white-collar jobs (25), while workers with obesity experience more discrimination in professional occupations (31). Income disadvantage has been linked with a higher risk of obesity. Wealthier individuals have better access to good quality food, health care services, and quality leisure time (32), and often follow different eating patterns than those in low income categories (30). The financial situation of the household is likely to affect the diet of all family members, including children and youth (33). Overall, daily hardships faced by individuals in lower class positions might make some health-related practices appear unimportant, too far-sighted, or irrelevant compared to the more pressing demands of everyday life (34). Simple and inexpensive pleasures such as eating in fast food outlets or watching TV offer immediate gratification and, as such, can be used as a means to alleviate stress (35). There are few datasets that allow exploring the relationship between lifestyle practices, an individual's SES and BMI using a detailed log of daily activities. Such analyses have been conducted on the American Time Use Survey (ATUS), but overall research on the topic is limited, particularly for large samples. In Europe, data including a time-use log as well as information on BMI was collected in two post-2000 national Time Use Surveys: in Finland and Hungary. This study uses detailed behavioural accounts from the 2009/2010 Hungarian Time-Use Survey. The Hungarian context is interesting for obesity research as Hungary has the highest share of obese individuals in Europe, and the fourth highest among all OECD countries (36). It is also very unequal in terms of rates of overweight and obesity across population groups (25). Following a wide-ranging evaluation of the nutritional status of Hungarians, obesity has been recognized as a major public health threat (37). Lifestyle behaviours are a result of the interplay between individual preferences and social, economic, and cultural factors, including gender norms, knowledge about nutrition, or financial constraints. Some of these behaviours, such as eating patterns, physical activity, or sleep, are related to an individual's weight status and obesity risk, while individual SES indicators form independent associations with BMI. The objective of this paper was to disaggregate the effects of lifestyle behaviours and individual socioeconomic characteristics on BMI. It was hypothesized that men and women would differ in terms of the effect of their lifestyle behaviours on BMI, net of their socioeconomic characteristics. As women are more likely to adopt a positive health lifestyle and overall experience greater normative pressure regarding their weight, lifestyle behaviours were expected to have a greater effect on female BMI than on male BMI. Methods This study used detailed behavioural records from the most recent Hungarian Time Use Survey (HTUS). HTUS is a nationally representative survey which collected data between October 2009 and the end of September 2010. Time-use diaries provide highly accurate and reliable estimates of an individual's time allocation (38). It is very rare for time-use surveys to collect data on an individual's height and weight, but there are a few exceptions: the American, Finnish, and Hungarian surveys include information on self-reported height and weight.
HTUS provided time-use records for 8391 individuals (one day per person) aged 10 to 84. The following study uses a subsample of diarists aged 18 and above (n = 7765). Younger respondents were excluded due to the low number of cases and the fact that there are issues with BMI estimates for children and adolescents (39). Women accounted for 53% of the sample selected for analyses. The mean age of respondents was 44 years ± 18.5 standard deviation (SD). More detailed information regarding the sociodemographic characteristics of the sample is given in the Appendix. Time spent in selected activities, their incidence, and frequency were computed based on the respondent's time diaries. Each diary recorded primary (main) and secondary (additional) activities over a 24-hour period. Estimates analysed in this study used combined data from both the primary and secondary activity sequences. The duration of an activity (food preparation, sleeping, TV viewing, physical activity) was computed based on the total duration of all episodes of that activity occurring throughout the day, regardless of whether it was recorded in the primary or secondary activity sequence. In this study, time spent on physical activity included any intentional exercise as well as walking or active travel, which is a more relevant depiction of daily activity levels than exercise alone. The number of meals is equal to the number of all episodes of eating, and the incidence of breakfast was computed based on whether a respondent reported eating breakfast, as there was a separate code for each type of meal. The incidence of smoking or alcohol use was based on whether an individual reported any episode of smoking or drinking alcohol over the 24 hours. Regarding socioeconomic characteristics, the original Hungarian occupational category FEOR-08 was recoded into categories corresponding to the 1-digit ISCO codes. It included the following groups: (1) managers and professionals, (2) technicians and associated professionals, (3) clerical occupations, (4) sales and service occupations, (5) agriculture, fishing and forestry jobs, (6) craftsmen, industry, trade and construction occupations, (7) machine operators, and (8) low-skill jobs. The last category (9) included missing values, which corresponded to being out of the labor market. Codes for army jobs were dropped due to the low number of observations. Individual monthly income was originally given in Hungarian Forint (1000 Ft ≈ 3 EUR). The income variable for the study was generated by collapsing the original ten income bands into four broader income groups: (1) under 80000 Ft, (2) between 80001 and 160000 Ft, (3) between 160001 and 300000 Ft, (4) 300001 to 1000000 Ft. Education was constructed based on the original variable corresponding to the highest completed level of education and was divided into the following categories: (1) incomplete primary (including no education), (2) complete primary, (3) vocational, (4) secondary, incomplete or complete, (5) tertiary. Models introducing sociodemographic factors also included marital status and type of settlement as additional control variables. Marital status included the following categories: (1) single, (2) married or cohabiting, (3) divorced or widowed. The type of settlement differentiated between living in (1) a county city, including Budapest, (2) a town, and (3) a village/rural settlement.
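As an illustration of the variable construction described above, the following is a minimal sketch (not the original analysis code) of how episode-level diary records could be aggregated into the person-level covariates used here; the column names and activity codes ('person_id', 'activity', 'duration_min', and the meal codes) are hypothetical.

```python
# Hypothetical sketch of deriving person-level lifestyle variables from diary episodes.
# Assumed input: one row per episode (primary and secondary sequences combined) with
# columns 'person_id', 'activity' and 'duration_min'; all names are illustrative only.
import pandas as pd


def build_person_variables(episodes: pd.DataFrame) -> pd.DataFrame:
    persons = episodes["person_id"].unique()
    out = pd.DataFrame(index=persons)

    def hours(codes):
        # total duration (in hours) of all episodes of the given activities
        sel = episodes[episodes["activity"].isin(codes)]
        return sel.groupby("person_id")["duration_min"].sum().reindex(persons, fill_value=0) / 60.0

    def incidence(codes):
        # 1 if the person reported any episode of the given activities on the diary day
        sel = episodes[episodes["activity"].isin(codes)]
        return sel.groupby("person_id").size().reindex(persons, fill_value=0).gt(0).astype(int)

    meal_codes = ["eating_breakfast", "eating_lunch", "eating_dinner", "eating_other"]
    out["sleep_h"] = hours(["sleep"])
    out["tv_h"] = hours(["tv_viewing"])
    out["food_prep_h"] = hours(["food_preparation"])
    out["phys_act_h"] = hours(["exercise", "walking", "active_travel"])
    out["n_meals"] = (episodes[episodes["activity"].isin(meal_codes)]
                      .groupby("person_id").size().reindex(persons, fill_value=0))
    out["breakfast"] = incidence(["eating_breakfast"])
    out["smoked"] = incidence(["smoking"])
    out["alcohol"] = incidence(["drinking_alcohol"])
    return out
```

The resulting person-level table could then be merged with the sociodemographic variables and used to fit the gender-stratified OLS models described below, for example with a standard regression package such as statsmodels' formula interface.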
All models controlled for age and self-reported health. Age was continuous, whereas self-reported health consisted of the following categories: (1) very good, (2) good, (3) satisfactory, (4) bad, and (5) very bad. At the first stage of analyses, descriptive statistics were produced to illustrate sociodemographic differences in mean BMI values in Hungary. The paper presents the distribution of BMI categories by gender, which indicates how many respondents would qualify as being underweight (BMI < 18.5), having normal weight (18.5-24.9), being overweight (25-29.9) or obese (BMI ≥ 30). Next, the results show mean BMI values by gender and by education, occupation, and income category, then gender differences in time allocation to selected activities are described. The main stage of analyses involved running a set of multivariate OLS regression models separately for men and women. The first model presents associations between individual BMI and selected lifestyle behaviours: frequency of eating, having breakfast on the diary day, time spent on food preparation (in hours), use of alcohol or tobacco, time spent on physical exercise (in hours), duration of sleep (in hours), and time spent in TV watching, also in hours. Age and health status were included as control variables. The second model used all variables from the first model, adding indicators of an individual's socioeconomic characteristics, that is, education, income, and occupational category, and additional control variables: marital status, and type of settlement where the respondent lived. Missing values for structural variables were included in the model to maintain the same sample size as in the case of the model analysing only behavioural variables. Results Around 54% of women and 66% of men in the Hungarian sample were classified as overweight or obese (Figure 1). The mean BMI for women was 26.0 ± 5.2 SD and for men was 26.9 ± 4.7 SD. The mean BMI levels differed with regard to an individual's socioeconomic characteristics (Table 1), that is, better educated or more affluent women had a significantly lower BMI. In both cases the relationship with BMI was linear. Women with tertiary education had the lowest mean BMI at 24.6 (p < 0.01), as did women with the highest earnings with a BMI of 24.7 (p < 0.05). For men, the association between income and BMI was curvilinear, with the wealthiest and poorest men having the lowest mean BMI (25.5 and 26.4, respectively), but only the difference between the men with the highest income and the medium income categories was significant (p < 0.05). Male BMI was not significantly differentiated by educational attainment, except for the statistically significant, but not substantial, difference between men with vocational education and men with secondary or tertiary education (p < 0.05). Regarding occupational category, women in white-collar jobs (managers and professionals, technicians, and clerical positions) had a mean BMI below 26.0, while women in agriculture, blue-collar jobs (industry and construction workers, machine operators) and low-skill jobs had a mean BMI above 26.0. The difference between women in managerial and professional occupations and women in blue-collar jobs, agriculture, or low-skill jobs was significant (p < 0.05). Occupational class also impacted on male BMI, with men in the highest occupational positions (managers and professionals) and lowest occupations (unskilled labor force) having the lowest BMI at 26.6 and 26.4, respectively.
These values were significantly lower than those of men in sales and service occupations, agriculture, industry, and machine operators (p < 0.05). Men and women differed with regard to how much time they spent on selected daily activities (Figure 2). Women dedicated significantly less time to eating than men (84 minutes versus 90 minutes; p < 0.01), which translated to having fewer meals per day, though in this case the difference was minor and not statistically significant (2.96 and 2.99 meals respectively), implying that women had nearly as many meals as men, but they were of shorter average duration. Women spent substantially more time (4 times longer; p < 0.01) on food preparation, while men spent more time being physically active (almost 10 minutes more per day; p < 0.01) and watching TV (9 minutes more; p < 0.01). Associations between individual lifestyle behaviours and BMI are presented in Model 1 (Table 2). [Table 2: Behavioural and structural covariates of BMI, by gender.] In the case of women, most analysed behaviours were associated with BMI. The number of meals was negatively associated with BMI (p < 0.05), as was duration of sleep (p < 0.01) and time spent being physically active (p < 0.05). In contrast, the duration of food preparation and time spent on TV viewing were positively associated with female BMI (p < 0.001 for both coefficients). Lastly, smoking had the strongest negative effect on female BMI, with smokers having a nearly 1-point lower BMI compared to non-smokers (p < 0.001). Smoking was also the only activity that had a significant effect on male BMI, which was on average lower by 0.8 among men who smoked (p < 0.001). The greater significance of daily activities for female BMI was also reflected in the percentage of variance explained by the model. In the case of men, the R-squared value for the model was 8%; for women it was 13%. There was also a significant association between BMI and the control variables, age and self-reported health, for both genders (Table 2). In Model 2, which added the socioeconomic characteristics, the coefficients for duration of food preparation and physical activity became insignificant for women, which means the effects of these behaviours were explained by socioeconomic factors. The number of meals was significantly and inversely associated with BMI (p < 0.05), with every additional meal linked to a reduction in BMI by, on average, 0.3 points. The effect of sleep remained significant and negative though slightly weaker (p < 0.05). Every additional hour spent sleeping was associated with a BMI lower by approximately 0.1 point. The effect of TV viewing was positive, with every additional hour spent on TV viewing linked to an increase in BMI by approximately 0.2 point (p < 0.001). The incidence of smoking had a negative effect on BMI and was stronger than in Model 1. Women who smoked had a BMI lower by approximately 1 point compared to those who did not smoke (p < 0.001). Married/cohabiting or divorced/widowed women had a higher BMI than women who were single (p < 0.001 and p < 0.01, respectively). Lastly, women living in rural areas had a higher BMI than those living in the cities (p < 0.05). As in Model 1, smoking was the only behaviour associated with BMI for men. Men who smoked had on average a nearly 1-point lower BMI compared to those who did not smoke (p < 0.001). Occupational characteristics in Model 2 were linked with male BMI but not with female BMI.
Men working in trade/industry or construction jobs, or as machine operators (all of which are blue-collar jobs) had a significantly higher BMI (p < 0.05 and p < 0.01, respectively), with men who did not work having a BMI lower by over 1 point (p < 0.001). Lastly, married or divorced men had a higher mean BMI than their single counterparts (p < 0.001 for both coefficients). Discussion This study provided empirical evidence of the association between everyday lifestyle behaviours and an individual's BMI, demonstrating that there are substantial gender differences in this dimension. In line with the hypothesis, the effect of lifestyle practices on BMI was stronger for women than for men. Furthermore, the effects of several types of daily behaviours on female BMI (frequency of meals, duration of sleep and duration of TV viewing) were also significant when socioeconomic factors were accounted for. The effects of time spent on food preparation and time spent being physically active were significant for female BMI only in Model 1, which did not control for socioeconomic characteristics. The longer time spent in food preparation, indicative of the fact that the food was prepared at home, was associated with a significantly higher BMI. This finding might seem to contradict the fact that home-made food tends to have a better nutritional profile and lower fat content than food eaten out of home. However, this result is consistent with earlier findings that link longer time spent in food preparation with a higher female BMI, showing that time in food preparation is a moderator between BMI and a greater interest in cooking (28). It is possible that women who spend more time preparing food at home are more interested in culinary practices, which has been linked with higher body weight (28); however, this could not be tested with the available data. The fact that the effects of physical activity and duration of food preparation on female BMI were fully accounted for by an individual's socioeconomic characteristics reveals possible reasons behind SES-related health inequalities existing among Hungarian women. Specifically, the time spent in physical activity and time dedicated to preparing food at home are not distributed equally across the population, which is important since Hungary is a country with very high absolute socioeconomic inequalities in weight status (25). Out of all the behaviours analysed in this study, only smoking was significantly associated with male BMI. The negative effect of smoking was strong and significant across all models for both men and women. Healthy lifestyle choices, such as preparing one's food or exercising (in the case of men), did not necessarily mean that one would have a lower, or 'healthier', BMI, whereas the unhealthy practice of smoking has been shown to be the strongest predictor of lower BMI for both men and women, regardless of their socioeconomic characteristics. This finding has two implications. Firstly, it shows that practices associated with a positive health lifestyle and those linked with having lower BMI may diverge. Secondly, it implies that policy makers might need to address fears of weight gain among current smokers to increase the effectiveness of anti-smoking campaigns. Men and women also differed with regard to the structural covariates of BMI. Specifically, in Model 2, occupational characteristics were associated with male BMI, but not with female BMI. Men in blue-collar jobs had nearly a 1-point higher BMI than their counterparts in managerial and professional occupations.
This result is likely to reflect the fact that automation of industrial processes eliminated some opportunities for physical exertion among those occupational groups (40), while dietary practices remained greatly differentiated by an individual's occupational class (41). This study had several limitations. Firstly, it did not include information on what a person ate on the diary day as this information was not available. While eating patterns have been shown to be linked with individual weight, including dietary information might provide further insight into some of the findings, such as the positive association between the longer time spent on food preparation and higher BMI for women. Secondly, limitations of the diary as an instrument might have affected the data on smoking or alcohol consumption. These measures are likely to be underestimated due to the short duration of these activities (smoking in particular), which may make respondents consider them too short to be reported. Heavy smokers or heavy drinkers were more likely to report these activities as, in their case, there are more (and possibly longer) episodes of smoking or drinking in the sequence. Furthermore, smoking and alcohol use are viewed as socially undesirable in some social groups and they may have been underreported. Lastly, some of the activities, such as drinking alcohol or sport participation (a component of the physical activity variable), may not happen daily, but be done several times per week. While it is likely that the overall participation rates captured by the diaries are accurate at the population level, in terms of individual BMI outcomes, having weekly estimates of such activities might be more informative. The study findings suggest that different policy measures may need to be adopted to address obesity in different population groups. Consequently, the same regulations might have different effects depending on who they are aimed at. In general, there is no one-size-fits-all solution, which is something that policymakers in Hungary seem to ignore. In 2011, a tax on fatty and sugary foods was introduced, clearly a measure targeted at changing individual behaviours at the population level. This triggered equity concerns because such foods are generally more likely to be purchased by lower-income individuals. Although the tax proved effective in lowering consumption of fatty foods and sugary drinks (42), it is very difficult to measure whether and how it affected existing inequalities in health and weight. The effectiveness of the tax in addressing obesity issues in Hungary is therefore questionable. As this research demonstrated, numerous daily practices were linked with female BMI, and occupational class was associated with male BMI. This finding suggests that a policy targeted at changing daily habits might be more effective for women, while a structural approach and occupation-based interventions might be more relevant for men. This may involve examination of men's dietary and eating habits versus their activity levels (measured using metabolic equivalents, or MET scores) during working time in those occupational categories for which the highest average BMI values were reported. As occupational characteristics shape leisure time behaviours, activities undertaken during out-of-work hours may also be examined.
Time-use diaries were shown to be a reliable source of data for this purpose (43), and this is also a possible direction toward which the present study can be expanded. Funding This research was supported by the Economic and Social Research Council (ESRC) grant number ES/L011662/1.
6,073.2
2018-12-01T00:00:00.000
[ "Economics" ]
Zoology of Atlas-groups: dessins d'enfants, finite geometries and quantum commutation Every finite simple group P can be generated by two of its elements. Pairs of generators for P are available in the Atlas of finite group representations as (not necessarily minimal) permutation representations P. It is unusual but significant to recognize that a P is a Grothendieck dessin d'enfant D and that most standard graphs and finite geometries G, such as near polygons and their generalizations, are stabilized by a D. In our paper, tripods P − D − G of rank larger than two, corresponding to simple groups, are organized into classes, e.g. symplectic, unitary, sporadic, etc. (as in the Atlas). An exhaustive search and characterization of non-trivial point-line configurations defined from small index representations of simple groups is performed, with the goal of recognizing their quantum physical significance. All of the defined geometries G have a contextuality parameter close to its maximal value of 1. Introduction Over recent years, it has been recognized that the detailed investigation of commutation between the elements of generalized Pauli groups (the qudits and arbitrary collections of them [1]) is useful for a better understanding of concepts of quantum information such as error correction [2,3], entanglement [4,5] and contextuality [6,7], which are cornerstones of quantum algorithms and quantum computation. Only recently the first author observed that much of the information needed is encapsulated in permutation representations, of rank larger than two, available in the Atlas of finite group representations [8]. The coset enumeration methodology of the Atlas was used by us for deriving many finite geometries underlying quantum commutation and the related contextuality [9]-[11]. As a bonus, the two-generator permutation groups and their underlying geometries may fortunately be considered as dessins d'enfants [13], although this topological and algebraic aspect of the finite simple (or not simple) groups is barely mentioned in the literature. Ultimately, it may be that the Monster group and its structure fit our quantum world, as in Dyson's words [11]. More cautiously, in Sec. 2 of the present paper, we briefly account for the group concepts involved in our approach by defining a tripod P − D − G. One leg P is a desired two-generator permutation representation of a finite group P [8]. Another leg, D, signifies the coset structure of the chosen subgroup H of the two-generator free group G (or of a subgroup G′ of G with relations), whose finite index [G : H] = n is the number of edges of D, and at the same time the size of the set on which P acts, as in [10]. Finally, G is the geometry with n vertices that is defined/stabilized by D [9]. Then, in Sec. 3, we organize the relevant P − D − G tripods taken from the classes of the Atlas and find that many of them reflect quantum commutation, specifically the symplectic, unitary and orthogonal classes. The geometries of other (classical and sporadic) classes are investigated similarly with the goal of recognizing their possible physical significance. A physically oriented survey of simple groups is [12].
Groups, dessins and finite geometries Following the impetus given by Grothendieck [14], it is now known that there are various ways to dress a group P generated by two permutations: (i) as a connected graph drawn on a compact oriented two-dimensional surface, a bicolored map (or hypermap) with n edges, B black points, W white points, F faces, genus g and Euler characteristic 2 − 2g = B + W + F − n [15]; (ii) as a Riemann surface X of the same genus equipped with a meromorphic function f from X to the Riemann sphere C̄ unramified outside the critical set {0, 1, ∞}, the pair (X, f) being called a Belyi pair and, in this context, hypermaps are called dessins d'enfants [13,14]; (iii) as a subgroup H of the free group G = ⟨a, b⟩ where P encodes the action of (right) cosets of H on the two generators a and b, the Todd-Coxeter coset enumeration algorithm doing the job [10]; and finally (iv), when P is of rank at least three, that is, with a point stabilizer having at least three orbits, as a non-trivial finite geometry [9]-[11]. Finite simple groups are generated by two of their elements [16], so that it is useful to characterize them as members of the categories just described. There are many mathematical papers featuring the correspondence between items (i) and (ii) in view of a better understanding of the action of the absolute Galois group Gal(Q̄/Q), the automorphism group of the field Q̄ of algebraic numbers, on the hypermaps [14,15,17]. Coset enumeration featured in item (iii) is at work in the permutation representations of finite groups found in the Atlas [8]. Item (i) in conjunction with (iii) and (iv) allowed us to arrive at the concept of geometric contextuality as a lack of commutativity of cosets on the lines of the finite geometry stabilized by P [10]. Item (iv) may be further clarified thanks to the concept of the rank of a permutation group P. First, it is expected that P acts faithfully and transitively on the set Ω = {1, 2, ..., n} as a subgroup of the symmetric group S_n. The action of P on a pair of distinct elements of Ω is defined as (α, β)^p = (α^p, β^p), p ∈ P, α ≠ β. The orbits of P on Ω × Ω are called orbitals and the number of orbits is called the rank r of P on Ω. The rank of P is at least two and the 2-transitive groups identify with the rank-2 permutation groups. Second, the orbitals for P are in one-to-one correspondence with the orbits of the stabilizer subgroup P_α = {p ∈ P | α^p = α} of a point α of Ω. This means that r is also defined as the number of orbits of P_α. The orbits of P_α on Ω are called the suborbits of P and their lengths are the subdegrees of P. A complete classification of permutation groups of rank at most 5 is in the book [18]. Next, selecting a pair (α, β) ∈ Ω × Ω, α ≠ β, one introduces the two-point stabilizer subgroup P_(α,β) = {p ∈ P | (α, β)^p = (α, β)}. There exist m non-isomorphic two-point stabilizer subgroups S_m of P, with 1 < m ≤ r. Selecting the largest one with α ≠ β, one defines a point/line incidence geometry G whose points are the elements of Ω and whose lines are defined by the subsets of Ω sharing the same two-point stabilizer subgroup. Thus, two lines of G are distinguished by their (isomorphic) stabilizers acting on distinct subsets of Ω. A non-trivial geometry arises from P as soon as the rank of the representation P of P is r > 2 and simultaneously the number of non-isomorphic two-point stabilizers of P is m > 2. Geometric contextuality Let G′ be a subgroup of the free group G = ⟨a, b⟩ endowed with a set of relations and H a subgroup of G of index n. As shown in Sec.
2.1, the permutation representation P associated to the pair (G ′ , H) is a dessin d'enfant D whose edges are encoded by the representative of cosets of H in G ′ . A graph/geometry G may be defined by taking the n vertices of G as the edges of D and the edges of G as the distinct (but isomorphic) two-point stabilizer subgroups of P. Further, G is said to be contextual if at least one of its lines/edges corresponds to a set/pair of vertices encoded by non-commuting cosets [10]. A straightforward measure of contextuality is the ratio κ = E c /E between the number E c of lines/edges of G with non-commuting cosets and the whole number E of lines/edges of G. Of course, lines/edges passing through the identity coset e have commuting vertices so that one always as κ < 1. In Sec. 3 below, the contextuality parameter κ corresponding to the collinear graph of the relevant geometry G is displayed in the right column of the tables. In order to compute κ, one needs the finite presentation of the corresponding subgroup H in G ′ leading to the permutation representation P but this information is not always available in the Atlas. A few significant geometries There exist layers in the organization of finite geometries, see [20] for an introduction. A partial linear space is an incidence structure Γ(P, L) of points P and lines L satisfying axioms (i) any line is at least with two points and (ii) any pair of distinct points is incident with at most one line. In our context, the geometry G that is defined by a two-generator permutation group P, alias its dessin d'enfant D, has order (s, t) meaning that every line has s + 1 points and every point is on t + 1 lines. Thus G is the geometric configuration [p s+1 , l t+1 ] (r) , with p and l the number of points and lines. The extra index r denotes the rank of P from which D arises. We introduce a first layer of organization that is less restrictive that of a near polygon to be defined below and that of a symplectic polar space encountered in Sec. 3.3. We denote by G u = G(s, t; u) a connected partial linear space with the property that, given a line L and a point x not on L, there exist a constant number u of points of L nearest to x. A near polygon (or near 2d-gon) is a partial linear space such that the maximum distance between two points (the so-called diameter) is d and, given a line L and a point x not on L, there exists 'a unique point' on L that is nearest to x. A graph (whose lines are edges) is of course of type G 1 . A near polygon is, by definition, of type G 1 . Symplectic polar spaces are of the form G u , possibly with u > 1, but not all G u with u > 1 are polar spaces. A generalized polygon (or generalized N-gon) is a near polygon whose incidence graph has diameter d (the distance between its furthest points) and girth 2d (the length of a shortest path from a vertex to itself). According to Feit-Higman theorem [21], finite generalized N-gons with s > 1 and t > 1 may exist only for N ∈ {2, 3, 4, 6, 8}. They consist of projective planes with N = 3, and generalized quadrangles GQ(s, t), generalized hexagons GH(s, t) and generalized octagons GO(s, t) when N = 4, 6, 8, respectively. Many G ′ s have a collinearity graph that is a strongly regular graph (denoted srg). These graphs are partial geometries pg(s, t; α) of order (s, t) and (constant) connection number α. By definition, α is the number of points of a line L joined to a selected point P by a line. The partial geometries pg listed in our tables are those associated to srg graphs found in [19]. 
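The rank r and the contextuality ratio κ defined above lend themselves to direct computation. The sketch below is illustrative only (hypothetical inputs, not the authors' code): it uses SymPy to obtain the rank of a transitive permutation action as the number of orbits of a point stabilizer, and computes κ as the fraction of lines whose coset representatives do not all commute.

# Illustrative sketch: rank of a permutation action and the
# contextuality ratio kappa = E_c / E defined in the text.
from sympy.combinatorics.named_groups import DihedralGroup

def rank_of_action(G):
    # For a transitive group G on {0, ..., n-1}, the rank is the
    # number of orbits of the stabilizer of a point.
    return len(G.stabilizer(0).orbits())

def contextuality_kappa(lines, coset_rep):
    # lines: iterable of lines, each a list of vertex labels;
    # coset_rep: dict mapping a vertex label to a permutation that
    # represents its coset.  A line counts towards E_c if at least
    # one pair of its representatives fails to commute.
    def noncommuting(line):
        reps = [coset_rep[v] for v in line]
        return any(x*y != y*x for i, x in enumerate(reps) for y in reps[i+1:])
    bad = sum(1 for line in lines if noncommuting(line))
    return bad / len(lines)

# Small check of the rank function: the dihedral group D5 acting on
# 5 points has a point stabilizer of order 2 with orbits {0}, {1,4}, {2,3},
# hence rank 3.
print(rank_of_action(DihedralGroup(5)))  # 3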
A few small examples Let us illustrate our concepts by selecting a rank 3 (or higher) representation for the group of the smallest cardinality in each class of simple groups. The notation for the simple groups and their representations are taken from the Atlas [8]. Alternating The smallest non-cyclic simple group is the alternating group A 5 whose finite representation is H = a, b|a 2 = b 3 = (ab) 5 = 1 . The permutation representations of A 5 are obtained by taking the subgroups of finite index of the free group G = a, b whose representation is H. Table 1 list the rank r and the number m of two-point stabilizer subgroups for the permutation representations P up to rank 15. Symplectic The smallest (simple) symplectic group is S ′ 4 (2) = A 6 whose finite representation is H = a, b|a 2 = b 4 = (ab) 5 = (ab 2 ) 5 = 1 . Table 2 list the rank r and the number m of two-point stabilizer subgroups for the permutation representations P up to rank 30. The smallest non trivial permutation group P has index 15, rank 3 and subdegrees 1, 6, 8 as shown in Table 2. Table 2. Parameter r and s for the small index representations of A 6 . The geometry that is stabilized by P is the (self-dual) generalized quadrangle GQ(2, 2), alias the graphL(K 6 ) (the complement of line graph of the complete graph K 6 ). It is known that GQ(2, 2) is a model of two-qubit '2QB' commutation, see [10,Fig. 12]. The permutation representation of index 30 of S ′ 4 (2) stabilizes the configuration [30 16 , 160 3 ] of rank 7 that turns to be a geometry of type G 2 . As for two-qutrit commutation, one uses the S 4 (3) permutation representation P of rank 3 and index 40 b found in the Atlas. The dessin d'enfant picturing P is found on Fig. 1. The dessin has signature (B, W, F, g) = (8, 28, 6, 0). Unitary The smallest (simple) unitary group is Orthogonal The smallest (simple) orthogonal group is O 7 (3). The Atlas lists four representations of rank 3 and index 351, 364, 378 and 1080. We could recognize that the first representation is associated to the strongly regular graph srg(351, 126, 45, 45) and the geometry NO − (7, 3), the second representation is associated with srg(364, 120, 38, 40) and the geometry of the symplectic polar space W 5 (3), the third representation is associated with srg(378, 117, 36, 36) and presumably the partial geometry pg (13,18,4), and the fourth representation is associated with srg(1080, 351, 126, 108) and the geometry NO + (8, 3), see [19] for details about the undefined acronyms. The second representation corresponds to the commutation of the 364 three-qutrit '3QT' observables [1]. It is found to be of type G 4 . The representation of index 1120 and rank 4 of O 7 (3) found in the Atlas is associated to the dual of W 5 (3) that is the dense near hexagon DQ(6, 3). See table 9 for further details. Exceptional and twisted The smallest (simple) twisted exceptional group is Sz (8). The representation of index 520 listed in the Atlas leads to an unconnected graph. The representation of index 560 of rank 17 and subdegrees 1, 13 3 , 26 6 , 52 7 leads to a configuration of type [560 13 , 1820 4 ] (i.e. every point is on 13 lines and there are 1820 lines of size 4). The Atlas also provides a representation of index 1456 and rank 79 that leads to another geometry, of order (3,4), with again 1820 lines of size 4 (see also the relevant item in table 10). The physical meaning of both representations, if any, has not been discovered. Table 3. 
A few characteristics of a index m and rank r = 3 (or higher) representation of the simple group of smallest cardinality in each class. The characteristics of the Sp(4, 3) representation for two qutrits is added to this list. The question marks point out that a physical interpretation is lacking. Sporadic The smallest sporadic group is M 11 . The Atlas provides representations of rank 3 and index 55, rank 4 and index 66, and rank 8 and index 165. The first representation leads to the triangular graph T (11) = L(K 11 ). The second one leads two a non strongly regular graph with 495 edges, of girth 4 and diameter 2. The third representation leads to a partial linear space of order (2, 3) with 220 lines/triangles. Brief summary The results of this subsection are summarized in Table 3. Observe that the smallest simple linear group is equivalent to A 5 and that the smallest untwisted group G 2 (2) ′ is similar to U 3 (3). Except for M 11 and Sz(8) all these 'small' groups occur in the commutation of quantum observables. Further relations between the geometry of simple groups and the commutation of multiple qudits are given at the next section. Alternating The non trivial configurations that are stabilized by (low rank) small simple alternating groups are listed in Table 4. The alternating group A 7 is missing because no non-trivial geometry has been recognized. Permutation groups for alternating groups A n , n > 8 are those listed in the Atlas. The A 8 configuration on 35 points Table 4. The non trivial configurations stabilized by small simple alternating groups and their rank r given as an index. The notation T (n) = L(K n ) means the triangular graph and S(2, k, v) means a Steiner system, that is, a 2 − (v, k, 1) design [19]. The symbol srg is for a strongly regular graph. A description of the A 8 configuration on 35 points is given in the text. It has been shown at the previous section that A 5 and A 6 are associated to threequbit contextuality (via Mermin's pentagram) and two-qubit commutation (via the generalized quadrangle of order two GQ(2, 2)), respectively. Since A 8 encodes the 35 lines in P G (3,2), the corresponding configuration may be seen as a model of four-qubit contextuality, see [10,Sec. 4] for the recognition of P G(3, 2) as a model of a 4QB maximum commuting set and [22] for an explicit reference to the O + (6, 2) polarity. As the permutation representation is not in the Atlas, we provide a few details below. The permutation representation on 35 points of A 8 is P =< 35|(3, 4, 6, 12, 10, 5) (7,13,19,23,15,9) The representation is of rank 3, with suborbit lengths (1,16,18), and corresponds to a dessin D of signature (B, W, F, G) = (9, 15, 5, 4)), that is, of genus 4, and cycles [6 4 3 3 1 2 , 3 10 1 5 , 7 5 ]. The two-point stabilizer subgroups are of order 32 and 36. The group of order 36 is isomorphic to the symmetry group Z 2 3 × Z 2 2 of the Mermin square (a 3 × 3 grid), see [9,Sec. 4.4]. The edges of the collinearity graph of the putative geometry G are defined as sharing the same stabilizer subgroup of order 36, up to isomorphism, but acting on different subsets. The graph is srg of spectrum [16 1 , 2 20 , −4 14 ] and can be found in [19]. The lines of G are defined as the maximum cliques of the collinearity graph. In the present case, the lines do not all share the same stabilizer subgroup. One gets G = [35 8 , 56 5 ] (3) , a finite geometry of type G 2 . 
The collinearity graph associated to the stabilizer subgroup of order 32 is the complement of the collinearity graph of G and the corresponding geometry is Ḡ = [35 6 , 30 7 ] (3) , a configuration of type G 3 and a model of the O + (6, 2) polarity. Linear The non-trivial configurations that are stabilized by (low rank) small simple linear groups are listed in Table 5. As for a relation to physics, we already know that the linear group L 2 (4) = L 2 (5) = A 5 is associated to a 3QB pentagram and that L 2 (9) = A 6 is associated to 2QB commutation. Then the group L 5 (2) is associated to 5QB contextuality through lines in P G(4, 2). The other configurations in Table 5 lack a physical meaning. Symplectic The symplectic class of simple groups is a very useful one for modeling quantum commutation of multiple qudits. In the previous section, we already met the groups S ′ 4 (2) and S 4 (3), associated to two qubits and two qutrits, respectively. A few remarks are in order. Stricto sensu, only the generalized quadrangles GQ(2, 4) and GQ(3, 3) are 'stabilized' by the corresponding permutation representations P (and dessins d'enfants D; their signatures are given in the second column). The lines of each of the two geometries are defined as having two-point stabilizer subgroups acting on the same subsets of points. Table 6. Characteristics of small index representations of S 4 (3) and their geometry. The bold notation corresponds to geometries that are 'stabilized' by the corresponding permutation representation P; the other geometries are only 'defined' from the collinearity graph associated to P. In a weaker sense, the permutation representations of index 36, 40 a and 45 'define' the geometries OA(6, 3), the dual of GQ(3, 3) and GQ(4, 2) from the collinearity graph, its srg spectrum (shown in the third column) and the structure of its maximum cliques. In these last cases, not all lines of the geometry have their pair of points corresponding to the same two-point stabilizer subgroup. Observe that case 40 a and case 40 b are isospectral but with a distinct D-signature. The group S 6 (2). Another group of rich structure is the symplectic group S 6 (2), whose finite representation is H = ⟨a, b | a 2 = b 7 = (ab) 9 = [ab 2 ] 12 = [a, b] 3 , [a, b 2 ] 2 = 1⟩. The smallest non-trivial permutation representation P of S 6 (2) stabilizes the symplectic polar space W 5 (2) associated to three qubits [1]. The small permutation representations of S 6 (2) are shown in Table 7. Table 7. Characteristics of small index representations of S 6 (2) and their geometry. The meaning of the bold notation is as in Table 6. The geometry of multiple qudits. We define the multiple qudit Pauli group P q (q = p n ) as the n-fold tensor product of single p-dit Pauli operators, with ω = exp(2iπ/p) and p a prime number. Observables of P q /Center(P q ) are seen as the elements of the 2n-dimensional vector space V (2n, p) defined over the field F p . The commutator [., .] : V (2n, p) × V (2n, p) → P ′ q induces a non-singular alternating bilinear form on V (2n, p), and simultaneously a symplectic form on the projective space P G(2n − 1, p) over F p . The |V (2n, p)| = p 2n observables of P q /Center(P q ) are mapped to the points of the symplectic polar space W 2n−1 (p) of cardinality |W 2n−1 (p)| = σ(p 2n−1 ) = (p 2n − 1)/(p − 1) (where σ(·) is the sum-of-divisors function of its argument), and two elements of [P q /Center(P q ), ×] commute if and only if the corresponding points of the polar space W 2n−1 (p) are collinear [1].
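The commutation criterion just stated is easy to check once each observable is written in its symplectic coordinates. The sketch below is a minimal illustration under the conventions of this section (not the authors' code): it encodes an n-qudit observable, up to phase and center, by a pair of exponent vectors in (F_p)^n, tests commutation with the alternating form, and evaluates the point count σ(p^(2n−1)), which reproduces values appearing in the text such as 15 (two qubits), 40 (two qutrits) and 364 (three qutrits).

# Sketch: commutation of multi-qudit Pauli observables via the
# symplectic form on V(2n, p), and the point count of W_{2n-1}(p).

def commute(x, z, xp, zp, p):
    # Two observables with exponent vectors (x|z) and (xp|zp) in (F_p)^{2n}
    # commute iff the alternating form x.zp - xp.z vanishes mod p.
    form = sum(xi*zpi - xpi*zi for xi, zi, xpi, zpi in zip(x, z, xp, zp))
    return form % p == 0

def points_of_polar_space(n, p):
    # |W_{2n-1}(p)| = sigma(p^(2n-1)) = (p^(2n) - 1)/(p - 1)
    return (p**(2*n) - 1) // (p - 1)

# Single-qubit check: X = (1|0) and Z = (0|1) anticommute; X commutes with X.
print(commute([1], [0], [0], [1], 2))   # False
print(commute([1], [0], [1], [0], 2))   # True

# Point counts for (n, p) = 2 qubits, 2 qutrits, 3 qubits, 3 qutrits.
print([points_of_polar_space(n, p) for n, p in [(2, 2), (2, 3), (3, 2), (3, 3)]])
# [15, 40, 63, 364]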
A subspace of V (2n, p) is called totally isotropic if the symplectic form vanishes identically on it. The number of such totally isotropic subspaces/generators g e (of dimension p n − 1) is Σ(n) = n i=1 (1 + p i ). A spread s p of a vector space a set of generators partitioning its points. One has |s p | = p n +1 and |V (2n, p)|−1 = |s p |×|g e | = (p n + 1) × (p n − 1) = p 2n − 1. A generator g e corresponds to a maximal commuting set and a spread s p corresponds to a maximum (and complete) set of disjoint maximal commuting sets. Two generators in a spread are mutually disjoint and the corresponding maximal commuting sets are mutually unbiased. The symplectic polar spaces W 2n−1 (p) at work, alias the commutation structure of n p-dits may be constructed by taking the permutation representation of index σ(p 2n−1 ) of the symplectic (rank 3) group S 2n (p) available in the Atlas. The special case of twoqubits [with S ′ 4 (2)], two-qutrits [with S 4 (3)], three qubits [with S 6 (2)]. For the group S 6 (3), one finds two permutation representations of index 364 and 1120 that are similar to the ones of the same index found for the group O 7 (3) (see Sec. 2, item 'Orthogonal' and Table 9). The representation of index 364 corresponds to the commutation structure of three qutrits and the one of index 1120 is the dual geometry encoding the nonintersection of the 1120 maximum commuting sets of size 26 built with the three-qutrit observables. Unitary The unitary class of simple groups is a very rich one. It defines many generalized quadrangles, the hexagons GH(2, 2) associated to 3-qubit contextuality (as shown in Sec. 2, table 3), and two near hexagons including the largest of 'slim dense' near hexagons on 891 points, as shown in Table 8 [24]. Whether such configurations have a physical relevance is unknown at the present time. Since unitary groups play a role as normalizers of Pauli groups, it may be expected that some of these geometries occur in the context of quantum error correction and Clifford groups [3]. In passing, it is noticeable to feature the hyperplane structure of the U 3 (4) configuration. A basic hyperplane is defined from points of the collinearity graph that are either at minimum or maximal distance from a selected vertex. There are 208 such hyperplanes. The other hyperplanes may be easily built from Velkamp sums H ⊕ H ′ of the basic hyperplanes H and H ′ , where the set theoretical operation ⊕ means the complement of the symmetric difference (H ∪ H ′ ) \ (H ∩ H ′ ) (as in [26]). One finds 10 distinct classes of hyperplanes totalizing 2 16 hyperplanes. Orthogonal The geometries carried by orthogonal simple groups of small index are listed in Table 9. It is noticeable that some representations are associated to the non-intersection of maximum commuting sets for three qubits [from O + 8 (2) : 2)] and three qutrits [from . These geometries are introduced in [1, Table 2]. The srg's are identified in [19]. The near hexagon O − 8 (2) There exists a near polygon (thus of type G 1 ) built from O − 8 (2) (on 765 points) that seems to have been unnoticed. The configuration is of the type [765 7 , 1071 5 ] (4) with collinearity graph of spectrum [28 1 , 11 84 , 1 476 , −7 204 ] and diameter 3 corresponding to a near hexagon of order (4,6). Since the permutation representation is a subgroup of the modular group Γ = P SL(2, Z), it is possible to see the dessin D as an hyperbolic polygon D H . As in [10,11], the genus g of D equals that of the hyperbolic polygon D H , a face of group Table 9. 
The non trivial configuration 'defined' by orthogonal groups with their corresponding D signature. The notation 3QB * (resp. 3QT * ) means that we are dealing with the geometry associated to the non-intersection of the maximum commuting sets built with the three-qubit (resp. three-qutrit) observables. Several configurations are of type G i . The near hexagon O − 8 (2) on 765 points is described in the text. D corresponds to a cusp of D H , the number of black points (resp. of white points) of D is where f is the number of fractions, c is the number of cusps, ν 2 and ν 3 are the number of elliptic points of order two and three of D H , respectively. In the present case, the polygon D H is associated to a non-congruence subgroup of level 17 of Γ and (n, g, ν 2 , ν 3 , c, f ) = (765, 33,13,18,45,250). A schematic of D H is shown in Fig. 2. Exceptional A few exceptional groups of low index and low rank are defining well known generalized polygons GH(2, 2) and its dual, GH(4, 4) and its dual, GH (2,8), the Ree-Tits octagon GO (2,4), as well as two extra G 1 geometries [coming from Sz (8)]. This is summarized in Table 10. Sporadic Finally, small index representations of small sporadic groups lead to geometries of various types. The results are split into three tables: configurations arising from Mathieu groups in table 11, from Leech lattice groups in table 12 and the remaining ones -small sections of the Monster group and pariahs-in table 13. Niticeable geometries arising from sporadic groups are the M 24 near hexagon NH(2, 14) on 759 points, the J 2 near octagon NO(2, 4) on 315 points and Tits generalized octagon GO(2, 4) on 1755 points. Another remarkable geometry is the one built from the McL graph on 275 points, which is found to be of type G 2 , see also https://www.win.tue.nl/∼aeb/graphs/McL.html for details about the McL graph. Conclusion We explored two-generator permutation representations of simple groups, as listed in the Atlas [8], with the viewpoint of Grothendieck's dessins d'enfants and the finite geometries associated to them, as started in our earlier work. A strong motivation for this work is the understanding of commutation structures in quantum information and their contextuality [9]- [11], [22,23]. A wealth of known and new point-line configurations G, and as much as possible their contextuality parameter κ, are defined from permutation representations P and their corresponding dessin D, using the methodology described in Sec. 2. It is intriguing that the concept of a near polygon, defined in Sec. 2.3, may be usefully expanded to that of a geometry of type G i (i > 1) to qualify some of the new configurations we found. Looking at unitary groups of
6,995.8
2016-01-19T00:00:00.000
[ "Mathematics" ]
A new phenological metric for use in pheno‐climatic models: A case study using herbarium specimens of Streptanthus tortuosus Premise Herbarium specimens have been used to detect climate‐induced shifts in flowering time by using the day of year of collection (DOY) as a proxy for first or peak flowering date. Variation among herbarium sheets in their phenological status, however, undermines the assumption that DOY accurately represents any particular phenophase. Ignoring this variation can reduce the explanatory power of pheno‐climatic models (PCMs) designed to predict the effects of climate on flowering date. Methods Here we present a protocol for the phenological scoring of imaged herbarium specimens using an ImageJ plugin, and we introduce a quantitative metric of a specimen's phenological status, the phenological index (PI), which we use in PCMs to control for phenological variation among specimens of Streptanthus tortuosus (Brassicaceeae) when testing for the effects of climate on DOY. We demonstrate that including PI as an independent variable improves model fit. Results Including PI in PCMs increased the model R 2 relative to PCMs that excluded PI; regression coefficients for climatic parameters, however, remained constant. Discussion Our protocol provides a simple, quantitative phenological metric for any observed plant. Including PI in PCMs increases R 2 and enables predictions of the DOY of any phenophase under any specified climatic conditions. Studies of phenology-the timing of life cycle events-have provided some of the strongest evidence that many organisms have been or will be affected by global changes in climate (Parmesan and Yohe, 2003;Menzel et al., 2006). Plants are sensitive to changes in climate, especially changes in temperature, and plant phenology has been monitored and tracked through time using a variety of approaches, including long-term in situ observations of living plants (Sparks and Carey, 1995;Chmielewski and Rötzer, 2001;Rutishauser et al., 2009), citizen science networks (Mayer, 2010;Haggerty et al., 2013), satellite imagery (Stöckli and Vidale, 2004;Studer et al., 2007;White et al., 2009), and herbarium specimens (Lavoie and Lachance, 2006;Panchen et al., 2012;Hufft et al., 2018). Because of their long temporal record and expansive geographic range, herbarium specimens have been used to detect species-specific shifts in phenology through time in response to changing climate (Lavoie, 2013;Willis et al., 2017;Jones and Daehler, 2018). Herbarium-based studies have detected temporal advancement in phenology and have quantified the sensitivity of phenology to climatic parameters such as temperature and precipitation. Given the value of herbarium specimens in studying the effects of climate change on the seasonal cycles of plants, several recent collaborative efforts have aimed to digitize and to provide electronic access to the images and label information of millions of herbarium specimens currently housed in separate herbaria (Willis et al., 2017;Yost et al., 2018). If these efforts are successful, then herbarium specimens will be widely available for study and provide a wealth of easily accessible new data with which to investigate phenological patterns over space and time. Herbarium-based studies designed to link phenology to local climatic conditions typically rely on the day of year of collection (DOY) of specimens that were collected in flower. 
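For concreteness, the collection date recorded on a specimen label can be converted to a DOY with a few lines of standard-library code; this is an illustrative snippet, not part of the original workflow.

# Illustrative conversion of a collection date to day of year (DOY).
from datetime import date

def day_of_year(year, month, day):
    return date(year, month, day).timetuple().tm_yday

print(day_of_year(1998, 7, 4))   # 185 (186 in a leap year)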
In these studies, DOY is considered to be a proxy for first flowering date (FFD) or the date of peak flower (DPF) (Primack et al., 2004;Diskin et al., 2012;Davis et al., 2015), two phenological events that are commonly used to track phenology in field-based observations. The DOY is then used as a dependent variable and regressed against either the year of collection or one or more climate parameters during the year of specimen collection (or during the months preceding it) in order to detect temporal shifts in phenology or to quantify the sensitivity of plants to specific climatic parameters. Using DOY as a proxy for flowering time is problematic because reproductive herbarium specimens may have been collected at any point between bud formation and fruit ripening; therefore, the DOY may not accurately represent either FFD or DPF. We can use a hypothetical regression to visualize two inaccuracies that may occur by using the DOY of reproductive specimens as a proxy for either of these phenological metrics (Fig. 1). First, the DOY of a flowering specimen will always and necessarily be on or after its true FFD (Fig. 1A). Second, the DOY may be before, after, or on the true DPF ( Fig. 1B-D). Specimens may be preferentially collected before the true DPF if the floral structures are fragile or ephemeral, and may be preferentially collected after the true DPF if fruits are necessary for correct plant identification or are particularly showy (Fig. 1B and 1C,respectively). If specimens are collected evenly throughout their reproductive period, then DOY may accurately predict the true DPF (Fig. 1D). Figure 1 demonstrates a case in which DOY and FFD or DPF are strongly positively correlated among specimens, but DOY does not accurately predict either FFD or DPF because specimens may not be collected on their true FFD or DPF. If this situation is common, then assuming that DOY accurately represents FFD or DPF when investigating relationships between phenology and climate would reduce the explanatory power of the resulting models because of the high variation in phenological stage among herbarium sheets, and the fact that variation in DOY caused by variation in the actual phenophase of collection (i.e., FFD or DPF) is conflated with variation in the timing of collection of a given specimen relative to the actual timing of FFD or DPF. This effect is likely to be particularly intense among species that exhibit long flowering durations, as longer flowering durations increase the maximum potential difference in the timing of collection DOY from the day of year of actual FFD or DPF. We can see an example of how variation among sheets impacts these analyses by looking at studies that investigate relationships between phenology and climate using both the estimated peak flowering date from herbarium specimens and the true peak flowering date from field observations. Robbirt et al. (2011) compared sensitivities of Ophrys orchids using both herbarium specimens and field data. They recorded the DOY of collection of herbarium specimens that were assumed to be in peak flower (excluding those specimens for which fewer than 60% of flowers were open) but likely included specimens that were collected both pre-and post-peak flowering. Data recorded from field-based observations, by contrast, represented the true dates of peak flower. 
When the flowering date derived from each data set was regressed (separately) on temperature, both data sets showed a negative relationship between flowering date and temperature, but temperature explained four times more variation in flowering date in the field data-based model than the herbarium data-based model (58.6% vs. 13.4%, respectively), presumably because it did not conflate variation in actual DPF with variation caused by sample collection that occurred before or after DPF. High variation among the phenological status of herbarium sheets is one potential reason for the low explanatory power of models constructed with herbarium-derived data. Another potential factor includes the possibility that herbarium-derived data, which are often distributed across broader spatial extents than field-based data, may therefore also differ from many field-based data with respect to the range of climatic conditions represented. Reducing, or controlling statistically for, variation among herbarium or living specimens in their phenological status could help to improve models and to clarify relationships between climate and flowering or collection date. (Figure 1 caption: panel A shows that the FFD is necessarily earlier than the DOY, i.e., there are no values of DOY above the line representing the 1:1 relationship between FFD and DOY; panels B-D show the three hypothetical relationships between DPF and DOY, in which DOY may be (B) before, (C) after, or (D) on the true DPF but is rarely an accurate representation of the true DPF.) This could be done by either (1) restricting data sets to include only those specimens collected at a specific phenological stage or (2) incorporating into statistical models a quantitative metric that estimates the phenological status of individual specimens or plants. Given that herbarium specimens are rarely collected precisely at first flower or at peak flowering, the first approach would drastically reduce the sample size used to estimate relationships between phenology and climate. This reduction in sample size might preclude the analysis of species represented by relatively few specimens (e.g., <100 sheets). In addition, because herbarium specimens represent an instantaneous snapshot of an individual's phenological status, it is nearly impossible to determine whether an individual specimen was collected at peak flower. However, we can easily quantify a specimen's phenological status by determining the numbers of the different classes of reproductive organs (e.g., buds, flowers, fruits) present on each sheet, and then converting those counts into a proportional weighted mean.
Here, our objectives are (1) to present a protocol designed to score and to record the numbers of reproductive structures representing successive developmental stages on imaged herbarium specimens using a plugin (Cell Counter) developed for the image analysis software ImageJ; (2) to introduce a new integrated metric of a specimen's phenological status-the phenological index (PI)which is calculated using the counts derived from Cell Counter and allows us to control for the variation in the phenological status of collected specimens when testing statistical models for the effect of climatic conditions on the DOY of specimen collection; (3) to demonstrate how the PI can be used to construct and improve pheno-climatic models using a herbarium-derived data set composed of mountain jewelflower (Streptanthus tortuosus Kellogg, Brassicaceae) specimens; and (4) to discuss how parameterized models that include the PI as an independent variable can be used as a predictive model and as a means to quantify the length of the reproductive period. In addition to demonstrating the usefulness of incorporating the PI into pheno-climatic models, we tested the following three predictions with herbarium-derived data for S. tortuosus. First, given that many studies of plant phenology report that an increase in local winter or spring temperatures (over time or space) induces individual plants or populations to flower earlier (Parmesan and Yohe, 2003;Menzel et al., 2006;Cleland et al., 2007), we predict that, across the localities from which herbarium specimens have been collected, elevated spring temperatures will be associated with earlier flowering in S. tortuosus. The relationship between flowering date and precipitation remains unclear and likely differs among species and communities (Hart et al., 2014;Munson and Sher, 2015;Rawal et al., 2015;Matthews and Mazer, 2016). Because the majority of the S. tortuosus records analyzed here were collected from localities that experience a Mediterranean climate, their growth or reproduction in the spring and summer may be strongly influenced by winter water availability. Where winter precipitation is relatively low, soils dry out more quickly during the following spring, and this may select for earlier flowering genotypes or induce earlier flowering as a plastic response (Franks, 2011;Hamann et al., 2018). Consequently, our second prediction is that flowering date will be positively correlated with total winter precipitation. Third, as differences in PI among herbarium specimens will account for a portion of the variation in the DOY, we predict that, for data sets comprising specimens among which there is wide variation in the PI, including the PI as an independent variable will result in a model with a higher predictive power than models that do not include PI. The phenological index The PI is an integrative metric derived from the proportions of each class of reproductive units (in this case buds, flowers, immature fruits, and mature fruits) present on a preserved plant on a herbarium sheet. The proportion of a given class is then weighted by an index representing the degree of phenological advancement of that class (e.g., buds = 1; open flowers = 2; immature fruits = 3; and mature fruits = 4). The following equation can be used to calculate the PI for each plant: where p x is the proportion of reproductive units in phenophase x and i is the index assigned to reproductive unit x. 
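Written out, the index described above is the count-weighted mean PI = Σ_x p_x · i_x over the phenophase classes x. A minimal sketch of the calculation (illustrative only, not the authors' code) is given below; it reproduces the worked example that follows, in which 50 buds, 40 open flowers, 10 immature fruits and no mature fruits give PI = 1.6.

# Sketch of the phenological index (PI): a weighted mean of the
# proportions of reproductive units, with weights
# bud = 1, open flower = 2, immature fruit = 3, mature fruit = 4.

def phenological_index(buds, flowers, immature_fruits, mature_fruits):
    counts = [buds, flowers, immature_fruits, mature_fruits]
    total = sum(counts)
    return sum((c / total) * i for i, c in enumerate(counts, start=1))

print(phenological_index(50, 40, 10, 0))   # 1.6
print(phenological_index(0, 12, 7, 26))    # ~3.31, the Figure 2 specimen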
The value of PI therefore represents a weighted mean of all of a specimen's reproductive units, where lower values are associated with early development and higher values are associated with more advanced development. For example, if a plant has 50 buds, 40 open flowers, 10 immature fruits, and zero mature fruits, the specimen would have a PI of 1.6, indicating that it is fairly early in its phenological progression. Scoring specimens One hundred twenty S. tortuosus (Brassicaceae) herbarium specimens from the California Academy of Sciences (CAS) and the University of California, Santa Barbara (UCSB), were imaged using an ORTECH Photo e-Box Plus 1419 imaging station (ORTECH Professional Lighting, Chula Vista, California, USA) at the Cheadle Center for Biodiversity and Ecological Restoration at UCSB. Each plant on the imaged sheets was scored with ImageJ using the plugin Cell Counter by counting the number of buds, flowers, immature fruits, and mature fruits present on each plant (ImageJ version 1.52a available at https ://imagej.nih.gov [Abramoff et al., 2004]; Cell Counter plugin available at https :// imagej.nih.gov/ij/plugi ns/cell-count er.html; Fig. 2). Cell Counter, originally developed for counting cells on microscope images, is a simple, fast, and reliable way to score imaged specimens. To score each specimen, the user places digital colored markers that correspond to each reproductive structure and then the program sums the total number of markers in each category, thereby providing an accurate count of the number of buds, flowers, immature fruits, and mature fruits present on each plant (Fig. 2). Cell Counter also allows the user to save the X-Y coordinates of each marker in an XML file that can later be recalled or edited. The protocol we developed and used to score S. tortuosus is provided in Appendix 1. The 120 S. tortuosus specimens were scored using Cell Counter according to definitions for each reproductive unit specific to this species (Table 1). One specimen sheet did not have any reproductive plants, and consequently our final data set contained 119 specimens. The counts obtained from Cell Counter for S. tortuosus specimens were converted into a phenological index for each plant using Equation 1. For herbarium specimens with more than one plant present on the sheet, the phenological index was averaged across all plants. Climatic data Each herbarium specimen was georeferenced by downloading the coordinates and the error radius from the California Consortium of Herbaria (CCH, http://ucjeps.berke ley.edu/conso rtium/ ), which is a database that contains location information for many California herbarium records (Fig. 3). These coordinates are georeferenced based on the description of the location on the specimen label. These coordinates were then used to download site-specific climatic data from PRISM (available at http://prism. orego nstate.edu) during the year and previous year that each herbarium specimen was collected. Specifically, we extracted total winter precipitation (cumulative precipitation during December, January, and February of the previous winter) and the spring (March, April, and May) mean maximum temperature (T max ). Winter precipitation was selected because the California Floristic Province, where S. tortuosus occurs, receives the majority of annual rainfall during winter months. 
Maximum temperature was selected instead of mean or minimum temperatures because this parameter has been shown to have a higher predictive power (R 2 ) than other temperature parameters in large-scale phenological models (Park and Mazer, 2018). (Figure 2 caption: each reproductive unit is marked as a bud (1), flower (2), immature fruit (3), or mature fruit (4). This specimen, which is assumed to represent one plant, has 0 buds, 12 flowers, seven immature fruits, and 26 mature fruits. It has an integrated phenological index of 3.31, which indicates a relatively late stage of phenological progression. The x-y coordinates of all of the individual markers can be saved as an XML text file, which can then be recalled or edited.) Statistical analyses In the analyses presented here, we analyzed a small proportion (n = 119 specimens) of all S. tortuosus specimens available from the CCH for which the exact collection date (day, month, and year) was recorded (Fig. 3). Despite this seemingly small sample size, Park and Mazer (2018) demonstrated that increasing the number of specimens included in pheno-climatic models past 100 specimens does not further improve model predictive power. For each specimen, that date was converted into a day of the year of collection (DOY; e.g., July 4 is day 185, or 186 on leap years). The DOY was evaluated for normality with a quantile-quantile plot. We used multiple linear regressions to investigate the relationship between DOY and local climatic conditions in the year of collection using two distinct models. In the first model, we made no attempt to account for variation in phenological status among sheets; as such, we did not include PI in this model. This model represented the manner in which phenological responses to local climate have historically been examined using herbarium specimens. In the second model, however, we controlled for variation in phenological status among sheets by including the PI for each specimen as a main effect in the model. By comparing the results of this second model against the baseline model that does not incorporate PI as a main effect, we were able to evaluate the degree to which the addition of PI as a main effect improved model performance or adjusted the predicted phenological responsiveness to differences in local climate. We validated the predictive power of both models using 10-fold cross-validation. Multiple linear regression analyses were performed in JMP Pro 13 (SAS Institute, Cary, North Carolina, USA) and multiple regressions using 10-fold cross-validation were performed using Python version 2.7.11 (Oliphant, 2007).
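A sketch of the two regression models and the cross-validation step, written for a current Python stack rather than the Python 2.7 scripts used in the study, is shown below; the file and column names are hypothetical and the data frame stands in for the 119-specimen data set.

# Sketch (not the original analysis script): Model 1 regresses DOY on
# spring Tmax and winter PPT; Model 2 adds PI as a main effect.
# File and column names ('doy', 'spring_tmax', 'winter_ppt', 'pi') are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

specimens = pd.read_csv("streptanthus_specimens.csv")  # hypothetical file

y = specimens["doy"]
X1 = specimens[["spring_tmax", "winter_ppt"]]          # Model 1
X2 = specimens[["spring_tmax", "winter_ppt", "pi"]]    # Model 2

cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, X in [("Model 1", X1), ("Model 2", X2)]:
    model = LinearRegression().fit(X, y)
    r2_cv = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2").mean()
    print(name, model.coef_, round(model.score(X, y), 2), round(r2_cv, 2))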
RESULTS The S. tortuosus herbarium specimens analyzed here were collected between 12 July 1898 and 9 May 1999. The DOY ranged from 88 to 253 (29 March to 10 September; mean = 182.03, or 1 July; SD = 34.55 days; Fig. 4A). The PI ranged from 1.05 to 3.89 (mean = 2.10, SD = 0.69; Fig. 4B). Despite a relatively small sample size (n = 119 specimens), we were able to capture a wide variety of collection dates and phenological progressions in our sample (Fig. 4). The mean number of plants per herbarium sheet used in this study was 2.67 (SD = 2.04 plants). To investigate the relationship between DOY and climate, we ran two multiple linear regressions. The first model (Model 1) includes temperature and precipitation parameters as main effects, whereas the second model (Model 2) includes the same climatic parameters in addition to the PI as main effects. Model 2 explains 31% more variation in DOY than Model 1 (R 2 = 0.47 vs. 0.36, respectively) and has a lower overall corrected Akaike information criterion (AICc, 1111.18 vs. 1132.97; Table 2). In order to test the power of these models to predict the DOY of collection among specimens not used in model construction, both models were validated using 10-fold cross-validation. The resulting models showed an even more dramatic increase in predictive power among models that incorporated PI relative to those that did not (measured by the mean R 2 across all folds; Model 1 R 2 = 21%; Model 2 R 2 = 41%; Appendix S1). Both models detected a significant and quantitatively similar relationship between DOY and spring maximum temperature; DOY advances with increased temperature. Model 1 parameter estimates indicate that DOY advances 4.23 ± 0.53 days/°C (F 1,116 = 63.47, Table 2). In both models, DOY is delayed in response to increased winter precipitation. Model 1 parameter estimates indicate that flowering time is delayed by one day for every 58.8-mm increase in winter precipitation (0.017 ± 0.01 days/mm of precipitation, F 1,116 = 7.05, P < 0.01), whereas Model 2 detected that DOY is delayed by one day for every 50-mm increase in winter precipitation (0.02 ± 0.01 days/mm, F 1,115 = 11.53, P < 0.01; Table 2). Model 2 indicates that PI increases with DOY, independent of variation in the climatic variables included in the model. Among the herbarium specimens, a specimen advances one phenological stage (e.g., from buds to flowers or from flowers to immature fruits) every 17.08 ± 3.37 days (F 1,115 = 25.66, P < 0.01; Table 2B). This means that, on average, 51.24 days elapse between the appearance of buds and the complete conversion of these buds to mature fruits. Although both models predicted qualitatively similar relationships between DOY and climate, the proportion of variance in DOY explained by each parameter in the models differed. In Model 1, the error variance in DOY was 22% higher than in Model 2 (62.9% vs. 51.4%, respectively; Fig. 5). Model 2 has a lower portion of unexplained variance because some of the unexplained variance in Model 1 is explained by the PI in Model 2. The PI explains 11.5% of the variance (Model 2; Fig. 5B). Spring maximum temperature explains a lower proportion of the total variance in Model 2 than in Model 1, likely because some of the variance attributable to the PI was incorrectly attributed to spring T max in Model 1 (32.9% vs. 34.4%, respectively; Fig. 5). Including the PI allows us to use the parameterized pheno-climatic model to predict the day of year of peak flowering of S. tortuosus at a given location under either current conditions or future projected climate change scenarios. Given Model 2, for example, we may predict the day of year on which S. tortuosus will be at peak flower (estimated here by a value of PI = 2.5) at a given location with a given set of climatic parameters, in this case winter precipitation (winter PPT) and maximum spring temperature (spring T max), from the following equation: DOY = 17.08 * (2.5) + 0.02 * (winter PPT) − 4.14 * (spring T max) + 179.2, where 2.5 is a hypothetical value of PI for peak flowering. By inputting forecasted temperature and precipitation parameters for a given location from projected climate models, we can predict a species' peak flowering time, or any other phenophase identified by a particular value of the PI, at that location. DISCUSSION The work presented here was motivated by four primary objectives. First, we developed a protocol to score the phenological status of imaged herbarium specimens by first counting the number of reproductive organs representing different developmental stages (e.g., buds, open flowers, immature fruits, and mature fruits). This process was facilitated by the use of Cell Counter, a plugin available through the free image processing and analysis software ImageJ. This protocol provides users a fast and easy way to reliably score imaged herbarium specimens. Second, we used these counts to develop a new quantitative metric of a specimen's phenological status: the phenological index. We then demonstrated how it can be used to construct and improve pheno-climatic models in our analysis of a herbarium-derived data set composed of S. tortuosus specimens. We found that while including the PI as an independent variable in a pheno-climatic model does not appear to dramatically alter the resulting model coefficients, it does provide a substantial improvement to the model's predictive power by accounting for variation in DOY caused by collection of specimens at different phenological stages. Third, we tested a series of predictions concerning the phenological response of S. tortuosus to local climate. We found that warmer spring maximum temperatures and drier winters during the year of specimen collection advance the reproductive phenology of S. tortuosus across its range. Finally, we demonstrated how pheno-climatic models constructed with the PI as an independent variable can be used to estimate the length of the reproductive period as well as forecast the day of year of onset of any reproductive phase for any given set of climatic conditions. Relationship between climate and flowering date Even given the relatively small sample size analyzed here, we were able to detect highly significant associations between local climatic conditions in the year of specimen collection and the DOY of our focal specimens of S. tortuosus. The DOY of sampled herbarium specimens advances with increased temperature and is delayed with increased precipitation, which corroborates our predictions concerning the relationship between flowering date and climate. The sensitivity of DOY to temperature observed in S. tortuosus is consistent with that observed in other herbarium-based studies of intraspecific variation in phenology in relation to climate. For example, Matthews and Mazer (2016) found that, among herbarium specimens collected in flower, the sensitivity of Trillium ovatum Pursh to temperature is −4.74 days/°C. Similarly, Gaira et al. (2014) investigated species' sensitivity to temperature in Rhododendron arboreum Sm. using herbarium specimens and found that increasing temperature advanced flowering date (−4.26 days/°C). Both of these studies detected sensitivities to temperature similar to that detected in S. tortuosus (−4.14 days/°C in Model 2; Table 2B). In many species, the relationship between phenology and precipitation remains unclear and can be highly species- or community-specific (Hart et al., 2014; Munson and Sher, 2015; Rawal et al., 2015; Matthews and Mazer, 2016; Hufft et al., 2018). Similar to the pattern detected here, Matthews and Mazer (2016) found that increased precipitation delays flowering time in T. ovatum. Across a diverse group of alpine species, Hufft et al. (2018) also found that precipitation delayed flowering time (0.02 days/mm).
Surprisingly few herbarium-based studies have investigated the impact of precipitation on phenology. Moreover, none have investigated this relationship within water-limited ecosystems such as California, where precipitation may be expected to be an important factor affecting reproductive phenology (Mazer et al., 2015). Expanding herbarium-based studies to investigate phenology-precipitation relationships will help us to gain a deeper understanding of how species will be impacted by future climate changes. Newly available high-resolution climate data (e.g., PRISM and ClimateNA [https ://sites.ualbe rta.ca/~ahama nn/data/ clima tena.html]) will facilitate the testing of more complex models and the detection of more subtle relationships between climate and the timing of distinct phenophases (Wang et al., 2016). Calculating and incorporating the phenological index into phenological models Here, we provide a simple and readily available method to score imaged herbarium specimens using the free image analysis software program, ImageJ, and the available plugin, Cell Counter. Cell Counter allows its user to use point-and-click movements to accurately count the numbers of reproductive organs representing each of any number of distinct phenological phases, as specified by the user. Because of the ease and simplicity of this protocol, it could be easily incorporated into workflows that include scoring by citizen scientists, especially with the forthcoming widespread availability of imaged herbarium specimens through data aggregators such as Integrated Digitized Biocollections (iDigBio; http://www.idigb io.org) and Global Biodiversity Information Facility (GBIF; http://www.gbif.org). The counts derived from Cell Counter may then be used to calculate a PI that represents a weighted mean of the combined counts (as demonstrated in Equation 1). This protocol can be adapted to many species and would work especially well for those with clear, large, and easily counted reproductive structures or compound reproductive structures (such as those found in the Asteraceae family). Species that may be difficult to score are those with small or indistinct reproductive structures or those for which the transitions between phenophases are ambiguous. When PI was included in the pheno-climatic model tested here (Model 2), this variable accounted for 11.5% of the variance in flowering date among S. tortuosus specimens (Fig. 5B). A far higher proportion of the total variance in DOY (38.05%) was explained by climatic parameters. Inclusion of the PI reduced the overall error in the model while improving its predictive power. However, in the data set analyzed here, including the PI did not drastically change the regression coefficients of the climatic parameters in the model. Similarly, Pearson (2019) and Ellwood et al. (2019) both found that models including finer-scale phenological coding (e.g., including only those specimens with >50% flowers) were statistically similar to those models that did not include this finer-scale coding. Thus, these results indicate that herbarium-based phenological models that do not incorporate PI still provide accurate assessments of phenological responsiveness to local climate. At the same time, the inclusion of PI not only reduces the amount of unexplained variance in the resulting pheno-climatic model, but also increases the power of such models to predict the timing of specific phenological events such as flowering onset, peak flowering, or flowering termination. 
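Because the fitted coefficients appear explicitly in the text, the parameterized Model 2 can be used directly as a forecasting tool. The function below is a small sketch built from the coefficients reported above (17.08 days per PI unit, 0.02 days/mm of winter precipitation, −4.14 days/°C of spring T max, intercept 179.2); the example climate values are hypothetical.

# Sketch of the forecasting use of Model 2 (coefficients as reported
# in the text; the input climate values below are hypothetical).

def predict_doy(pi, winter_ppt_mm, spring_tmax_c):
    return 17.08 * pi + 0.02 * winter_ppt_mm - 4.14 * spring_tmax_c + 179.2

# Predicted day of year of peak flowering (PI = 2.5) for a site with
# 500 mm of winter precipitation and a mean spring Tmax of 15 degrees C.
print(round(predict_doy(2.5, 500.0, 15.0)))   # ~170, i.e. 19 June in a non-leap year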
Additionally, inclusion of the PI allowed us to estimate the average total length of the reproductive phase of S. tortuosus specimens as ~51.24 days long. This estimate offers a way to test predictions concerning how climate may influence not only the mean flowering date of focal species but also the length of the reproductive phase, which could be especially useful for investigating intraspecific geographic, temporal, and/or climate-induced variation in the duration of the reproductive phase. For example, we may predict that, among widespread montane species such as S. tortuosus, specimens collected from more alpine environments will have a shorter reproductive phase than those collected from lower elevations due to the shorter growing season at high elevations (Hunsaker et al., 2012). This prediction could be tested by separating conspecific data sets into groups of specimens representing differing elevations (e.g., high and low elevation). The regression coefficient of the PI may differ between models constructed using these data sets, thereby demonstrating how the duration of reproduction may also differ among them. Including the PI in pheno-climatic models allows us to create a predictive model that we may use to forecast the day of year of peak flower (or for plants representing any specific value of the PI) for S. tortuosus. By including only two climatic parameters, we were able to construct a model that predicts the day of year of peak flowering among our sampled herbarium specimens with 47% accuracy (Table 2B, Equation 2). With a larger data set, we can improve these models by including other climatic parameters such as relative humidity, vapor-pressure deficit, or winter or summer temperature. Given the millions of herbarium specimens now available for research, these pheno-climatic models can be constructed for many species, and ultimately combined to give us a broader understanding of how climate change may affect not only the reproductive phenology of individual species but also the collective phenology of plant communities . One of the main goals of herbarium-based studies is to investigate long-term shifts in flowering date through time to determine whether the seasonal cycles of plants have been affected by recent temperature increases. Some of these studies have successfully detected advances in flowering date through time (Molnár et al., 2012;Panchen et al., 2012;Searcy, 2012), whereas others have failed to find an effect even for species that were found to be sensitive to changes in temperature (Hart et al., 2014;Davis et al., 2015;Park and Schwartz, 2015). For example, Hart et al. (2014) found that the flowering date of Rhododendron species was sensitive to changes in annual average temperature (−2.27 days/°C); therefore, they expected that because mean temperature had increased during the study period , they would detect a temporal advance in flowering date. However, they failed to detect a statistically significant phenological shift. The variation in phenological stage among specimens may have obscured the true relationship across the sampled decades. Including the PI may be especially useful in models designed to detect shifts in flowering date through time because such shifts are likely to be small and difficult to detect. 
Consequently, reducing the error variance in the model due to variation among specimens in their phenological status will likely improve our ability to detect temporal shifts in flowering date while also helping to improve the fit and accuracy of pheno-climatic models. Because of their extensive geographic, temporal, and taxonomic record of plant occurrences, herbarium specimen-based studies provide a promising way to investigate the relationship between flowering time and climate. The new metric introduced here, the phenological index, should reliably reduce error variance in flowering date derived from herbarium collections and improve the predictive capacity of phenological models. Although scoring reproductive phenology using the ImageJ protocol described here does require considerable effort, promising improvements in the automated annotation of specimens with deep learning will expedite the scoring process and ultimately provide us with high-resolution phenological data with which to construct phenological indices (PIs) and to improve pheno-climatic models (Lorieul et al., 2019). In our multivariate models for S. tortuosus, the inclusion of PI as an independent variable reduced the resulting error variance in the DOY among specimens while increasing the model's predictive power and decreasing the AICc. The PI also provides a way to quantify the reproductive period of plants from herbarium specimens and allows us to estimate not only how climate affects flowering dates but also how climate may affect the length of the reproductive period. Moreover, pheno-climatic models constructed with the PI can be used to forecast the day of year of a specific phenophase, given any specified set of climatic parameters. ACKNOWLEDGMENTS The authors would like to thank Allison Lane, Andrea Liu, and Timothy (TJ) Sears for help with scoring herbarium specimens. This work was supported by the National Science Foundation (DEB-1556768 to S.J.M. and I.W.P. and DBI-1802181 to S.J.M. and Katja Seltmann). DATA ACCESSIBILITY All data associated with this manuscript are accessible on Zenodo (Love et al., 2019). SUPPORTING INFORMATION Additional Supporting Information may be found online in the supporting information tab for this article. APPENDIX S1. Ten-fold cross-validation for pheno-climatic Models 1 and 2. LITERATURE CITED Abramoff, M. D., P. J. Magalhaes, and S. J. Ram. 2004. Image processing with ImageJ. Biophotonics International 11 (7) 3. In the dialog box, check the "Keep Original" and "Show Numbers" boxes and then press "Initialize" to start the counting. This will create and open a copy of the image called "Counter Windowfile name". Note: You cannot use the measure tool while you are using Cell Counter. If you need to measure a bud to see if it is greater than 2 mm (or whichever threshold size you are using to identify a given organ type or phenophase), you can measure it on the original image. To measure a bud on the original image, select the Straight segment tool and draw a line along the length of the bud. Select Analyze → Measure. This will display a results table with the length of the line in the last column. Recall that, in this example, the units are in centimeters (i.e., a length of 0.68 = 6.8 mm). Any bud greater than 0.2 cm in length should be counted. You do not need to save measurements. 4. You will have to expand the window and select Image → Zoom → Scale to fit to make the new Counter Window image as large as possible. 5. 
You can begin counting in the first grid cell that contains reproductive organs (Fig. A5). Select "Type 1" to start counting the buds. Click on a bud to add a marker. For consistency, always add the marker to the tip of the bud. For each marker you add, notice that the number in the Cell Counter window to the right of "Type 1" increases by one. Buds will be classified as "Type 1", open flowers as "Type 2", immature fruits as "Type 3", and mature fruits as "Type 4". Because we are only counting four types of reproductive organs, you can delete the other marker types by clicking "Remove" in the Cell Counter window. Note: If you place an erroneous marker, you can press the "Delete" button, which will delete the last point you made, or you can check the "Delete Mode" box, which will allow you to delete any point of the selected marker type on which you click using the cursor.
6. After you count the buds in the first grid cell, change the counter to "Type 2" to count the flowers. Add a point to the top of each flower.
7. Change the counter to "Type 3" to count the immature fruits. In each grid cell, place a marker at the distal end of each immature fruit (Fig. A6). Note: Because Streptanthus fruits often span more than one grid cell, it is important to make consistent decisions regarding the grid cell in which each fruit is counted (e.g., the cell containing the fruit's distal end, where the marker is placed).
FIGURE A3. Test the measurement calibration after you set the scale using the ruler on the imaged herbarium specimen. Here, a 2-cm length is drawn along the scale bar in green. You can see the measured length in the "Length" column. The units are centimeters, which were chosen during Part III, Step 1 above.
FIGURE A4. To keep track of regions within the imaged herbarium specimen, add a grid by selecting "Analyze → Tools → Grid" from the menu.
FIGURE A8. In the XML file, the "Image_Filename" (shown in the red box) should match exactly the name of the imaged herbarium specimen to which the scored points belong.
4. Click "Load Markers", navigate to the folder where the XML files are saved, and select the corresponding XML file for the current image. Note: If you receive the error "These Markers do not belong to the current image", then there may be a fixable error in your XML file.
5. You can also edit the XML files. To do this, recall the XML files to the image as described above. You can add new markers or delete current markers using the "Delete Mode" function. Click "Save Markers" and save a new version of the XML.
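Once the markers are saved as XML, the per-organ counts needed to build a PI can be tallied automatically. The sketch below is a minimal example: it assumes the commonly used Cell Counter XML layout (Marker_Type elements containing a Type tag and one Marker element per click, with the specimen name stored in Image_Filename as noted above); tag names may differ between plugin versions, so verify them against your own files.

```python
# Tally Cell Counter markers (Type 1 = buds, 2 = flowers, 3 = immature fruits,
# 4 = mature fruits) from saved XML files. The tag names below are assumed from
# the standard Cell Counter output and should be checked against your files.
import glob
import xml.etree.ElementTree as ET

ORGAN_NAMES = {1: "buds", 2: "flowers", 3: "immature_fruits", 4: "mature_fruits"}

def count_markers(xml_path):
    root = ET.parse(xml_path).getroot()
    image_name = root.findtext(".//Image_Filename", default="unknown")
    counts = {name: 0 for name in ORGAN_NAMES.values()}
    for marker_type in root.iter("Marker_Type"):
        type_id = int(marker_type.findtext("Type", default="0"))
        if type_id in ORGAN_NAMES:
            counts[ORGAN_NAMES[type_id]] = len(marker_type.findall("Marker"))
    return image_name, counts

for path in sorted(glob.glob("markers/*.xml")):
    name, counts = count_markers(path)
    print(name, counts)
```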
8,617
2019-07-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Assessing Potential Impact of Bt Eggplants on Non-Target Arthropods in the Philippines
Studies on potential adverse effects of genetically engineered crops are part of an environmental risk assessment that is required prior to the commercial release of these crops. Of particular concern are non-target organisms (NTOs) that provide important ecosystem services. Here, we report on studies conducted in the Philippines over three cropping seasons with Bt eggplants expressing Cry1Ac for control of the eggplant fruit and shoot borer (EFSB), Leucinodes orbonalis, to examine potential effects on field abundance, community composition, structure, and biodiversity of NTOs, particularly non-target arthropod (NTA) communities. We document that many arthropod taxa are associated with Bt eggplants and their non-Bt comparators and that the number of taxa and their densities varied within season and across trials. However, we found few significant differences in seasonal mean densities of arthropod taxa between Bt and non-Bt eggplants. As expected, a lower abundance of lepidopteran pests was detected in Bt eggplants. A higher abundance of a few non-target herbivores was detected in non-Bt eggplants, as was a higher abundance of a few non-target beneficials that might control them. Principal Response Curve (PRC) analyses showed no statistically significant impact of Bt eggplants on overall arthropod communities through time in any season. Furthermore, we found no significant adverse impacts of Bt eggplants on species abundance, diversity and community dynamics, particularly for beneficial NTAs. These results support our previous studies documenting that Bt eggplants can effectively and selectively control the main pest of eggplant in Asia, the EFSB. The present study adds that it can do so without adverse effects on NTAs. Thus, Bt eggplants can be a foundational component for controlling EFSB in an Integrated Pest Management (IPM) program and dramatically reduce dependence on conventional insecticides.
Introduction
Control of lepidopteran pests often relies on the use of broad-spectrum insecticides, which can negatively affect beneficial insect populations, often leading to pest resurgence, outbreaks of secondary pests, risk of off-farm movement of pesticides, and environmental contamination [1][2][3][4][5][6][7]. An Integrated Pest Management (IPM) program for eggplant fruit and shoot borer (EFSB), Leucinodes orbonalis, the most damaging insect pest of eggplant (Solanum melongena L.) in South and Southeast Asia, has been proposed that would utilize resistant plant varieties, sex pheromones for trapping adults and disrupting mating, cultural controls such as removing infested plant parts, and selective use of chemical insecticides [8]. Using resistant varieties, either developed through conventional breeding or genetic engineering means, should be the foundation of IPM [9]. However, conventional breeding has been unable to identify significant EFSB-resistance genes from cultivated eggplants and has not produced any commercial variety of eggplant conferring a high level of resistance to the EFSB [10]. Furthermore, the cost of pheromones and labor-intensive cultural practices inhibit adoption of these pest management practices, and so growers in Asia have become largely dependent on the frequent use of insecticides [11]. In the Philippines, farmers resort to frequent spraying (up to 72 times per 180-day cropping season) of mixtures of insecticides to control EFSB [12][13][14][15][16].
Broad-spectrum insecticides including profenofos, triazophos, chlorpyrifos, cypermethrin, and malathion are often used in eggplant production [14,17,18]. Such an insecticide-dependent strategy to control EFSB poses both environmental and health concerns. Use of genetic engineering to develop insect-resistant plants offers a solution to the often-limited availability of highly insect-resistant germplasm [4,6,19,20]. Plants expressing insecticidal crystal (Cry) proteins from the bacterium Bacillus thuringiensis (Bt) have become a foundation for IPM [21] and were grown on 83.7 million ha globally in 2015 [22]. These crops have enabled more effective control of lepidopteran pests and led to increases in productivity while simultaneously reducing insecticide use and their associated negative environmental impacts [4,[23][24][25][26]. However, concerns have been raised that long-term and extensive use of Bt crops could directly or indirectly affect biodiversity and beneficial non-target organisms, particularly arthropods [27][28]. Therefore, assessment of the environmental consequences of transgenic crops is an important prerequisite to their commercialization [29][30][31][32]. Risk of exposure of non-target arthropods (NTAs) to a Bt protein can be through direct feeding on plant tissues or consuming arthropods that have fed on plant tissues [23,31,33,34]. Agriculture depends on several arthropod groups performing ecological functions such as decomposition, pollination and biological control that are essential to soil health and crop productivity. This is especially true with eggplant, a crop producing lush growth over a long growing period, where high species diversity and interaction among and between herbivores and predators have been documented [35]. The eggplant non-target arthropod community includes predators, parasitoids, pollinators, sucking and chewing herbivores, and vagrant insects that are only temporary residents of the crop. Our studies included all these groups because it is important to understand how the dynamics of pests and beneficial species in eggplant fields may be affected so that management practices can be adjusted as needed. In the Philippines, field trials and an insect resistance management (IRM) plan are required prior to commercial release of insect-protected GM crops [36]. Data from field trials are needed to assess bioefficacy against the target pest and potential adverse effects on NTOs, particularly beneficial NTAs, and to formulate an appropriate IRM plan. The data presented in this report document species abundance, diversity and community dynamics (composition and structure) of canopy-dwelling arthropods and soil micro-fauna in Bt and non-Bt eggplants at a study site located in Pangasinan, the largest eggplant growing province in the Philippines. These studies contain the first publicly available data on NTOs for Bt eggplants used to control L. orbonalis. The information generated here will contribute significantly to the theoretical and practical basis for environmental risk assessment of Bt eggplants in South and Southeast Asia.
Description of Trial Site
The studies were conducted at the same site in Bgy. Paitan, Sta. Maria, Pangasinan for three successive growing seasons from March 2010 to October 2012.
The field trial site (15°58'35.07" N, 120°40'33.62" E), located in the province of Pangasinan, the Philippines, best represents the agro-climatic conditions and production practices of the largest eggplant growing region (Region I or the Ilocos Region) in the country. Based on the climate map of the Philippines [37], Pangasinan has a Type 1 climate characterized by two pronounced seasons: dry from November to April and wet during the rest of the year. Farmers plant rice during the wet season. Eggplant cultivation in Pangasinan is primarily done during the dry season (DS). The province of Pangasinan has the largest production area (18.43%) and produces the largest volume of eggplants (31.95%) in the country (2005-2014 PSA data) [38]. Most importantly, the Pangasinan site represents the conditions that small-holder farmers are likely to experience relative to the very high natural incidence of EFSB pressure that requires frequent insecticide applications.
Plant materials
The NTO studies were conducted in the same Confined Field Trials (CFT) for Bt eggplant as described in Hautea et al. [39]. The experimental materials used in the series of three CFT experiments are listed in Table 1. Maharashtra Hybrid Seeds Co. Pvt. Ltd. (Mahyco) inserted the cry1Ac gene, under the control of the constitutive 35S CaMV promoter, into an eggplant elite line to control feeding damage caused by EFSB. The transformation event was designated 'EE-1' [40,41]. The Bt eggplant lines (D2, D3, M1, M4, M8, used as test entries) in the field trials are advanced breeding lines (BC3F4 to BC3F6) derived from initial crosses of Mara selection x Mahyco EE-1 and DLP selection x Mahyco EE-1. The Cry1Ac protein levels expressed in the terminal leaves (shoots) of the Bt eggplant lines ranged from 10.58 to 24.87 ppm dry weight, with < 1% EFSB shoot damage compared with up to 46.6% shoot damage in non-Bt comparators [39].
Experimental design and field lay-out
Each field experiment was laid out in a randomized complete block design (RCBD) with four replications in each season. Each plot/entry consisted of 4 rows in Trials 1 and 2, and 6 rows in Trial 3. Each row had 10 plants. Planting distances were 1 m between rows and 0.75 m between plants. The perimeter of the field experiment was surrounded by five rows (1 m between rows) of conventional non-Bt eggplant varieties as pollen trap plants. A 200-meter radial distance isolated the field trial site from the nearest eggplants in the area. The field had been fallow for at least a year before it was used in the experiment. No plants were grown in the trial field until transplanting. Between trials, the field was fallow for at least 60 days before the next trial.
Cultural and pest management
Seeds were sown in pots with sterilized soil and the seedlings were maintained inside the biosafety level 2 (BL2) greenhouse at UP Los Baños. At 28-30 days after sowing (DAS), representative seedlings of each entry were tested for the presence or absence of Cry1Ac using an immunoassay or a gene strip test kit, DesiGen Xpresstrip (DesiGen, Maharashtra, India), as described in Ripalda et al. [42]. Seedlings were transplanted in the field 30-34 DAS. The Confined Field Trials were managed based on the guidelines provided for the Vegetable National Cooperative Trial [43] and typical cultural practices for eggplant production in the area. No insecticide sprays specific against eggplant fruit and shoot borer (EFSB) were applied during the growing period of the trials.
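Returning briefly to the experimental design described above, the RCBD randomization can be sketched in a few lines: each of the four blocks (replications) receives every entry once, in an independently shuffled order. The snippet below is illustrative only; it uses the Trial 3 entry names and a fixed random seed for reproducibility of the example, not the actual field randomization used in the trials.

```python
# Minimal sketch of an RCBD randomization: every entry appears once per block,
# in a freshly shuffled order. Entry names follow Trial 3 as listed above.
import random

entries = ["D2", "M1", "M8", "DLP", "Mara S1", "Mara S2"]
random.seed(1)  # fixed seed so the example layout is reproducible

layout = {}
for block in range(1, 5):
    order = entries[:]          # copy, then shuffle within the block
    random.shuffle(order)
    layout[f"Block {block}"] = order

for block, order in layout.items():
    print(block, "->", ", ".join(order))
```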
Management of other arthropod pests and diseases was done by application of recommended IPM practices, primarily sanitation and withholding of pesticide use for as long as possible to enable the proliferation of natural enemies. Only when it was necessary to reduce pest damage and preserve crop health were highly selective insecticides (i.e., thiamethoxam for leafhopper and whitefly and sulphur for mites) applied. Spraying was always done after data collection, and spray records were kept. All weeds were controlled regularly by manual weeding.
Permissions. All field trials were conducted according to the Biosafety Permit for Field Testing in Pangasinan (BPI Biosafety Permit No. 10-011b) issued by the Bureau of Plant Industry (BPI), Philippines on March 16, 2010. Prior to issuance of the field trial permit, the proposed trial site was inspected and approved by the BPI Post-Entry Quarantine Service (BPI-PEQS) office. The field inspection report on indicative conditions of the proposed field test site (BPI-FTI 001) contains information on the physical, biological and social environments of the site [44]. Required permission from the owner of the field trial site was also obtained. Various public participation activities (posting, municipal council meetings, public hearing, field visits, communications and outreach) were held before and through the duration of the field trials. All field trial activities were conducted under the supervision of the Institutional Biosafety Committee (IBC) and the BPI-PEQS office. During the conduct of the field trials, all biosafety conditions indicated in the Biosafety Permit were complied with. An IBC completion report was submitted at the end of the field trial period.
Field sampling and species identification
Canopy-dwelling arthropods. Visual counts of non-target arthropods were taken from 16 plants (eight per row) from the two inner rows of each test plot/entry to minimize border effects. On each sample plant, easily visible and highly mobile non-target arthropods like spiders, coccinellids, and bees were counted without touching any plant part. Visual counting of minute arthropods was done by examining both surfaces of one young fully expanded leaf, one leaf near the middle of the canopy and one old leaf near the bottom of the canopy. For aerial predatory species like syrphid flies (Paragus serratus), the larvae, which are also plant canopy residents, were sampled, assuming they would more likely be exposed to Bt protein than the adults. Sampling was conducted early in the morning (5:00-7:00 A.M.) when the field had not yet been disturbed by any field operations. Sampling weeks varied from 5 to 17 in each season. Whenever possible, common and frequently occurring arthropods were identified to species level. For less common species, identifications were made to family or order.
Soil microfauna. For minute ground-dwelling arthropods, one garden-trowel full of topsoil, including litter and decaying debris, was collected from four randomly selected areas within the two inner rows (15 m²) of each plot/entry and pooled. Transparent plastic bags were used to hold the samples. From the pooled soil samples, 500 grams were taken, placed in a Berlese funnel 24 hours after bagging, and brought to the Crop Protection Laboratory for extraction. The soil samples were subjected to 48 hours of heat exposure using an 80-watt incandescent bulb placed directly on top of the funnels.
A small plastic bottle containing 50 ml of 70% ethanol was positioned at the bottom opening of each funnel to capture soil arthropods. Samples were collected two to three times throughout the eggplant growing season.
Statistical Analysis
The mean abundance of individual NTAs in every test plot/entry per replication was computed. Then the mean abundance for Bt and non-Bt eggplants per replicate was computed by dividing the total number of individuals per taxon by the number of entries per crop type (Bt vs. non-Bt). For both trials 1 and 2, five Bt lines (D1, D3, M1, M4, M8) and two non-Bt near-isolines (DLP, Mara selections) plus the check (Mamburao) entries were considered. For trial 3, three Bt lines (D2, M1, M8) and three non-Bt near-isolines (DLP, Mara S1, Mara S2) were used. All arthropod species found in Bt and non-Bt eggplants were classified, grouped and recorded into the following functional guilds: predators, herbivores or non-target pests, parasitoids, pollinators and vagrants. Non-target herbivores or pests were further classified into sucking and chewing arthropods. Vagrants refer specifically to those insects, including accidental visitors, with no clear association with eggplant (e.g., herbivores or pests from other plants in surrounding areas, or adults whose immatures are saprophytes or live in aquatic environments). The composition and relative proportion of each guild were calculated. Differences in the composition of taxa among functional guilds and of taxa within guilds in Bt and non-Bt eggplants were analyzed using the Mann-Whitney U-test in PROC NPAR1WAY in SAS [45]. Based on the Wilcoxon statistic, a normal approximation with a two-sided p-value was used at the 5% level of significance.
Univariate Analysis. Analyses of seasonal mean NTA abundance were carried out using a mixed-model, repeated-measures ANOVA in PROC MIXED in SAS [45], with block as a random effect, week as the repeated measure and eggplant type (Bt and non-Bt) as a fixed effect. An autoregressive heterogeneous (ARH1) covariance structure was modelled. Separate analyses were conducted for each season. Differences in LSMEANS were used to test for differences in abundance between Bt and non-Bt eggplants for each sampling date for each taxon. NTA abundance data were log transformed (log [x + 1]) prior to analysis to meet the assumptions of normality and homogeneity of residuals, but untransformed means are presented.
Principal Response Curve (PRC) analysis. The effect of Bt eggplants on the community of non-target arthropods was evaluated by principal response curve (PRC) analysis using CANOCO for Windows v4.56 [46]. PRC is a multivariate ordination method designed to test and display treatment effects, relative to a standard (here non-Bt eggplant), that change across time [23,47]. To test whether crop type was significant, a Monte Carlo permutation test (499 permutations, restricted for the split-plot design) on the first canonical axis of the RDA was conducted [48]. This process permutes within treatment plots but does not permute across time [23]. NTA abundance data were log-transformed to reduce the effect of weights inflated because of highly abundant species [33]. Crop type was considered as the environmental variable, blocks and sampling weeks were defined as covariables, and the interaction of crop type and sampling weeks as the explanatory variable.
Diversity index. The Shannon-Wiener index [49] was used to measure diversity and evenness of non-target arthropods.
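For reference, the diversity descriptors used here are straightforward to compute from weekly count data. The sketch below is a minimal illustration (the count dictionaries are invented for the example, not data from the trials): it computes the Shannon-Wiener diversity H', Shannon's equitability E, and a Spearman rank correlation between the abundances of the same taxa in two communities.

```python
# Shannon-Wiener diversity (H'), Shannon's equitability (E), and a Spearman
# rank correlation of abundances; the example counts are invented placeholders.
import math
from scipy.stats import spearmanr

def shannon(counts):
    """H' = -sum(p_i * ln p_i) over taxa with non-zero counts; E = H'/ln(S)."""
    total = sum(counts.values())
    props = [n / total for n in counts.values() if n > 0]
    h = -sum(p * math.log(p) for p in props)
    e = h / math.log(len(props)) if len(props) > 1 else 0.0
    return h, e

bt = {"B. tabaci": 120, "A. biguttula": 95, "Araneae": 30, "Coccinellidae": 12}
non_bt = {"B. tabaci": 160, "A. biguttula": 130, "Araneae": 28, "Coccinellidae": 20}

print("Bt:     H'=%.3f  E=%.3f" % shannon(bt))
print("non-Bt: H'=%.3f  E=%.3f" % shannon(non_bt))

# Spearman correlation between abundances of the same taxa in each crop type
taxa = sorted(bt)
rho, p = spearmanr([bt[t] for t in taxa], [non_bt[t] for t in taxa])
print("Spearman r = %.2f (P = %.3f)" % (rho, p))
```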
A diversity index provides more information about community composition than simply species richness. The Shannon-Wiener index takes the relative abundances of different species into account, thus providing information about the rarity and commonness of species in a community. The Shannon-Wiener diversity (H') and Shannon's equitability (E) indices were calculated. Shannon indices for non-target arthropods for each week were compared for each season using repeated-measures ANOVA in SAS [45].
Rank abundance. Rank abundance diagrams were constructed by plotting the relative abundances of species against their rank in the samples [50]. The outlines of these diagrams characterize the structures of non-target arthropod communities in Bt and non-Bt eggplants. The Spearman rank correlation coefficient (r) was computed to measure the strength of the linear relationship of rank abundances of non-target arthropods between crop types, with two significance levels: P = 0.05 and P = 0.01. Spearman rank correlation was calculated using the PROC CORR procedure in SAS [45].
NTA abundance in Bt and non-Bt eggplants
A total of 91 taxa were observed in Bt and non-Bt eggplants during the three-season duration of the study. The full lists of arthropods observed per trial, classified according to functional guilds, and results of univariate analyses of their seasonal mean abundance are presented (S1 Table). There were more taxa observed during the dry season (Trial 2), which is the main planting season for eggplant in Pangasinan, than during the wet/off-season trials (Trials 1 and 3). No significant differences in the seasonal mean abundance were detected in 81.3% (84/91) of the total NTAs observed between Bt and non-Bt eggplants. Significant differences were observed in some hemipterans (jumping plant bug (Halticus minutus), leafhoppers (Amrasca biguttula), mirid bugs (Campylomma sp., Cyrtopeltis sp.), whitefly (Bemisia tabaci)), non-target lepidopterans (leaf folder (Homona coffearia), lepidopteran leafminer (Phycita sp.), semilooper (Chrysodeixis eriosoma), tomato fruitworm (Helicoverpa armigera)) and coccinellids (Coccinellidae). Of the taxa that showed significant differences in seasonal mean abundance, the differences were observed only in one or two weeks out of the 5- to 17-week sampling periods each season (S1 Fig). Furthermore, some of these species were not consistently detected in every trial, and some were associated alternately with either Bt or non-Bt eggplants.
Composition of NTA communities in Bt and non-Bt eggplants
The eggplant arthropod community recorded in Bt and non-Bt eggplants consisted of herbivores or non-target pests, predators, parasitoids and pollinators, and vagrant insects (Fig 1). Herbivores were by far the most abundant guild, followed by predators, while parasitoids and pollinators were rare. Among the different functional guilds, significant differences were detected only in the herbivore guild between Bt and non-Bt eggplants in every trial (Table 2). Analyses of the distribution of the species within guilds confirmed that the most abundant taxa observed were mostly the ones also detected to have significant differences in seasonal mean densities (S1 Table). Among the herbivores (Fig 1b), whiteflies (B. tabaci) and leafhoppers (A. biguttula) were the most abundant in all three trials, and spiders (Araneae) and coccinellids (Coccinellidae) were the most abundant among the predators (Fig 1c). Among the parasitoids and pollinators (Fig 1d), an ichneumonid wasp (Ichneumonidae) and honeybee (Apis sp.)
were common in trials 1 and 2, while a cutworm parasitoid (Snellenius manilae) and honeybee (Apis sp.) were common in trials 2 and 3. The composition of vagrant species was not presented because these are mostly occasional arthropod visitors from surrounding plants, which have no clear association with eggplant.
NTA community dynamics in Bt and non-Bt eggplants
The Principal Response Curve (PRC) analyses of NTA abundance data in the three trials revealed no significant difference between Bt and non-Bt eggplants (Fig 2). A large proportion of the total variance was explained by sampling weeks and only a small portion was attributed to crop type (Bt vs. non-Bt) in the first axis of the redundancy analysis (Table 3). Analyses of the distribution of the species weights (b_k) confirmed that the taxa with high species weights were the same ones with significant differences in seasonal mean densities detected by univariate analysis (S1 Table). The most abundant species in the NTA communities detected in Bt and non-Bt eggplants were lepidopteran leafminer (Phycita sp.), leafhopper (A. biguttula), whitefly (B. tabaci), red fire ant (S. geminata), Phaneroptera sp., and tomato fruitworm (H. armigera). Taxa with low species weights are not shown because they are likely to show a weak response or a response that is unrelated to the principal response curve [48].
Other descriptors of NTA communities
Diversity, indicated by species richness and evenness, was also monitored using two descriptors of NTA community structure: the Shannon diversity and evenness indices [49] and rank abundance curves [50]. There were no significant differences in the Shannon diversity index (Fig 3A) or evenness (Fig 3B) of NTA communities between Bt and non-Bt eggplants for either measure (P > 0.05). Temporal changes in mean values of the diversity and evenness indices also showed no significant differences between the NTA communities in Bt and non-Bt eggplants, except in one week (week 6) of the 12-week sampling period in trial 1 for the Shannon diversity index (Fig 3A). The Spearman rank correlation coefficient (r = 0.99) indicates a very strong positive correlation between the rank abundances of NTA communities in Bt and non-Bt eggplants (Fig 3B). The first and second ranked species, represented by whiteflies and leafhoppers, were consistently the most dominant species in both Bt and non-Bt eggplants. This was evident in the sudden decline to the third ranked species (Fig 3B).
Abundance of soil-dwelling arthropods
No statistically significant differences were observed in the mean density of collembolans and mites between Bt and non-Bt eggplants (Fig 4).
Discussion
Numerous studies, reviews and meta-analyses have assessed the impacts of Bt cotton and maize on non-target organisms, in particular non-target arthropods (NTAs) [1-7, 25, 29-32, 52-58]. Limited work has examined the impact of Bt eggplants producing coleopteran-active Cry3Bb [59,60], but to our knowledge this is the first report of a field study that assessed the impact of Bt eggplants expressing Cry1Ac on NTAs and other organisms. This study helps to address concerns on the potential environmental risks to NTAs of Bt eggplant cultivation in the Philippines and similar areas. Herein, we monitored the abundance of canopy- and soil-dwelling NTAs in Bt and non-Bt eggplants for three seasons over 2.5 years at a field trial site located in Pangasinan, the largest eggplant growing area in the country. We found no significant impact of Bt eggplants on the abundance of most canopy-dwelling NTAs.
Seasonal mean abundance of more than 80% of the taxa observed in Bt and non-Bt eggplants was similar. For the taxa that showed significant differences in seasonal mean abundance between Bt and non-Bt eggplants, the differences were detected over time, indicating that changes in NTAs were driven more by temporal dynamics than by crop type. Some species were alternately associated with either Bt or non-Bt eggplants, suggesting normal species variation seen in agricultural fields and not associated with the experimental treatments. The preference of some of the NTAs could have been affected by the differences observed in a few morphological traits (e.g., leaf shape, size, lateral branches) between Bt and non-Bt eggplants. Although the two crop types have related genetic backgrounds, the observed difference in leaf type (broad and narrow) in the non-Bt cultivar Mara, which was a selection from a farmer's variety, could be attributed to the inherent heterogeneity in open-pollinated autogamous species [61]. The backcross breeding that developed the Bt eggplant lines derived from Mara selected only for the narrow-leaf type characteristic of the Mara recurrent parent. It is likely that the damage caused by EFSB in the non-Bt eggplants resulted in the production of more lateral branches due to the suppression of apical dominance [62]. Overall, these findings suggest that Bt eggplant did not adversely affect species abundance in the NTA community. The analysis of functional guilds revealed that the composition of common and rare guilds is similar in Bt and non-Bt eggplants except in the herbivore guild, but that the differences were attributed only to a few species, mostly lepidopteran non-target pests and hemipterans. PRC analysis revealed no significant impact of Bt eggplants on NTA communities through the growing season when compared to non-Bt eggplants in all three trials. A large proportion of the total variance was accounted for by sampling weeks and much less by crop type, indicating that changes in the abundance of NTAs were driven by time rather than by exposure to Bt eggplant expressing the Cry1Ac insecticidal protein. Finally, we found little difference in the diversity, evenness and rank abundance of NTA communities in Bt and non-Bt eggplants. If Bt eggplants had a negative impact, we would have expected lower species richness and evenness in comparison to non-Bt eggplants. Previous studies on Bt cotton expressing Cry1Ac or Bt maize expressing Cry1Ab found similar results [25,30,34,58,63,64]. Mites (Acari) and collembolans have been used as representative soil invertebrates for monitoring the environmental impacts of transgenic plants [57,65,66]. Here, we found no differences in the abundance of these taxa between Bt and non-Bt eggplants. This is consistent with previous work on long-term cultivation of Bt cotton (Cry1Ac), which showed no significant effect on the abundance of soil invertebrates including collembolans, mites and spiders [57,58]. Similar results were also observed in Bt maize (Cry1Ab), where the activity and abundances of ground-dwelling invertebrates, spiders, carabid and rove beetles did not differ in Bt crops compared with near-isogenic control plots [33,[65][66][67]. Herbivores and predators were the most abundant functional guilds found in Bt and non-Bt eggplants. Among the herbivores, hemipterans and secondary lepidopteran pests were the most abundant species.
As expected, a significantly lower abundance of secondary lepidopteran pests was detected on Bt eggplants compared with non-Bt eggplants, because the Cry1Ac expressed in Bt eggplants is known to be efficacious against many Lepidoptera and the trials were not sprayed with lepidopteran-specific insecticides. The two most abundant sucking insect pests, leafhopper (A. biguttula) and whitefly (B. tabaci), had lower abundance in Bt compared with non-Bt eggplants. This result is consistent with previous reports that showed decreases in abundance of some hemipterans, including cicadellids or leafhoppers, on Bt cotton compared to those on non-Bt cotton [23,33]. The mechanisms causing such differences could be varied, and one such study demonstrated that herbivore-induced plant compounds can affect a secondary pest [6]. A meta-analysis of effects of Bt crops on NTOs [55] also showed that when fields of insecticide-free Bt crops were compared with insecticide-free control fields, certain non-target taxa were less abundant in Bt fields, including coleopterans and hemipterans in Bt cotton, and hymenopterans in Bt maize. This latter effect was due entirely to the expected reductions in a specialist parasitoid of the main lepidopteran target of Bt maize [56]. In contrast, many studies have shown that Bt cotton producing Cry1Ac did not affect the densities of many non-lepidopterans including leafhoppers and whiteflies [68][69][70][71][72]. A possible explanation for the higher abundance of leafhopper and whitefly observed in non-Bt eggplants in the present study was the production of more lateral branches in non-Bt eggplants (M. Navasero, personal observation) resulting from damage to the terminal shoots caused by the primary target pest, EFSB. Suppression of the apical dominance of the plant likely induced more lateral bud outgrowth, giving rise to lateral branches [62]. The resulting dense canopy may have provided a more favorable microclimate conducive to the growth and multiplication of these pests. In the case of predators, the most abundant were coccinellids and spiders. Coccinellids showed significantly higher abundance in non-Bt than in Bt eggplants, and this was likely the result of higher prey abundance in the non-Bt eggplants. The prey consisted not only of lepidopterans but also of the more abundant leafhoppers and whiteflies. Our findings are consistent with previous reports on Bt cotton where reduced numbers of prey, particularly of lepidopterans [73] and sucking insect pests [23,33], were observed. Our results also agree with previous research syntheses [25,56] in which the abundance of members of the predatory arthropod guild was slightly reduced in unsprayed Bt cotton expressing Cry1Ac compared to the unsprayed non-Bt control. This pattern was driven by the abundance of very few taxa, but the consequences of such reductions likely do not significantly affect the biological control services provided by the predator community overall [70,74]. In conclusion, our non-target studies of Bt eggplants over three growing seasons in the largest eggplant production province of the Philippines, with the highest EFSB pest pressure, showed that arthropod communities, except for the target pest species, would be largely unaffected by the cultivation of this new crop. We reported previously that Bt eggplant demonstrated nearly 100% control of its major pest, EFSB, without the use of supplemental sprays [39].
Ex-ante studies for Bt eggplant in the Philippines [12,13] indicated that producers and consumers would benefit from adoption of Bt eggplant technology. At the farm level, Bt eggplant adoption has high potential to increase marketable yield, reduce costs, and increase profits. Farmers would gain profits because the technology would reduce EFSB damage, increase the marketable yield and lower production costs. Consumers would have an adequate supply of safer eggplant at a lower price. The adoption of Bt eggplant is projected to greatly reduce pesticide use on eggplant, thereby reducing both pesticide loading in the environment and hazards to farm laborers and consumers. Bt eggplant presents a more efficacious, environmentally benign and profitable alternative to the current practice of intense use of chemical insecticides in eggplant production.
We thank the following institutions for various critical support: the United States Agency for International Development (USAID) through the Cornell University Agricultural Biotechnology Support Project II (ABSPII), the Republic of the Philippines Department of Agriculture-Biotechnology Program Office (DA-Biotech BPO) and the Institute of Plant Breeding, College of Agriculture, University of the Philippines Los Baños (UPLB) for the funding support; the Maharashtra Hybrid Seeds Co. Pvt. Ltd. (Mahyco) for providing access to eggplant event EE-1 and regulatory-related information and for various technical assistance and advice in the conduct of laboratory and field activities; and Cornell University and Sathguru Management Consultants for facilitating the technology transfer. We also acknowledge the assistance of UPLB Foundation Inc., the executing agency for the ABSPII project in the Philippines.
6,923
2016-10-31T00:00:00.000
[ "Biology", "Environmental Science" ]
Continuous characterizations of Besov-Lizorkin-Triebel spaces and new interpretations as coorbits
We give characterizations for homogeneous and inhomogeneous Besov-Lizorkin-Triebel spaces in terms of continuous local means for the full range of parameters. In particular, we prove characterizations in terms of Lusin functions and spaces involving the Peetre maximal function in order to apply the classical coorbit space theory due to Feichtinger and Gröchenig. This results in atomic decompositions and wavelet bases for homogeneous spaces. In particular we give sufficient conditions for suitable wavelets in terms of moment, decay and smoothness conditions.
Introduction
This paper deals with Besov-Lizorkin-Triebel spaces $\dot B^s_{p,q}(\mathbb{R}^d)$ and $\dot F^s_{p,q}(\mathbb{R}^d)$ on the Euclidean space $\mathbb{R}^d$ and their interpretation as coorbits. For this purpose we prove a number of characterizations for homogeneous and inhomogeneous spaces for the full range of parameters. Classically introduced in Triebel's monograph [28, 2.3.1] by means of a dyadic decomposition of unity, we use more general building blocks and provide in addition continuous characterizations in terms of Lusin and maximal functions. Equivalent (quasi-)normings of this kind were first given by Triebel in [29]. His proofs use in an essential way the fact that the function under consideration belongs to the respective space. Therefore, the obtained equivalent (quasi-)norms could not yet be considered as a definition or characterization of the space. Later on, Triebel was able to solve this problem partly in his monograph [30, 2.4.2, 2.5.1] by restricting to the Banach space case. Afterwards, Rychkov [23] completed the picture by simplifying a method due to Bui, Paluszyński, and Taibleson [3,4]. However, [23] contains some problematic arguments. One aim of the present paper is to provide a complete and self-contained reference for general characterizations of discrete and continuous type by avoiding these arguments. We use a variant of a method from Rychkov's subsequent papers [24,25], which is originally due to Strömberg and Torchinsky, developed in their monograph [27, Chapt. 5]. In a different language the results can be interpreted in terms of the continuous wavelet transform (see Appendix A.1) belonging to a function space on the ax + b-group G. Spaces on G considered here are mixed norm spaces like tent spaces [5] as well as Peetre type spaces.
For a multi-index $\bar\alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{N}_0^d$ we define the differential operators $D^{\bar\alpha}$ and $\Delta$ by
$$D^{\bar\alpha} = \frac{\partial^{|\bar\alpha|_1}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}, \qquad \Delta = \sum_{k=1}^{d} \frac{\partial^2}{\partial x_k^2}.$$
If $X$ is a (quasi-)Banach space and $f \in X$ we use $\|f\,|\,X\|$ or simply $\|f\|$ for its (quasi-)norm. The space of linear continuous mappings from $X$ to $Y$ is denoted by $\mathcal{L}(X, Y)$, or simply $\mathcal{L}(X)$ if $X = Y$. Operator (quasi-)norms of $A \in \mathcal{L}(X, Y)$ are denoted by $\|A : X \to Y\|$, or simply by $\|A\|$. As usual, the letter $c$ denotes a constant, which may vary from line to line but is always independent of $f$, unless the opposite is explicitly stated. We also use the notation $a \lesssim b$ if there exists a constant $c > 0$ (independent of the context-dependent relevant parameters) such that $a \le c\, b$. If $a \lesssim b$ and $b \lesssim a$ we will write $a \asymp b$.
2 Function spaces on $\mathbb{R}^d$
Vector-valued Lebesgue spaces
The space $L_p(\mathbb{R}^d)$, $0 < p \le \infty$, denotes the collection of complex-valued functions (equivalence classes) with finite (quasi-)norm
$$\|f\,|\,L_p(\mathbb{R}^d)\| = \Big( \int_{\mathbb{R}^d} |f(x)|^p\, dx \Big)^{1/p},$$
with the usual modification if $p = \infty$. The Hilbert space $L_2(\mathbb{R}^d)$ plays a separate role for our purpose (Section 3). Having a sequence of complex-valued functions $\{f_k\}_{k \in I}$ on $\mathbb{R}^d$, where $I$ is a countable index set, we put
$$\big\| \{f_k\}_{k\in I}\,\big|\, L_p(\ell_q) \big\| = \Big\| \Big( \sum_{k\in I} |f_k(\cdot)|^q \Big)^{1/q} \Big|\, L_p(\mathbb{R}^d) \Big\|,$$
where we modify appropriately in the case $q = \infty$.
Maximal functions For a locally integrable function f we denote by M f (x) the Hardy-Littlewood maximal function defined by where the supremum is taken over all cubes centered at x with sides parallel to the coordinate axes. The following theorem is due to Fefferman and Stein [6]. Theorem 2.1. For 1 < p < ∞ and 1 < q ≤ ∞ there exists a constant c > 0, such that holds for all sequences {f k } k∈Z of locally Lebesgue-integrable functions on R d . Let us recall the classical Peetre maximal function, introduced in [19] . Given a sequence of functions {Ψ k } k∈N ⊂ S(R d ), a tempered distribution f ∈ S ′ (R d ) and a positive number a > 0 we define the system of maximal functions Since (Ψ k * f )(y) makes sense pointwise (see the following paragraph) everything is well-defined. However, the value "∞" is also possible for (Ψ * k f ) a (x). This was the reason for the problematic arguments in [23] mentioned in the introduction. We will often use dilates Ψ k (x) = 2 kd Ψ(2 k x) of a fixed function Ψ ∈ S(R d ), where Ψ 0 (x) might be given by a separate function. Also continuous dilates are needed. Let the operator D Lp t , t > 0, generate the p-normalized dilates of a function Ψ given by D Lp t Ψ := t −d/p Ψ(t −1 ·). If p = 1 we omit the super index and use additionally Ψ t := D t Ψ := D L 1 t Ψ. We define (Ψ * t f ) a (x) by We will refer to this construction later on. It turned out that this maximal function construction can be used to interpret classical smoothness spaces as coorbits of certain Banach function spaces on the ax + b-group, see Section 4. Tempered distributions, Fourier transform As usual S(R d ) is used for the locally convex space of rapidly decreasing infinitely differentiable functions on R d where its topology is generated by the family of semi-norms The space S ′ (R d ), the topological dual of S(R d ), is also referred as the set of tempered distributions on R d . Indeed, a linear mapping f : The convolution ϕ * ψ of two integrable (square integrable) functions ϕ, ψ is defined via the integral ). It makes sense pointwise and is a C ∞ -function in R d of at most polynomial growth. As usual the Fourier transform defined on both S(R d ) and The mapping F is a bijection (in both cases) and its inverse is given by In order to deal with homogeneous spaces we need to define the subset S 0 (R d ) ⊂ S(R d ). Following [28,Chapt. 5] we put i.e., to an element of S ′ (R d ) . However, this fact is not trivial and makes use of the Hahn-Banach theorem in locally convex topological vector spaces. We may identify S ′ 0 (R d ) with the factor space S ′ (R d )/P(R d ), since two different extensions differ by a polynomial . Besov-Lizorkin-Triebel spaces Let us first introduce the concept of a dyadic decomposition of unity, see also [28, 2.3.1]. Now we are ready for the definition of the Besov and Lizorkin-Triebel spaces. See for instance [28, 2.3.1] for details and further properties. In case q = ∞ we replace the sum by a supremum in both cases. The homogeneous counterparts are defined as follows. For details, further properties and how to deal with occurring technicalities we refer to [28,Chapt. 5]. In case q = ∞ we replace the sum by a supremum in both cases. Inhomogeneous spaces Essential for the sequel are functions Φ 0 , Φ ∈ S(R d ) satisfying for some ε > 0, and Dᾱ(FΦ)(0) = 0 for all |ᾱ| 1 ≤ R. (2.5) We will call the functions Φ 0 and Φ kernels for local means. Recall that Φ k = 2 kd Φ(2 k ·), k ∈ N, and Ψ t = D t Ψ. 
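For orientation, the maximal functions used above have the following standard forms; these are the usual definitions from the literature and are assumed here to match the paper's normalization.

```latex
% Hardy-Littlewood maximal function (supremum over cubes Q centered at x):
Mf(x) = \sup_{Q \ni x} \frac{1}{|Q|} \int_{Q} |f(y)|\, dy .
% Fefferman-Stein vector-valued maximal inequality (Theorem 2.1):
\Big\| \Big( \sum_{k\in\mathbb{Z}} (Mf_k)^q \Big)^{1/q} \Big|\, L_p(\mathbb{R}^d) \Big\|
  \le c\, \Big\| \Big( \sum_{k\in\mathbb{Z}} |f_k|^q \Big)^{1/q} \Big|\, L_p(\mathbb{R}^d) \Big\| .
% Peetre maximal functions for a system \{\Psi_k\}_k, f \in S'(\mathbb{R}^d), a > 0,
% in the dyadic and the continuous (dilation) setting:
(\Psi_k^* f)_a(x) = \sup_{y\in\mathbb{R}^d} \frac{|(\Psi_k * f)(y)|}{(1 + 2^{k}|x-y|)^a} ,
\qquad
(\Psi_t^* f)_a(x) = \sup_{y\in\mathbb{R}^d} \frac{|(\Psi_t * f)(y)|}{(1 + |x-y|/t)^a} .
```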
The upcoming four theorems represent the main results of the first part of the paper. Theorem 2.6. Let s ∈ R, 0 < p < ∞, 0 < q ≤ ∞, a > d/ min{p, q} and R+1 > s. Let further Φ 0 , Φ ∈ S(R d ) be given by (2.4) and (2.5). Then the space F s p,q (R d ) can be characterized by with the usual modification in case q = ∞. Furthermore, all quantities f |F s p,q (R d ) i , i = 1, ..., 5, are equivalent (quasi-)norms in F s p,q (R d ) . For the inhomogeneous Besov spaces we obtain the following. Theorem 2.7. Let s ∈ R, 0 < p, q ≤ ∞, a > d/p and R + 1 > s. Let further Φ 0 , Φ ∈ S(R d ) be given by (2.4) and (2.5). Then the space B s p,q (R d ) can be characterized by with the usual modification if q = ∞. Furthermore, all quantities f |B s p,q (R d ) i , i = 1, ..., 4, are equivalent quasi-norms in B s p,q (R d ) . Homogeneous spaces The homogeneous spaces can be characterized similar. Here we do not have a separate function Φ 0 anymore. We put Φ 0 = Φ . Remark 2.10. Observe, that the (quasi-)norms · |Ḟ s p,q (R d ) 3 and · |F s p,q (R d ) 3 are characterizations via Lusin functions, see [30, 2.4.5] and [28, 2.12.1] and the references given there. We will return to it later when defining tent spaces, see Definition 4.1 and (4.1). Particular kernels For more details concerning particular choices for the kernels Φ 0 and Φ we refer mainly to Triebel [30, 3.3]. The most prominent nontrivial examples (besides the one given in Remark 2.3) of functions Φ 0 and Φ satisfying (2.4) and (2.5) are the classical local means. The name comes from the compact support of Φ 0 , Φ, which is admitted in the following statement. Proofs We give the proof for Theorem 2.6 in full detail. The proof of Theorem 2.8 is similar and even less technical. Let us refer to the respective paragraph for the necessary modifications. The proofs in the Besov scale are analogous, so we omit them completely. The proof technique is a modification of the one in Rychkov [23], where he proved the discrete case, i.e., that (2.9) and (2.10) characterize F s p,q (R d ). However, Hansen [15,Rem. 3.2.4] recently observed that the arguments used for proving (34) in [23] are somehow problematic. The finiteness of the Peetre maximal function is assumed which is not true in general under the stated assumptions. Consider for instance in dimension d = 1 the functions and, if a > 0 is given, the tempered distribution f (t) = |t| n with a < n ∈ N. Then (Ψ * k f ) a (x) is infinite in every point x ∈ R. The mentioned incorrect argument was inherited to some subsequent papers dealing with similar topics, for instance [1], [17] and [33]. Anyhow, the stated results hold true. There is an alternative method to prove the crucial inequality (34) which avoids Lemma 3 in [23]. It is given in Rychkov [24] as well as [25] . A variant of this method, which is originally due to Strömberg, Torchinsky [27, Chapt. V], is also used in our proof below. We start with a convolution type inequality which will be often needed below. The following lemma is essentially Lemma 2 in [23]. Lemma 2.13. Let 0 < p, q ≤ ∞ and δ > 0. Let {g k } k∈N 0 be a sequence of non-negative measurable functions on R d and put Then there is some constant C = C(p, q, δ), such that and hold true. We are going to prove the relations We just give the proof of f |F s p,q 1 ≍ f |F s p,q 2 in detail since the remaining equivalences are analogous. We need a bit more. Fix a 1 ≤ t ≤ 2. Clearly, we also have We dilate this identity with 2 ℓ , i.e., g ℓ (η) = g(2 −ℓd η(2 −ℓ ·)) for η ∈ S(R d ). 
An elementary calculation gives for every g ∈ S ′ (R d ). Obviously, we can rewrite (2.13) to obtain Plugging this into (2.15) we end up with the pointwise representation (ℓ ∈ N) for all y ∈ R d . Let us mention that the case ℓ = 0 plays a particular role. In this case we have Substep 1.2. Let us prove the following important inequality first. For every r > 0 and every N ∈ N 0 we have x ∈ R d and ℓ ∈ N 0 . Again the case ℓ = 0 has to be treated separately according to the remark after (2.17). The representation (2.17) will be the starting point to prove (2.18). Namely, we have for Elementary properties of the convolution yield (compare with (2.34)) with the appropriate modification in case ℓ = 0. Next, we apply the elementary inequalities where 0 < r ≤ 1. Let us define the maximal function and estimate Observe that we can estimate the term (...) 1−r in the right-hand side of (2.26) by where we again used the inequality (compare with (2.23)) and put Hence γ k+ℓ gives us only two different functions from S(R d ). This implies the boundedness Observe that the right-hand side of (2.18) decreases as N increases. Therefore, we have (2.18) on the left-hand side and (Φ k+ℓ ) t by Φ k+ℓ = Φ 0 for k = 0 on the right-hand side. We proved, that the inequality (2.29) holds for all t ∈ [1,2] where c > 0 is independent of t. If we choose r < min{p, q}, we can apply the norm on both sides and use Minkowski's inequality for integrals, which yields If ar > d then we have Now we use a well-known majorant property in order to estimate the convolution on the righthand side by the Hardy-Littlewood maximal function (see Paragraph 2.2 and [26, Chapt. 2]). This yields An index shift on the right-hand side gives Choose now d/a < r < min{p, q}, N > max{0, −s} + a and put We obtain for ℓ ∈ N Now we apply Lemma 2.13 in L p/r (ℓ q/r , R d ) which yields The Fefferman-Stein inequality (see Paragraph 2.2/Theorem 2.1, having in mind that p/r, q/r > 1) gives Hence, we obtain The summand (Φ * 0 f ) a |L p (R d ) can be estimated similar using (2.29) in case ℓ = 0. This proves f |F s p,q (R d ) 2 f |F s p,q (R d ) 1 . With slight modifications of the argument we prove This finishes the proof of (2.12). Step 2. Let Ψ 0 , Ψ ∈ S(R d ) be functions satisfying (2.5). Indeed, we do not need (2.4) for the following inequality which holds true for all f ∈ S ′ (R d ). We decompose f similar as in Step 1. Exploiting the property (2.4) for the system (Φ 0 , Φ) we find S(R d )-functions λ 0 , λ ∈ S(R d ) such that supp λ 0 ⊂ {ξ ∈ R d : |ξ| ≤ 2ε} and supp λ ⊂ {ξ ∈ R d : ε/2 ≤ |ξ| ≤ 2ε} and for ξ ∈ R d . Putting Λ 0 = F −1 λ 0 and Λ = F −1 λ we obtain the decomposition for every g ∈ S ′ (R d ) . We put g = Ψ ℓ * f for ℓ ∈ N 0 and see Now we estimate as follows We first observe that for x ∈ R d and functions µ, η ∈ S(R d ) the following identity holds true for u, v > 0 This yields in case ℓ ≥ k (with a minor change if k = 0) where we used Lemma A.3 for the last estimate. If k > ℓ we change the roles of Ψ and Λ to obtain again with Lemma A.3 (minor change if ℓ = 0) where L can be chosen arbitrary large since Λ satisfies (M L ) for every L ∈ N according to its construction. Let us further use the estimate Consequently, Plugging this into (2.33), choosing L ≥ a + |s| and δ = min{1, R + 1 − s} we obtain the inequality for all x ∈ R d . Applying Lemma 2.13 gives (2.31) . Step 3. What remains is to show that (2.8) is equivalent to the rest. We return to (2.29) in Substep 1.3. 
If |z| < 2 −(ℓ+k) t formula (2.29) implies by shift in the integral the following Indeed, we have 1 + 2 ℓ |x − y| ≤ 1 + 2 ℓ (|x − (y + z)| + |z|) Where the last estimate follows from the fact that k ∈ N 0 in the sum. Instead of the integral ( 2 1 | · | q/r dt/t) r/q we now take on both sides of (2.37) the norm The integration over z does not influence the left-hand side. Instead of (2.30) we obtain (1 + 2 ℓ |x − y|) ar dy . We continue with analogous arguments as after (2.30) and end up with (2.36) . Indeed, it is easy to see, that we have for all t > 0 and we are done. The proof is complete Proof of Theorem 2.8 The proof of Theorem 2.8 is almost the same as the previous one. It is less technical since we do not have to deal with a separate function Φ 0 which causes several difficulties. However, there are still some technical obstacles which have to be discussed. 1. Although we are in the homogeneous world, we use the same decomposition as used in (2.14), even with the inhomogeneity Φ 0 . In the definition of Λ m,ℓ (x) in (2.16) we have to put in addition Φ(x), if ℓ = 0 and m > 0. The consequence is equation (2.17) for every ℓ ∈ Z. Hence, the inhomogeneity is shifted to Λ m,ℓ . This yields (2.29) for all ℓ ∈ Z, where k still runs through N 0 . We need this for the argument in Substep 3.1. Proof of Corollary 2.11 and 2.12 1. The proof of Corollary 2.11 is immediate. We know that ∆ N gives ( d k=1 |ξ k | 2 ) N as factor on the Fourier side. This gives (2.5) immediately and together with (2.11) we have (2.4) for ε > 0 small enough. 2. In the case of Corollary 2.12 the situation is a bit more involved. Clearly, Condition (2.5) holds true. But the problem here is, that (2.4) may be violated for all ε > 0. However, we argue as follows. In Step 2 in the proof above we have seen, that we do not need (2.4) for the system (Ψ 0 , Ψ). Hence, we can estimate (2.9) and (2.10) from above by a further characterization of F s p,q (R d ) . For the remaining estimates we apply Theorem 2.6 with the system Classical coorbit space theory In [7,8,9,14] a general theory of Banach spaces related to integrable group representations has been developed. The ingredients are a locally compact group G with identity e, a Hilbert space H and an irreducible, unitary and continuous representation π : G → L(H), which is at least integrable. One can associate a Banach space CoY to any solid, translation-invariant Banach space Y of functions on the group G. The main achievement of this abstract theory is a powerful discretization machinery for CoY , i.e., a universal approach to atomic decompositions and Banach frames. It allows to transfer certain questions concerning Banach space or interpolation theory from the function space to the associated sequence space level, see [8,9,18]. In connection with smoothness spaces of Besov-Lizorkin-Triebel type the philosophy of this approach is to measure smoothness of a function in decay properties of the continuous wavelet transform W g f which is studied in detail in the appendix. Indeed, homogeneous Besov and Lizorkin-Triebel type spaces turn out to be coorbits of properly chosen spaces Y on the ax + b-group G. There are some more examples according to this abstract theory. One main class of examples refers to the Heisenberg group H, the short-time Fourier transform and leads to the well-known modulation spaces as coorbits of weighted L p (H) spaces, see [7, 7.1] and also [10]. Function spaces on G Integration on G will always be with respect to the left Haar measure dµ(x). 
The Haar module on G is denoted by ∆. We define further L x F (y) = F (x −1 y) and R x F (y) = F (yx), x, y ∈ G, the left and right translation operators. A Banach function space Y on the group G is supposed to have the following properties The continuous weight w is called sub-multiplicative if w(xy) ≤ w(x)w(y) for all x, y ∈ G. The space L w p (G), 1 ≤ p ≤ ∞, of functions F on the group G is defined via the norm where we use the essential supremum in case p = ∞ . If w ≡ 1 then we simply write L p (G) . It is easy to show that these spaces provide left and right translation invariance if w is submultiplicative. Later, in Paragraph 4.1 we are going to introduce certain mixed norm spaces where the translation invariance is not longer automatic. Sequence spaces Definition 3.1. Let X = {x i } i∈I be some discrete set of points in G and V be a relatively compact neighborhood of e ∈ G . (ii) X is called relatively separated if for all compact sets K ⊂ G there exists a constant C K such that sup j∈I ♯{i ∈ I : (iii) X is called V -well-spread (or simply well-spread) if it is both relatively separated and V -dense for some V . Definition 3.2. For a family X = {x i } i∈I which is V -well-spread with respect to a relatively compact neighborhood V of e ∈ G we define the sequence space Y b and Y ♯ associated to Y as Remark 3.3. For a well-spread family X the spaces Y b and Y ♯ do not depend on the choice of V , i.e. different sets V define equivalent norms on Y b and Y ♯ , respectively . For more details on these spaces we refer to [8] . Coorbit spaces Having a Hilbert space H and an integrable, irreducible, unitary and continuous representation π : G → L(H) then the general voice transform of f ∈ H with respect to a fixed atom g is defined as the function V g f on the group G given by where the brackets denote the inner product in H . Definition 3.4. For a sub-multiplicative weight w(·) ≥ 1 on G we define the space A w ⊂ H of admissible vectors by Finally, we denote with (H 1 w ) ∼ the canonical anti-dual of H 1 w , i.e., the space of conjugate linear functionals on H 1 w . We see immediately that A w ⊂ H 1 w ⊂ H. The voice transform (3.1) can now be extended to H w × (H 1 w ) ∼ by the usual dual pairing. The space H 1 w can be considered as the space of test functions and the reservoir (H 1 w ) ∼ as distributions. Let now Y be a space on G such that (i) -(iii) in Paragraph 3.1 hold true. We define further where the operator norms are considered from Y to Y . Definition 3.5. Let Y be a space on G satisfying (i)-(iii) in Paragraph 3.1 and let the weight w(x) be given by (3.2). Let further g ∈ A w . We define the space CoY , which we call coorbit space of Y , through 3) The following basic properties are proved for instance in [20,Thm. 4.5.13]. Theorem 3.6. (i) The space CoY is a Banach space independent of the analyzing vector g ∈ A w . (ii) The definition of the space CoY is independent of the reservoir in the following sense: Assume that S ⊂ H 1 w is a non-trivial locally convex vector space which is invariant under π. Assume further that there exists a non-zero vector g ∈ S ∩ A w for which the reproducing formula holds true for all f ∈ S ∼ . Then we have Discretizations This section collects briefly the basic facts concerning atomic (frame) decompositions in coorbit spaces. We are interested in atoms of type {π(x i )g} i∈I , where {x i } i∈I ⊂ G represents a discrete subset, whereas g denotes a fixed admissible analyzing vector. in some suitable topology. 
(c) If {λ i } i∈I ∈ B ♯ then i∈I λ i g i ∈ B and there exists a constant C 2 > 0 such that (a) We have {h i (f )} i∈I ∈ B b for all f ∈ B and there exist constants C 1 , C 2 such that Remark 3.9. This setting differs slightly from the understanding of Triebel in [30,31] . The following abstract result for the atomic decomposition in CoY is due to Feichtinger and Gröchenig (see [8,Thm. 6.1]). Theorem 3.10. Let Y be a function space on the group G satisfying the hypotheses (i)-(iii) from Paragraph 3.1 and let w(x) be given by (3.2). Furthermore, the element g ∈ A w is supposed to satisfy Then there exists a neighborhood U of e ∈ G and constants C 0 , C 1 > 1 such that for every U -well-spread discrete set X = {x i } i∈I ⊂ G the following is true. with coefficients {λ i } i∈I depending linearly on f and satisfying the estimate (ii) (Synthesis) Conversely, for any sequence {λ i } i∈I ∈ Y ♯ the element f = i∈I λ i π(x i )g is in CoY and one has In both cases, convergence takes place in the norm of CoY if the finite sequences are norm dense in Y ♯ , and in the weak * -sense of (H 1 w ) ∼ otherwise. Remark 3.11. According to Definition 3.7 the family {π(x i )g} i∈I represents an atomic decomposition for CoY . Theorem 3.12. Under the same assumptions as in Theorem 3.10 the system {π(x i )g} i∈I represents a Banach frame for CoY , i.e., The following powerful result goes back to Gröchenig [13] and was generalized by Rauhut [21]. Theorem 3.13. Suppose that the functions g r , γ r , r = 1, ..., n, satisfy (3.4). Let X = {x i } i∈I be a well-spread set such that for all f ∈ H . Then expansion (3.5) extends to all f ∈ CoY . Moreover, f ∈ (H 1 w ) ∼ belongs to CoY if and only if { π(x i )γ r , f } i∈I belongs to Y b for each r = 1, ..., n . The convergence is considered in CoY if the finite sequences are dense in Y b . In general we have weak *convergence. Proof. The proof of this result relies on the fact, that there exists an atomic decomposition {π(y i )g} i∈I by Theorem 3.10 with a certain g satisfying (3.4) and a corresponding sequence of points Z = {y i } i∈I . This has to be combined with Theorem 3.12 and Theorem 3.10/(ii) and we are done. See [13] for the details. . Coorbit spaces on the ax + b-group Let G = R d ⋊ R * + the d-dimensional ax + b-group. Its multiplication is given by (x, t)(y, s) = (x + ty, st) . The left Haar measure µ on G is given by dµ(x, t) = dx dt/t d+1 , the Haar module is ∆(x, t) = t −d . Giving a function F on G the left and right translation L y = L (y,r) and R y = R (y,r) are given by t)(y, r)) = F (x + ty, rt) . Peetre type spaces on G The present paragraph is devoted to the definition of certain mixed norm spaces on the group. Such spaces have been considered in various papers, see [5,7,13,14]. In particular, so-called tent spaces have some important applications in harmonic analysis. Indeed, it is possible to recover Lizorkin-Triebel spaces as coorbits of tent spaces. Here we use a different approach and define a new scale of function spaces on the group G. We call them Peetre type spaces since a quantity related to the Peetre maximal function (2.1) is involved in its definition. It turned out that they are straight forward to handle in connection with translation invariance. In contrast to the tent space approach they represent the more natural choice for considering Lizorkin-Triebel spaces as coorbits. 
Additionally, they seem to be suitable for inhomogeneous spaces and more general situations like weighted spaces and general 2-microlocal spaces, which will be studied in a further contribution to the subject. Step 1. The left and right translation invariance ofL s p,q (G) andṪ s p,q (G) was shown in [20, Lem. 4.7.10]. Step 2. Let us considerṖ s,a p,q (G). Clearly, we have for F ∈Ṗ s,a p,q (G) Hence, we obtain L (z,r) :Ṗ s,a p,q (G) →Ṗ s p,q (G) = r d(1/p−1/q)−s . The right translation invariance is obtained by Observe that This yields and consequently R (z,r) :Ṗ s,a p,q (G) →Ṗ s,a p,q (G) ≤ r s+d/q max{1, r −a }(1 + |z|) a . Remark 4.3. Note, that we did neither use the translation invariance of the Lebesgue measure nor any change of variable in order to prove the right translation invariance ofṖ s,a p,q (G). This gives room for further generalizations, i.e., replacing the space L p (R d ) by some weighted Lebesgue space L p (R d , ω) for instance. New old coorbit spaces We start with H = L 2 (R d ) and the representation where T x f = f (· − x) and D L 2 t f = t −d/2 f (·/t) has been already defined in Paragraph 2.2. This representation is unitary, continuous and square integrable on H but not irreducible. However, if we restrict to radial functions g ∈ L 2 (R d ) then span{π(x, t)g : (x, t) ∈ G} is dense in L 2 (R d ). Another possibility to overcome this obstacle is to extend the group by SO(d), which is more or less equivalent, see [7,8] for details. The voice transform in this special situation is represented by the so-called continuous wavelet transform W g f which we study in detail in Paragraph A.1 in the appendix. Recall the abstract definition of the space H 1 w and A w from Definition 3.4. The following result implied by our Lemma A.3 on the decay of the continuous wavelet transform. It states under which conditions on the weight w the space H 1 w is nontrivial. for some r, s, s ′ ≥ 0 then S 0 (R d ) ֒→ H 1 w . This is a kind of minimal condition which is needed in order to define coorbit spaces in a reasonable way. Instead of (H 1 w ) ∼ one may use S ′ 0 (R d ) as reservoir and a radial g ∈ S 0 (R d ) as analyzing vector. Considering (3.2) we have to restrict to such function spaces Y on G satisfying (i),(ii),(iii) in Paragraph 3.1 where additionally holds true for some r, s, s ′ ≥ 0 . The following theorem shows, how the spaces of Besov-Lizorkin-Triebel type from Section 2 can be recovered as coorbit spaces with respect to G. [7,13,14] and rely on the characterizations given by Triebel in [29] and [30, 2.4, 2.5], see in particular [30, 2.4.5] for the variant in terms of tent spaces which were invented in [5]. From the deep result in [5,Prop. 4] it follows thatṪ s p,q (G) are translation invariant Banach function spaces on G, which makes them feasible for coorbit space theory (b) Assertion (iii) is indeed new and makes the rather complicated tent spacesṪ s p,q (G) obsolete for this issue. We showed that Y = P s,a p,q (G) is a much better choice since the right translation invariance is immediate and gives more transparent estimates for its norm. Once we are interested in reasonable conditions for atomic decompositions this is getting important, see Section 4.5. Sequence spaces In the sequel we consider a compact neighborhood of the identity element in G given by , where α > 0 and 1 < β. Furthermore, we consider the discrete set of points This family is U -well-spread. Indeed, Note that in this case the spaces Y ♯ and Y b coincide. 
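The defining display of the representation, and of the continuous wavelet transform it induces, is likewise missing in this paragraph; in the notation used here it presumably reads (a reconstruction, with the L_2-normalization treated as an assumption):

\pi(x,t) f = T_x D^{L_2}_t f, \qquad \big(\pi(x,t) f\big)(y) = t^{-d/2}\, f\Big(\frac{y - x}{t}\Big), \qquad (x,t) \in G = \mathbb{R}^d \rtimes \mathbb{R}^*_+,

so that the voice transform becomes the continuous wavelet transform

W_g f(x,t) = \langle \pi(x,t) g, f \rangle = t^{-d/2} \int_{\mathbb{R}^d} f(y)\, \overline{g\Big(\frac{y - x}{t}\Big)}\, dy.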
We will further use the notation χ j,k (x) = 1 : x ∈ Q j,k 0 : otherwise . Definition 4.7. Let Y be a function space on G as above. We put Theorem 4.8. Let 1 ≤ p, q ≤ ∞, s ∈ R and a > d/ min{p, q}. Then and Proof. We prove the first statement. The proof for the second one is even simpler. Let Discretizing the integral over t by t ≍ β −ℓ we obtain . and estimate (4.4) In order to include also the situation min{p, q} ≤ 1 we use the following trick. Obviously, we can rewrite and estimate (4.4) with 0 < r < 1 in the following way We continue with the useful estimate sup w |χ ℓ,k (x + w)| (1 + β ℓ |w|) ar (4.6) Indeed, the first estimate is obvious. Let us establish the second one Note, that the functions g ℓ (x) = β ℓd (1 + β ℓ | · |) ar belong to L 1 (R d ) with uniformly bounded norm, where we need that ar > d . Putting (4.7) and (4.6) into (4.5) we obtain Now we are in a position to use the majorant property of the Hardy-Littlewood maximal operator (see Paragraph 2.2 and [26, Chapt. 2]), which states that a convolution of a function f with a L 1 (R d )-function (having norm one) can be estimated from above by the Hardy-Littlewood maximal function of f . We choose r < min{p, q} and apply Theorem 2.1 for the L p/r (ℓ q/r ) situation. This gives and finishes the upper estimate. Both conditions, ar > d and r < min{p, q}, are compatible if a > d/ min{p, q} is assumed at the beginning. For the estimate from below we go back to (4.2) and observe A further use of (4.3) gives finally The proof is complete. Atomic decompositions The following theorem is a direct consequence of the abstract results in Theorems 3.10, 3.12. is a Banach frame forḞ s p,q (R d ) in the sense of (3.5) . Proof. Let us prove (a). First of all, we apply Theorem 4.5/(i). Afterwards, we use Proposition 4.2 in order to estimate the weight w Y (x, t) for Y =L Let us distinguish the cases s ≥ 0 and s < 0. In the first case we can put Finally (4.9), (4.10) and Theorem 3.13 yield (a) . Step 2. We prove (b). We apply Theorem 4.5/(iii) and afterwards Proposition 4.2 and obtain for Y =Ṗ This yields the lower bound in (b) and we are done. The following corollary is a consequence of Theorem 4.12 and the facts in Section A.2. (D) For every N ∈ N there exists a constant c N such that (M L ) We have vanishing moments DᾱFΨ(0) = 0 for all |ᾱ| 1 ≤ L . Remark A.2. If a function g ∈ L 2 (R d ) satisfies (S K ) for some K > 0 then by well-known properties of the Fourier transform we have g ∈ C ⌊K⌋ (R d ). The following lemma provides a useful decay result for the continuous wavelet transform under certain smoothness, decay and moment conditions, see also [12,23,16] for similar results in a different language. It represents a continuation of [23,Lem. 1] where one deals with S(R d )functions Lemma A.3. Let L ∈ N 0 , K > 0 and Φ, Ψ, Φ 0 ∈ L 2 (R d ). (i) Let Φ satisfy (D), (M L−1 ) and let Φ 0 satisfy (D), (S K ). Then for every N ∈ N there exists a constant C N such that the estimate holds true for x ∈ R d and 0 < t < 1 . We exploit property (S K ) for Φ 0 and proceed analogously as above. This proves (A.2). This completes the proof. Corollary A.4. Let Φ, Ψ belong to the Schwartz space S 0 (R d ). By Lemma A.3/(ii) for every L, N ∈ N there is a constant C L,N > 0 such that Additionally, we obtain for Φ ∈ S 0 (R d ) and Φ 0 ∈ S(R d ) that A.2 Orthonormal wavelet bases The following Lemma is proved in Wojtaszczyk [34, 5.1]. is an orthonormal basis in L 2 (R d ). Spline wavelets As a main example we will consider the spline wavelet system. 
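The proof of Theorem 4.8 invokes the majorant property of the Hardy–Littlewood maximal operator without displaying it. In the form used here (see [26]), for a nonnegative, radial, radially decreasing φ ∈ L_1(R^d) it states

(|f| * \varphi)(x) \le \|\varphi\|_{L_1(\mathbb{R}^d)}\, (Mf)(x), \qquad
(Mf)(x) = \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\, dy,

which is applied to the kernels g_\ell(x) = \beta^{\ell d}(1 + \beta^\ell |x|)^{-ar}, whose L_1(\mathbb{R}^d) norms are uniformly bounded precisely because ar > d.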
Starting from the normalized cardinal B-spline of order m + 1, the generator ψ_m of an orthonormal spline wavelet system is defined. For m = 1 it is easily checked that −ψ_1(x − 1) is the Haar wavelet. In general these functions ψ_m have the following properties: • ψ_m restricted to the intervals [k/2, (k+1)/2], k ∈ Z, is a polynomial of degree at most m − 1. In particular, ψ_m satisfies (M_L) for 0 < L ≤ m, and ψ_m, ϕ_m satisfy (D) and (S_K) for K < m − 1.
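The explicit formula for the cardinal B-spline is not reproduced above; the standard construction, assumed here to be the one intended, is

N_1 = \chi_{[0,1)}, \qquad N_{m+1}(x) = (N_m * N_1)(x) = \int_0^1 N_m(x - y)\, dy,

with ϕ_m and ψ_m then obtained from N_{m+1} via the usual orthonormalization of its integer translates (Battle–Lemarié/Franklin-type spline wavelets); the exact normalization used in the paper is treated here as an assumption.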
8,975.4
2010-07-20T00:00:00.000
[ "Mathematics" ]
H-Watch: An Open, Connected Platform for AI-Enhanced COVID19 Infection Symptoms Monitoring and Contact Tracing The novel COVID-19 disease has been declared a pandemic event. Early detection of infection symptoms and contact tracing are playing a vital role in containing COVID-19 spread. As demonstrated by recent literature, multi-sensor and connected wearable devices might enable symptom detection and help tracing contacts, while also acquiring useful epidemiological information. This paper presents the design and implementation of a fully open-source wearable platform called H-Watch. It has been designed to include several sensors for COVID-19 early detection, multi-radio for wireless transmission and tracking, a microcontroller for processing data on-board, and finally, an energy harvester to extend the battery lifetime. Experimental results demonstrated only 5.9 mW of average power consumption, leading to a lifetime of 9 days on a small watch battery. Finally, all the hardware and the software, including a machine learning on MCU toolkit, are provided open-source, allowing the research community to build and use the H-Watch. I. INTRODUCTION COVID-19 has been declared a pandemic by the World Health Organization (WHO) and poses a significant challenge for healthcare infrastructure around the world.Continuous vital sign monitoring for symptoms of severe pneumonia and sepsis, such as blood [1] oxygen saturation level (SpO2 <93%), respiratory rate (>30 breaths/minute), heart rate, body temperature, fatigue, coughing detection, and blood pressure can assist in the early recognition of high-risk patients [2].For example, saturation values below 95% are a symptom of hypoxemia (reduction in the presence of oxygen in the blood), usually due to a decrease in gas exchange at the pulmonary alveoli level, a typical symptom of the worsening of some viral pneumonia [3].Continuously monitoring those parameters and tracking users in their movements is crucial not only to early detect infections but also to follow its diffusion [4].However, continuous patient monitoring and tracking are ultra-challenging for many key issues, such as privacy [5], complexity of the monitoring, early recognition [4], long-term operation with mobile battery-operated devices, and discontinuous connectivity, among others. 
Advancements in low-power integrated circuits (ICs), sensors, and wireless protocols have enabled the practical, daily use of lightweight and unobtrusive wearable devices [6], where electronics are worn on the human body or hidden in clothing. Among other products, smartwatches are a massive commercial reality, with hundreds of products specifically designed for fitness and entertainment applications. As a containment measure for the COVID-19 pandemic, back-tracing contacts and personal interactions via smartphones is becoming increasingly important, and its active contribution to identifying potential outbreaks has been demonstrated [7]. While GPS tracking, in addition to non-negligible privacy and security implications [7], suffers from poor indoor coverage, BLE technology is considered a good compromise [8] between tracing precision (in a range from 10 cm to 10 m) and user privacy [9]; for this reason, it is the most widely adopted technology in consumer devices. However, most commercial systems today are not ready to be used in a pandemic situation such as COVID-19. The most critical weaknesses are the limited energy supply, due to the small battery size, and the limited computational resources. A recent trend is to couple low-power design with machine learning on microcontrollers (MCUs), energy-efficient wireless communication, and energy harvesting in order to redesign smartwatches so that they not only help manage chronic diseases but can also play a key role in providing early infection warnings [10]. This paper presents the design and implementation of a hardware-firmware open-source smartwatch called Health Watch (H-Watch). H-Watch combines multiple sensors for health monitoring, an ARM Cortex-M4F for data acquisition and processing, wireless communication, and energy harvesting to achieve a long-lasting intelligent device. The main H-Watch features are low power consumption (a few mW peak, sub-mW average), the capability to run artificial neural networks on board, multi-source energy harvesting, high integration resulting in a small form factor, and novel 5G Narrow Band Internet of Things (NB-IoT) communication. Thanks to the ultra-low-power design and aggressive power management, H-Watch can automatically and continuously measure blood oxygenation, heart rate, temperature, respiration rate, motion, and audio signals while requiring an average power of 5.9 mW. Thanks to the advanced low-power design and photovoltaic energy harvesting, the device lifetime reaches up to 10 working days. H-Watch supports a wide range of connectivity options that can also be used to track and exchange alarms and data with local devices, for example a smartphone, through the BLE 5.0 connection. Direct global connectivity is also available, using NB-IoT for tracking. NB-IoT is a novel protocol standardized by 3GPP. It is also known as LTE Cat-NB1 (NB2) and belongs to the Low Power Wide Area Network (LPWAN) technologies, which can work virtually anywhere the 4G (or 5G) infrastructure is present. It can send alarms and data, such as time traces from on-board sensors, directly to secure servers.
To allow researchers and engineers to exploit the features and the hardware-firmware co-design of H-Watch, this paper open-sources1 all the hardware schematics and the layout as well as software libraries for sensors and peripherals, providing a platform to analyze, classify and study a broad infected population sample helping for remote health assistance and diagnosis.Moreover, the repository will provide a library and tools to run Artificial Intelligence (AI) algorithms for on-board data analysis.Indeed, the H-Watch aims to run pre-trained AI models, such as neural networks, for in-situ feature extraction and classification.In this work, we also provide system energy consumption to estimate the battery life with and without the energy harvester. II. SYSTEM ARCHITECTURE Figure 1 shows the H-Watch architecture.From left to right, the figure illustrates the power management (PM), which is specifically designed to achieve extremely low power consumption and to handle both a Li-Ion 370 mAh battery and a 7 cm2 solar panel.The sub-system includes the STM32WB55RGV6 SoC (STM32 hereafter) from ST Microelectronics that manages the sensor acquisition and the wireless connectivity.In parallel with four internal sensors, namely 6-axes IMU (LSM6DSM), 6-axes magnetometer (LSM303AGR), a MEMS Microphone (MP34DT05TR), skin temperature and pressure sensors (LPS22HB), the H-Watch features a low power LCD display (LS012B7DH02).In addition, it features the MAX30101, an integrated SoC for pulse oximetry and heart rate monitoring at low power consumption, 5.5 mW.H-Watch can communicate with local devices, such as smartphones and laptops, and directly to the cloud through an embedded NB-IoT transceiver (Quectel BC95-G) and the BLE interface. A. Power supply and Energy Harvesting The power supply sub-system is designed around the BQ25570 and the TPS63031 from Texas Instruments.BQ25570 is today the most efficient energy harvester chip on the market.It is used to recharge the Li-Ion battery with the flexible solar panel wrapped around the wrist.The BQ25570 achieves high conversion efficiency (90%) as it periodically adjusts its internal input-impedance to handle Maximum Power Point (MPP) of the energy source; indeed, this parameter changes with the illumination environment.The BQ25570 has the function of charging the batteries using an integrated boost-converter, and, simultaneously, TPS63031 supplies the system exploiting an integrated high-efficiency buck-boost converter (Table I -V DD ), specifically selected to compensate input voltage oscillations coming from the discontinuous energy source. 
The whole supply voltage has been chosen to be 3.3 V, providing a single supply cluster for ICs.This was a specific design choice to decrease the PCB size, as fewer external components are required.Furthermore, it is the lowest voltage supported by the majority of ICs used in the circuit, minimizing the system energy consumption.From a wearable application point of view, the effective light spectrum, the size, the flexibility of the cells, and output power, are essential requirements.We selected a flexible panel, SP3-12 by Flexsolarcells 2 , that measures only 7 cm 2 , which is directly used as H-Watch wrist band.The energy harvesting capability of the small-sized cell is shown in Figure 2.For the measurement, a Roline RO1332 lux meter has been placed with the solar cell in a darkened chamber artificially illuminated by a controlled broadband light source.The cell output power in matched conditions is measured for illumination from 0 to 2000 lx, which corresponds with the recommended lighting conditions for offices or workspaces.The importance of maximum power point tracking is highlighted in the inset of Figure 2 measured at the static illumination of 1900 lx. In a typical indoor scenario of 500 lx with artificial lighting, the circuit allows to harvest 73 µW with a conversion efficiency of 92 % and a storage voltage of 3.7 V.In outdoor light conditions of 10 klx more than 15 mW can be achieved.The power generation of each energy source and the power consumption of each internal sub-system (Table I) have been evaluated separately under controlled conditions.We measured the power intake through the Keysight B2900A Series Precision Source/Measure Unit (SMU). B. Smart Module and on-board Processing As demonstrated in previous works [11], the IMU can be beneficial to improve the quality of the measurement when the user is moving [12].The pulse oximeter non-invasively measures the blood oxygenation by shining light at two different wavelengths into the wrist and by analyzing the pulsatile component of the reflected signal [13].The oximeter sensor controls two LEDs and converts the analog signal from its photodiode into a digital representation for the connected microcontroller.The case form ensures adequate contact with the skin. 
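As a rough plausibility check on the harvesting figures quoted above (about 73 µW at 500 lx indoors and more than 15 mW at 10 klx outdoors), the following Python sketch estimates the energy collected over a simple daily illumination profile. The linear interpolation between the two quoted operating points and the daily profile itself are illustrative assumptions, not measured data.

# Hypothetical daily energy estimate built from the two harvesting points quoted above.
points_lx_to_w = {500: 73e-6, 10_000: 15e-3}   # illuminance (lx) -> harvested power (W)

def harvested_power_w(lux):
    """Piecewise-linear interpolation between the quoted points (an assumption)."""
    if lux <= 0:
        return 0.0
    if lux <= 500:
        return points_lx_to_w[500] * lux / 500.0
    if lux >= 10_000:
        return points_lx_to_w[10_000]
    frac = (lux - 500) / (10_000 - 500)
    return points_lx_to_w[500] + frac * (points_lx_to_w[10_000] - points_lx_to_w[500])

# Assumed indoor/outdoor day: 8 h office light (500 lx), 2 h outdoors (10 klx), 14 h dark.
profile_hours_lux = [(8, 500), (2, 10_000), (14, 0)]
energy_j = sum(hours * 3600 * harvested_power_w(lux) for hours, lux in profile_hours_lux)
print(f"Harvested energy per day: {energy_j:.1f} J")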
The on-board processing capability consists of the STM32 that integrates multiple hardware accelerators (data processing, controlling sensing, communication) in a miniaturized (8 mm × 8 mm) single IC.The microcontroller has a dualcore architecture with an ARM Cortex M4F for processing and an ARM Cortex M0+ dedicated to the Bluetooth stack.The power-optimized ARM Cortex M0+ runs autonomously, allowing the rest of the processor to stay in standby mode.Its radio controller is a further reason for selecting this MCU, the patient contact tracing from the BLE advertising mode can be acquired and stored in memory without involving the ARM Cortex M4, improving the energy efficiency of the whole system.On-board classification and feature extraction are available through two different tools: X-CUBE-AI from ST Microelectronics is an expansion package that extends the standard STM32 capabilities with automatic conversion of pre-trained Neural Network and ANSI C code generation, and FANN-ON-MCU, ARM Cortex-M, and low power microcontrollers [14].The latter is a free open source neural network library, which implements instruction optimized (exploiting ARM CMSIS-NN) artificial neural networks in fixed and floating-point computations.It supports C code for both fully connected and sparsely connected networks. In the GitHub repository 3 we release two machine learning examples using both tools.Classification and feature extraction go beyond this paper's scope, which focuses on the hardware and energy profile description. From our experience, temporal convolutional networks, based on 1D-Convolutional layers, and more widely used MLP feed-forward fully connected networks, need an execution time between 21 ms and 500 ms on the STM32 @ 64 MHz.These values are defined by sampling and classifying heart rate, SpO2, and accelerometer data to extract the patient's health condition. C. Wireless Connectivity H-Watch hosts a dual-radio sub-system, including BLE 5.0 and NB-IoT.In particular, NB-IoT can stream at 170 Kbps with 164 dB of link budget, enabling a wide signal coverage.Despite the long-range communication capabilities, the transceiver needs just 31 uJ per bit when transmitting [15].H-Watch supports BLE 5.0 for privacy-preserving contact tracing [4], to be compliant with standard protocols used by smartphone Apps, and the NB-IoT for optional geo-tracking and bridge-less communication to secure remote servers, to stream results from on-board processing.However, the NB-IoT features a non-negligible energy overhead due to the cellular complex infrastructure.NB-IoT has the dual advantage of communicating through the cellular network like regular smartphones and transferring sensitive data by connecting directly to secure servers, even for low-cost miniaturized devices without the need to go through complex software stacks.In [15], authors extensively studied and characterized the NB-IoT protocol, where the energy is lightly dependent on the packed size, and heavily affected by the signal strength indicator (RSSI).The average energy per packet is consequently modeled in Table I, where three different I BC95G active coverage conditions are provided. III. 
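The 31 µJ/bit figure quoted for NB-IoT transmission allows a back-of-the-envelope estimate of the radio cost of an uplink. The payload sizes and the battery model below (370 mAh at a nominal 3.7 V) are illustrative assumptions, not the coverage-dependent energy model of Table I.

# Transmit-only NB-IoT uplink cost from the quoted 31 uJ/bit figure (good coverage).
ENERGY_PER_BIT_J = 31e-6
BATTERY_J = 0.370 * 3.7 * 3600          # 370 mAh at a nominal 3.7 V (assumption), ~4.9 kJ

def uplink_energy_j(payload_bytes):
    """Radio transmit energy only; connection and protocol overhead are ignored here."""
    return payload_bytes * 8 * ENERGY_PER_BIT_J

for size in (100, 10_000, 2_200_000):   # 100 B alarm, 10 kB of features, 2.2 MB of raw data
    e = uplink_energy_j(size)
    print(f"{size:>9d} B -> {e:8.2f} J ({100.0 * e / BATTERY_J:5.2f}% of the battery)")

Note that the full per-uplink cost reported in the next section (roughly 50% of the battery for 2.2 MB even in good coverage) is several times larger than this transmit-only estimate, because each uplink also pays a coverage-dependent connection overhead.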
EXPERIMENTAL RESULTS H-Watch has been developed to evaluate functionalities and performances in terms of low power and lifetime.Figure 3 shows the H-Watch mechanical cross-section in which it presents the flexible solar panel, PCB board, the display, and the clock case.The proposed solution is fully wearable, plug & play, and it is comfortable to wear, allowing long term measurements without annoying the observed patient.Table I collects and presents the sub-system power consumption for each H-Watch ICs.In this paper, we consider four operation modes.Sleep mode in which all the sensors and radios are off, but the real-time clock and the display are active to enable periodic wake-up, it needs 97 µW .Advertising mode, it is similar to Sleep mode, but the BLE is advertising at 1 Hz and 0 dBm.It is dedicated to contact tracing and consumes only 226 µW .In motion detection mode, with the accelerometer and skin temperature enabled, the H-Watch needs 1.75 mW on average for collecting and processing the data.Lastly, with the full operation mode, in which the watch performs human activity and health classification (oximeter and Heartrate enabled), the power consumption increases up to 10 mW.These results consider the DC/DC efficiency, named V DD and I.In full operation mode, the battery can support the H-Watch for 5 days, while in advertising mode, it exceeds 1 month of operations.Duty cycling between full and motion at 50%, the H-Watch lifetime is 9 days.One NB-IoT packet per day is considered in these conditions, which is used to send highly compressed information to secure servers after local processing.In the worst case, when the application layer requires long-range connectivity, the battery longevity is heavily affected.With RSSI > −95 dBm, each uplink needs approximately the equivalent amount of energy to run the full operation mode for 1 minute, and with RSSI < −110 dBm it grows up to 6 minutes.With good coverage (RSSI > −95 dBm), sending 2.2 MB of data requires approximately 50% of the battery capacity, while, in the worst case, the uplink volume decreases to 0.6 MB.It is clear that the NB-IoT supports only few packets per day, mainly streaming alarms or pre-extracted features from onboard processing. With the help of the energy harvester, H-Watch extends the monitoring time.Considering an indoor scenario with 500 lx, the captured power support sleep and advertising modes, whose life can reach up to two months (8 hours at 500 lx -16 hours no energy).However, considering outdoor activities, sleep and advertising mode are fully covered by solar panel, while the 50% duty-cycled mode between full and motion reaches up to 20 days (6 hours at 10 klx and 4 hours at 500 lx, 14 hours no energy). IV. CONCLUSIONS This paper proposed the design and implementation of an open-source wearable long-lasting smart monitoring platform for health monitoring and tracking, which can provide direct cloud connectivity through state-of-the-art NB-IoT cellular technology.The H-Watch is based on widely available offthe-shelf components; however, it is designed with low power and on-board intelligence in mind.By continuously measuring the blood oxygenation and heart rate with a sampling rate of 50 Hz, accurate results can be achieved with a battery life of 9 days or 20 days, respectively non-using and using the solar energy harvester. 
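The lifetime figures in this section follow directly from the mode powers of Table I and the 370 mAh battery; the short Python sketch below reproduces the arithmetic (a nominal 3.7 V cell voltage and ideal capacity usage are assumptions).

# Lifetime estimates from the quoted mode powers (Table I) and the 370 mAh battery.
BATTERY_J = 0.370 * 3.7 * 3600                   # 370 mAh at a nominal 3.7 V (assumption)

mode_power_mw = {"sleep": 0.097, "advertising": 0.226, "motion": 1.75, "full": 10.0}

def lifetime_days(avg_power_mw):
    return BATTERY_J / (avg_power_mw * 1e-3) / 86_400

for name, p_mw in mode_power_mw.items():
    print(f"{name:12s}: {lifetime_days(p_mw):7.1f} days")

# 50% duty cycling between full and motion modes, as described in the text:
avg_mw = 0.5 * mode_power_mw["full"] + 0.5 * mode_power_mw["motion"]    # ~5.9 mW
print(f"50% full/motion duty cycle: {avg_mw:.2f} mW -> {lifetime_days(avg_mw):.1f} days")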
Finally, H-Watch is also a low-cost platform. The MCU and the energy harvester cost approximately €10 (1k quantity), while the BC95-G NB-IoT radio transceiver is the most expensive part at €15. Both external sensors are commonly used in wearable devices and cost just €2 each. The total cost of the off-the-shelf electronics and battery is below €40. The solar panel costs €4 and the battery €2, while the display is included in the watch frame: €35. Future work will focus on sensor data acquisition, to verify the accuracy against a medical device, and on machine learning, to validate early detection.

Fig. 2. Solar cell output power in matched conditions. The inset shows the influence of the transducer load on the output power at a single harvesting point at 1900 lx.

TABLE I. H-WATCH: POWER AND ENERGY PROFILE
3,698.8
2021-05-01T00:00:00.000
[ "Computer Science" ]
Quantifying pollution transport from the Asian monsoon anticyclone into the lower stratosphere . Pollution transport from the surface to the stratosphere within the Asian monsoon circulation may cause harmful effects on stratospheric chemistry and climate. Here, we investigate air mass transport from the monsoon anticyclone into the stratosphere using a Lagrangian chemistry transport model. We show how two main transport pathways from the anticyclone emerge: (i) into the tropical stratosphere (tropical pipe), and (ii) into the Northern hemisphere (NH) extra-tropical lower stratosphere. Maximum anticyclone air mass fractions reach around 5% in the tropical pipe and 15% in the extra-tropical lowermost 5 stratosphere over the course of a year. The anticyclone air mass fraction correlates well with satellite hydrogen cyanide (HCN) and carbon monoxide (CO) observations, corroborating that pollution is transported deep into the tropical stratosphere from the Asian monsoon anticyclone. Cross-tropopause transport occurs in a vertical chimney, but with the emissions transported quasi-horizontally along isentropes above the tropopause into the tropics and NH. Introduction The Asian summer monsoon circulation provides a pathway for anthropogenic pollution into the stratosphere (e.g., Randel et al., 2010), where it may crucially affect stratospheric chemistry and radiation.A related phenomenon is the buildup of the Asian tropopause aerosol layer (ATAL; Vernier et al., 2011), which has recently been estimated to cause a significant regional radiative forcing of −0.1 W m −2 (Vernier et al., 2015), cooling the Earth's surface.Hence, transport in the Asian monsoon is likely an important factor for climate change. Transport by the Asian monsoon includes convection over the Bay of Bengal, northern India and the South China Sea (e.g., Tzella and Legras, 2011;Wright et al., 2011;Bergman et al., 2012).At higher levels monsoon transport is dominated by a strong anticyclonic circulation (Randel and Park, 2006) with confinement and slow uplift of air in the upper troposphere and lower stratosphere (UTLS; e.g., Park et al., 2009).Related to this transport are increased mixing ratios of trace gases with tropospheric sources and decreased mixing ratios of trace gases with stratospheric sources (e.g., Park et al., 2008).The detailed upward transport from the convective outflow to higher levels involves a vertical conduit over the southern Tibetan Plateau (Bergman et al., 2013).In addition, convective uplift by typhoons has been shown to inject air masses into the outer region of the anticyclonic circulation (Vogel et al., 2014).The interplay of these processes results in fast upward transport into the lower stratosphere and an enhanced fraction of young air in the monsoon UTLS region (Ploeger and Birner, 2016).Convection over land causes particularly fast upward transport (Tissier and Legras, 2016). Based on global satellite observations of hydrogen cyanide (HCN), Randel et al. 
(2010) argued that upward transport from the Asian monsoon reaches deep into the tropical stratosphere.Water vapor observations and simulations, on the other hand, show transport from the monsoon anticyclone mainly into the extratropical lower stratosphere (e.g., Dethof et al., 1999).As stratospheric water vapor is strongly controlled by cold temperatures around the tropopause these results are not necessarily contrary.However, recently even tracer-independent model diagnostics have yielded inconclu-Published by Copernicus Publications on behalf of the European Geosciences Union.sive results.On the one hand, the back trajectory study of Garny and Randel (2016) shows strongest transport from the anticyclone directly into the tropical stratosphere.On the other hand, climate model simulations by Orbe et al. (2015) show the tropopause crossing of air masses from the anticyclone largely in the extratropics and subsequent transport into the extratropical lower stratosphere. Here, we use tracer-independent model diagnostics (i.e., independent of species' chemistry and emissions) in combination with satellite observations of the tropospheric tracers hydrogen cyanide (HCN) and carbon monoxide (CO) to investigate the pathways of pollution from the Asian monsoon anticyclone to the lower stratosphere, and quantify the related amount of air originating in the monsoon anticyclone.In Sect.3, first we demonstrate how transport from the anticyclone can be divided into two main pathways directing into (i) the tropical pipe and (ii) the Northern Hemisphere (NH) extratropical lowermost stratosphere, over the course of a year following the monsoon season.Second, we discuss the detailed transport across the tropopause in the monsoon.Finally, in Sect. 4 we argue that regarding air mass transport into the stratosphere, the Asian monsoon acts as a vertical "chimney" with strong horizontal transport on top (above the tropopause). Method We quantify air mass transport from the Asian monsoon anticyclone using simulations with the Lagrangian chemistry transport model CLaMS (McKenna et al., 2002;Konopka et al., 2004;Pommrich et al., 2014).CLaMS uses an isentropic vertical coordinate throughout the UTLS, and the model transport is driven with horizontal winds and total diabatic heating rates from European Centre of Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis (Dee et al., 2011).The horizontal resolution of the model simulation is about 100 km and the vertical resolution about 400 m around the tropical tropopause (see Pommrich et al., 2014, for details).We included an air mass origin tracer in the model to diagnose the fraction of air at any location in the stratosphere which has left the Asian monsoon anticyclone during the previous monsoon season (see below).In addition we consider carbon monoxide (CO), with the CO lower boundary in CLaMS derived from Atmospheric Infrared Sounder (AIRS) version 6 satellite measurements following the method described in Pommrich et al. (2014), with relevant chemistry for the UTLS region included (Pommrich et al., 2014). It has recently been shown that trace gas confinement within the monsoon anticyclone core can be best described by potential vorticity (PV) contours (Garny and Randel, 2013), and that the anticyclone core can be clearly distinguished from the surrounding atmosphere in a layer around 380 K potential temperature (Ploeger et al., 2015;Unger-mann et al., 2016).We therefore apply the method of Ploeger et al. 
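The air mass origin tracer described here is conceptually simple bookkeeping on the Lagrangian parcels. The sketch below illustrates the idea in Python; the parcel data structure, the PV-based membership test and the regional restriction are placeholders for illustration, not CLaMS code.

# Illustrative bookkeeping for the anticyclone air-mass-origin tracer (not CLaMS code).
from dataclasses import dataclass

@dataclass
class Parcel:
    theta: float            # potential temperature (K)
    pv: float               # potential vorticity (PVU) at the parcel position
    in_monsoon_box: bool    # inside the regional domain used for the border search
    tracer: float = 0.0     # fraction of air that originated in the anticyclone core

def update_origin_tracer(parcels, date, pv_border):
    """Reset on 1 July, then (re)initialize inside the anticyclone core during Jul-Aug."""
    if (date.month, date.day) == (7, 1):
        for p in parcels:
            p.tracer = 0.0
    if date.month in (7, 8) and date in pv_border:
        for p in parcels:
            if p.in_monsoon_box and 370.0 <= p.theta <= 380.0 and p.pv <= pv_border[date]:
                p.tracer = 1.0   # parcel lies in the anticyclone core layer on this day
    # Outside the initialization window the tracer is simply advected (and mixed) with
    # the parcels, so its mixing ratio later equals the anticyclone air mass fraction.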
(2015) to determine the PV value related to the anticyclone border from the maximum PV gradient on every day during (boreal) summers 2010-2013 at the 370 and 380 K potential temperature surfaces (see Appendix for further details, and the Supplement for the data). The anticyclone tracer is initialized with unity inside the PV contour enclosing the anticyclone core in the 370-380 K layer, around 16-17 km altitude, on each day during July-August of the years 2010-2013 and is advected as an inert tracer during the following year.On 1 July of the year thereafter, the tracer is set to zero everywhere and is then reinitialized for the following monsoon season.By definition, the tracer mixing ratio at any location in the stratosphere equals the fraction of air which has left the monsoon anticyclone during the previous monsoon season (see Orbe et al., 2013).Initializing the air mass origin tracer in the UTLS part of the Asian monsoon avoids our results being affected by smallscale transport processes in the troposphere (e.g., convection), whose representation in global reanalysis data is uncertain (e.g., Russo et al., 2011).This choice of method is suitable to study the transport of air from the anticyclone, irrespective of where it originated at the surface.The impact of different boundary layer source regions on the Asian monsoon UTLS is an important research topic itself (e.g., Vogel et al., 2015;Tissier and Legras, 2016).The monsoon tropopause is mainly located above 380 K (see Appendix and Fig. 7) such that the tracer is to a good degree initialized in the troposphere and can be used to study transport from the tropopause region into the stratosphere (see Sect. 3). The anticyclone air mass tracer is compared to global HCN measurements from the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) satellite instrument (Bernath et al., 2005).These data have been presented and discussed by Randel et al. (2010) and shown to be a valid tracer for Asian monsoon pollution.For the results of this paper we use HCN from the updated ACE-FTS level 2 data version 3.5 (Boone et al., 2005(Boone et al., , 2013) ) during the period between 1 July 2010 and 30 June 2014, which is in good agreement with the results shown by Randel et al. (2010).Physically unrealistic outliers in the ACE-FTS data have been filtered out following Sheese et al. (2015), discarding data with a quality flag greater than 3. Furthermore, we use CO observations from the Microwave Limb Sounder (MLS) on board the Aura satellite (Pumphrey et al., 2007;Livesey et al., 2008) for validating Asian monsoon transport in the model simulation.While the vertical resolution for HCN from ACE-FTS (3-4 km) is almost twice as good as for HCN from MLS (about 6 km), MLS has a much higher sampling rate (about 3500 profiles per day) compared to ACE-FTS (maximum 32 occultations per day).Hence, for the considerations of climatological zonal mean HCN it is advantageous to use HCN from ACE-FTS (see Sect. 3), whereas for maps of CO within the monsoon region the higher sampling density of MLS is beneficial.(For further comparison of the two instruments see, e.g., Pumphrey et al., 2007.) 3 Results Figure 1 presents the anticyclone air mass fraction and compares with HCN satellite observations from ACE-FTS.During July-September (Fig. 1a) the anticyclone air is transported into the lower stratosphere mainly in the subtropics (between 20 and • N).During fall (October-December, Fig. 
1b), the anticyclone air disperses throughout the NH lower stratosphere, even reaching the tropics and Southern Hemisphere (SH).Strong wintertime tropical upwelling related to the stratospheric Brewer-Dobson circulation lifts the anticyclone air in the tropics during the following winter (Fig. 1c).Related downwelling in the extratropics flushes the anticyclone air out of the NH lower stratosphere.During spring (Fig. 1d), the anticyclone air in the tropical pipe rises further while the extratropical lower stratosphere is cleaned.Hence, two main pathways emerge for air from the Asian monsoon anticyclone into the stratosphere.First, a fast transport pathway is directed into the NH extratropical stratosphere (extratropical pathway).Second, a slower pathway is directed into the tropical stratosphere and deep into the stratosphere related to ascent within the tropical pipe (tropical pathway).Contours of ACE-FTS-measured HCN show that the simulated anticyclone air mass fraction correlates well with satellite-observed pollution (for a discussion of these data as a tracer for pollution from the Asian monsoon see Randel et al., 2010).In analogy to the model tracer, observed HCN peaks in the subtropical and extratropical lower stratosphere during and directly following the monsoon season (Fig. 1a, b).During winter and spring (Fig. 1c, d), both enhanced HCN mixing ratios and anticyclone air mass fractions rise in the tropical pipe and are flushed out of the NH lower stratosphere.The good correlation between the maxima of HCN and anticyclone air mass fraction in the tropical pipe during April-June (Fig. 1d) renders an origin of enhanced HCN mixing ratios in the Asian monsoon very likely, as proposed by Randel et al. (2010).The fact that the ascending tropical HCN signal slightly lags the model tracer signal (Fig. 1d) is consistent with the overestimated tropical upwelling in ERA-Interim (e.g., Dee et al., 2011).During July-September (Fig. 1a) no agreement between the anticyclone air mass tracer and HCN mixing ratios in the tropical pipe can be expected due to the reset of the anticyclone air mass tracer to zero on 1 July (see Sect. 2).Similarly, the poor agreement between the anticyclone air mass tracer and HCN in the NH lower stratosphere during April-June (Fig. 1d) is to be expected, because the enhanced HCN mixing ratios around the tropopause are related to young air masses while the anticyclone tracer originates in the previous monsoon season almost 1 year ago. HCN exhibits enhanced concentrations also in the SH subtropics during austral spring to summer (Fig. 1b, c PAS; Glatthor et al., 2015).Hence, a contribution from the SH to stratospheric HCN cannot be ruled out.Furthermore, the irregular tape-recorder signal in the deseasonalized anomaly of tropical HCN during 2005-2008 (Pumphrey et al., 2007) has been linked to irregularly occurring biomass burning in Indonesia (Pommrich et al., 2010).Compared to these studies, the focus here is on the annually repeating seasonal signal discussed by Randel et al. (2010).The qualitative agreement between the transport pathways of HCN and air mass from the monsoon indicates that transport from the Asian monsoon anticyclone has the potential to significantly contribute to the annual signal in HCN concentrations in the stratosphere.In the following, we focus on air mass transport from the Asian monsoon, which clearly reaches the tropical pipe (Fig. 
1) and therefore may cause substantial pollution transport deep into the stratosphere.The time series of air mass fractions in Fig. 2 show that the amount of anticyclone air peaks in the NH extratropical lowermost stratosphere in October, reaching around 15 % at 380 K.In the tropics at 460 K (above the layer of frequent exchange between tropics and midlatitudes; see Rosenlof et al., 1997) the amount of air which originated in the anticyclone peaks in December, reaching around 5 %.This later timing of the peak in the tropics compared to the extratropics is related to the higher potential temperature level (460 vs. 380 K) and slow tropical upwelling.At lower levels (here 400 K, red dashed) the tropical anticyclone air mass fraction peaks earlier, around October.The anticyclone air fraction in the extratropical stratosphere peaks with a value that is more than twice as high compared to the tropical anticyclone air fraction.However, the anticyclone air transported to the tropics remains much longer in the stratosphere and exceeds the extratropical amount after about half a year (at levels higher than 460 K the anticyclone air fraction peaks after January with peak values above the extratropical anticyclone air fraction; see Fig. 2).The large standard deviation (from the zonal averaging) around the extratropical zonal mean value (grey shading in Fig. 2) indicates strong variability in the extratropical lowermost stratosphere tracer distribution, related to the frequent occurrence of smaller-scale structures in the midlatitude tracer distribution due to various processes (e.g., Rossby-wave breaking).At the lower end of the tropical pipe (460 K), the tracer distribution is more homogeneous as reflected in a smaller standard deviation. To further understand the details of transport from the monsoon anticyclone into the stratosphere we investigate the direction of tropopause crossing.Recently, a question was raised regarding whether the air confined within the monsoon anticyclone crosses the tropopause vertically or horizontally or, in other words, whether the monsoon acts mainly as a vertical "chimney" or as an isentropic "blower" for crosstropopause transport (Pan et al., 2016).The good agreement of carbon monoxide distributions in the monsoon region at 380 K between the CLaMS simulation and MLS satellite observations shows that the model reliably simulates transport in the monsoon anticyclone (Fig. 3a, b upper panels).Note that the figure shows the deviation of CO from the zonal mean to emphasize the anomalous character of monsoon transport.In particular, the positive CO anomaly in the monsoon agrees well between model and observations, and even the weak positive anomalies to the northwest and northeast of the monsoon indicate regions of frequent eddy shedding (Hsu and Plumb, 2001;Popovic and Plumb, 2001). In order to clearly separate tropospheric and stratospheric air we transform the data to a tropopause-based vertical coordinate, chosen as the distance to the local tropopause in potential temperature before calculating all averages (for using this method in a different context, see, e.g., Birner et al., 2002;Hoor et al., 2004).The distributions in the monsoon region change substantially when viewed in tropopausebased coordinates along a surface at 10 K above the local tropopause (Fig. 
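Averaging in tropopause-based coordinates simply means replacing the potential temperature of each sample by its distance to the local tropopause before compositing. A minimal Python/NumPy sketch of that step is given below; the array shapes and variable names are assumptions for illustration.

import numpy as np

def to_tropopause_based(field, theta_levels, trop_theta, rel_levels):
    """Interpolate field(theta, lat, lon) onto surfaces of fixed distance (in K)
    above the local WMO tropopause theta_trop(lat, lon) before any averaging."""
    nlev, nlat, nlon = field.shape
    out = np.full((len(rel_levels), nlat, nlon), np.nan)
    for j in range(nlat):
        for i in range(nlon):
            rel = theta_levels - trop_theta[j, i]     # distance to the local tropopause
            out[:, j, i] = np.interp(rel_levels, rel, field[:, j, i])
    return out

# Example: composite a tracer on the surface 10 K above the local tropopause.
# tracer_tb = to_tropopause_based(tracer, theta_levels, trop_theta, np.array([10.0]))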
3a, b lower panels).The positive CO anomaly significantly weakens, as an effect of the averaging procedure following the tropopause, indicating that a considerable part of the trace gas anomaly in the monsoon is related to the upward-bulging tropopause in the monsoon region.However, the fact that parts of the anomaly remain indicates upward transport across the tropopause above the monsoon.Also, for the tropopause-based map, CO distributions from CLaMS and MLS observations agree reliably well in the monsoon region.Significant differences between CLaMS and MLS exist only at midlatitudes (already observed by Pommrich et al., 2014) and above the west Pacific and Maritime continent. Figure 3c shows analogous maps as for CO for the anticyclone air mass fraction.Again, tropopause-based averaging weakens the positive monsoon anomaly.However, a clear maximum remains centered in the monsoon region above the tropopause.This indicates that cross-tropopause transport into the stratosphere in the monsoon occurs to a large degree in the vertical direction.Vertical transport, diagnosed from the ERA-Interim total diabatic heating rate, is consistent with this finding showing maximum upward velocity in the anticyclone (grey contours in Fig. 3).The stronger degradation of the monsoon anomaly for CO as compared to the inert air mass tracer is related to the finite (∼ 4 months) lifetime of CO.As a consequence, CO mixing ratios degrade rapidly at levels around the tropopause, where vertical transport is slow.At levels about 30 K above the local tropopause the positive CO anomaly above the monsoon anticyclone almost vanishes, whereas the inert model tracer still shows clearly enhanced values (not shown). An unambiguous picture of air mass transport across the tropopause can only be deduced from the inert air mass origin tracer in the model.Figure 4 shows the anticyclone air mass fraction averaged over the zonal section of the Asian monsoon (40-100 • E) and over periods of about a week (with all averages carried out in tropopause-based coordinates).Directly after the main monsoon season at the end of August (Fig. 4a) the largest amount of anticyclone air is located in the subtropics between 20 and 40 • N around and above the tropopause.One month later, this air has been further transported upwards and resides clearly above the tropopause (Fig. 4b).Hence, cross-tropopause transport of anticyclone air occurs mainly vertically across the subtropical tropopause, like in a chimney (using the terminology of Pan et al., 2016).Above the tropopause, however, in a layer between about 380 and 430 K the air from the anticyclone is strongly affected by horizontal transport processes and is largely mixed into the NH extratropics and into the tropics (Fig. 4b-d).Strong horizontal transport above about 380 K in NH summer and fall is likely related to enhanced subtropical Rossby-wave breaking during this season (see Homeyer and Bowman, 2012).Fastest uplift in the subtropics is consistent with largest upward velocity in that region (black contours in Fig. 4a).Note that ERA-Interim cross-isentropic vertical velocities in August show even downwelling equatorwards of about 10 • N in the 380-410 K layer. 
Discussion There has been a recent scientific debate on if and how the air masses from the Asian monsoon anticyclone reach the lower stratosphere.Garny and Randel (2016) concluded from 60-day backward trajectory ensembles that the preferred pathway of air masses is to travel from within the upper-tropospheric anticyclone region to the tropical lower stratosphere, but they did not further investigate where (relative to the tropopause) horizontal mixing from the monsoon region to low and high latitudes occurs.Orbe et al. (2015) analyzed air mass origin tracers in a climate model.They found that Asian surface air is transported upwards in the monsoon, reaches the extratropical tropopause within a few days, and is first transported quasi-horizontally into the extratropical lower stratosphere before eventually being transported subsequently into the tropics.A very recent study by Pan et (2016) also shows mainly quasi-horizontal isentropic transport out of the monsoon anticyclone into the lower stratosphere. Here, we focus on transport from the anticyclone deep into the stratosphere.Using a PV-gradient-based definition of the anticyclone edge, we trace the anticyclone air over an entire year following the monsoon season.Our analysis shows that the air from the anticyclone crosses the subtropical tropopause vertically (here cross-isentropical) and is subsequently transported horizontally (along isentropes) in the .4, 1.401, 1.402, 1.403, 1.404, 1.6, 1.8, 1.9, 2, 2.2, 2.6, 3.5, 6, 9, 13 %).Black contours show zonal wind (±15, 25 m s −1 solid/dashed), blue contours diabatic heating rates (from 1 K day −1 increasing in 0.2 K day −1 steps), thin black geopotential height, and thick grey line the (WMO) tropopause, all from ERA-Interim for June-August and zonally averaged over the monsoon region (40-100 • E). stratosphere to both the tropics and to NH extratropics, as illustrated in Fig. 5.The vertical nature of cross-tropopause transport is consistent with the findings of Garny and Randel (2016), but with the addition that above the tropopause a substantial amount of anticyclone air is mixed into the NH extratropics.This strong horizontal transport is, on the other hand, consistent with Orbe et al. (2015) and Pan et al. (2016), but with the difference that horizontal transport (either isentropic advection or mixing) in our case occurs mainly above the tropopause.It is important to note that we defined vertical and horizontal transport with respect to potential temperature as the vertical coordinate.Therefore, horizontal transport can be directly interpreted as isentropical mixing. Hence, in summary we refine the findings of Orbe et al. (2015), Garny and Randel (2016) and Pan et al. (2016) by describing transport from the Asian monsoon anticyclone into the stratosphere as a "blowing chimney", using the terminology of Pan et al. (2016).This characterization emphasizes the vertical "chimney-like" nature of cross-tropopause transport (with respect to potential temperature as vertical coordinate), but with the pollutants transported away quasi-horizontally along isentropes above the tropopause (see Fig. 5).This quasi-horizontal transport pathway from the monsoon into the UTLS is supported by recent in situ measurements (Mueller et al., 2016).At lower levels below the tropical tropopause (about 380 K) horizontal transport from the anticyclone core to the NH extratropics is very weak due to strong gradients in PV, in agreement with the findings of Garny and Randel (2016). 
So far, our conclusions concern air masses from the anticyclone core.To investigate differences in transport from the anticyclone edge, we initialized an anticyclone edge tracer in CLaMS (between PV contours of the anticyclone border PV * and PV * + 2 PVU; see Appendix), whose mixing ratio by definition yields the fraction of air originating from the anticyclone edge during the last monsoon season.Figure 4e and f show the air mass fraction from the anticyclone edge at the end of August and at the end of November.Comparison to the air mass fraction from the anticyclone core shows that directly after the monsoon season (Fig. 4a, e) air from the anticyclone edge is transported faster in the horizontal direction into the tropics and into NH extratropics.This is a consequence of air masses in the anticyclone edge region being less well confined as compared to air masses in the anticyclone core.After a few months, however, the two distributions of anticyclone edge and core air align (Fig. 4d, f), showing that, in the long term, air masses ascending in the anticyclone core and air masses injected into the anticyclone edge (e.g., by typhoons; see Vogel et al., 2014) follow the same transport pathways.The higher fraction of air from the anticyclone edge compared to the core is likely a result of the larger area of the edge region.Note that air masses in the anticyclone edge may have originated in the anticyclone core at lower levels, as suggested by the vertical transport conduit pathway proposed by Bergman et al. (2013). Conclusions The anticyclone air fraction of 5 % in the tropical pipe appears small if compared to the 15 % fraction in the NH extratropical lowermost stratosphere (see Fig. 2).However, as tropical air ascends deep into the stratosphere with the rising branch of the Brewer-Dobson circulation while extratropical air is flushed out of the stratosphere within a few months, the impact of this tropical anticyclone air on stratospheric chemistry and climate may be substantial.Our model simulation shows that the tropical anticyclone air correlates well with the annual cycle in satellite observed HCN over the course of a year.Hence, the Asian monsoon likely causes pollution transport deep into the stratosphere and contributes to the stratospheric aerosol loading.Therefore, changes in these two pathways of pollution from the Asian monsoon anticyclone into the stratosphere likely affect chemistry and radiation and may be important for causing feedback effects in a changing climate. Appendix A: Asian monsoon anticyclone border from PV gradient To separate the Asian monsoon anticyclone core region from its surroundings we follow the method of Ploeger et al. (2015).This method is based on the existence of an enhanced PV gradient indicating a transport barrier between the core and the surrounding region, similar to but weaker than the polar vortex edge (see, e.g., Nash et al., 1996).The anticyclone core is defined as the region enclosed by the PV contour PV * corresponding to the maximum gradient of PV with respect to a monsoon-centered equivalent latitude (Ploeger et al., 2015).Note that the PV field has to be smoothed by averaging over a time window around the given date before the calculation for a clear gradient maximum to emerge, due to strong dynamic variability of the monsoon circulation.The situation for 6 July 2011 at 380 K is illustrated in Fig. 
6a, showing the time-averaged PV field (averaged over 5-7 July 2011), with the anticyclone core (region of lowest PV) enclosed by the deduced transport barrier (thick black line). The calculation yields a well-defined PV value for most days of the summers 2010-2013 (red symbols in Fig. 6b, c).Missing data in the time series of the anticyclone border of each summer have been filled in by linear interpolation (black symbols) in time from the neighboring values.At days before the first day when the PV gradient criterion holds (at beginning of July) and after the last day when the criterion holds (at end of August), no anticyclone border PV value has been estimated (no extrapolation), and the time series ends.This procedure results in a smooth PV time series of the anticyclone border during July-August (Fig. 6b, c).The model tracer has been initialized with unity within the anticyclone core in the 370-380 K layer during July-August.Note that we used the time-averaged PV field for the initialization criterion.The anticyclone border PV value PV * calculated from ERA-Interim for the summers of 2010-2013 is available from the Supplement.The tracer mixing ratio, by definition, yields the mass fraction of air from the anticyclone core region during the previous monsoon season (see Sect. 2, and e.g., Orbe et al., 2013).In analogy, the anticyclone edge tracer is initialized with unity between PV contours of the anticyclone border PV * and PV * + 2 PVU (see Fig. 6a), providing the mass fraction of air from the anticyclone edge region during the previous monsoon season. The use of the anticyclone tracer for studying the details of cross-tropopause transport appears questionable at first, as the tropopause in the monsoon may be located below 380 K at specific locations.Figure 7 presents the occurrence of tropopause potential temperatures in the Asian monsoon anticyclone.The figure shows that the tropopause in the monsoon anticyclone (defined inside the PV gradient barrier) occurs between potential temperatures of about 360 and 420 K, with a peak around 380 K.The frequency of tropopause occurrence above 380 K (58 %) is significantly larger than below (32 %; see cumulative PDF in Fig. 7) or even below 370 K (8 %).Hence, between 8 and 32 % of the tracer is initialized above the tropopause.However, as the tropopause in the monsoon occurs only very rarely below 370 K (Fig. 7), the initialization for these cases is also very close to the tropopause.Hence, initializing the anticyclone tracer between 370 and 380 K is mainly characterizing tropospheric air masses and the model tracer can well be used for studying cross-tropopause transport. Figure 1 . Figure 1.Seasonal evolution of climatological (2010-2013) zonal mean monsoon air mass fraction from CLaMS (color-coded) and HCN from ACE-FTS observations (black contours) during July-September (a), October-December (b), January-March (c), and April-June (d).Regions with HCN values above 215 pptv are hatched.The thick black line shows the (WMO) tropopause, and thin black lines show altitude levels (2 km spacing). Figure 3 . Figure 3. 
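The PV-gradient criterion for the anticyclone border and the gap-filling of missing days can be sketched as follows (a schematic Python version; the equivalent-latitude binning, the time-averaging window and all array details are assumptions, not the actual analysis code).

import numpy as np

def anticyclone_border_pv(eq_lat, pv_of_eq_lat):
    """PV* = PV value at the maximum gradient of (time-averaged, smoothed) PV with
    respect to a monsoon-centred equivalent latitude (schematic)."""
    grad = np.gradient(pv_of_eq_lat, eq_lat)
    return pv_of_eq_lat[np.argmax(grad)]

def fill_missing_days(day_numbers, pv_star):
    """Linearly interpolate the border PV on days where no clear maximum was found (NaN);
    days before the first / after the last valid value should be discarded separately."""
    day_numbers = np.asarray(day_numbers, dtype=float)
    pv_star = np.asarray(pv_star, dtype=float)
    valid = ~np.isnan(pv_star)
    filled = pv_star.copy()
    filled[~valid] = np.interp(day_numbers[~valid], day_numbers[valid], pv_star[valid])
    return filled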
Maps of (a) carbon monoxide from CLaMS simulation, (b) CO from MLS satellite observations, and (c) monsoon air mass fraction from CLaMS, all for July-September.Top panels show maps at 380 K potential temperature, bottom panels show maps along a surface at 10 K above the local (WMO) tropopause.For CO the deviation from the zonal mean is shown in percent ( CO).Black contours show the potential temperature of the local WMO tropopause, and grey contours cross-isentropic (diabatic) vertical velocity dθ/dt (solid: 1, 1.3 K day −1 ; dashed: 0 K day −1 ).Note that dθ/dt is shown at 380 K. CO climatologies were calculated for the period 2004-2016, and air mass fraction climatologies for 2010-2013. Figure 4 . Figure 4. Latitude section of monsoon air mass fraction averaged over longitudes between and 100 • E for the (climatological 2010-2013) periods 25-30 August (a), September (b), October (c), and November (d).The averaging has been carried out in tropopause-based vertical coordinates, and the data have afterwards been adjusted vertically for plotting by adding the mean tropopause potential temperature (grey line).(e, f) Same as (a, d) but for the monsoon edge fraction, calculated from the monsoon edge tracer (see text).Thin black contours show total diabatic vertical velocity dθ/dt (positive values solid, negative values dashed, contour spacing 0.2 K day −1 ), the thick black line shows the mean tropopause.All quantities are averaged between 40 and 100 • E. Figure 6 . Figure 6.(a) Map of time-averaged PV field at 380 K on 6 July 2011, calculated as the average over the PV distribution for 5-7 July 2011.The thick black contour shows the calculated PV-gradient-based anticyclone border PV * (4 PVU for that date), and the thin black contour shows PV * +2 PVU.Thin white contours show selected Montgomery stream function values.Filled circles show CLaMS air parcels between 379 and 380 K, with parcels inside the anticyclone core colored red and those at the anticyclone edge colored green.The black rectangle indicates the regional restriction of the calculation (see text).(b) Time series of PV-gradient-based anticyclone border PV value at 370 K, with the calculated barrier as red circles and interpolated barrier (at days where the calculation did not work, interpolated from existing neighbor values) as black crosses.(c) Same as (b) but for 380 K. Figure 7 . Figure 7. Tropopause potential temperature frequency of occurrence inside the monsoon anticyclone, calculated as frequency distribution of (WMO) tropopause potential temperature at grid points inside the anticyclone (identified from PV based boundary definition; see Appendix) from all days during July-August 2010-2013.The black dashed line shows the cumulative PDF (scaled by 0.1), the integrated fraction of tropopause occurrence below a certain level.Grey dashed lines highlight the 370-380 K layer where the anticyclone tracer was initialized. ), consistent with independent satellite observations from the Michelson Interferometer for Passive Atmospheric Sounding (MI-F.Ploeger et al.: Asian monsoon transport into the stratosphere
7,085.4
2017-04-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Modeling and verifying clustering properties in a vehicular ad hoc network protocol with Event-B Vehicular ad hoc network (VANET) routing protocols resort to clustering in order to optimize broadcast traffic flooding. Clustering schemes usually rely on rules which apply to each vehicle in order to reach a targeted organization in a VANET. Most of the literature works which evaluate clustering for VANET focus on performance analysis. However, with autonomous vehicles coming to roadways, more rigorous relationships will be required between clustering rules and the resulting organization, so as to anticipate road safety in a better way. We propose a formal description of the properties which are expected in a VANET, while considering the rules of a given clustering scheme. Using Event-B, we first present a description of the VANET, the vehicles movement and the traffic generated by both routing and application messages. Then, based on an Event-B model of a basic routing protocol of the literature, we describe how the specific rules of a clustering scheme can be modeled along with the properties expected in the resulting organization. Finally, we propose a validation process of the model. This paper aims at showing how our proposals have been applied to the Chain-Branch-Leaf scheme, although they can be adapted to any rule-based clustering scheme for VANET. Results This section presents the main result of this work, which is a formal model of the Chain-Branch-Leaf clustering scheme allowing verifying its properties in a VANET. As for every clustering schemes, several rules are taken into account by the vehicles involved in the construction of both the clusters and the overall backbone. For example, the uniqueness of the branch node choice for each leaf node is a fundamental property of the Chain-Branch-Leaf organization 24 . However, without a formal description of the CBL scheme, such a requirement can only be envisioned, but not formally expressed nor verified. The assessment of the CBL clustering scheme implies two steps, which are requirement validation for verifying that the protocol specification fulfills the functional needs envisaged by the designer 25 , and consistency checking for ensuring that the clustering scheme does not introduce contradictions in the relations linking the nodes. Event-based modeling is particularly suitable for protocol engineering. Event-B is precisely an event-based formal method which has shown its capacity to master the system design complexity through successive refinements 15,26 . The stepwise refinement produces a correctby-construction model by formally proving the different properties introduced up to each step 26 . Regarding CBL, Event-B provides the tools necessary to perform an incremental verification by checking the properties and constraints defined at each execution step. The different execution steps are characterized by the introduced events. In order to guarantee the invariants preservation by these events, Event-B defines the concept of proof obligation. Therefore, the approach proposed in this paper should allow checking and proving the correctness of the CBL protocol as well as its requirements and properties. Then, once the CBL protocol has been verified, the proposed model guarantees that its execution does not face failure or inconsistency. Formal description of CBL organization in a VANET. In this section, we first describe CBL functioning along with the related formal definitions. 
CBL is a distributed algorithm performed by each VANET node 27 , considering that the latter may not have a global knowledge of the ad hoc network. Thus, a node can only communicate directly only with its one-hop neighbors which are used as routers in order to reach the rest of the network. A complete description of CBL functioning is presented in Ref. 24 . Figure 1 shows the different elements of the CBL structure in a VANET, which can be summarized as follows: • Road configuration in any road configuration, CBL builds one backbone in each traffic direction. In our example, a 3-lane 2-way highway, CBL builds two separate backbones (Fig. 1). • A branch node is a cluster-head node elected by other nodes (branch or leaf). It is the only one allowed to retransmit the broadcast traffic to the entire network, through its downstream branch, its upstream branch or both. • A leaf node is an ordinary node which attaches itself to the closest branch node. If no branch node is detected, the leaf nodes perform a branch choice process in order to elect one of them. www.nature.com/scientificreports/ similar to that which should be obtained with a fixed infrastructure. It provides branch nodes with a path for more than one-hop communications. In this work, the term Nodes refers to the set of all the nodes in the network. Without loss of generality, we will focus in this paper on two types of packets which are sufficient to describe CBL functioning. The hello packets are used by several VANET protocols for neighborhood discovery. The other type of packet refers to the applications traffic. The term Hello refers to the set of all the hello packets sent during the VANET scenario. We formally define VANET links using the following functions: An element n1 → n2 of the set links ( n1 → n2 ∈ links ) expresses that the node n1 has received a hello packet sent by the node n2. We formally define the local neighbors of nodes using the following function: For a given node n, the following property must be fulfilled: One of the main data required by CBL is node position. We assume that each node is aware of its position through a global positioning system such as GPS or Galileo. However, we avoid any terrestrial localization infrastructure since ad hoc networks should not rely on any infrastructure. The node's position is a two-dimensional vector, that the node transmits to its neighbors through hello packets. We formally model the nodes positions data using the following function: Let us consider a node n, the set positionTable(n) includes the position of n and those of its local neighbors. A node can be positioned in the downstream or the upstream side of any of its neighbors. We formally define the downstream/upstream neighbors of nodes using the following functions: To determine whether a node is upstream or downstream from another, we use the following operator: Let n1 and n2 be two neighboring nodes, and let p1 and p2 be their respective positions. We state the following rules (1) links ∈ P(Nodes × Nodes), (3) neighbors ∈ Nodes → P(Nodes). When a node does not receive any hello message from a neighbor within a specific period of time, this neighbor is considered to be unavailable and it will be deleted from the neighbors table. This time interval called neighbor expiry time is defined as: In the CBL clustering scheme, a node can be either a leaf or a branch node. 
We formally define the node types as follows: A branch node n ( hasType(n)(n) = 1 ) is a cluster-head node elected by the other nodes in its one-hop neighborhood. It emits hello messages like every node. A leaf node n ( hasType(n)(n) = 0 ) is an ordinary node which has to connect to the closest branch node. A CBL chain is a sequence of branch nodes. We formally define this chain of branch nodes as follows: These two functions are semantically opposed. Hence, chainUP (resp. chainDO) function defines a local upstream (resp. downstream) chain of branch nodes. If a branch b2 is an upstream node of another branch b1, then b1 is a downstream node of b2. In a CBL organization, each leaf node shall elect its associated branch. We formally define this election as follows: For a given branch node n, branchChoice(n), chainUP(n) (resp. chainDO) refer to the branch choice (if the electing node is a leaf node) or the upstream (resp. downstream) branch nodes (if the electing node is a branch node) included in the neighbors table of n. A hello packet can contain different data about its sender, namely the node's position, the node's type, and according to the latter, the elected branch up/down, or the branch choice. To formally define all these data, we use the following functions: For a given hello packet h sent by a node n, helloPosition(h), helloBrChoice(h), helloChainUP(h), helloChainDO(h) and helloSrType(h) represent respectively the sets of the position, branch choice, elected branch up, elected branch down and the type of n and its neighbors. All these data are very useful in the construction of the CBL scheme. The connection time is the expected communication duration of two nodes according to their movement. More formally, we define the following function: CBL properties and rules. Now that CBL organization has been clarified, it is possible to express the expected properties, and the rules established to that end. The main requirements and resulting properties expected from CBL organization in a VANET are: • REQ 1 If a node does not have any local neighbor, it must be a leaf. • REQ 2 The branch node choice is made according to leaf nodes only. • REQ 3 The self-branch election is not possible. • REQ 4 Each branch shall have at most one downstream branch neighbor. • REQ 5 Each branch shall have at most one upstream branch neighbor. • REQ 6 Each branch shall be elected by at least another node (branch or leaf). • REQ 7 A node having a downstream branch shall be of type branch. • REQ 8 The electing node of an upstream branch shall be a branch-type one. • REQ 9 If a node n1 is a downstream branch of a node n2, then n2 is an upstream branch of n1. • REQ 10 An upstream branch of a node shall be one of its upstream neighbors. • REQ 12 One of the neighbors of each leaf node shall become a branch. Different constraints shall be satisfied while electing branch nodes, such as the following node election rules (NER): • NER1 The self-node election is not possible (i.e., the elected node and the electing one must be different). • NER2 The elected node must be a neighbor of its electing node. • NER3 The elected node must be an upstream neighbor of its electing branch node. • NER4 A branch node must have at most one upstream neighbor branch. • NER5 When the electing node is a leaf node which does not have any branch node neighbor, it must elect the leaf node neighbor having the maximum connection time as its branch choice. 
• NER6 When the electing node is a branch node which does not have any upstream branch node neighbor, it must elect the upstream neighbor node having the maximum connection time as its upstream branch node. All nodes are initially leaf nodes. Some of the nodes can be turned into branch nodes, while others must be kept as leaf nodes. In addition, a branch can be turned into a leaf. The type changing rules (TCR) are the following: • TCR1 If a leaf node is elected by another node, it must be turned into a branch. • TCR2 If the electing node is a branch, then it shall be added to the chain as a downstream branch of the elected one. • TCR3 The branch node which overtakes its downstream electing branch shall be turned into a leaf. A correct-by-construction model of CBL (CCM4CBL). In this section, is presented the proposed Event-B model of the CBL clustering scheme, which implements the aforementioned properties and rules (see "CBL properties and rules" section). Figure 2 illustrates the architecture of the resulting model. It shows three abstraction levels which will be detailed in the following subsections. The two first levels can be used for modeling any other VANET routing protocol. Level 1: a basic routing protocol model (CBL_c0, CBL_m0). This model inspired from Ref. 18 is an Event-B formal model of a basic routing protocol. It includes the definitions of the set of nodes, that of the links, and also the events related to packet status and operations (Fig. 3). It can be extended and refined in order to model any other routing protocol. Although we rewrote the model (Fig. 4), a similar one is available in the literature 23 . This model has been implemented through the contex CBL_c0 and the machine CBL_m0 illustrated in Fig. 2. CBL_m1). This second abstraction level of the CCM4CBL model formally defines the concepts related to VANET node dynamics and communications, notably neighborhood management, nodes positioning in the road traffic, and packet broadcasting. It consists in the CBL_m1 machine and the CBL_c1 context (see Fig. 5). In addition to the definitions in "Formal description of CBL organization in a VANET" section, the CBL_c1 context extends the initial CBL_c0 context by introducing the finite set of all the hello packets ( Hello ⊂ Packets ). Given the CBL_c1 context, the CBL_m1 machine refines CBL_m0 by introducing the variables, invariants and events modeling VANET node communications, vehicles movement in the road traffic, and neighborhood discovery and management. Modeling VANET node communications. Communications in a VANET not only include the forwarding of application packets, but also the broadcasting of hello packets for neighborhood discovery and link detection. To formally model ad hoc communications in a VANET, in addition to the definitions proposed in "Formal description of CBL organization in a VANET" section, we introduce the following variables : • nextNodeToReceiver a variable determining the next receiver node of each sent packet (see inv2, Fig. 5). This variable is used for forwarding packets from its source to its destination. • lastGotHello: a variable determining the last hello a node received from another. We define this variable as follows: • receptionTime A variable determining the reception time of a packet according to its destination node. We formally define this variable using an invariant as the following function: • neighboringTime a variable determining the time when a node becomes another's neighbor. 
We formally define this variable as the following function: These variables time, receptionTime and neighboringTime are mainly used in order to control the availability of a node's neighbors, as it will be detailed in the neighborhood management phase. The sendPacket, forwardPacket and receivePacket abstract events are also refined. Two refined versions are proposed for the packet sending event, namely sendPacket and broadcast. The first one is used for sending application packets, while the second allows broadcasting of some specific packets such as hello. As Fig. 6 shows, the abstract parameter destinations has disappeared from the packet broadcasting event, while new guards and actions have been added. In addition, a witness clause (with) is included in order to define the link between the abstract event and the refined one. This witness states that broadcast packets are able to be received by any node in the ad hoc network, with the exception of the sender. This requires a replacement of every occurrences of destinations in the action clause with Nodes \ {source}. Action act6 illustrates an example of such replacement. Guard grd3 ensures that the TTL of a Hello-type packet is equal to 1 during the broadcast process, in order to avoid its forwarding. Action act7 expresses that the hello packet sender shall broadcast its current position to all its neighbors. A new parameter T representing the local time at the destination node, and new guards are added in the refined version of the packet receiving event. The guard clause is extended with the following three predicates: where cls refers to the mathematical closure operator. For a relation R ∈ A ↔ A , cls(R) is the closure of R, which we define as follows: The actions added in the refinement of the reception event, update time, receptionTime and lastGotHello variables. In the refined packet forwarding event (forwardPacket), we introduce a new parameter nextNode (nextNode ∈ Nodes), new guards and a new action. The added guards are used to define the connection links between a packet's source/destination and the forwarding node, while the action updates the nextNodeToReceiver variable as follows: Modeling the vehicle movement. An updatePosition event is defined in the CBL_m1 machine in order to model the movement of vehicles (described as members of the set Nodes in our model) on the road. As Fig. 7 shows, this event takes the following parameters as input: • node represents a vehicle whose position shall be updated. • XY ∈ N × N refers to the new position of the node. Modeling local neighborhood management. When a node receives a hello packet, it must update its neighbors table according to its content. To model this formally, we add an updateNeighbor event in the CBL_m1 machine (see Fig. 8). This event has six parameters, three of which (newUp, newDown and H) being automatically computed through the guards, based on the first four parameters. The first two parameters are the source (neighbor) and its position XY contained in the last hello H received from it. Parameters H (the last received hello), newUp (the updated upstream nodes) and newDown (updated downstream nodes) are merely used for simplification. The other two parameters are a destination node and its current time T. The above parameters are typed through the guards. As stated in Ref. 24 , each node periodically broadcasts hello packets to declare its availability to all its neighbors. 
Thus, when a node does not receive a hello packet from a neighbor within a period of time which equals to the neighbor_expiry_time , this neighbor is considered to be unavailable and is deleted from the node's www.nature.com/scientificreports/ neighbors table. More formally, we create a dropNeighbor event in the CBL_m1 machine (see Fig. 8). As input parameters, the dropNeighbor event has a node and its unavailable neighbor. These parameters are well-defined by grd1, grd2 and grd3 guards. The grd3 guard checks the unavailability precondition of the neighboring node before applying the necessary actions. (CBL_c2, CBL_m2). The last abstraction level of our CCM4CBL model introduces the specific properties and rules of the CBL clustering scheme in a VANET. As Fig. 9 shows, this implementation level consists in a CBL_m2 machine which sees a CBL_c2 context. The CBL_c2 context extends the context of the second level ( CBL_c1 ) by introducing the concept of node connection time (Ctime). The latter is axiomatically defined according to the definition proposed in "Formal description of CBL organization in a VANET" section. Figure 9 also depicts a machine which models CBL properties and rules. This machine sees the CBL_c2 context, and refines CBL_m1 machine. The presentation of the refinements is organized in four steps, which are: (1) modeling CBL properties, (2) modeling branch nodes election, (3) refining ad hoc communications in a VANET, and (4) modeling CBL chain update. Level 3: modeling CBL properties and rules Modeling CBL properties. New variables are introduced in the CBL_m2 machine in order to formally model the specific properties and rules of CBL: • hasType a variable determining the type of the node (Branch or Leaf). • branchChoice a variable determining the branch choice of the node. • chainUP a variable determining the upstream branch node. • chainDO a variable determining the downstream branch node. The type of each variable is defined using a typing invariant according to the definitions introduced in "Formal description of CBL organization in a VANET" section. Given the variables, constants and sets introduced, CBL properties (req1, req2, . . . , req12) can be formally expressed as invariants in Event-B. These invariants are illustrated in Table 1. Modeling branch nodes election. This election is a key step in the construction of the CBL structure. As stated in Ref. 24 , two election types are possible: branch choice by leaf nodes, and upstream branch election by branch nodes. To model these two operations formally, we created an electBranch event, taking three parameters as input (see Fig. 10). The first two parameters, electing and elected, refer to both the electing and elected nodes. The other parameter, V, is a binary number ( V ∈ {0, 1} ) used to simplify the verification of the NERi rules while electing branch nodes. To achieve this verification, each NERi rule is defined as a guard in the electBranch event. Each www.nature.com/scientificreports/ one of these guards is detailed below. The first rule, NER1, states that a node (leaf or branch) cannot elect itself as a branch, while the second one (NER2) stipulates that the elected node must be a neighbor of its electing one. These two rules are defined as follows : Rule NER3 states that the elected node must be an upstream neighbor of its electing node, which shall be a branch. 
This rule is expressed through the following predicate: Rule NER4 is used to ensure the uniqueness of the CBL chain, in order to avoid parallel chains in the same area. Hence, it expresses that a branch node must have at most one upstream neighbor branch. We define this rule as follows: Rule NER5 states that, when there is no neighboring branch node, the electing leaf must choose the neighbor having the maximum connection time to itself. We formally define this constraint through the following predicate: Rule NER6 concerns the connection time of the elected upstream branch. This rule is applied when the electing branch node does not have any upstream branch-type neighbor. It is formally defined similarly to rule NER5 as follows: Refining VANET node communications. As stated previously, CBL is a distributed algorithm which builds a backbone of branch nodes, and clusters of leaf nodes around the latter. In order to achieve this task, CBL relies only on the information exchanged through hello packets. A hello packet contains information about its sender, such as its position, its type (branch or leaf), its branch choice when it is a leaf node, its upstream and downstream branch nodes when it is a branch node, and its direct neighbors. The following variables are added: • helloBrChoice a variable determining the branch choice of the hello sender. • helloChainUP a variable determining the upstream branch of the hello sender. • helloChainDO a variable determining the downstream branch of the hello sender. • helloSrType a variable determining the type of the hello sender (branch or leaf). These variables are defined as partial functions using invariants in the CBL_m2 machine according to their definition (see "Formal description of CBL organization in a VANET" section). The packet broadcasting event is refined in order to integrate the information contained in the hello packet. A guard related to CBL property REQ12 is added in order to enforce the leaf nodes which have neighbors, so that they can choose their branch nodes before broadcasting their hello packet: elected = electing ∧ elected ∈ neighbors(electing) ∧ electing � → 0 ∈ hasType(electing). REQ 11 ∀a, n · a ∈ Nodes ∧ a � → n ∈ chainDO(n) ∧ a ∈ up(n) ⇒n � → a ∈ chainUP(n) www.nature.com/scientificreports/ New substitution actions are also introduced in the refined version of the broadcast event, in order to update the status of the helloBrChoice, helloChainUP, helloChainDO and helloSrType variables as follows: Modeling CBL chain update. Several events trigger the update of the CBL chain locally at the VANET node level, such as the unavailability of some neighbors, or the overtaking of a branch node by its upstream branch node. In this work, we consider three events, namely: a neighbor which is no longer available, changes in the nodes' positions, and a received hello packet indicating some changes in the CBL chain (election of new branch up/down, branch choice, etc.). In order to model these events formally, we refine updatePosition, dropNeighbor and updateNeighbor abstract events, and introduce two new events, turnIntoBranch and turnIntoLeaf, which are used to change the node type according to TCR rules ("CBL properties and rules" section). 
The updateNeighbor variable is refined as follows: where newTypes, newChainDO, newChainUP and newBrChoice refer to the parameters introduced in the event's refinement, and are the new values of functions hasType(node), chainDO(node), chainUP(node) and branchChoice(node) after updating or adding the information (type, branch choice, and so on) about the neighbor. For example, the values of newChainUP and newChainDO parameters are computed as follows: The other parameters are computed similarly. The dropNeighbor refined event allows the local update of the CBL chain in case of unavailability of a neighbor. Compared to its abstract version, it introduces the following actions: Updating the position of a node Ni requires checking whether Ni has overtaken the node Nj that elected it as its upstream branch node. In this case, the chain link between Ni and Nj must be deleted by adding an action in the updatePosition event: where newChainUP is a new parameter referring to the updated upstream branch node, automatically computed as follows: Provided that the following conditions (expressed as a guard) are satisfied: Event turnIntoBranch allows turning a leaf node into a branch node when it has been elected by at least one of its neighbors (see grd1, Fig. 11). This type changing occurs after receiving a hello packet from a neighbor. Guard grd3 checks CBL property req6 which states that self-election is not authorized (a node cannot be elected by itself). Event turnIntoLeaf allows turning a branch node into a leaf when the latter overtakes its electing downstream branch node after updating the node's position (see Fig. 11). Guard grd1 defines a precondition of this type changing according to rule TCR 3. Discussion and methods The methods adopted during the validation of the CCM4CBL model rely on a two-step verification approach: validation of the model by animating it using the ProB model-checker, and proving its correctness by discharging proof obligations. neighbors(source) = ∅ ∧ source � → 0 ∈ hasType(source) ⇒ 1 ∈ hasType(source)[neighbors(source)]. www.nature.com/scientificreports/ Animation-based validation. The ProB 28 animation tool allows both the validation of the requirements, and the detection of errors in order to fix them, before starting the proof phase which can be long and complex. It cannot be performed on an abstract Event-B specification, and requires a concrete model. For that reason, we created a new CBL_m3 machine by extending the CBL_m2 machine. This machine does not introduce new events or variables. It sees a concrete CBL_c3 context, which is an extension of the CBL_c2 context. New constants and axioms are defined in this extended context for the concretization of all the sets and functions introduced in the contexts of the proposed CCM4CBL model. Figure 12 depicts the ProB animation window which is composed of three main parts. The first part (1) describes event triggering and constraint checking. The second part (2) presents the status of the model. The last part (3) allows signaling potential invariant violations and specification errors. After setting up the animation context, only the initialization event is enabled. After that, updatePosition and broadcast events are successfully activated. Once a hello packet is broadcast by a node, the receivePacket and losePacket events are enabled. Event updateNeighbor is triggered several times after executing receivePacket. Regarding the animation scenario Fig. 
12 shows, we note that broadcast and turnIntoBranch events are disabled after executing both events updateNeighbor and electBranch. No node will be turned into a branch and the clustering scheme cannot proceed. This situation suggests that a given node is not able to broadcast hello www.nature.com/scientificreports/ packets to inform the neighbors of its branch choice. It is caused by guard grd7 (see Fig. 12), which states that one of the neighbors of each leaf shall be a branch. In order to solve this issue, we modified the related guard: We used the ProB counter-examples as guides to rectify our model and trace back the specification errors, which could have caused the prover failures. The performed rectifications concern invariant violations and both guard and action alterations. We validated our model based on representative scenarios, including several cases which had not been treated previously. Correctness of the model by discharging proof obligations. In order to verify the model during its design process, proof obligations (POs) are discharged in a way guaranteeing that: • Model initialization leads to a state where the invariant is valid. • When the machine is in a state where the invariant is valid, every enabled event leads to a state preserving this validity. • The concrete events can only occur in the circumstances in which the abstract events occur. • The occurrence of any concrete event implies an occurrence of the related abstract event in such a way that the state verifies all related invariants. POs refer to the proofs applicable to an Event-B model. Figure 13 illustrates examples of proof obligations. Those discharged are marked with , while those undischarged can be recognized thanks to the symbol . The POs which are dischared automatically also show an "A" letter. Some proof obligations need interactive discharging by users, when the automatic prover cannot perform it (Fig. 13c shows the repartition between automatically and manually dischared POs in our model). For example, using the cardinal operator implies a finite set as operand, thus making more difficult the discharging of proof obligations. As well, the universal ( ∀ ) and existential ( ∃ ) quantifiers may be sources of concerns for the instantiation of the quantified hypothesis. In the proposed model, an example of such a PO is turnIntoBranch/req5/INV, which ensures that the turnIntoBranch event preserves the req5 invariant of the CBL_m2 machine. As depicted in Fig. 13a, the sequent of such a PO is unproved due to a lack of hypothesis. For that reason, we modified the turnIntoBranch event, shown in Fig. 11, by adding the following new guard, which let to success: Conclusion and prospective work In this paper, we have proposed an approach using Event-B to validate the properties and rules of a VANET routing protocol. Targeting the CBL clustering scheme, a correct-by-construction model CCM4CBL is proposed for that purpose as a proof of concept. This model includes three abstraction levels. The first one is an initial specification containing the basic functions of any network routing protocol. The second level introduces specific concepts of the VANET environment such as spatial relations between the vehicles according to their positions, vehicle movement, and management of both routing tables and vehicle communications. At this step, the proposed model can be reused for modeling any VANET routing protocol. 
The last level formally defines the specific properties and rules of the CBL clustering scheme regarding VANET organization. The proposed model is gradually verified using the proof obligation mechanisms offered by the Event-B method, and finally validated using the ProB animator in order to repair several behavioral errors. These processes are illustrated through concrete examples. In our future work, we will target the coupling of this formal approach with others, such as discrete

neighbors(source) = ∅ ∧ source ↦ 0 ∈ hasType(source) ⇒ {source} ⊳ branchChoice(source) = ∅.
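As a complement to the formal development summarized in this conclusion, the election rules can also be exercised as ordinary executable checks, which is convenient for quick experimentation before discharging proofs. The Python sketch below mimics rules NER1-NER6 for a single electing node; the data structures (neighbor sets, an upstream partition, pairwise connection times) are hypothetical stand-ins for the Event-B variables and are not part of the authors' model.

```python
from dataclasses import dataclass, field

LEAF, BRANCH = 0, 1

@dataclass
class Node:
    name: str
    ntype: int = LEAF
    neighbors: set = field(default_factory=set)   # one-hop neighbor names
    upstream: set = field(default_factory=set)    # upstream subset of neighbors
    ctime: dict = field(default_factory=dict)     # neighbor name -> connection time

def elect_branch(node: Node, nodes: dict):
    """Pick the node elected by `node`, mimicking rules NER1-NER6.

    `nodes` maps names to Node objects (local, one-hop knowledge only).
    Returns the elected neighbor's name, or None if no election is possible.
    """
    if not node.neighbors:                       # REQ1: an isolated node stays a leaf
        return None
    if node.ntype == LEAF:
        branches = [n for n in node.neighbors if nodes[n].ntype == BRANCH]
        # A leaf attaches to the closest branch neighbor; connection time is
        # used here only as a stand-in proxy for proximity.  With no branch
        # neighbor, NER5 selects the leaf neighbor with maximum connection time.
        candidates = branches or list(node.neighbors)
    else:
        up_branches = [n for n in node.upstream if nodes[n].ntype == BRANCH]
        assert len(up_branches) <= 1             # NER4: at most one upstream branch
        # NER3: only upstream neighbors may be elected by a branch; with no
        # upstream branch, NER6 picks the upstream neighbor with maximum
        # connection time.
        candidates = up_branches or list(node.upstream)
    candidates = [c for c in candidates if c != node.name]   # NER1: no self-election
    if not candidates:
        return None
    elected = max(candidates, key=lambda c: node.ctime.get(c, 0))
    assert elected in node.neighbors             # NER2: elected node is a neighbor
    return elected

# Example: a leaf with no branch neighbor elects its strongest link (TCR1
# would then turn the elected node into a branch).
nodes = {
    "a": Node("a", neighbors={"b", "c"}, ctime={"b": 12.0, "c": 5.0}),
    "b": Node("b", neighbors={"a"}, ctime={"a": 12.0}),
    "c": Node("c", neighbors={"a"}, ctime={"a": 5.0}),
}
print(elect_branch(nodes["a"], nodes))   # -> 'b'
```

Lightweight checks of this kind complement, but do not replace, the invariant-preservation proofs discharged in the Event-B model.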
7,155.4
2021-09-02T00:00:00.000
[ "Computer Science" ]
META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI Task-oriented dialogue (TOD) systems have been widely used by mobile phone intelligent assistants to accomplish tasks such as calendar scheduling or hotel reservation. Current TOD systems usually focus on multi-turn text/speech interaction, and then call back-end APIs designed for TODs to perform the task. However, this API-based architecture greatly limits the information-searching capability of intelligent assistants and may even lead to task failure if TOD-specific APIs are not available or the task is too complicated to be executed by the provided APIs. In this paper, we propose a new TOD architecture: the GUI-based task-oriented dialogue system (GUI-TOD). A GUI-TOD system can directly perform GUI operations on real APPs and execute tasks without invoking TOD-specific backend APIs. Furthermore, we release META-GUI, a dataset for training a Multi-modal convErsaTional Agent on mobile GUI. We also propose a multi-modal action prediction and response model, which shows promising results on META-GUI. The dataset, codes and leaderboard are publicly available.

Introduction Recent years have witnessed the rapid development of task-oriented dialogue systems (Zhang et al., 2020; Ni et al., 2022; Chen et al., 2022, 2017). They have been widely applied to customer support, booking systems and especially intelligent personal assistants. These task-oriented dialogue systems work in a similar pipeline: first identify the user intent, then extract the necessary information by the process of slot-filling. After getting enough information for the task, the agent will call the backend APIs (provided by APP developers) to fetch information, and then generate a response based on the query result.

There are some drawbacks of this framework. Firstly, TODs rely on publicly accessible APIs or APIs designed for TODs to perform tasks, but such APIs may not exist in real-life APPs, which hinders the application of TODs. Secondly, a system should be customized to recognize the pre-defined API-related slots, which limits the generality.

Table 1: The actions in our dataset. There are 7 different actions with 3 different parameters.
• Click(item = x): Click the item with index x on the screen.
• Swipe(direction = x): Swipe the screen towards direction x, which includes "up" and "down".
• Input(text = x): Input the text x to the smartphone.
• Enter(): Press the "Enter" button on the keyboard.
• Clear(): Clear the current input box.
• Back(): Press the "back" button on the smartphone.
• End(): The turn has been finished and it will go to the Response Generator module.

Consider how humans perform tasks on smartphones. They don't need a parametric API but finish tasks by interacting with the GUI (graphical user interface), indicating that the GUI is a more general interface. Previous studies explore how to translate natural language commands into GUI operations (Mazumder and Riva, 2021; Pasupat et al., 2018; Xu et al., 2021a). These studies focus on single queries and step-by-step operations, while in
real scenarios the query would be multi-turn interaction and there is no clear instruction about how to execute the task.Etan (Riva and Kace, 2021) and SUGILITE (Li et al., 2017) are two systems that support learning GUI operations from demonstrations, but these systems are script-based and are sensitive to the change in GUI and workflow.Duplex on the web (Crunch, 2019) can directly operate the website to perform the required task, for example booking a movie ticket.However, it only supports limited websites, and it's more a unified GUI interface than a task-oriented dialogue system that enables general GUI operation. To this end, we propose the task of GUI-based task-oriented dialogue system (GUI-TOD).It supports multi-turn conversation and direct GUI operation.All tasks would be performed on the GUI of real APPs, which means we no longer need TODspecific APIs to communicate with APPs, and it would be possible to apply TOD on any APPs.Since there is no available benchmark published, We collect META-GUI, a dataset with dialogues and GUI traces on real Android APPs.A GUI trace is a series of GUI operations, including screenshots, Android view hierarchies as well as actions.Android view hierarchy is an XML-style file, which organizes the content of GUI through a hierarchical structure.It also contains the types of items on the screen and their bounding boxes.An example is shown in Appendix C. When a user requests a task, the system should open the related APP and execute the task through multiple operations on GUI.It requires a comprehensive understanding of GUI structure and interaction logic.An interaction example is shown in Figure 1. We focus on building an agent with general ability to operate GUI, rather than optimize for specific APPs.Our proposed GUI-TOD system leverages both the visual information and textual information on the screen to predict the next action to be executed and generate the system response.Our experiments show that the GUI-TOD outperforms heuristic baselines by a large margin, with an action completion rate of 82.74%. Our contributions are followings: • We propose a GUI-based task-oriented dialogue system, which can perform tasks on mobile APPs through multiple operations on GUI. • We collect META-GUI, a dataset with dialogues and GUI operation traces serving as the benchmark for the proposed system. • We conduct thorough experiments on our dataset and validate the importance of multimodal information and history information. We show that it is a promising task but needs further exploration.The overview of GUI-TOD is shown in Figure 2. 
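To make the formulation above more tangible, the turn-level objects and the two models F (action prediction) and G (response generation) can be written down as plain data types and function signatures. This is only a sketch of the interface implied by the definitions; the class and field names are ours, and the executor callback that applies an action on the device and returns the resulting screen is hypothetical rather than part of the released code.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    # One of: click, swipe, input, enter, clear, back, end (Table 1)
    type: str
    parameter: Optional[str] = None   # item index, swipe direction or input text

@dataclass
class Screen:
    screenshot: bytes                 # raw image s
    view_hierarchy: str               # Android XML view hierarchy v

@dataclass
class Turn:
    user_utterance: str               # U_i
    system_response: Optional[str]    # R_i, filled once the turn is executed
    screens: List[Screen]             # S_i
    actions: List[Action]             # A_i

# Action Executor: given the dialogue history and the GUI trace of the
# current turn so far, predict the next GUI action.
ActionModel = Callable[[List[Turn], str, List[Screen], List[Action]], Action]

# Response Generator: after an `end` action, generate R_i from the
# dialogue history and the executed GUI trace of the turn.
ResponseModel = Callable[[List[Turn], str, List[Screen], List[Action]], str]

def run_turn(user_utterance, history, f: ActionModel, g: ResponseModel, executor):
    """Drive one dialogue turn: predict and execute actions until `end`,
    then generate the system response.  `executor` is a hypothetical callback
    that applies a physical action on the device and returns the new Screen;
    calling it with None returns the current screen at the start of the turn."""
    screens, actions = [executor(None)], []
    while True:
        action = f(history, user_utterance, screens, actions)
        actions.append(action)
        if action.type == "end":
            break
        screens.append(executor(action))
    return g(history, user_utterance, screens, actions)
```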
It consists of two sub-modules: Action Executor (AE) and Response Generator (RG).The traditional task-oriented dialogue system (Chen et al., 2017;Zhang et al., 2020;Yu et al., 2014) splits the task into natural language understanding (NLU) (Zhu et al., 2021), dialogue manager (DM) (Chen et al., 2020a;Zhu et al., 2020;Chen et al., 2018Chen et al., , 2019Chen et al., , 2020b)), and natural language generation (NLG) (Keskar et al., 2019).We omit the NLU module and directly send user utterances to AE.The AE module has similar features with DM, it executes the requested task by interacting with the GUI for multiple rounds, while DM accomplishes this by calling TOD-specific APIs.The RG module will generate the system response based on the execution results, which is the same as NLG.The process of executing a task is a series of GUI operations, including click, swipe, etc.The task of AE module is action prediction, which aims at predicting the next action to be performed on GUI, and the RG module focuses on generating system's response after executing a task.A major improvement of GUI-TOD is that it does not rely on a pre-defined domain ontology.Conventionally, the DM module will identify a set of slot-value from the user utterance, which serves as the parameter for backend APIs.However, GUI-TOD handles task-specific slot-values during the execution of tasks.When the APP requires a certain input (for example, entering the time and destination), the system can obtain the information by understanding the current user utterance or generating a response for further asking.Compared with CUED actions (Young, 2007) in traditional TOD, actions in GUI-TOD are GUI-related operations rather than communication actions between user and system.Formally, the action prediction task can be defined as: given the GUI trace and dialogue history, predict the next action to be performed.We define the set of actions that can be performed on the APPs in Table 1.All the actions would take the form of Action(parameter = * ).There are seven types of Action, including six physical actions: click, swipe, input, enter, clear, back, and one virtual action: end.The corresponding parameters are listed in Table 1.The end action is the last action for every GUI trace, which means the end of GUI operations.After an end action is generated, the GUI-TOD would move to the RG module.We denote the jth action in turn i as A i,j = (t, p), where t is the action type and p is the corresponding parameter.S i,j = (s, v) is the jth screen in turn i, including the screenshot s and the view hierarchy v.The dialogue in turn i is represented as D i = (U i , R i ) where U i is the ith user utter-ance and R i is the ith system response.The action prediction task is formulated as: Task Definition where 1 : i means from turn 1 to i, F is a trainable action model, which we discuss in 4.1.The RG module takes the GUI trace and dialogue history as input, then generates a response based on the execution result and context.Denote the set of actions in turn i as A i , the screens in turn i as S i , the response generation task is formulated as: where G is the response generator model, which we discuss in 4.2. 
Meta-GUI Creation Our dataset consists of two kinds of data: dialogues and GUI operation traces.In each dialogue, user would ask the agent to complete a certain task through multi-turn interaction.Our tasks involve six different domains: weather, calendar, search, taxi, hotel and restaurant.In this paper, we consider APPs that accomplish the same kind of tasks to be in the same domain.To enhance the diversity of our dataset, we use multiple Apps from the calendar and weather domains.The details of APPs are listed in Appendix A. Collecting GUI traces We collected our data in two-stage: first we collected GUI traces for existing dialogues, then we collected both dialogues and GUI traces. In the first stage, we provided dialogues to annotators and instructed them to perform tasks on real APPs.We started from extracting dialogues from the SMCalFlow dataset (Andreas et al., 2020).SMCalFlow contains multi-turn task-oriented dialogues, which is known for complex reference phenomenon that requires a comprehensive understanding of context.We extract dialogues from calendar, weather and search domains.Six annotators were recruited to label the GUI traces.We built a web-based annotation system, which was connected to a real Android smartphone (see Appendix B).Annotators can see the current screen of the smartphone in the system, and control the smartphone by clicking buttons.A dialogue would be shown in the system.Annotators should first read the dialogue, then they were allowed to explore how to finish the task (e.g.check the weather) on smartphone.If the task requirement in the dialogue conflicted with the real-world scenario (for example, creating an event in the past), the annotators could change the content of the dialogue to make the task achievable.After they were ready, they need to use the annotation system to record the actual process of executing the task.Each operation would be recorded, and the screenshot after each operation was also saved together with the view hierarchy. In the second stage, we collected dialogues and GUI traces for domains of hotel, restaurant and taxi.Because there are no available dialogues of these domains in previous datasets, we asked annotators to write new dialogues.We selected three experienced annotators from the last stage.Different from the last stage, the annotator was shown a task objective, which was generated randomly from all available conditions in APPs.The annotators should act as user and system alternatively to write dialogues according to the task objectives.To avoid annotators writing short and simple dialogues, we added constraints about the number of turns and the behaviors in dialogue, e.g.adding a condition or changing a condition.An example of the generated target is shown in Appendix E. After writing dialogues, the annotators should also record the corresponding GUI operation traces for each turn, which is the same as the last stage. Data Review After annotation, we manually reviewed the data.The checklist includes: whether the recorded GUI traces match the dialogues, whether there are invalid operations due to the system error or misoperation, and whether there are redundant operations in the GUI trace.We manually fixed annotations that only have small mistakes, and discarded the task requiring significant modification.The dialogue level pass rate is about 63.6%, and finally we got 1125 dialogues in total.For more information, please refer to Appendix D. 
Post-processing The dialogues collected in the second state were created by three annotators, which lack diversity in expression.Therefore, we published a dialog rewritten task on AMT * (Amazon Mechanical Turk) to polish the dialogues.During GUI trace annotation, some APPs can not obtain valid Android hierarchy.To handle this problem, we used the online Optical Character Recog-nition (OCR) service, provided by Baidu Cloud † , to detect all texts on the image with their corresponding positions and generate a pseudo layout file. We extract items from screen using the corresponding layout file.An item is a clickable leaf node.Similar to (Zhou and Li, 2021), we consider an item to be clickable if its clickable attribute is true or its parent node is clickable.An item consists of text content, item type and bounding box.We extract the text content of an item by looking at its text property first.If it is empty, we use its content-desc attribute, otherwise we would use the resource-id property.Based on the extracted items, we can locate the target item for the click action by comparing the click position and the bounding boxes of items. Data Analysis The total number of dialogues in our dataset is 1125, including 4684 turns.The average number of images for each turn is 5.30, and the average number of words for each utterance is 8. On average, there are 23.80 items for each image, and the item text length is 2.48 words.The distribution of item types is shown in Figure 3.We also provide an example for each item type in Appendix F. It is clear that TextView and ImageView are the two most frequent type, which indicates that our dataset is informative. The distribution of actions is listed in Figure 4.The click is the most frequent action, while clear is the least action for the reason that only a small number of tasks require clearing the current input box.For click action, we further compute the type distribution of target items, which is shown in Figure 3. TextView and Button type are mostly clicked, while there are 8 item types never been operated.This implies that the item types may supply some hints for predicting the target items.Besides, the average numbers of words for response and input action are 9 and 3 respectively. Model Design The overview of our system is illustrated in Figure 5. It's composed of four components: encoder, image feature extractor, multi-modal information fusion module and the output module.The output † https://cloud.baidu.com/module can be the Action Module or the Response Module. Action Model We call the combination of encoder, image feature extractor, multi-modal information fusion module and the Action Module as Action Model, which is used to predict the next GUI action based on the history.Next, we will describe these modules respectively.For simplify, for the screen history we only consider the last screen here.We will discuss adding more screen histories later. Encoder The input of encoder consists of two parts: dialog history {D 1:i−1 , U i } = {w 1 , ..., w n } and texts in the items {m 1,1:l 1 , . . ., m k,1:l k }.Items are extracted from the last screen, k is the number of items and l i is the length of the ith item's text: where H = [D; M] and D = {w 1 , w 2 , . . ., w n } represents encoder outputs of the dialogue history, M = {m 1,1:l 1 ; . . .; m k,1:l k } represents encoder outputs of item texts. Image feature extractor Given a screenshot and its corresponding layout file, we use Faster R-CNN (Ren et al., 2015) to extract the feature map. 
Then we apply ROI pooling based on the bounding box of each item, and get the item-level image features I = {I_1, ..., I_k}.

Multi-modal information fusion module Given the encoder output and the regional image features extracted above, we concatenate them together. The text features of one item m_{i,1:l_i} are concatenated with the same item feature I_i, and the w_{1:n} are concatenated with zeros. Then we use a Transformer encoder with M layers to fuse the multi-modal features. For each layer, to enhance the image information, we concatenate the image features and the output from the last layer again to form the input for the next layer.

Action Module For the Action Model, we need to predict the action type and its corresponding parameters. As shown in Table 1, there are 7 action types with 3 different parameters. We show some examples of parameter predictions in Appendix G. We use the encoder output of the [CLS] token for action type prediction. We apply a feed-forward network followed by a Softmax layer to predict the action type: where p_a is the probability distribution over actions, and FFN represents the Feed-Forward Network.

For the action parameters, we use three different classifiers: 1) Input Text Prediction We assume that the input to the APPs must be part of the user utterance, so we formulate the prediction of input text as a span prediction task. We use two classifiers to predict the begin and end positions in the dialogue: where p_s and p_e are the probabilities of the start and end positions, respectively. 2) Target Item Prediction The target item classifier is based on the encoding outputs of items. We first compute the item representation by applying average pooling on the encoding outputs, then we use a feed-forward layer followed by a Softmax layer to compute the probability of selecting an item: where p_m is the probability distribution over items. 3) Direction Prediction The direction classifier is a two-class classification layer for the directions up and down: where p_d is the probability distribution of the swipe direction.

Adding history information According to the task definition, besides dialogue histories, we can also use action histories and screen histories. To verify this, we add them to the action model. For action histories, we regard action types as special tokens and add them to the dictionary. We concatenate the most recent H action types {t_{1:H}} before the dialogue history as input: where X stands for the input of the Encoder and t represents the action type. For screenshot histories, we encode all the screenshots in a recurrent way. Assume Î_i = [I_{i,1}, ..., I_{i,k}] is the image feature of the ith screenshot, and Ī_i is the history image feature for time step i. We compute Ī_{i+1} by: where Ī_1 = Î_1, H is the length of the history, Attn is the attention mechanism (Vaswani et al., 2017), and W_* are trainable parameters. We use Ī_H to replace the image features in Figure 5.

Response Model

We process the dataset in the granularity of action. Each data point takes as input the screenshot history, action history and dialogue history, and predicts the action to be performed. We obtained 18337 data points in total, and we randomly divide the data into the training set, development set and test set with the ratio of 8:1:1. The data statistics are shown in Table 2.
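The prediction heads and the recurrent screenshot-history fusion described above can be pictured with a short PyTorch sketch. It is a simplified re-creation for illustration only: hidden sizes, pooling and masking details are our assumptions, the explicit Softmax layers are folded into downstream losses, and in the real model these heads sit on top of the multi-modal encoder of Figure 5.

```python
import torch
import torch.nn as nn

class ActionHeads(nn.Module):
    """Output heads of the action model: action type, input-text span,
    target item and swipe direction, each a linear layer over the fused
    encoder states."""

    def __init__(self, hidden=768, num_action_types=7):
        super().__init__()
        self.action_type = nn.Linear(hidden, num_action_types)
        self.span_start = nn.Linear(hidden, 1)   # over dialogue tokens
        self.span_end = nn.Linear(hidden, 1)
        self.item_scorer = nn.Linear(hidden, 1)  # over pooled item states
        self.direction = nn.Linear(hidden, 2)    # up / down

    def forward(self, cls_state, dialogue_states, item_states, item_token_mask):
        # cls_state:        (B, H)       encoder output of the [CLS] token
        # dialogue_states:  (B, T, H)    encoder outputs of dialogue tokens
        # item_states:      (B, K, L, H) encoder outputs of item tokens
        # item_token_mask:  (B, K, L)    1 for real tokens, 0 for padding
        p_action = self.action_type(cls_state)                  # (B, 7)
        p_start = self.span_start(dialogue_states).squeeze(-1)  # (B, T)
        p_end = self.span_end(dialogue_states).squeeze(-1)      # (B, T)
        # Item representation: average pooling over each item's tokens.
        mask = item_token_mask.unsqueeze(-1).float()
        item_repr = (item_states * mask).sum(2) / mask.sum(2).clamp(min=1)
        p_item = self.item_scorer(item_repr).squeeze(-1)        # (B, K)
        p_dir = self.direction(cls_state)                       # (B, 2)
        return p_action, p_start, p_end, p_item, p_dir

def fuse_screenshot_history(image_feats, attn, w_q, w_k):
    """Recurrent fusion of screenshot histories, in the spirit of the update
    described in the text: the running history feature attends over the next
    screenshot's item features.
    image_feats: list of tensors of shape (K, D)
    attn: nn.MultiheadAttention(embed_dim=D, num_heads=..., batch_first=True)
    w_q, w_k: nn.Linear(D, D) projections standing in for the W_* parameters."""
    hist = image_feats[0]
    for feats in image_feats[1:]:
        query = w_q(hist).unsqueeze(0)          # (1, K, D)
        keys = w_k(feats).unsqueeze(0)          # (1, K, D)
        fused, _ = attn(query, keys, keys)
        hist = fused.squeeze(0)
    return hist
```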
Experiment Setup We train our baselines on the training set and select the best models on the dev set based on the Action completion rate.We use pretrained BERT (Devlin et al., 2019), LayoutLM (Xu et al., 2020) and LayoutLMv2 (Xu et al., 2021b) as our encoder models.‡ BERT is pretrained on pure text corpus by masked languages modeling task, while Lay-outLM and LayoutLMv2 are pretrained on scanned documents by masked visual-language modeling task and incorporate image features. We use a batch size of 4 and fine-tune for 8 epochs.We use Adam optimizer with the learning rate of 1e-5.For Response Model, the number of Transformer Decoder Block is 4. Furthermore, we use three heuristic methods in our experiments: Random We randomly predict action type and its corresponding parameters. Frequency Method (FM) We first calculate the frequency of each action type and its corresponding parameters.Then, we apply the results to the development set and generate the prediction according to the frequency.‡ There are some pre-trained models about GUI understanding, like ActionBERT (He et al., 2021) and UIBERT (Bai et al., 2021).But they are not open-source. Most Frequent Method (MFM) Similar to the frequency method, we generate the prediction with the most frequent result. For the evaluation, we use completion rate for action prediction.We first define two completion rate metrics: action completion rate and turn completion rate.One action is regarded as completed only if the action type and its parameters are correctly predicted.And if all actions in the same turn are completed, then the corresponding turn will be considered completed.For action type prediction, item prediction and direction prediction, we use accuracy.For input prediction, we use token level exact match and F1.And we use BLEU score to evaluate the Response Model. Experiment Result The experiment results of the Action Model are listed in Table 3.We can find that the deep learning methods outperform the heuristic methods by a large margin, which is expected.Comparing the results of BERT backbone and LayoutLM backbone, we find that BERT model yields better performance.The reason is that LayoutLM model was pre-trained on a scanned document image dataset, and there exists a large gap between the Android GUI and the scanned document images.Furthermore, we can find that LayoutLMv2 performs worse than LayoutLM.We hypothesize that LayoutLMv2 uses early-fusion method, which will bring more noises.We can also find that adding multi-modal information to BERT leads to a better performance (52.08% → 53.96%), and the improvements are mainly from the action type prediction, target item prediction and swipe direction prediction.The reason why adding images would help is that the image information contains some action histories that cannot be represented by text.For example, when filtering conditions on hotel reservations, the conditions selected in the previous action can be seen through the image (as a highlighted text), but they can not be reflected through text.An example is illustrated in Appendix H. 
Besides, the image information can help the model to locate the item more accurately.For example, for a screen with multiple radio buttons, since the BERT model does not take the item position as input, the model cannot distinguish the corresponding button for each option by only textual input.However, we also find that the performance of input text prediction degrades after adding image information.We assume that BERT itself can successfully model text information, but adding visual information will affect the model's ability to understand text. We further verify the importance of history information by adding action histories and screenshot histories.From the experiment results, we find that adding history information to BERT can improve the performance (52.08% → 55.42% after adding action history to BERT, 53.96% → 55.62% after adding screenshot history to BERT+mm).Adding action histories leads to greater performance improvement, which means action sequence is a more effective way to represent history.The screenshots contain higher-level history information, but the screenshot changes a lot before and after operation (sometimes one click may change the screen completely), which will bring difficulties to the information fusion. Finally, we add all information, including multimodal information, action histories and screenshot histories, to the BERT model and get the m-BASH (multi-modal BERT with Action histories and Screenshot Histories), which results in the state-of-the-art performance (56.88%). The results of the Response Model are shown in Table 4. BERT outperforms LayoutLM and Lay-outLMv2 by a large margin, which is consistent with the results of Action Model.We also find that adding multi-modal information and screenshot histories can improve performance, which means the model leverage the information from history to generate response. Method Response Generality According to the design of our system, it does not need to pre-define API-related slots, therefore our system has a strong generality and can be easily adapted to new APPs.To demonstrate this, we re-partition our dataset as followings: app generality Since we use multiple apps in weather domain and calendar domain, we use the data from one APP as the test set, and the other data forms the training set. domain generality We use the data from one domain as the test set, and the other data forms the training set. 
We evaluate the performance of m-BASH on these datasets.The results are shown in Table 5.We can find that our system can still obtain a reasonable performance, and the results of app generality experiments are even comparable to the main experiment results of LayoutLM.This result shows that common operation logic does exist in APPs, and our system can gain a general comprehension of GUI operations.It is easily applied to a new app or a new domain without modification, which shows the effectiveness and potential of our system.6 Related Work Natural Language Commands on GUI Executing natural language commands on GUI is getting research interests recently.Some studies focused on semantic parsing (Mazumder and Riva, 2021;Pasupat et al., 2018;Xu et al., 2021a), whose task is mapping the natural language query to the operations on websites.Google Duplex (Crunch, 2019) can operate websites to finish tasks like booking movie tickets or making restaurant reservations.However, it only supports limited websites and it's more a unified interface than a general dialogue system with GUI operating ability.Our proposed dataset contains real-world APPs and aims at training models with general GUI understanding. Programming by Demonstration on GUI Programming by Demonstration (PbD) systems focus on learning GUI tasks from human demonstration (Riva and Kace, 2021;Li andRiva, 2021, 2018;Li et al., 2019).SUGILITE (Li et al., 2017) records user's operations on GUI and generates a script for the learned task.APPINITE (Li et al., 2018) proposed to add descriptions for ambitious actions to enhance the robustness of the generated script.These systems generate scripts based on handcrafted rules and XML analysis, which is sensitive to GUI changes and exceptions.In this work, we aim to build a robot that can work with general mobile GUI, rather than repeating operations. Visual Dialogue More and more researchers combine CV and NLP into the dialogue system and are involved inß a more challenging task, visual dialogue (Le and Hoi, 2020;Agarwal et al., 2020;Le et al., 2020).It can be seen as a multi-step reasoning process over a series of questions (Gan et al., 2019).Gan et al. (2019) updated the semantic representation of the question based on the image and dialogue history.Wang et al. (2020) proposed VD-BERT, a simple yet effective framework of unified vision-dialog Transformer that leverages the pre-trained BERT language models for Visual Dialog tasks.Visual dialogue focuses on understanding the image contents.Besides this, our tasks also require understanding the interactions between UIs. Conclusion In this paper, we proposed the task of GUI-based task-oriented dialogue system, which replaces the traditional TOD-specific API calls with GUI operations on real APPs.The advantage is that intelligent agents can perform tasks without the need of backend TOD-specific APIs and it doesn't rely on a domain-specific schema, which means it can be applied to a new domain easily.We collect META-GUI, a dataset with dialogues and GUI traces to serve as a benchmark.Our model shows promising results on the dataset, and we hope this work could stimulate more advanced methods on GUI-TOD. In the future, we will explore how to better incorporate GUI traces into our model and build the GUI semantics based on interactions. 
Limitations We propose a GUI-based task-oriented dialogue system, which can perform GUI operations on real APPs to complete tasks. To verify the validity of the system, we collect the META-GUI dataset, which contains dialogues and GUI operation traces. In real scenarios, an agent may not know how to complete the task presented by the user. In such cases, an agent might reply "It's too hard for me." or something similar; such exchanges are not included in our dataset. In the future, we will augment the dataset to include these cases. Furthermore, the models we used are too large to be deployed on mobile phones. It is important to compress the models, which we will attempt in future work.

A Details of Apps We list the information of the applications used in Table 6. To ensure the diversity of our dataset, we use 4 apps for the weather domain, 3 apps for the calendar domain, and 1 app each for the last 4 domains. We also list the number of turns belonging to each app. The total number of turns is larger than the actual number of turns, since one turn may involve several apps.

D Data Review After annotation, we manually reviewed the data. The checklist includes: (1) whether the recorded GUI traces match the dialogues: we check whether the GUI operations match the tasks proposed by the users, for example, whether the scheduled time is correct; (2) whether there are invalid operations due to system errors or misoperation: during annotation, some annotators may click a wrong position or swipe the screen by mistake, and the annotation system may occasionally fail; (3) whether there are redundant operations in the GUI trace: for example, some annotators may take screenshots of the same screen multiple times.

Figure 1: An example of the GUI-based task-oriented dialogue system (GUI-TOD). The Action Executor will execute tasks on the GUI and the system will generate a response based on the execution result.
Figure 3: The distribution of the total number of items versus the clicked one for each item type.
Figure 4: The distribution of actions.
Figure 6: The illustration of our Annotation System. The annotators can see dialogues in the Dialog Box and the current screen of the smartphone.
Figure 7: An example of the View Hierarchy for a given screen. The "+" button with a red border on the left-hand side corresponds to the highlighted element in the view hierarchy on the right-hand side.

Response Model aims to generate the response to the user. We use the Response Module as the output module and the other parts are the same as the Action Model. Considering that the prediction of the response is mainly decided by the execution results and dialogues, we do not use action histories for the Response Model. For the Response Module, we use a Transformer Decoder with N layers.

Table 3: The experiment results of the Action Model on the test set. Acc.: accuracy. EM: Exact Match. F1: F1 score. CR: completion rate. MFM: Most Frequent Method. FM: Frequency Method. mm: use the multi-modal information fusion module to add image information. act_h: add action histories. scr_h: add screenshot histories.
Table 4: The experiment results of Response BLEU score on the test set.
Table 5: The results of generality experiments.
Table 6: The information of Apps. The total number of turns is larger than the actual number of turns because some turns involve several APPs.

B Annotation System
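The N-layer Transformer decoder of the Response Module is only named in the text; below is a hedged PyTorch sketch of one way such a decoder head could be wired. The hidden size, layer count and module names are assumptions for illustration, not the authors' code:

```python
import torch
import torch.nn as nn

class ResponseModule(nn.Module):
    """Hypothetical response head: an N-layer Transformer decoder that attends
    over the fused dialogue/screen encoding and generates the reply tokens."""

    def __init__(self, vocab_size, hidden=768, n_layers=6, n_heads=12):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=n_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(hidden, vocab_size)

    def forward(self, response_ids, encoder_states):
        # response_ids: (B, L) partially generated reply; encoder_states: (B, S, H)
        tgt = self.tok_emb(response_ids)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.decoder(tgt, encoder_states, tgt_mask=mask)
        return self.lm_head(hidden)  # next-token logits, trained with cross-entropy
```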
7,037.8
2022-05-23T00:00:00.000
[ "Computer Science" ]
Weakly-supervised convolutional neural networks of renal tumor segmentation in abdominal CTA images Background Renal cancer is one of the 10 most common cancers in human beings. The laparoscopic partial nephrectomy (LPN) is an effective way to treat renal cancer. Localization and delineation of the renal tumor from pre-operative CT Angiography (CTA) is an important step for LPN surgery planning. Recently, with the development of the technique of deep learning, deep neural networks can be trained to provide accurate pixel-wise renal tumor segmentation in CTA images. However, constructing the training dataset with a large amount of pixel-wise annotations is a time-consuming task for the radiologists. Therefore, weakly-supervised approaches attract more interest in research. Methods In this paper, we proposed a novel weakly-supervised convolutional neural network (CNN) for renal tumor segmentation. A three-stage framework was introduced to train the CNN with the weak annotations of renal tumors, i.e. the bounding boxes of renal tumors. The framework includes pseudo masks generation, group and weighted training phases. Clinical abdominal CT angiographic images of 200 patients were applied to perform the evaluation. Results Extensive experimental results show that the proposed method achieves a higher dice coefficient (DSC) of 0.826 than the other two existing weakly-supervised deep neural networks. Furthermore, the segmentation performance is close to the fully supervised deep CNN. Conclusions The proposed strategy improves not only the efficiency of network training but also the precision of the segmentation. Background Renal cancer is one of the ten most common cancers in human beings. The minimally invasive laparoscopic partial nephrectomy (LPN) is now increasingly used to treat the renal cancer [1]. In the clinical practice, some anatomical information such as the location and the size of the renal tumor is very important for the LPN surgery planning. However, manual delineation of the contours of the renal tumor and kidney in the pre-operative CT images including more than 200 slices is a time-consuming work. In recent years, deep neural networks have been the widely used for organ and lesion segmentation in medical images [2]. However, fully-supervised deep neural networks were trained by a large number of training images with pixel-wise labels, which take a considerable time for radiologists to build. Thus, weakly supervised approaches attract more interest, especially for medical image segmentation. In recent years, several weakly-supervised CNNs have been developed for semantic segmentation in natural images. According to the weak annotations used for CNN training, these approaches can be divided into four main categories: bounding box [3][4][5][6], scribble [7,8], points [9,10] and image-level labels [11][12][13][14][15][16][17]. However, as far as we know, there are only a few weakly-supervised methods reported for the segmentation tasks in medical images. DeepCut [18] adopted an iterative optimization method to train CNNs for brain and lung segmentation with the bounding-box labels which are determined by two corner coordinates, and the target object is inside the bounding box. In another weakly-supervised scenario [19], fetal brain MR images were segmented using a fully convolutional network (FCN) trained by superpixel annotations [20] which refer to an irregular region composed of adjacent pixels with similar texture, color, brightness or other features. Kervadec et al. 
[21] conducted a size loss on CNN, which was used to obtain the segmentation of different organs from the scribbled annotations which annotate different areas and their classes. These weakly learned-based methods have achieved comparable accuracy on normal organs but have not yet been applied to lesions. The approaches for renal tumor segmentation are mainly based on traditional methods such as level-set [22], SVM [23] and fully-supervised deep neural networks [24,25]. To the best of our knowledge, there is no weakly-supervised deep learning technique reported for renal tumor segmentation. As shown in Fig. 1, the precise segmentation of renal tumors is a challenging task because of the large variation of the size, location, intensity and image texture of renal tumors in CTA images. For example, small tumors are often overlooked since they are difficult to be distinguished from the normal tissue, as displayed in Fig. 1(b). Different pathological types of renal tumors show varied intensities and textures which increases the difficulty of segmentation [26]. Thus, the segmentation of renal tumors by a weakly-supervised method is still an open problem. In this paper, bounding boxes of renal tumors are provided as weak annotations to train a CNN which can generate pixel-wise segmentation of renal tumors. Compared to the other types of annotations, the bounding box is a simple way to be defined by radiologists [27]. The main contributions of this paper are as follows: (1) To the best of our knowledge, we proposed a weakly-supervised CNN for renal tumor segmentation for the first time. (2) The proposed method can accomplish network training faster and overcome the undersegmentation problem compared with the iterative training strategy usually adopted by the other weakly-supervised CNNs [18,28]. The remaining paper is organized as follows: Materials section describes the datasets used in this paper. In Methods section the method is introduced in detail. Experimental results are summarized in Results section. We give extra discussion in Discussion section, a conclusion in Conclusion section and abbreviations section. The last section is the declarations of this paper. Materials The pre-operative CT images of 200 patients who underwent an LPN surgery were included in this study. The CT images were generated on a Siemens dualsource 64-slice CT scanner. The contrast media was injected during the CT image acquisition. The study was already approved by the institutional review board of Nanjing Medical University. Two scan phases including arterial and excretion phases were performed for data acquisition. In this paper, CT images acquired in arterial phase were used for training and testing. The arterial scan was triggered by the bolus tracking technique after 100 ml of contrast injection (Ultravist 370, Schering) in the antecubital vein at a velocity of 5 ml/s. Bolus tracking used for timing and scanning was started automatically 6 s after contrast enhancement reached 250HU in a region of interest (ROI) placed in the descending aorta. The pixel size of these CT images is between 0.56mm 2 to 0.74mm 2 . The slice thickness and the spacing in zdirection were fixed at 0.75 mm and 0.5 mm respectively. After LPN surgery, pathological tests were performed to examine the pathological types of renal tumors. Five types of renal tumors were included in this study, i.e. clear cell RCC (172 patients), chromophobe RCC (4 patients), papillary RCC (6 patients), oncocytoma (6 patients) and angiomyolipoma (12 patients). 
The volume of the renal tumors' ranges from 12.21 ml to 159.67 ml and the mean volume is 42.58 ml. As shown in Fig. 2(a), each original CT image was resampled to an isotropic volume with the size of axial slice equal to 512*512. The original CT image contained the entire abdomen, whereas only the area of the kidney needed to be considered in this experiment. Thus, the kidneys in the images were firstly segmented by the multi-atlas-based method [29] to define the ROIs of kidneys as shown in Fig. 2(b). The multi-atlas-based method just produce initial segmentation of kidneys, two radiologists checked the contours of kidneys and corrected them if necessary. The contours of tumors were drawn manually by one radiologist with 7-years' experience and checked by another radiologist with 15-years' experience in the cross-sectional slices. However, the pixel-wise masks were only used for bounding boxes generation and testing dataset evaluation. Among 200-patient images, 120 patients were selected to build the training dataset and the other 80 patients were used as the testing dataset. Methods We train our proposed method via bounding boxes of renal tumors to obtain pixel-wise segmentation. Thus, a pre-processing step is performed before the training procedure of weakly-supervised model. In Pre-processing section, the pre-processing including normalization and bounding box generation is briefly introduced. Then the proposed weakly-supervised method is illustrated in detail in Weakly supervised segmentation from bounding box Section. Finally, the parameters of training are explained in Training section. Pre-processing Normalization As is done in other studies, original CT images should be normalized before fed into the neural network. Due to the existence of bones, contrast media and air in the intestinal tract, CT values in the abdominal CT image or extracted ROIs can range from -1000HU to more than 800HU. Thus, Hounsfield values were clipped to a range of − 200 to 500 HU. After thresholding, the pixel values in all images are normalized to 0~1 by Min-Max Normalization: Bounding box generation In this paper, bounding boxes are generated by ground truth of renal tumors. As shown in Fig. 3, the bounding box of ground truth is shown in the dotted line. The parameter d in pixel represents the margin added to the bounding box in our experiment to generate different types of weak annotations. In addition, the reference labels of renal tumors in the training dataset were only used to generate bounding boxes and not used for CNN training, and the reference labels in the testing dataset were used for quantitative evaluation. The bounding boxes with different margins are defined according to the ground truth and used as weak annotations for CNN training. We set d to be 0, 5 and 10 pixels ( Fig. 4(a)-(c)) in our study to simulate the manual weak annotations by radiologists. If the bounding boxes with margin d are beyond the range of images, it will be limited in the region of images. As shown in Fig. 4, the comparison of bounding boxes with different margin values is given. Weakly supervised segmentation from bounding box Three main steps are included in the proposed method as shown in Fig. 5. Firstly, we get pseudo masks from bounding boxes by convolutional conditional random fields (ConvCRFs) [30]. Then, in the group training stage, several CNNs are trained by using pseudo masks. Fusion masks and voxel-wise weight map are generated based on the predictions of the CNNs trained in this stage. 
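A small sketch of the pre-processing just described (HU clipping, min–max normalization, and bounding-box weak labels with margin d). The helper names are illustrative, not the authors' code:

```python
import numpy as np

def preprocess_ct(volume_hu, lo=-200.0, hi=500.0):
    """Clip CT values to [-200, 500] HU and min-max normalize to [0, 1]."""
    v = np.clip(volume_hu, lo, hi)
    return (v - lo) / (hi - lo)

def bounding_box_mask(tumor_mask, margin_d=5):
    """Weak label: axis-aligned bounding box of the tumor, dilated by d voxels
    and clamped to the image extent."""
    box = np.zeros_like(tumor_mask, dtype=np.uint8)
    idx = np.argwhere(tumor_mask > 0)
    if idx.size == 0:
        return box
    lo_c = np.maximum(idx.min(axis=0) - margin_d, 0)
    hi_c = np.minimum(idx.max(axis=0) + margin_d + 1, tumor_mask.shape)
    box[tuple(slice(a, b) for a, b in zip(lo_c, hi_c))] = 1
    return box
```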
In the last stage of weighted training, the final CNN is trained with the fusion masks and a voxel-wise weighted cross-entropy (VWCE) loss function. These three main stages are described in the following Pseudo masks generation, Group training and fusion mask generation, and Training with VWCE loss sections respectively. Pseudo masks generation As adopted by other methods [3,18], the pseudo masks of renal tumors are generated from bounding boxes as an initialization for CNN model training. The quality of the pseudo masks influences the performance of the CNN (Fig. 3 shows the bounding box with margin d that is used as the weak annotation derived from the label of the renal tumor). Inspired by fully connected conditional random fields (CRFs) [31], this problem can be regarded as maximum a posteriori (MAP) inference in a CRF defined over pixels [5]. The CRF potentials take advantage of the context between pixels and encourage consistency between similar pixels. Suppose an image X = {x_1, …, x_N} and the corresponding voxel-wise labels Y = {y_1, …, y_N}, where y_i ∈ {0, 1}: y_i = 0 means x_i is located outside the bounding box, while y_i = 1 means x_i is located inside the bounding box. The CRF conforms to the Gibbs distribution, and the Gibbs energy can be written as E(Y) = ∑_i ψ_u(y_i) + ∑_{i<j} ψ_p(y_i, y_j) (2), where the first term is the unary potential, representing the energy of assigning class y_i to the pixel x_i, which is given by the bounding box. The latter term is the pairwise potential, which represents the energy of assigning labels y_i and y_j to two pixels x_i and x_j in the image. In fully connected CRFs, the pairwise potential is defined as ψ_p(y_i, y_j) = μ(y_i, y_j) ∑ w · g(f_i, f_j) (3), where w is a learnable parameter, g is the Gaussian kernel defined over the feature vectors f, and μ is a label compatibility function. However, because volumetric images were used in our study, the computation of fully connected CRFs has a high time complexity. Thus, inspired by Teichmann et al. [30], ConvCRFs were used for our pseudo mask generation. ConvCRFs add an assumption of conditional independence to fully connected CRFs. Here, the Gaussian kernel changes to a truncated form, g(f_i, f_j) = exp(−‖f_i − f_j‖² / (2θ²)) within the local filter window and zero otherwise (4), where θ is a learnable parameter and the pairwise energy is set to zero when the Manhattan distance D between pixels x_i and x_j exceeds the window size. The complexity of the pairwise potential is simplified when conditional independence is added. The merged kernel matrix G is calculated as ∑ w · g, and the inference result is ∑ G • X, which is similar to the convolutions of CNNs. This assumption makes it possible to reformulate the CRF inference in terms of convolutions, which allows efficient GPU computation and end-to-end feature learning. Thus, we can quickly obtain pseudo masks of renal tumors by minimizing the objective function defined by Eq. (2). Group training and fusion mask generation Once the pseudo masks of renal tumors have been generated, these masks are fed into a CNN as weak labels for parameter learning. Most weakly supervised segmentation methods use iterative training [5,7] to refine the accuracy of the weak labels from coarse to fine. However, preliminary results showed that this iterative strategy struggles to improve the accuracy of the pseudo masks because of the difficulties of renal tumor segmentation mentioned before. To overcome this problem, we propose a new CNN training strategy instead of the iterative training method. In the group training stage, we have input images {X_1, …, X_M} and pseudo masks {I_1, …, I_M}. The input training dataset is divided into K subsets {S_1, …, S_K}.
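For intuition, here is a much-simplified 2-D illustration of the ConvCRF-style refinement described above. It uses a single truncated Gaussian kernel over a square window and a mean-field-like update on NumPy arrays; it is not the authors' implementation, which operates on 3-D volumes with learned parameters:

```python
import numpy as np

def convcrf_like_refine(unary, feats, radius=3, theta=0.1, w=1.0, n_iters=5):
    """unary: (H, W, 2) class probabilities derived from the bounding box.
    feats: (H, W, C) per-pixel features (e.g. normalized intensity).
    Messages are gathered only from a local window (the truncation in ConvCRFs),
    weighted by a Gaussian of the feature distance."""
    q = unary.copy()
    for _ in range(n_iters):
        msg = np.zeros_like(q)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dy == 0 and dx == 0:
                    continue
                q_n = np.roll(q, (dy, dx), axis=(0, 1))      # neighbour beliefs
                f_n = np.roll(feats, (dy, dx), axis=(0, 1))  # neighbour features
                k = np.exp(-np.sum((feats - f_n) ** 2, axis=-1) / (2 * theta ** 2))
                msg += w * k[..., None] * q_n                # np.roll wraps at borders (fine for a sketch)
        logits = np.log(unary + 1e-8) + msg                  # encourage agreement with similar neighbours
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        q = e / e.sum(axis=-1, keepdims=True)
    return q  # refined per-pixel probabilities; argmax gives the pseudo mask
```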
For each subset S_k, a CNN f(X; θ_k), X ∈ S_k, with parameters θ_k is trained. In total, we can get K CNNs trained in this stage. After that, for each image X_m, we can get K predictions {P_m^1, …, P_m^K} of renal tumors from these CNN models, where P_m^k = f(X_m; θ_k). Pseudo code of the group training is shown in Algorithm 1. It is worth mentioning that each image in the training dataset is used to train only one CNN model in this stage. Once the K CNN models are trained successfully, all the images in the training dataset are used to test each CNN model and obtain K prediction results. Thus, the proposed group training strategy can ameliorate the overfitting of the model. In order to alleviate the under-segmentation in the K predictions, a mask image is generated by fusing these predictions. The fusion mask is defined as FM = ConvCRF(PM ∪ P^1 ∪ … ∪ P^K) (5), where FM indicates the fusion masks and PM indicates the pseudo masks generated in the Pseudo masks generation section. The ConvCRF is adopted to refine the union of all prediction masks, and its outputs are used as the new weak labels for the next weighted training stage. In addition, a weight map is generated simultaneously, defined by Eq. (6): when the predicted label of a voxel is renal tumor in a prediction result, its weight v_m is an integer within the range of 1 to K + 1; when v_m is equal to 0, its value is reset to K + 1 to represent the weight of the background. Training with VWCE loss After the Pseudo masks generation and Group training and fusion mask generation sections, the fusion masks of the training dataset are generated for the final CNN model training in this stage. Only the final CNN model will be used for the testing dataset evaluation. In this stage, we train the CNN on the whole training dataset with the fusion masks. In addition, a new voxel-wise weighted cross-entropy (VWCE) loss function is designed to constrain the CNN training procedure. The traditional cross-entropy loss is defined as L_CE = −(1/M) ∑_{m=1}^{M} ∑_{c=1}^{C} FM_{m,c} log f_c(X_m; θ) (7), where FM are the fusion masks defined in Eq. (5), f(X; θ) are the outputs of the CNN, M represents the number of samples and C represents the number of classes. In Eq. (7), pixels belonging to different classes have equal weight. For the case of unbalanced datasets, [32] proposed the weighted cross-entropy loss L_WCE = −(1/M) ∑_{m=1}^{M} ∑_{c=1}^{C} w_c FM_{m,c} log f_c(X_m; θ) (8), where w_c represents the weight of class c. Considering the weak annotations used in the training procedure, the voxel-wise weight map generated in the previous stage represents the probability of the predicted class given in the fusion mask. Thus, the voxel-wise weights obtained from Eq. (6) are introduced into Eq. (8) as an additional per-voxel factor, which defines the VWCE loss of Eq. (9). Finally, we conduct the final CNN model training with the VWCE loss function on the fusion masks. All our evaluations are conducted on the CNN trained in this stage. Data augmentation The ROIs of the pathological kidneys were cropped from the original images. The size of the ROI is fixed at 150*150*N. Due to the limited memory of the GPU, the original ROIs were resampled to 128*128*64 before being fed into the network. For each volume, random crops and flipping were used for data augmentation. After data augmentation, the original 120 CT images were augmented into 14,400 images for the CNN training. Parameter settings The inputs are the ROIs of kidneys and the bounding boxes, without any other annotations. Considering that UNet [32] has been widely used for medical image segmentation, we adopted UNet as the CNN model in stages 2 and 3 in our experiments.
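A compact sketch of the group-training and fusion logic described above. Here `train_fn`, `predict_fn` and `refine_fn` are placeholders, and the weight-map formula is one plausible reading of the prose, since Eq. (6) is not reproduced legibly in the text:

```python
import numpy as np

def group_train_and_fuse(images, pseudo_masks, K, train_fn, predict_fn, refine_fn):
    """Illustrative group-training loop (names and signatures are assumptions).
    train_fn(subset)      -> trained model
    predict_fn(model, x)  -> binary tumor prediction for one image
    refine_fn(mask, x)    -> ConvCRF-style refinement of a fused mask."""
    subsets = np.array_split(np.arange(len(images)), K)
    models = [train_fn([(images[i], pseudo_masks[i]) for i in s]) for s in subsets]

    fusion_masks, weight_maps = [], []
    for x in images:
        preds = [predict_fn(m, x) for m in models]       # K predictions per image
        union = np.clip(np.sum(preds, axis=0), 0, 1)     # union alleviates under-segmentation
        fusion_masks.append(refine_fn(union, x))
        # voxel-wise weight: 1 + number of models voting "tumor"; voxels with no
        # votes (background) get weight K + 1 -- an assumed reading of Eq. (6)
        votes = np.sum(preds, axis=0)
        w = votes + 1
        w[votes == 0] = K + 1
        weight_maps.append(w)
    return fusion_masks, weight_maps
```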
The network parameters are updated by means of the back-propagation algorithm using the Adam optimizer. The initial learning rate was set to 0.001 and decreased according to decayed_learning_rate = learning_rate × decay_rate^(global_step / decay_steps). In each epoch of training, it takes 3600 iterations to traverse all the training images with a batch size of 4. The class weights of the cross-entropy w_c in Eqs. (8) and (9) were set to 1.0 and 0.2 for renal tumor and background respectively. In stage 2, we set the number of subsets K to 3 for the training dataset of 120 CT images, so each subset contains 40 CT images. Three CNN models were trained to generate the corresponding predictions for each training image, and the fusion masks were generated from these predictions. The loss used in this stage is the WCE loss defined in Eq. (8). In stage 3, the final CNN is trained with the fusion masks as weak annotation labels. We evaluated the performance of the final CNN model with 80 patient images. In order to remove some misclassified outlier voxels, a connected component analysis with 18-connectivity in 3D was finally carried out. The largest connected component in the output of the final CNN model was extracted as the segmentation result of renal tumors. Existing methods We mainly compared against two weakly-supervised methods, i.e., SDI [5] and constrained-CNN [21]. The SDI method uses a 2D UNet to generate weak labels from bounding boxes by recursive training and to carry out the final segmentation. The weakly-supervised information used in the constrained-CNN method includes scribbles and the volume of the target tissue. In this paper, the scribble annotations used in constrained-CNN were generated by applying binary erosion to the ground truth for every slice. Furthermore, a volumetric threshold of the renal tumor was used in the loss function of constrained-CNN; it was set to [0.9 V, 1.1 V], where V represents the volume of the renal tumor in the ground truth. As the UNet architecture was used in [5,21] as well as in our proposed method, a UNet was also trained on the whole training dataset with the pixel-wise labels to generate a fully-supervised UNet model for extensive comparison. Results Our method has been implemented using the PyTorch framework, version 1.1.0. The network training and testing experiments were performed on a workstation with an i7-5930K CPU, 128 GB RAM and an NVIDIA TITAN Xp GPU with 12 GB memory. The comparison of different weak labels and training losses As shown in Table 1, the DSCs between the different masks and the ground truth of the training dataset are displayed. The DSCs of the bounding boxes are 0.666, 0.466 and 0.341 when the margins of the bounding box were set to 0, 5 and 10 pixels respectively. The DSCs of the pseudo masks generated by ConvCRFs reach 0.862, 0.801 and 0.679. The fusion masks generated after group training have even higher DSCs than the pseudo masks. Obviously, the rectangular bounding boxes were improved significantly by Stage 1 and Stage 2. Furthermore, the improvements of the weak labels contribute to the training of the final CNN model. Figure 6 shows the training loss of the final CNN model with different parameters. Without group training, the training loss shows the slowest decrease and the highest loss value during training. In contrast, the use of group training and the VWCE loss makes the model converge faster and to a lower loss.
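A hedged PyTorch sketch of the VWCE loss and the exponential learning-rate decay quoted above; the class-index ordering and the reduction are assumptions, and the authors' exact implementation may differ:

```python
import torch
import torch.nn.functional as F

def vwce_loss(logits, target, voxel_w, class_w=(0.2, 1.0)):
    """Voxel-wise weighted cross-entropy (a sketch of Eq. (9)): per-voxel CE with
    class weights (index 0 = background 0.2, index 1 = tumor 1.0), additionally
    scaled by the voxel-wise weight map produced by group training."""
    cw = torch.tensor(class_w, device=logits.device)
    # logits: (B, 2, D, H, W); target, voxel_w: (B, D, H, W)
    ce = F.cross_entropy(logits, target, weight=cw, reduction="none")
    return (voxel_w * ce).mean()

def decayed_lr(base_lr, decay_rate, global_step, decay_steps):
    """Exponential decay of the Adam learning rate:
    lr = base_lr * decay_rate ** (global_step / decay_steps)."""
    return base_lr * decay_rate ** (global_step / decay_steps)
```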
Evaluation of segmentation results of renal tumors in the testing dataset with different parameters The DSC, Hausdorff distance (HD) [33] and average surface distance (ASD) were adopted to evaluate the segmentation results of our proposed method. The segmentation results of renal tumors in the testing dataset were obtained with different settings of parameters, i.e. number of groups, loss function and margin of bounding box. The comparison of DSCs in the testing dataset is displayed in Table 2. k = 0 means that the procedure of stage2 not used. In this situation, the pseudo masks generated by ConvCRFs were used as weak labels directly for the final CNN model training in the stage3. The loss functions used during the final model training is marked in the parentheses. MC represents the connected component analysis in the postprocessing step. The impact of group training According to the values in Table 2, group training can effectively improve the DSC. The DSCs increased by 3.4, 5.1 and 2.5% when the margin of bounding box was set to 0, 5 and 10 pixels respectively. The impact of VWCE loss The usage of VWCE loss made further improvement of the DSC. The DSCs increased by 1.2, 3.6, and 2.1% respectively when the margin of bounding box was set to 0, 5 and 10 pixels. In addition, the application of VWCE loss and MC can alleviate the outliers in the segmentation result. The values of HD and ASD decreased significantly. Finally, the highest DSCs of 0.834, 0.826 and 0.742 can be achieved respectively when different margins of bounding box were set. Figure 7 Shows the 2D visualization of segmentation results with different parameters. Obviously, renal tumors cannot be segmented precisely without group training as shown in Fig. 7(a). With the application of group training, the over-or under-segmentation of tumors is significantly improved (Fig. 7b). However, the segmentations of the boundary are still imprecise. With the application of group training and VWCE loss function, the best segmentation results have been obtained as shown in Fig. 7(c) The DSC of each case in the testing dataset with different parameters is shown in Fig. 8. For testing dataset, it can be seen that our three-stage training strategy with VWCE loss has significantly improved the segmentation results in most images and achieves the best improvement of DSC. Comparison with other methods Three methods including two weakly-supervised methods (SDI and constrained-CNN) and one fullysupervised method (UNet) were used to compare with our proposed method. These methods are briefly summarized in Existing methods section. For model training, the computation time of our proposed method is about 48 h, the SDI method is about 80 h, and the constrained-CNN and fully-supervised UNet are about 24 h. for model testing, the computation time of our proposed method is similar to the fully-supervised method. Our network can generate the segmentation result of a single image in a few seconds Table 3 is the comparison of segmentation results among our method, the other two existing weaklysupervised methods and fully-supervised method. We only compared the bounding box with d = 5 for simplicity. Experiments show that our method achieves the best results of DSC, HD and ASD, which are 0.826, 15.811 and 2.838 respectively. In terms of DSC, neither SDI nor Constrained-CNN reaches the values higher than 0.8. One thing worth to be mentioned is that the evaluation metrics are not improved effectively in SDI after MC since we deal with it in 2D situation. 
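For reference, the Dice similarity coefficient used throughout the evaluation can be computed as follows (a standard formulation, not taken from the authors' code):

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```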
When the margin is lower than 5, the performance of our method is close to the results obtained by the fully-supervised UNet. Fig. 7 The comparison of 2D segmentation results with different parameters: k = 0 with WCE loss (a), k = 3 with WCE loss (b), k = 3 with VWCE loss (c). Contours in green and red correspond to ground truths and segmentation results respectively. Figure 9 shows the comparison of segmentation results obtained by different methods. For the SDI method, the shape of the segmented renal tumor in 3D is not continuous, as shown in Fig. 9(b). Furthermore, SDI and Constrained-CNN still suffer from the under-segmentation problem, while our proposed method (d) presents better segmentation results, which are visually similar to those of the fully-supervised method (e). Discussion According to our experimental results, our proposed weakly-supervised method can provide accurate renal tumor segmentation. The major difficulty for weakly-supervised methods is that the feature maps learned by CNN models can be misled by under- or over-segmentation in the weak masks. Therefore, the key factor in weakly-supervised segmentation is to generate reliable masks from the input weak labels. In this paper, the application of pseudo mask generation and group training improves the quality of the weak masks used for the final CNN model training, as shown in Tables 1 and 2. Furthermore, as shown in Fig. 8, the DSCs of large and small tumors are relatively low. It is easy to understand that the DSCs of small renal tumors are sensitive to over- or under-segmentation in the predictions, while in large tumors the shape and texture are complicated, which makes the segmentation difficult. Although this problem exists in all three methods, our proposed method shows the most significant improvement compared with the other two methods. Finally, one limitation of this study is the lack of validation of the final CNN model with external datasets. The training and testing datasets in this paper are from the same hospital. Additional validation of the final CNN model with multi-center or multi-vendor images will be performed in the future. Due to differences in image acquisition protocols or other factors, the CNN model trained in this paper may not achieve a similar performance on other datasets. However, the parameters of our model can be optimized by fine-tuning with the external datasets to improve the accuracy. In particular, the main advantage of our method is the use of weak labels for network training: it does not take much time for radiologists to generate bounding-box labels. Conclusion In this paper we have presented a novel three-stage training method for a weakly supervised CNN to obtain precise renal tumor segmentation. The proposed method mainly relies on the group training and weighted training phases to improve not only the efficiency of training but also the accuracy of segmentation. Experimental results with 200 patient images show that the DSCs between ground truth and segmentation results reach 0.834 and 0.826 when the margin of the bounding box is set to 0 and 5 pixels, which is close to the fully-supervised model at 0.859. The comparison between our proposed method and the other two existing methods also demonstrates that our method generates a more accurate segmentation of renal tumors than the other two methods. Fig. 9 The comparison of the results from three testing images obtained by different methods: 3D ground truth (a), SDI (b), Constrained-CNN (c), the proposed method (d) and the fully-supervised method (e). Contours in green and red correspond to ground truth and segmentation results respectively.
6,451.2
2019-11-18T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
Numerical validation of ice formation on a lattice structure evaporator This paper presents a comparative analysis between experimental and numerical results of the ice formation process on the surface of the evaporator in a Sparkling Water Dispenser. The experimental setup comprises an R290 refrigeration cycle that is fully equipped with thermocouples, pressure transducers, a wattmeter, and a Coriolis-meter. The evaporator, which is specifically designed to enhance the ice formation speed, is situated within a water-filled tank, where the ice formation process takes place. Due to the phase change phenomenon, which involves the interface between two phases moving, solving transient heat transfer problems involving solidifications can be inherently challenging. Analytical solutions are only possible under certain simplified circumstances. In cases where exact solutions are not available, semi-analytic, approximate, and numerical methods can be utilized to address phase-change problems. A numerical model of the solidification process based on the energy equation and conjugate heat transfer was developed using COMSOL Multiphysics. The software was found to be effective in simulating the physical processes associated with heat transfer through conduction and convection, as well as the behaviour of phase change. The results showed a very good agreement between experimental and numerical results. Introduction Refrigeration systems are widely used in many industries and applications.Among the various applications of refrigeration systems, we also find water dispensers.These devices extract potable water from a source (usually from the water supply or a reservoir), cool it to the setpoint temperature (typically below 10 °C), and dispense it.Given the physical properties of water and the specific requirements of this type of device (which must be small and consume low electrical power), an auxiliary system is needed to handle intermittent peaks of thermal power.Therefore, these devices are generally associated with a thermal storage unit, which mitigates load peaks and prevents the refrigeration circuit from being oversized and activated whenever there is a demand for chilled water.Thermal storage plays a central role in the operation of a Sparkling Water Dispenser (SWD).Due to its availability, low cost, and high energy density (333 kJ/kg), ice water storage is the most widely used technology in SWDs. 
Another critical component of a refrigeration system is the evaporator.In these devices, the evaporator is immersed inside the storage unit and charges the storage by solidifying the water.Ice formation can significantly reduce the heat transfer rate and increase the system's energy consumption.The ice formation process is complex and depends on various factors, such as the evaporator geometry, ambient and refrigerant temperatures, and the flow rate of the refrigerant.To better understand this process, researchers have developed various experimental and numerical models to investigate the ice formation mechanism and optimize the design of the heat exchanger [1][2][3][4].Given objective to optimize the heat exchanger geometry, the utilization of a numerical model offers a significant advantage in a cost-effective and time-efficient manner.The phase change process has been studied for a variety of geometries, the most frequent of which being shell-and-tube systems, encapsulated systems, and ice-on-coil systems.Ice-on-coil energy storage tanks are widely used in phase change commercial cooling applications [5][6][7][8][9].Numerous studies on ice formation on the coil heat exchangers have been conducted during the past decay using commercial CFD programs.Afsharpanah et al. [9] numerically examined a cuboid-shaped ice container with serpentine tubes and plates by evaluating various dimensionless parameters regarding the flow and geometric aspects.In numerical research, Hamzeh & Miansari [10] looked explored the quantity and configuration of refrigerated tubes as well as the impact of the fin dimensions on ice development.A novel ice-on-coil cold storage system with coil tube was investigated in experimental and numerical research by Mousavi Ajarostaghi et al.,[11] by proposing a new evaporator concerning better cooling process and uniform ice production.In all mentioned research, Ansys Fluent was used for the numerical simulation of the ice formation process.COMSOL Multiphysics is one such software that also can be used for this purpose.However, it is important to validate the simulation results with experimental data to ensure their accuracy. In this paper, we will compare the experimental data obtained from lattice structure evaporators with the simulation results obtained using COMSOL Multiphysics.We will also discuss the agreement between the experimental and simulated results. Refrigerant Loop Figure 1 shows the sketch of the experimental apparatus.The facility consists of a propane (R290) refrigeration cycle composed of a reciprocating compressor (COM), a water-heated evaporator (EV), an air-cooled condenser (CO), and an expansion device (CT).An oil filter (OF) is located before the capillary tube.The evaporator, consisting of a copper finned-tube structure, is placed inside a tank of water kept in motion by an agitator.The temperature of the hot source is kept constant in the laboratory environment, while the cold source's temperature evolves during ice formation on the evaporator's surface. 
Temperatures at each point of interest are measured with calibrated T-type thermocouples with a measurement uncertainty of ±0.2 °C.Thermocouples are located at the inlet of the capillary tube (T1), at the outlet of the capillary tube (T2), at the outlet of the evaporator (T3), at the outlet of the compressor (T4), and in front of the condenser fan (T5).Thermocouples T1, T2, T3, and T4 are placed inside the copper tube in direct contact with the refrigerant fluid.The pressure at the sides of each component is measured with absolute pressure transducers with an uncertainty of ±0.025 bar.The pressure transducers (KELLER-PAA-23 SY Ei) are placed at the inlet of the capillary tube (P1), at the outlet of the capillary tube (P2), at the outlet of the evaporator (P3) and the outlet of the compressor (P4).A Coriolimeter (Bronkhorst -MINI Cori-Flow, CF) placed at the condenser outlet measures the R290 mass flow rate with an uncertainty of ±0.2%.Finally, a digital energy meter (Socomec COUNTIS E03/E04, WM) measures the electrical power absorbed by the compressor. Evaporator and Thermal Storage The evaporator is made of a spiral-shaped copper tube with an 8 mm outer diameter installed on a square base with 19 cm on each side and a height of 21 cm.The heat exchanger tube has seven turns and a longitudinal copper fin that runs its entire length except for the corners.The fin has a thickness of 1 mm and a length of 24 mm.In addition, 80 copper bars with a 3 mm thickness, spanning the entire height of the heat exchanger, are positioned perpendicular to the fins.The thermal storage, a water-filled tank containing 5.4 litres of water, holds the evaporator.A 10 cm-long radial agitator is in the center.Six T-type thermocouples, arranged between the fourth and fifth turns starting from the top, are used to instrument the heat exchanger.These are positioned between two fins and two rods and installed into a 3D-printed support, as shown in figure 2. Thermocouples are calibrated and of the same type as those used in the refrigerant loop, with an uncertainty of 0.2 °C. Figure 2. Six thermocouples location on two different surfaces. Experimental Procedures The experimental procedure unfolds as follows.The storage is at 27 °C, then, the refrigeration cycle and the agitator inside the storage are turned on, starting the sensible cooling phase.When the average temperatures measured by the thermocouples are close to 0°C, the agitator is switched off, and the ice formation begins.The test ends when 4 kg of ice has been formed.The water level and the energy balance on the evaporator assess the amount of ice.Three experiments were conducted one day apart making sure that the same environmental temperature of 22 °C was maintained Numerical simulation The most suitable method for simulating our case is the conjugated heat transfer approach, as it accurately represents the physics involved.This approach considers the heat transfer occurring at the interface between a copper tube carrying the heat transfer fluid (refrigerant) in motion and the stationary storage fluid (water), which undergoes a phase change in both processes.As shown in figure 3a, the study was carried out using a 3D geometry, and transient analysis under conduction and convection heat transfer was used to account for the time dependence of the problem as well as for the heat transfer and fluid flow in the entire system.The propane temperature was determined based on the pressure that was observed throughout the test. 
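A back-of-the-envelope version of the evaporator-side energy balance mentioned in the experimental procedure, with assumed property values; this is only an illustration of how the extracted heat can be converted into a formed-ice estimate, not the authors' procedure:

```python
# Rough ice-mass estimate from an evaporator-side energy balance (illustrative).
LATENT_HEAT = 333.0   # kJ/kg, latent heat of fusion of water
CP_WATER = 4.186      # kJ/(kg K), specific heat of liquid water

def ice_mass_from_energy(q_evap_kj, water_mass_kg, dT_sensible_k):
    """q_evap_kj: heat extracted by the evaporator during the test [kJ];
    dT_sensible_k: remaining sensible cooling of the liquid water [K]."""
    q_latent = q_evap_kj - water_mass_kg * CP_WATER * dT_sensible_k
    return max(q_latent, 0.0) / LATENT_HEAT

# toy usage with assumed numbers (not measured values from the paper)
print(ice_mass_from_energy(q_evap_kj=1500.0, water_mass_kg=5.4, dT_sensible_k=1.0))
```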
Grid Mesh In accordance with the physics and geometry concerns, the computational grid was created using COMSOL Meshing (figure 3b). A sensitivity analysis was done to choose the best grid while taking accuracy and computing cost into account. To do this, the simulation was run with varying cell sizes: coarser (1.9 million cells), coarse (3.4 million cells), normal (6.8 million cells), and fine (9.6 million cells). By comparing the temperature variations at five different times, the normal and fine grid meshes were found to differ only slightly. As a result, all simulations were done using the normal grid mesh. Governing Equations The buoyancy effects are taken into account using the Boussinesq approximation. The continuity equation is ∇ · u⃗ = 0 (2). The momentum and energy equations are solved in their incompressible, transient form for the conjugated heat transfer problem, where h_s and h_l are the sensible and latent enthalpy. In these equations, the thermophysical properties are calculated from the properties of the two phases, where L_{1→2} is the latent heat of the phase change and α is the thermal diffusivity. Boundary Conditions, Initial Conditions For boundary conditions, the following presumptions are made: - The outer, top, and bottom boundaries are considered perfectly insulated surfaces. - Due to the symmetry of the geometry, one-eighth of the geometry was simulated, and two surfaces were treated as symmetry planes. - A no-slip condition was applied on all walls. - The refrigerant temperature, derived from the measured pressure and mass flow rate inside the evaporator tube, is used for the convective heat flux boundary condition on the inner surface of the tube. The temperature difference between tube rows was neglected. The initial condition for the simulations is a temperature field of 27 °C for all domains. All material properties used in the simulations were treated as temperature-dependent. Some parameters related to the simulation are presented in Table 2. Solver Setting Conjugated heat transfer, turbulent, and laminar flow models are employed as the physics for the simulation. A time-dependent study is used, with a total duration of 3500 seconds and a time step of 1 second. Three variables, pressure (P), velocity (V), and temperature (T), are used to solve the physics. The solver employed by COMSOL is the coupled direct solver PARDISO.
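Phase-change models of this kind typically smooth the solid–liquid transition over a small temperature interval and blend the two phases' properties with a liquid-fraction indicator. A minimal sketch of that idea, with an assumed half-width and illustrative property values rather than the values used in the paper:

```python
import numpy as np

T_MELT = 0.0   # degC, phase-change temperature of water
DT = 0.5       # degC, assumed half-width of the smoothed transition interval

def liquid_fraction(T):
    """Smoothed liquid fraction over [T_MELT - DT, T_MELT + DT]."""
    return np.clip((T - (T_MELT - DT)) / (2 * DT), 0.0, 1.0)

def mixture_property(T, prop_solid, prop_liquid):
    """Phase-weighted material property (e.g. thermal conductivity) of the
    water/ice mixture."""
    f = liquid_fraction(T)
    return f * prop_liquid + (1.0 - f) * prop_solid

# illustrative values: k_ice ~ 2.2, k_water ~ 0.6 W/(m K)
print(mixture_property(np.array([-2.0, 0.0, 2.0]), prop_solid=2.2, prop_liquid=0.6))
```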
Results and Discussion The simulation is implemented by dividing it into two stages (0–1250 s and 1251–3500 s). The first stage involves modelling with the turbulent flow physics interface, matching the experimental conditions with a working impeller. In the second stage, the simulation switches to laminar flow to replicate the conditions without the impeller. The results of the numerical modelling of the ice-formation process in the geometry under study, and their comparison with the experimental data, are presented in this section. Figure 4a shows the comparison between numerical and experimental data for the six temperatures along the process, and the absolute error between experimental and numerical results is represented in figure 4b. The maximum absolute error for the six thermocouples is below 2 °C. It is clear that the simulation results are in good agreement with the experimental data. The velocity profile and streamlines at surface 1 are presented in figure 5a. The temperature contours for surface 1 at different times are illustrated in figure 5b. The observation reveals that, as time passes, a growing region surrounding the heat exchanger attains the temperature required for solidification. Subsequently, this area solidifies, and the temperatures within these regions even fall below the phase change temperature. To examine how ice is distributed within the storage, one can use the contours of the liquid fraction. Figure 5c shows the phase fraction contours at different times. These findings are in excellent accord with the temperature contours shown in figure 5b. Conclusion This study used a 3D numerical simulation to examine the ability of COMSOL Multiphysics to simulate the ice formation process on a fin-and-tube heat exchanger. The numerical results were validated against experimental results of an evaporator that is part of a refrigeration cycle. The findings of this study demonstrate that the various physical phenomena involved in water flow, heat transfer through conduction and convection, and phase change can be effectively simulated using the COMSOL Multiphysics software. Future work will involve investigating the complete refrigeration cycle, encompassing ice formation and ice melting, using COMSOL Multiphysics and performing an experimental validation of the numerical results. Figure 1. Schematic of the experimental setup. Figure 4. Comparison between experimental and numerical results for six thermocouples: a) temperature, b) absolute errors. Figure 5. Numerical results for: a) velocity profile, b) temperature profiles at different times, and c) liquid fraction at different times (blue as liquid and white as solid).
[ 1 ] Jannesari H and Abdollahi N 2017 Experimental and numerical study of thin ring and annular fin effects on improving the ice formation in ice-on-coil thermal storage systems Appl.Energy.189 369-384 [2] Mađerić D Čarija Z Pavković B and Delač B 2021 Experimental and numerical study on water ice forming on pipe columns in a limited-volume storage Appl.Therm.Eng.194 117080 [3] Li Y Yan Z Yang C Guo B Yuan H Zhao J and Mei N 2017 Study of a Coil Heat Exchanger with an Ice Storage System Energies 10 1982 [4] Yang T Sun Q and Wennersten R 2018 The impact of refrigerant inlet temperature on the ice storage process in an ice-on-coil storage plate Energy.Procedia.145 82-87 [5] Saraceno L Boccardi G Celata G P Lazzarini R Trinchieri R 2011 Development of two heat transfer correlations for a scraped surface heat exchanger in an ice-cream machine Appl.Therm.Eng. 31 4106-4112 [6] Wang Baolong, Li Xianting, Zhang Maoyong, Yang Xudong 2011 Experimental Investigation of Discharge Performance and Temperature Distribution of an External Melt Ice-on-Coil Ice Storage Tank HVAC&R Res. 9 291-308 [7] Ezan M A Erek A 2012 Solidification and Melting Periods of an Ice-on-Coil Latent Heat Thermal Energy Storage System J. Heat.Transf.134 062301 [8] López-Navarro A Biosca-Taronger J Torregrosa-Jaime B Martínez-Galván I Corberán J M Esteban-Matías J C Payá J 2013 Experimental investigation of the temperatures and performance of a commercial ice-storage tank Int.J. Refrig.36 1310-1318 [9] Mousavi Ajarostaghi S S Poncet S Sedighi K Amiri L 2023 Solidification analysis in an ice-on-coil ice storage system: Experimental and numerical approaches J. Energy Storage 65 107291 [10] Afsharpanah F Pakzad K Mousavi Ajarostaghi S S and Arıcı M 2022 Assessment of the charging performance in a cold thermal energy storage container with two rows of serpentine tubes and extended surfaces J. Energy.Storage.51 104464 [11] Hamzeh H A and Miansari M 2020 Numerical study of tube arrangement and fin effects on improving the ice formation in ice-on-coil thermal storage systems Int.Commun.Heat.Mass.Transf.113 104520 Table 1 . Table 1 summarizes the main characteristics and accuracy of the measuring devices.Characteristics and accuracy of the measurement elements.
3,396.4
2024-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Miniscule differences between the sex chromosomes in the giant genome of a salamander, Ambystoma mexicanum In the Mexican axolotl (Ambystoma mexicanum) sex is known to be determined by a single Mendelian factor, yet the sex chromosomes of this model salamander do not exhibit morphological differentiation that is typical of many vertebrate taxa that possess a single sex-determining locus. Differentiated sex chromosomes are thought to evolve rapidly in the context of a Mendelian sex-determining gene and, therefore, undifferentiated chromosomes provide an exceptional opportunity to reconstruct early events in sex chromosome evolution. Whole chromosome sequencing, whole genome resequencing (48 individuals from a backcross of axolotl and tiger salamander) and in situ hybridization were used to identify a homomorphic chromosome that carries an A. mexicanum sex determining factor and identify sequences that are present only on the W chromosome. Altogether, these sequences cover ~300 kb, or roughly 1/100,000th of the ~32 Gb genome. Notably, these W-specific sequences also contain a recently duplicated copy of the ATRX gene: a known component of mammalian sex-determining pathways. This gene (designated ATRW) is one of the few functional (non-repetitive) genes in the chromosomal segment and maps to the tip of chromosome 9 near the marker E24C3, which was previously found to be linked to the sex-determining locus. These analyses provide highly predictive markers for diagnosing sex in A. mexicanum and identify ATRW as a strong candidate for the primary sex determining locus or alternately a strong candidate for a recently acquired, sexually antagonistic gene.

AUTHOR SUMMARY Sex chromosomes are thought to follow fairly stereotypical evolutionary trajectories that result in differentiation of sex-specific chromosomes. In the salamander A. mexicanum (the axolotl), sex is determined by a single Mendelian locus, yet the sex chromosomes are essentially undifferentiated, suggesting that these sex chromosomes have recently acquired a sex locus and are in the early stages of differentiating. Although Mendelian sex determination was first reported for the axolotl more than 70 years ago, no sex-specific sequences have been identified for this important model species. Here, we apply new technologies and approaches to identify and validate a tiny region of female-specific DNA within the gigantic genome of the axolotl (1/100,000th of the genome). This region contains a limited number of genes, including a duplicate copy of the ATRX gene, which has been previously shown to contribute to mammalian sex determination. Our analyses suggest that this gene, which we refer to as ATRW, evolved from a recent duplication and presents a strong candidate for the primary sex determining factor of the axolotl, or alternately a recently evolved sexually antagonistic gene.

Introduction In many species, sex is determined by the inheritance of highly differentiated (heteromorphic) sex chromosomes, which have evolved independently many times throughout the tree of life (1-3). Often these chromosomes differ dramatically in morphology and gene content (4-6). In mammals, males have a large, gene-rich X chromosome and a degraded, gene-poor Y chromosome, while females have two X chromosomes. In birds and many other eukaryotes, females are the heterogametic sex with a large Z and smaller W chromosome, while males are homozygous, carrying two Z chromosomes. Differentiated sex chromosomes are thought to arise through a relatively stereotypical process that begins when a sex-determining gene arises on a pair of homologous autosomes (5, 6). The acquisition of sexually antagonistic alleles, alleles that benefit one sex and are detrimental to the other, favors the fixation of mutational events that suppress recombination in the vicinity of the sex-determining locus (7,8). Recombination suppression can lead to the accumulation of additional sexually antagonistic mutations and repetitive elements, and over time this results in the loss of nonessential parts of the Y or W chromosome, resulting in the formation of heteromorphic sex chromosomes (9). Unlike the majority of mammals and birds with stable sex-determining systems and heteromorphic sex chromosomes, amphibians have undergone numerous evolutionary transitions between XY and ZW-type mechanisms and may possess morphologically indistinguishable (homomorphic) sex chromosomes, like those of the axolotl (10-13). Homomorphic sex chromosomes are not altogether rare among animals, with examples in fish (14), birds (15), reptiles (16) and amphibians (17). Among most amphibians that have been investigated, homomorphy is prevalent (17-19). It has been suggested that a majority of salamanders have homomorphic sex chromosomes (18,20); however, evidence for genetic sex determination in most species is largely based on the observation of 1:1 sex ratios from clutches without thorough demonstration of Mendelian inheritance. Early developmental/genetic experiments revealed a ZW type sex-determining mechanism for A. mexicanum (21)(22)(23). The first experiment to test for female heterogamety involved sex reversal through implantation of a testis preprimordium from a donor embryo to a host female embryo.
The prospective ovary developed instead into 93 a functional testis. This sex-reversed male was then crossed with a normal female (24). 94 It was expected that if the female were homozygous for sex (XX), the offspring would all 95 be female. If the female were heterozygous for sex (ZW), however, the offspring would 96 have an approximate female to male ratio of 3:1. Two matings with the sex-reversed 97 animals produced a combined 26.1% males, consistent with the hypothesis that the 98 male was indeed a sex-reversed female with ZW chromosomes (21,24). Subsequent 99 studies showed normal sex ratios from matings with the F1 males and most of the F1 100 females, but several of the F1 females produced spawns of all females, suggesting they 101 carried the unusual WW genotype (24). 102 Following these foundational studies, early genetic mapping studies used cold 103 shock to inhibit meiosis II and assessed triploid phenotypes to estimate the frequencies 104 of equatorial separation and map distances between recessive mutations and their 105 linked centromeres (25). Based on these analyses, the sex determining locus was 106 predicted to occur near the end of an undefined chromosome (25) and later estimated 107 to be 59.1 cM distal to the centromere (essentially, freely recombining) (23). 108 Karyotypic analyses later indicated that the smallest chromosomes were 109 heteromorphic in Ambystoma species, suggesting that the smallest pair of 110 chromosomes carried the Mendelian sex determining factor in A. mexicanum (26) and 111 in the A. jeffersonianum species complex (27). However, more recent linkage mapping 112 studies indicated that sex was determined by a locus on one of the larger linkage 113 groups (26, 28), and chromosome sequencing studies have demonstrated that the 114 smallest chromosomes do not carry the sex determining region (29,30). Notably, 115 extensive cytogenetic studies performed by Callan (31), including the use of cold 116 treatments to add constrictions to chromosomes and examination of lampbrush 117 chromosomes from developing oocytes, revealed no features that could be associated 118 with differentiated sex chromosomes. These analyses not only indicated that the sex 119 chromosomes were apparently identical to one another, but also revealed that mitotic 120 chromosomes 9, 10 and 11 were essentially indistinguishable from one another (31). 121 More recently, meiotic mapping of polymorphisms within controlled crosses 122 localized the sex-determining region to the tip of Ambystoma LG9 (previously 123 designated LG5) distal to the marker E24C3 (29). These crosses included a mapping 124 panel that was generated by backcrossing female A. mexicanum/A. tigrinum hybrids 125 with male A. mexicanum. These crosses also revealed no difference in recombination 126 frequencies between the sexes. However, these studies were somewhat limited by the 127 fact that they did not sample large numbers of markers in close proximity to the sex 128 locus or W-specific sequences (29). Taken together, analyses of the Ambystoma sex 129 determination suggest that the sex chromosomes are largely undifferentiated and that, 130 presumably, the sex chromosomes arose recently within the tiger salamander species 131 complex. 132 To identify sex-linked (W-specific) regions in the undifferentiated sex 133 chromosomes of axolotl, we generated sequence reads for 48 individuals of known sex 134 that were derived from a backcross (A. mexicanum/A. tigrinum X A. mexicanum). 
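The expected ratios quoted above follow directly from enumerating the gametes of a ZW × ZW cross; a tiny worked example, assuming (as the text describes) that WW animals are viable females:

```python
from itertools import product
from collections import Counter

# Sex-reversed ZW "male" crossed with a normal ZW female
mother_gametes = ["Z", "W"]
father_gametes = ["Z", "W"]   # sex-reversed female producing sperm

offspring = Counter("".join(sorted(p)) for p in product(mother_gametes, father_gametes))
males = offspring["ZZ"]                                   # only ZZ develop as males
females = sum(v for k, v in offspring.items() if k != "ZZ")  # ZW and WW are females
print(offspring, f"female:male = {females}:{males}")      # {'ZZ': 1, 'WZ': 2, 'WW': 1}, 3:1
```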
These reads were then aligned to an existing reference genome from a female axolotl (30, 32) (www.ambystoma.org). Analyses of read coverage identified 152 putative W-linked sequences, including two genes, an ATRX paralog and an ortholog of MAP2K3. The W-linked ATRX paralog, ATRW, is estimated to have duplicated within the last 20 million years, providing an estimate of the possible origin of the sex-determining locus in the tiger salamander species complex. In addition, we anticipate that these sex-linked markers will be useful for identifying sex in juvenile axolotls within lab-reared populations, where sex is an important covariate for experimental studies, including studies of metamorphosis and regeneration (28, 33). Identification of the sex-bearing chromosomes by FISH Previous studies have demonstrated that sex is linked to the marker E24C3, at a distance of ~5.9 cM distal to the terminal marker on LG9 (29). Consistent with linkage analyses, E24C3 was detected near the tip of an average-sized chromosome (Figure 1). A second BAC corresponding to a marker from the opposite end of LG9 (E12A6) localized to the opposite tip of the same chromosome, indicating that this chromosome corresponds precisely to LG9 (Figure 1). Notably, the BAC carrying E12A6 also cross-hybridized with the centromere of all chromosomes, a feature that could potentially be useful in estimating distances of genes to their respective centromeres. Laser capture, sequencing and assembly of the Z chromosome In an attempt to increase the number of markers that could be associated with the sex chromosome, we performed laser-capture sequencing on a chromosome corresponding to LG9. This library was generated from a single dyad that was collected in a larger series of studies on laser capture microscopy of axolotl chromosomes (34). The sex chromosome library contained a total of ~143 M reads between 40 and 100 bp after trimming and contained 995 reads that mapped to 23 distinct markers (transcripts) that had been previously placed on LG9 (Figure 2). In total, this initial sequencing run accounted for 40% of the markers that are known to exist on the linkage group, which was considered strong evidence that this library sampled the sex chromosome. Given this support, an additional lane of sequencing was performed, yielding ~936 M additional reads (for a total of 1,078,893,614 reads). After trimming, ~542 M reads remained. Alignment to human and bacterial genomes revealed that 1.7% and 0.1% of trimmed reads aligned concordantly to the human genome and bacterial genomes, respectively. These were considered contaminants and were removed from subsequent analyses. Of the remaining reads, 68,844 aligned to 40 LG9 contigs representing 70% of the known markers on LG9 (Figure 2). An error-corrected assembly of these data yielded a total of 1,232,131 scaffolds totaling 242.4 Mb with a scaffold N50 length of 295 bp, and contig N50 length of 126 bp (Table 1: results from other chromosomes are shown for comparison purposes). We also used this library to identify a set of scaffolds from a recently published assembly of a male axolotl genome that could be assigned to the Z chromosome on the basis of sequence coverage. This analysis yielded 2531 scaffolds spanning a total of 1.02 Gb (Supplementary Table 1; Figure 3B).
While a ZW-type mechanism for sex determination has been inferred for the newt (37), the exact chromosome that determines sex is unknown and no candidate genes currently exist. In silico identification of female-specific regions To identify sex-specific regions of the genome, we aligned low coverage sequence data from 26 males and 22 females to both the LG9 assembly and the first public draft assembly of the axolotl genome (30, 32) (www.ambystoma.org). The draft assembly was generated using a modified version of SparseAssembler (38) from 600 Gb of HiSeq paired end reads and 640 Gb of HiSeq mate pair reads. Sequencing data were produced using DNA from a female axolotl, which should contain genomic regions from both Z and W chromosomes. Notably, a recently published draft genome was generated from a male and is not expected to represent W-specific regions (39). Males and females used for re-sequencing efforts were drawn from a previously published meiotic mapping panel, which was used in the initial mapping of the sex locus (29). Each individual was sequenced to ~1X coverage with Illumina HiSeq short paired-end reads (125 bp), resulting in ~7.4 billion total male reads and 6.4 billion total female reads. The ratio of female to male coverage was calculated across ~10.5 M intervals covering ~19 Gb of the draft assembly. Genome-wide coverage ratios generally fell within a tight distribution centered on equal coverage, after accounting for initial differences in average depth of coverage (Figure 4). Intervals were considered to be candidate sex-specific regions if enrichment scores [log2 (female coverage/adjusted male coverage)] exceeded two. In total, these analyses identified only 201 candidate female-specific intervals that were contained within 109 genomic scaffolds, with 20 genomic scaffolds having 2 or more intervals (Supplementary Table 2). The combined size of these intervals is approximately 300 Kb or ~0.0094% of the genome. 47 intervals were represented by zero male reads, and the average male coverage for the other intervals ranged from 0.002 to 8.63. PCR validation of candidate regions PCR primers were designed for all candidate scaffolds and subjected to initial PCR validation using a panel of six females and six males (Supplementary Table 3). In total, primers from 42 of the 109 scaffolds yielded specific amplicons in all females and no amplicons from males and were considered sex-specific. The combined size of these scaffolds is approximately 174 Kb or ~0.0054% of the genome. Aside from the PCR validated female-specific scaffolds, primers from one scaffold were present in all females and one male, two were present in four females and no males, and four were present in a subset of the animals with no specific trend toward one sex or the other. Presumably these represent structural (insertion/deletion) variants that are segregating within the lab population of A. mexicanum, perhaps representing tiger salamander (A. tigrinum) polymorphisms. Alignment of candidate scaffolds to the recently published A. mexicanum (male) genome revealed that several predicted W-specific contigs correspond to copies of repetitive elements with highly similar sequences elsewhere in the genome, which appears to explain a majority of cases wherein primers yield amplicons in both sexes or are polymorphic among males and females.
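A minimal sketch of the coverage-ratio screen described above: for each interval, an enrichment score log2(female coverage / adjusted male coverage) is computed and intervals exceeding the threshold of 2 are flagged. The interval records, the depth-adjustment factor, and the pseudocount are illustrative placeholders, not the actual DifCover inputs or parameters.

```python
import math

# Illustrative per-interval coverage records: (scaffold, start, end, female_cov, male_cov).
intervals = [
    ("scaffold_1", 0, 500, 14.2, 13.8),
    ("scaffold_7", 1200, 1700, 9.6, 0.0),   # zero male coverage -> strong W candidate
    ("scaffold_9", 300, 800, 11.0, 2.1),
]

# Assumed global adjustment for the difference in total sequencing depth:
# male coverage is rescaled to the female depth (6.4 B female vs 7.4 B male reads).
depth_ratio = 6.4 / 7.4
pseudo = 0.01                    # small pseudocount to avoid division by zero

candidates = []
for scaf, start, end, f_cov, m_cov in intervals:
    adj_male = m_cov * depth_ratio
    score = math.log2((f_cov + pseudo) / (adj_male + pseudo))
    if score > 2:                # threshold used in the text
        candidates.append((scaf, start, end, round(score, 2)))

print(candidates)
```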
Identifying W-specific genes To search for evidence of sex-specific genes, all 42 validated sex-specific scaffolds were aligned (blastx) to the NCBI nonredundant protein database (41). In total, these searches yielded alignments to 17 protein-coding genes. The identification of a sex-linked ATRX homolog is notable, as ATRX is known to play major roles in sex determination in mammals and other vertebrates (45-48). Alignments between scaffold SuperContig_990642 and the autosomal ATRX homolog revealed that two distinct ATRX homologs exist in axolotl (Figure 5). Alignments between ATRX and its sex-specific duplicate show polymorphisms in the ATRX gene that are not present in sex-linked ATRX, characteristic of a hemizygously-inherited duplication (Supplementary Figure 1). Henceforth, we refer to the conserved syntenic homolog on LG2 as ATRX and the W-specific homolog as ATRW. A nucleotide alignment between the axolotl ATRX and ATRW genes shows that the genes share 90% identity across 1089 aligned nucleotides, and as such it appears that the two genes diverged relatively recently by transposition of a duplicate gene copy to the W chromosome. To further test this idea and better define the timing of this duplication, several trees were generated using ATRX homologs from multiple vertebrate taxa (Figure 6, Supplementary Figure 2). Based on these trees, we infer that a duplication event gave rise to ATRW within Ambystoma, after divergence from its common ancestor with newt (the two lineages shared a common ancestor ~151 MYA) (49). Considering the degree of sequence divergence and the relative length of shared vs. independent branches, we estimate that the ATRW homolog may have arisen sometime in the last 20 MY (Figure 6B), a timing that roughly coincides with a major adaptive radiation in the tiger salamander lineage (50, 51). To shed further light on the evolution of ATRX and ATRW within the Ambystoma lineage, we examined patterns of derived substitutions in ATRX and ATRW. Across the 251 bp alignment, 9 nucleotide substitutions can be attributed to ATRW since the divergence of axolotl, and these are associated with changes in 2 amino acids. By comparison, ATRX on LG2 shows only 1 nucleotide substitution since the duplication event (Figure 6). This suggests that ATRW may be evolving at a faster rate than ATRX, in which case 20 MY may represent a substantial overestimate for the origin of the duplication that gave rise to ATRW. DISCUSSION Sex chromosome evolution in the axolotl The results from this study show that the homomorphic sex chromosomes of the axolotl contain a small non-recombining region that is specific to the female W chromosome. The female-specific sequence is estimated to be approximately 300 Kb, or roughly 1/100,000th of the enormous axolotl genome. It is not surprising that the differences in recombination were not initially evident due to the physical size of the genome and marker density in the Ambystoma meiotic map (29). With respect to the current fragmented female genome assembly, it is still not possible to predict gene orders within this region or locate possible inversions; however, the data are sufficient to identify robust markers for sex and genes that exist in the non-recombining region.
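The substitution counts reported above can be illustrated with a small polarization count: differences between the two paralogs are assigned to the ATRW or ATRX branch depending on which copy matches an outgroup sequence, and percent identity is computed over the aligned, ungapped positions. The toy sequences below are invented for illustration; they are not the ATRX/ATRW alignment.

```python
# Toy aligned sequences (same length, '-' = gap); invented for illustration only.
atrx  = "ATGGCTACGTTAGC"
atrw  = "ATGGCAACGTCAGC"
outgr = "ATGGCTACGTTAGC"   # e.g. a newt ortholog used to polarize changes

def percent_identity(a, b):
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

derived_in_atrw = derived_in_atrx = 0
for x, w, o in zip(atrx, atrw, outgr):
    if "-" in (x, w, o) or x == w:
        continue
    if x == o:            # ATRX matches the outgroup -> change assigned to the ATRW branch
        derived_in_atrw += 1
    elif w == o:          # ATRW matches the outgroup -> change assigned to the ATRX branch
        derived_in_atrx += 1

print(f"identity: {percent_identity(atrx, atrw):.1f}%")
print(f"derived substitutions -- ATRW: {derived_in_atrw}, ATRX: {derived_in_atrx}")
```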
Of the few protein-coding genes found within the validated sex-specific scaffolds, two appear to represent non-repetitive coding sequences, including one that represents a relatively recent duplication of the transcriptional regulator ATRX. The ATRX gene is located in the non-recombining region of the X chromosome in mammals. The gene encodes a chromatin remodeling protein that belongs to the SWI/SNF family. It is linked to the rare recessive disorder, alpha-thalassemia X-linked intellectual disability, which is characterized by severe intellectual disability, developmental delays, craniofacial abnormalities, and genital anomalies in humans. In some cases, a mutation in the ATRX gene can lead to female sex reversal due to early testicular failure (52, 53). Gene expression studies performed in a marsupial and a eutherian showed that ATRX expression was highly conserved between the two mammals and was necessary for the development of both male and female gonads (48). Because ATRX is one of the few protein-coding genes present in the region of W-specific sequence and has been characterized in the sex differentiation of mammals, we propose ATRW as a candidate sex gene for axolotl, or alternately a strong candidate for an acquired, sexually antagonistic gene. Reanalysis of expression data from recently published tissue-specific transcriptomes showed expression of the ATRX gene (from LG2) in all major tissues and developing embryos; however, they showed no evidence of expression of the ATRW gene (54). The tissues represented in the study included whole limb segments, blastemas from regenerating limbs, bone and cartilage, muscle, heart, blood vessel, gill, embryos, testis, and notably, ovaries. It is not clear at what stage the ovarian tissue was taken; however, the author suggests multiple ovaries were sequenced from an adult, and multiple libraries exist for the tissue. It is possible that this sex-specific gene is simply not highly expressed at this specific stage (or in the adult stage, in general) and may only be expressed during early gonadogenesis. Similarly, W-linked genes in chicken were unknown until RNAseq studies were performed prior to and during gonadogenesis (55). If ATRW is the primary sex-determining gene in axolotl, then the origin of this ... Ongoing improvements to the Ambystoma genome assembly and development of genome assemblies for other salamander taxa should improve our ability to assess hypotheses related to the presence of homomorphic sex chromosomes (e.g. recent evolution, high-turnover, and fountain of youth) (1, 17, 57-62). Additionally, recent efforts to develop genetic tools for the axolotl model should facilitate functional analyses that will be necessary to test whether ATRW is the primary sex-determining gene in axolotl or elucidate its role as a sexually antagonistic factor (63, 64). Methods for achieving targeted gene knockout and knock-ins have been developed in axolotl and could be adapted to better assess the functionality of ATRW in axolotls (40, 65, 66). Utility of sex-linked markers in axolotl research Sex is an important biological variable in research, as it may contribute to variation in experimental studies.
Because axolotl is an important model for many areas of research and has shown sex-specific effects, such as tail regeneration, it is important for investigators to differentiate sex effects from other experimental variables (28). Until now it was necessary to visualize the sex organs, utilize axolotls that had produced gametes, or perform experiments in hybrid crosses that segregate markers at the linked locus E24C3 in order to accurately determine sex in axolotls (29). However, many experiments utilize juvenile animals that may not have completed gonadal differentiation or maturation. With several robust markers for W-specific sequences in hand, it is now possible to precisely differentiate sex of an axolotl with a simple PCR (67). These markers will also positively impact axolotl husbandry, as individuals may be housed and utilized in experiments accordingly. Laser capture microdissection and amplification Preparation of cells for metaphase spreads and laser capture were performed as previously described (30). Briefly, fixed cells were spread on UV-treated 1.0 mm polyethylene naphthalate (PEN) membrane slides. Slides were inverted (membrane side down) over a steam bath of distilled water for 7 seconds. Immediately after steaming, 100 µl of the fixed cells were dropped across the middle of the slide lengthwise. Each slide was subsequently placed in a steam chamber at ~35°C for 1 minute, then set on the hot plate for 5 minutes. After slides dried, chromosomes were stained via immersion in freshly made Giemsa stain (Sigma-Aldrich GS500-500 ML: 0.4% Giemsa, 0.7 g/L KH2PO4, 1.0 g/L Na2HPO4) for 2 minutes, rinsed in 95% ethanol, rinsed in distilled water, then allowed to dry in a desiccator until used. The sex chromosome was captured using a Zeiss PALM Laser Microbeam Microscope at 40X magnification as previously described (30). Sequence analyses and assembly Because amplified sequences contain a non-complex leader sequence corresponding to the pseudorandom primers that are used for whole chromosome amplification, reads were trimmed prior to further processing. Trimmomatic was used to remove leader sequences derived from phiX and to trim any window of 40 nucleotides with quality score lower than Q30 (68). Reads were then aligned to 945 model transcripts from the Ambystoma linkage map (35) using the Burrows Wheeler Aligner with the single-end mapping option and BWA-MEM algorithm (69). They were also aligned to several bacterial genomes as well as the human reference genome using the paired-end mapping option to identify exact matches for Bowtie 2 (70). Paired reads that mapped concordantly to the human and bacterial genomes were considered potential contaminants and removed. After trimming and removal of potential contaminants, the reads were corrected with Blue (71) using female A. mexicanum whole genome shotgun data (30) and assembled with SOAPdenovo2 (72). To assign scaffolds from the whole genome assembly of a male axolotl genome to the Z chromosome, error-corrected laser capture reads were aligned as paired-end reads to the assembly with BWA-MEM and filtered to preserve only pairs with concordant reads that map to the reference with no mismatches (69). For each scaffold we calculated physical coverage (i.e. coverage by paired-end fragments: bedtools v.
2.27, genomeCoverageBed, option pc (73)) and assigned scaffolds to the Z chromosome if at least 5% of their bases were covered by reads from laser capture sequencing. FISH of sex-associated BAC E24C3 Fluorescent in situ hybridization of BACs to metaphase chromosome spreads was performed as previously described (74, 75). A Qiagen Large Construct kit (Qiagen Science, 12462) was used to extract bacterial artificial chromosome (BAC) DNA for E24C3 and E12A6, previously associated with sex (29). Probes for in situ hybridization were labeled by nick-translation using direct fluorophores Cyanine 3-dUTP (Enzo Life Sciences, ENZ-42501) or Fluorescein-12-dUTP (Thermo Scientific, R0101) as described previously (74), and hybridization of BAC probes was performed as previously described for axolotl chromosomes (40). Phenol-chloroform extraction in 1.2X SSC was used to isolate repetitive DNA fractions from female salamander tissue (76). DNA was denatured for 5 minutes at 120°C, then re-associated at 60°C for 1 hour to obtain Cot DNA. Microtubes containing the DNA were placed on ice for 2 minutes, then transferred to a bead bath at 42°C for 1 hour with 5X S1 nuclease buffer and S1 nuclease at a concentration of 100 units per 1 mg DNA. DNA was precipitated with 0.1 volume of 3M sodium acetate and 1 volume isopropanol at room temperature; tubes were inverted several times and centrifuged at 14,000 rpm for 20 minutes at 4°C. DNA was washed with 70% ethanol, centrifuged at 14,000 rpm for 10 minutes at 4°C, air dried and solubilized in TE buffer. Conservation and evolution of salamander chromosomes To evaluate the sex chromosome assembly, we performed alignments between the sex chromosome assembly and reference transcripts (V4: Sal-Site) (32) using megablast (77) to identify genes that occur on the sex chromosome. These genes were then aligned (tblastx) (78) to annotated protein coding genes from the chicken genome assembly (Gallus_gallus-4.0). Annotated genes from scaffolds assigned on the basis of read mapping were aligned (blastp) (78) to this set of annotated chicken genes. Those with an alignment length of at least 50 amino acids and at least 60% identity were considered potential homologs. A similar approach was taken to identify the homologous newt linkage group to assess potential sex candidate genes. Ambystoma reference transcripts from LG9 (V4) were aligned (tblastx) (78) to the chicken genome assembly (41). Using the same minimum thresholds as above, the potential homologs were then used to blast (tblastx) (78) against the newt, Notophthalmus viridescens, reference transcripts (36). Identification of female-specific regions We applied depth of coverage analysis to identify single-copy regions in the assembly that have approximately half of the modal coverage in females and underrepresented/absent coverage in males. Reads were generated on an Illumina HiSeq2000 (HudsonAlpha Institute for Biotechnology, Huntsville, AL) from DNA that was isolated via phenol-chloroform extraction (76) from 48 individuals that were drawn from a previously described backcross mapping panel (42). The resulting reads were aligned to the axolotl draft genome assembly using BWA-MEM (using default parameters) followed by filtering of secondary alignments (samtools view -F2308) and alignments clipped on both sides of the read.
Merging of female and male bam files was performed using Samtools merge (69, 79). We used DifCover (https://github.com/timnat/DifCover) (80) to identify candidate female-specific intervals. Some of the resulting intervals were shorter than 1 Kb and contained fewer than 1000 valid bases (short scaffolds or intervals that fall on the scaffold ends). These shorter intervals were filtered to exclude intervals with fewer than 500 bases or fewer than 200 valid bases. Scaffolds that were validated through PCR in a panel of 6 females and 6 males were aligned to the V4 and V5 Ambystoma transcriptome assemblies in order to identify the genes present on the W-specific portion of the sex chromosome. If a transcript aligned to the scaffold with a percent identity higher than 95%, that transcript was blasted (blastx) (78) against the NCBI nonredundant protein database to search for homologous genes. Primer design and PCR Primers were designed within the sex candidate regions identified using Primer3 (81). Each primer was 25-28 bp in length, with a target melting temperature of 60°C, 20-80% GC content and 150-400 bp product sizes depending on the size of the region and location of repeats (avoiding inclusion of repetitive sequence in primer and product). Fragments were amplified using standard PCR conditions (150 ng DNA, 50 ng of each primer, 200 mM each dATP, dCTP, dGTP, dTTP; thermal cycling at 94°C for 4 minutes; 34 cycles of 94°C for 45 seconds, 55°C for 45 seconds, 72°C for 30 seconds; and 72°C for 7 minutes). Reactions were tested on a panel of six males and six females to validate sex specificity. Gel electrophoresis was performed and presence/absence was recorded for each set of primers (Supplementary Figure 3). The scaffolds from which primers were designed were considered female-specific if the primers yielded specific amplicons in all six females and in no males. Results from these data were used to develop a PCR based assay for determining sex in axolotls at any stage of development. This method uses a primer pair that amplifies a 219 bp DNA fragment in females and an internal control that yields a 486 bp DNA fragment in both sexes. This biplex PCR results in two bands (219 bp and 486 bp) for females and only the control band (486 bp) in males (67). Phylogenetic Reconstruction Homologene was used to collect putative homology groups from the ATRX genes in a variety of eukaryotes (82). Sequence for axolotl ATRX was obtained from Ambystoma reference transcripts, and the newt ATRX gene was obtained by aligning human ATRX to the newt reference transcriptome (83). All sequences were aligned using MEGA7 (84) via MUSCLE (85). Sequences were trimmed to compare a conserved subregion of the sequence that was present in all species, a string of 251 codons (Figure 5). The gray shaded region shows the approximate timing of the ATRW duplication event. The tiger salamander complex consists of 7 named species that occur in the same monophyletic clade as A. californiense, A. mexicanum, and A. tigrinum (56, 91). This tree was generated using Timetree (49) with modification to the position of A. californiense based on Shaffer and McKnight (1996) and Shaffer et al. (2004). Gel electrophoresis of PCR for scaffolds determined to be sex-specific based on computational analyses. Those that show presence in females only are denoted with an asterisk and considered sex-specific.
PCRs were tested on six females and six males, and the associated lanes are denoted with ♀ and ♂, respectively. The first and last lanes are labeled with "L" to denote the 100 bp ladder. Numerical labels correspond to primer information provided in Supplementary Table 3.
7,729.4
2018-06-22T00:00:00.000
[ "Biology" ]
Nucleosynthesis in Strange Star Mergers The possible existence of deconfined matter in the cores of neutron stars has been studied for over three decades without a firm indication either for or against this proposition. Analyses mostly rely on the comparison of mass-radius curves obtained for different compositions with observational data on the masses of the most massive objects of this kind that have been accurately determined. Nevertheless, there are other possibilities for indirectly studying the internal composition of this class of compact objects, e.g., analyzing cooling behavior, X-ray bursts, and supernova neutrinos. We present calculations of the expected nucleosynthesis spectra for the strange star-strange star merger scenario as a means to test the strange quark matter hypothesis and its realization inside such objects. This would be very different from the typical r-process nucleosynthesis expected in neutron star mergers, since the high-temperature deconfinement of strange matter would produce large amounts of neutrons and protons and the mass buildup would proceed in a Big Bang nucleosynthesis-like scenario. The neutron to proton ratio would allow only the iron peak to be reached, a very different prediction from the standard scenario. The resultant light ... Introduction The exact composition of neutron stars is still under debate and possibilities range from protons, neutrons, and electrons to the presence of more exotic components (such as pions) and even total deconfinement to quark matter (see Ref. 1 for a broad review on the subject and references therein). Recent pulsar mass measurements 2-3 point to a rather stiff equation of state, but no definite answer can be provided yet. Among the possibilities, three compositions of these compact objects are widely considered: neutron stars, made of hadronic matter only; hybrid stars, with a quark core containing either two (up and down) or three (up, down, and strange) quark flavors; and strange stars. We investigate the nucleosynthesis and light curve that would result from a strange star-strange star (SS) merger. Given the presumed high abundance of neutrons in the matter ejected in a merger of two neutron stars (NS) or of a neutron star and a black hole, r-process nucleosynthesis is expected to take place. This would render a light curve that peaks a few days following the short gamma-ray burst in the infrared region, due to the high opacity of lanthanide-rich matter: a kilonova. Two such events have been observed, in 2013 (see Refs. 4, 5), with characteristics indicating that the origins were the merger of two neutron stars and of a neutron star and a black hole, respectively. Characteristics of the Ejecta Working within the statistical multi-fragmentation model, as presented in Paulucci & Horvath 2014 (see Ref. 6), we have calculated the fragmentation spectra of strange quark matter in a compact star merger scenario. The amount of ejected matter that should remain as strange quark matter (strangelets) and the amount that should decay into ordinary matter for different fragmentation temperatures indicate that no significant strangelet survival should be expected after fragmentation, independently of the fragmentation temperature and the strange quark matter equation of state (with or without pairing). Given that most of the ejected matter will be ordinary nuclear matter, the nucleosynthesis process will likely be a mass buildup from protons and neutrons.
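The next paragraphs estimate the composition at the start of the mass buildup; as a preview, here is a minimal numerical sketch of that estimate, assuming the equilibrium neutron-to-proton ratio exp(-Δm/T) with Δm ≈ 1.29 MeV and subsequent free-neutron decay over an interval Δt with mean lifetime τ_n ≈ 880 s (both quantities are discussed below). The example freeze-out temperature and Δt are placeholders, not the entries of Table 1.

```python
import math

DELTA_M = 1.293      # neutron-proton mass difference in MeV
TAU_N = 880.0        # neutron mean lifetime in seconds

def n_over_p_equilibrium(T_mev):
    """Equilibrium neutron-to-proton ratio at temperature T (MeV)."""
    return math.exp(-DELTA_M / T_mev)

def n_over_p_after_decay(T_freeze_mev, dt_seconds):
    """Ratio after free-neutron decay between freeze-out and T ~ 1 MeV."""
    return n_over_p_equilibrium(T_freeze_mev) * math.exp(-dt_seconds / TAU_N)

# Placeholder values for illustration only.
T_freeze = 2.0       # MeV
dt = 1.0e-3          # seconds; the expansion down to ~1 MeV is fast for a ~20 km ejecta

print(f"n/p at freeze-out: {n_over_p_equilibrium(T_freeze):.3f}")
print(f"n/p after decay:   {n_over_p_after_decay(T_freeze, dt):.3f}")
# Since dt << tau_n, the decay correction is negligible, as noted in the text.
```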
In order to evaluate the maximum mass achieved, we need to obtain the freeze-out temperature and the neutron/proton ratio at this time. Matter will be ejected with a typical speed of 0.1-0.3 c (see Refs. 7, 8) and will initially expand freely into the interstellar medium. This will cause the radius to grow from R0 ≈ 20 km linearly with time; consequently the density, initially ∼4B ∼ 2ρ0, will drop as t^-3 if one considers spherical expansion. The ejected matter is initially at a temperature of ∼5-10 MeV. The temperature evolution is given by Eq. (2), considering adiabatic expansion of a relativistic monoatomic ideal gas (γ = 4/3). Until the freeze-out temperature, protons and neutrons will be in chemical equilibrium, with relative abundances set by the neutron-proton mass difference of 1.29 MeV. The equilibrium reaction rates can be estimated, and the freeze-out temperature is determined when the reaction rate drops below the expansion rate; using equations (3) and (5), we can write this condition explicitly. From the freeze-out temperature to ∼1 MeV, neutrons will decay. After ∆t = t(1 MeV) − t(T_freeze-out), the final composition ratio for the beginning of the nucleosynthesis process is reduced by the decay factor exp(−∆t/τ_n), with τ_n the neutron mean lifetime. Results are shown in Table 1. Since the mean neutron lifetime is much greater than ∆t, the exponential argument is close to zero, which means that the final neutron to proton ratios are essentially the same as those at freeze-out. Compared to Big Bang nucleosynthesis, when there were 7 protons for each neutron, here we have 1. Table 1. Freeze-out temperature, radius, and proton to neutron ratio for different values of initial temperature and speed of the ejecta for a spherical expansion with initial radius of 20 km, along with the time it takes for the temperature to drop from the freeze-out value to 1 MeV and the corresponding proton to neutron ratio. For the case of T = 5 MeV and v/c = 0.3, the freeze-out temperature is higher than the initial one; in this case, we consider the system to fragment immediately. Nucleosynthesis: Numerical Results For the nucleosynthesis calculation we employed the TORCH code 9, a general nuclear reaction network code. As expected, the obtained neutron to proton ratio renders a nucleosynthesis process which is effective in creating elements in the first mass peak. Fig. (1) gives the most abundant elements created. The final mass fractions are very insensitive to the initial conditions found in the previous section, given the dynamics employed. Different radioactive elements are produced that will contribute to the light curve when decaying. Fig. (2) shows the temporal evolution of the most abundant of those elements along with the energy output they produce. Finally, in Fig. (3) we present the light curve and effective temperature assuming black-body emission as a function of time, considering the expansion speed of the ejecta to be v = 0.1c. It indicates that the total energy output could be compatible with a kilonova event, although the details of the peak wavelength of the emission still have to be analyzed. Also, the influence of the expansion dynamics may be of fundamental importance. Conclusions and Perspectives If strange quark matter is the true ground state of cold baryonic matter and is to be found inside compact stars, forming strange stars, we have shown that the nucleosynthesis following the merger of two such objects would render a very different picture from the standard scenario.
In particular, the most prominent feature would be the total absence of lanthanides, with a mass buildup populating the low-mass (A < 70) region. The results obtained with this simple approach are encouraging due to the production of many radioactive elements that could power the light curve. We intend to investigate the role of the ejecta dynamics on the produced elements, as well as the possibility of crust elements (made of high-mass elements) serving as seed nuclei, and the reproducibility of the infrared glow at the time it was seen for the 2013 kilonova.
1,696.8
2017-01-01T00:00:00.000
[ "Physics" ]
An Adaptive Optimizer for Measurement-Frugal Variational Algorithms Introduction There are various strategies to make use of noisy intermediate-scale quantum (NISQ) computers [1]. One particularly promising strategy is to push most of the algorithmic complexity onto a classical computer while running only a small portion of the computation on the NISQ device. This is the idea behind variational hybrid quantum-classical algorithms (VHQCAs) [2]. VHQCAs employ a quantum computer to efficiently estimate a cost function that depends on the parameters of a quantum gate sequence, and then leverage a classical optimizer to minimize this cost. VHQCAs intend to achieve a quantum advantage with NISQ computers by finding short-depth quantum circuits that at least approximately solve some problem. VHQCAs have been proposed for many applications including ground-state preparation, optimization, data compression, simulation, compiling, factoring, diagonalization, and others. A concern about VHQCAs is that they might require prohibitively many quantum measurements (shots) in order to achieve convergence of the cost function [25], especially for applications like quantum chemistry that require chemical accuracy [26, 27]. In response to this concern, there has been a recent explosion of papers looking to improve the measurement frugality of VHQCAs by simultaneously measuring commuting subsets of the Pauli operators needed for the cost function [28-34]. Here, we approach the problem from a different direction by aiming to improve the classical optimizer. There have been several recent efforts to improve optimizers for VHQCAs [35-39]. Our approach is different from these works in that the optimizer we propose is specifically constructed to achieve measurement frugality. In particular, we develop an optimizer that is adaptive in two senses: it frugally adjusts the number of shots for a given iteration and for a given partial derivative. Our method is inspired by the classical machine learning algorithm named Coupled Adaptive Batch Size (CABS) [40]. For pedagogical reasons, we first directly adapt the CABS algorithm to VHQCA applications and call the resulting algorithm Coupled Adaptive Number of Shots (CANS). In order to achieve greater measurement frugality, we go beyond direct adaptation and modify the optimizer to account for differences in the number of shots needed to estimate individual components of the gradient. We call this method individual-CANS (iCANS). While iCANS is conceptually simple, it nevertheless performs very well. Using IBM's simulator [41], we implement iCANS and other state-of-the-art optimizers such as Adam [42], SPSA [43], and sequential gate optimization [37, 38] for both the variational quantum eigensolver [3] and variational quantum compiling [14-16, 18]. We find that iCANS on average performs the best. This is especially true for our implementations in the presence of noise, i.e., with IBM's simulator of their NISQ device. This is encouraging since VHQCAs must be able to run in the presence of noise to be practically useful.
Ultimately, one can take a multi-pronged approach to reducing measurements in VHQCAs, e.g., by combining our measurement-frugal classical optimizer with the recent advances on Pauli operator sets in Refs. [28-34]. However, one can apply our optimizer to VHQCAs that do not involve the measurement of Pauli operator sets (e.g., the VHQCAs in [7-9]). In this sense, our work is relevant to all VHQCAs. In what follows, we first give a detailed review of various optimizers used in the classical machine learning and quantum circuit learning literature. We remark that this lengthy review aims to assist readers who may not have a background in classical optimization, as this article is intended for a quantum-computing audience. (Experienced readers can skip to Section 3.) We then present our adaptive optimizer, followed by the results of our numerical implementations. Gradient Descent One standard approach to minimization problems is gradient descent, where the optimizer iteratively steps along the direction in parameter space that is locally "downhill" (i.e., decreasing) for some function f(θ). Mathematically, we can phrase the step at the t-th iteration as θ^(t+1) = θ^(t) − α ∇f(θ^(t)) (1), where α is called the learning rate. If one takes a large learning rate, one cannot be sure that one will not go too far and possibly end up at a higher point. For a small learning rate one is more guaranteed to keep making incremental progress (assuming the change in slope is bounded), but it will take much longer to get to a minimum. Knowing an upper bound on the slope is therefore very helpful in determining the appropriate learning rate. To formalize this discussion, we review the notion of Lipschitz continuous gradients. The gradient of a function f is Lipschitz continuous if there exists some L (called the Lipschitz constant) such that ‖∇f(θ^(t+1)) − ∇f(θ^(t))‖ ≤ L ‖θ^(t+1) − θ^(t)‖ (2) for all θ^(t+1) and θ^(t). (We note that in our notation the norm ‖·‖ without a subscript denotes the 2 or Euclidean norm.) When this holds, we can see that the fractional change in the gradient over the course of one step is bounded by αL, meaning that for sufficiently small α we can be sure that we are following the gradient. In fact, the convergence of the basic gradient descent method is guaranteed for deterministic gradient evaluations so long as α < 2/L [40]. In machine learning contexts L is usually unknown, but for VHQCAs it is often possible to determine a good bound. We discuss this alongside an analytic formula for estimating gradients for VHQCAs in the next subsection. Gradient Estimation Working with the exact gradient is often difficult for two reasons. First, the gradient can depend on quantities that are expensive to estimate with high precision. Second, it might be that no analytic form for the gradient formula is accessible, and hence the gradient must be approximated by finite differences. In the following we discuss the two scenarios in more detail.
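As a small illustration of the α < 2/L convergence threshold quoted above, the sketch below runs the basic update on a toy quadratic cost whose Lipschitz constant is known exactly; the cost is a placeholder and is unrelated to any VHQCA cost function.

```python
import numpy as np

# Toy quadratic cost f(theta) = ||theta||^2 / 2, whose gradient is theta and whose
# gradient Lipschitz constant is L = 1.
L = 1.0
theta0 = np.array([1.5, -0.7])

def run_gradient_descent(alpha, steps=30):
    theta = theta0.copy()
    for _ in range(steps):
        theta = theta - alpha * theta   # theta^(t+1) = theta^(t) - alpha * grad f(theta^(t))
    return np.linalg.norm(theta)

print("alpha = 0.5/L :", run_gradient_descent(0.5 / L))   # converges toward the minimum
print("alpha = 1.9/L :", run_gradient_descent(1.9 / L))   # still converges, oscillating slowly
print("alpha = 2.5/L :", run_gradient_descent(2.5 / L))   # diverges: alpha exceeds 2/L
```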
Analytic gradients If one has sufficient knowledge of the structure of the optimization problem under consideration, it might be possible to find analytic expressions for the gradient of the function. In deep learning this is what is provided via the backpropagation algorithm, which allows one to take analytic derivatives with respect to all parameters [44]. However, these formulas are usually expressed as an average over the full sample one has available in a learning task. To decrease the cost of evaluating the gradient, often only a subset of the full sample, a so-called mini-batch, is used to get an unbiased estimate of the gradient [44]. This introduces a trade-off between the cost of the gradient estimation and its achieved precision. In VHQCAs there exist similar scenarios where it is possible to analytically compute the gradients [45-47]. For example, if the parameters describe rotation angles of single-qubit rotations and the cost function is the expectation value of some operator A, f = ⟨A⟩, partial derivatives can be computed as ∂f/∂θ_i = [f(θ + (π/2) e_i) − f(θ − (π/2) e_i)]/2 (3), i.e., the partial derivative is determined by the value of the cost function if one changes the i-th component by ±π/2. However, the value of the cost function can only be estimated from a finite number of measurements, and this number of measurements as well as the noise level of the computation itself determine the precision of the gradient estimates. Therefore it is important to understand how to choose the number of shots, and to keep in mind that for VHQCAs the gradient estimate is always noisy to some extent, even though it is referred to as analytical. An immediate extension of this is that (3) can be used recursively to define higher derivatives. This result then allows one to determine a usefully small upper bound on L in (2). In particular, we note that for operators with bounded eigenspectra, the largest magnitude of a derivative of any order we can find with (3) is precisely half the difference between the largest and smallest eigenvalues λ_max and λ_min, respectively; this yields the bound in (4). For the common case where the eigenspectrum is unknown but we know how to decompose A into a weighted sum over tensor products of Pauli matrices, A = Σ_i a_i σ_i, we can bound the highest and lowest eigenvalues in turn by Σ_i |a_i| and −Σ_i |a_i|, respectively, which gives the bound in (5). By setting equality in (5) (or (4) when we have more information), we therefore find a useful Lipschitz constant. Finite Differencing If one does not have access to analytical gradients, one way to approximate the partial derivatives is by taking a finite δ step in parameter space, as in (6). Again, as in the analytical case, the function values need to be estimated by a finite number of shots, introducing statistical noise. However, as opposed to the analytic case, the estimate (6) is systematically wrong, with an error that scales with δ². Therefore, one might want to decrease the parameter δ during an optimization procedure using such a gradient estimate. Intuitively this makes the optimization harder, and this was recently discussed in the context of VHQCAs [48].
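A minimal sketch of the parameter-shift estimator in (3), applied to a stand-in cost function; the cosine cost and the per-shot noise model are placeholders for a shot-noise-limited expectation value, not the cost functions used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_cost(theta, shots=100):
    """Placeholder for <A> estimated from a finite number of shots."""
    exact = np.cos(theta[0]) * np.cos(theta[1])          # stand-in expectation value
    return exact + rng.normal(scale=1.0 / np.sqrt(shots))

def parameter_shift_grad(theta, shots=100):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        # Eq. (3): derivative from the cost evaluated at +/- pi/2 shifts of the i-th angle.
        grad[i] = (noisy_cost(theta + shift, shots) - noisy_cost(theta - shift, shots)) / 2
    return grad

theta = np.array([0.3, 1.1])
print(parameter_shift_grad(theta, shots=1000))
# Analytic gradient of the stand-in cost, for comparison:
print([-np.sin(0.3) * np.cos(1.1), -np.cos(0.3) * np.sin(1.1)])
```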
Noisy Gradient Descent For the case where one has noise in one's measurement of the gradient, the analysis of a gradient descent procedure becomes more complicated as the best one can achieve are statements about the behavior that can be expected on average.However, so long as one's estimates are unbiased (i.e., repeated estimates average to the true gradient) one should still end up near a minimum.This idea is at the heart of all stochastic gradient descent methods which we discuss now. Stochastic/Mini-Batch Gradient Descent In cases such as VHQCAs (as well as some machine learning applications), we cannot access the gradients directly and therefore need to estimate the the gradients by sampling from some distribution.A standard approach to this case is to choose some number of samples that are needed to achieve a desired precision.This method is known as either stochastic or mini-batch gradient descent.(A mini-batch here refers to a collection of samples, usually much smaller than the total population.) The number of samples as well as the learning rate are usually set heuristically, in order to balance competing interests of efficiency and precision.First, when collecting samples is computationally expensive, it can sometimes be more efficient to take less accurate gradient estimates in order to converge faster, though doing so can be detrimental if it means that one ends up needing to perform an inordinate number iterations [49].Second, it does not make sense to attempt to achieve a precision greater than intrinsic accuracy of the distribution from which one samples.If there is some error expected in the representation of the distribution one samples the gradients from, there is therefore an upper bound on the number of samples that it is sensible to take based on that accuracy [49].For the case of VHQ-CAs, this often means that the upper limit on the number of samples, s max depends on the (usually unknown) bias b noise introduced to the gradient measurements by the noise of the physical quantum device: Since for VHQCAs this bias is a function of the unknown, time varying device noise for the specific gate sequence, often the best one can do is to make a rough estimate about its order of magnitude and use that in the denominator. Typically, the number of samples as well as the learning rate are heuristically adjusted based on the structure of the cost landscape as well as the error level.When little information is known about the optimization problem, the minimization process is optimized either by manual trial and error until an acceptable choice is found or using a hyper-parameter optimization strategy [50]. For a stochastic gradient approach to converge quickly, it is often helpful to decrease the error in the optimization steps during the run of the optimization.This can be done by either decreasing the learning rate α, or minimizing the noise in the gradient estimates.The following two subsections introduce two methods from machine learning that respectively take these two strategies. 
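To make the discussion of shot budgets concrete, the short sketch below estimates a stand-in expectation value from s simulated shots and shows the familiar 1/√s scaling of the standard error; the two-outcome (±1) measurement model is an assumption for illustration, not the setup of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
true_expectation = 0.4            # stand-in for <A> of a single Pauli string, bounded in [-1, 1]
p_plus = (1 + true_expectation) / 2

for s in [10, 100, 1000, 10000]:
    # Each shot yields +1 with probability p_plus and -1 otherwise.
    estimates = [np.mean(rng.choice([1, -1], size=s, p=[p_plus, 1 - p_plus]))
                 for _ in range(200)]
    print(f"s = {s:5d}   std of estimator = {np.std(estimates):.4f}   1/sqrt(s) = {1/np.sqrt(s):.4f}")
```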
Adam Adam is a variant of stochastic gradient in which the step that is taken along each search direction is adapted based on the first and second moment of the gradient [42].To do this, one takes an exponential decaying average of the first and second moment (m t and v t , respectively) for each component of the gradient individually where the square is understood element-wise, g t is the gradient estimate at step t, and β 1 , β 2 are the constants that determine how slowly the variables are updated.The parameters are then updated with the following rule: where mt (v t ) is an initialization-bias-corrected version of m t (v t ), and is a small constant to ensure stability [42].One particular feature of Adam is that the adaptation happens individually for each component of the gradient.We also briefly mention that there is a recent modification to Adam that looks promising, called Rectified Adam (RAdam) [51].RAdam essentially selectively turns on the adaptive learning rate once the variance in the estimated gradient becomes small enough.While Adam has made a large impact in deep learning, to our knowledge it has not been widely considered in the context of VHQCAs. Balles et al. analyzed the problem of choosing the sample size in the context of optimizing neural networks by stochastic gradient descent [40].Their approach is to find the number of samples s that maximizes the expected gain per sample at each iteration. In the following we denote the i-th component of the estimated gradient by g i , the empirical variance of the estimate g i by S i , the actual gradient by ∇f , and the actual covariance matrix (in the limit of infinite samples or shots) of the gradient estimation by Σ. Balles et al. introduce a lower bound G on the gain (improvement in the cost function) per iteration.Accounting for the finite sampling error, they find that the average value of G is [40] Tr(Σ). ( 11) As an immediate consequence, they then find that the expected gain at any step has a positive lower bound if By taking a small but fixed α, Balles et al. then maximize the lower bound on the expected gain per sample by taking samples [40].Unfortunately, this formula depends on quantities Σ and ∇f that are not accessible.Therefore in CABS, Σ is replaced by an estimator Σ and, specializing to the case where the minimum value of f is known to be zero, ∇f 2 is replaced by f /α as the gradient estimator is biased.Since the Lipschitz constant is also often unknown in the machine learning problems they were considering, they also drop the factor of 2Lα/(2 − Lα) [40].CABS then proceeds as a stochastic gradient descent with a fixed learning rate and a number of samples that is selected at each iteration based on (13) with the quantities measured at the previous point, making the assumption that the new point will be similar to the old point. As discussed in the next section, our adaptive optimizer for VHQCAs is built upon the ideas behind CABS (particularly (13)), although our approach differs somewhat. 
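For reference, here is a compact sketch of the per-component Adam update described at the start of this section, using the standard bias-corrected moment estimates; the gradient is a placeholder, and the hyperparameter values follow common defaults rather than anything specific to this paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are the running first and second moments."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)       # bias correction for the initialization at zero
    v_hat = v / (1 - beta2**t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 101):
    grad = theta                     # placeholder gradient of ||theta||^2 / 2
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)
```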
SPSA The simultaneous perturbation stochastic approximation (SPSA) algorithm [43] is explicitly designed for a setting with only noisy evaluation of the cost function, where no analytic formulas for the gradients are available.It is also a descent method, however, instead of estimating the full gradient, a random direction is picked and the slope in this direction is estimated.Based on this estimate a downhill step in the sampled direction is taken: Here g(θ (t) ) is the estimated slope in the random direction and estimated as [52]: where ∆ t is the random direction sampled for the t-th step and ∆ −1 t simply denotes the vector with its element-wise inverses.In order to ensure convergence the finite difference parameter c t as well as the learning rate α t have to be decreased over the optimization run.This is commonly done by using a prefixed schedule [52].In this approach, we have In the original formulation, the idea is usually to estimate the cost function in (15) by a single measurement.However, in a quantum setting it seems intuitive to take a larger number of measurements for the estimation, as was done in [53]. Sequential Subspace Search Another approach to optimizing a multivariate cost function is to break the problem into subparts which are independently easier to handle.The generic idea is to define a sequence of subspaces of parameter space to consider independently.These methods then approach a local minimum by iteratively optimizing the cost function on each subspace in the sequence.Now we discuss two instances of this approach: the famous Powell method [54] as well as a recently proposed method specialized to VHCQAs [37,38]. Powell Algorithm The Powell algorithm [54] is a very useful gradient-free optimizer that specializes the subspace search to the case of sequential line searches.Specifically, starting with some input set of search vectors V = {v i } (often the coordinate basis vectors of the parameter space) this method sequentially finds the set of displacements {a i } along each search vector that minimizes the cost function.Next, the method finds the v j associated with the greatest displacement, a j = max(a i ).This v j is then replaced with the total displacement vector for this iteration, namely: and then the next iteration begins with this updated set of search vectors.This replacement scheme accelerates the convergence and prevents the optimizer from being trapped in a cyclic pattern.In practice, the displacements a i are typically found using Brent's method [55], but in principle any gradient-free scalar optimizer could work.(Gradient-based scalar optimizers would make Powell's method no longer "gradient-free.") Sequential Optimization by Function Fitting In the special case of VHQCAs where the cost function is expressed as an expectation value of some Hermitian operator and the quantum circuit is expressed as fixed two-qubit gates and variable single-qubit rotations, it is possible to determine the functional form of the cost function along a coordinate axis [37].After fitting a few parameters, it becomes possible to compute where the analytic minimum should be in order to find the optimal displacement along any given search direction.This can be scaled up to finding the analytic minimum (exact up to distortions from noise) on some subspace that is the Cartesian product of coordinate axes, though this is hampered by the fact that the number of parameters that must be fit scales exponentially with the dimension of the subspace [37].We will refer to this 
algorithm as the Sequential Optimization by Function Fitting (SOFF) algorithm.We note that a very similar method was published shortly after SOFF [38].The primary difference was the incorporation of the Anderson and Pulay convergence acceleration procedures used in computational chemistry [56,57].We note that, though SOFF and Powell are closely related, due to the limitation to only searching along coordinate axes, it is not possible to take arbitrary search directions, thus SOFF is not quite a special case of Powell's method.For VHQCA problems where it is applicable, SOFF has been demonstrated to be highly competitive with or better than other standard optimization schemes like Powell's method [37,38]. Adaptive Shot Noise optimizer As mentioned above, the basic idea behind our approach is similar to that of CABS [40], but we implement those ideas in a different way.Specifically, by implementing different estimates for the inaccessible quantities in (13) that are suitable to the number of shots in a VHQCA (rather than the batch size in a machine learning method), we arrive at a variant of CABS we name Coupled Adaptive Number of Shots (CANS).Recognizing that a different number of shots might be optimal for estimating each component of the gradient in VHQCAs, we further develop this variation into individual-CANS (iCANS), which is our main result.For pedagogical purposes, we first introduce CANS and then present iCANS. CANS We now discuss our adaptation of CABS to the setting of VHQCAs.In order to use the number of shots recommended by the CABS method, we need to rewrite (13) using only quantities that are accessible.Making use of the parameter shift rule (3), we have access to the Lipschitz constant L given by ( 5).An unbiased estimate of Tr(Σ) is given by d i=1 S i = S 1 , i.e., by the empirical variances of the gradient components.(Here and below d is the number of parameters being optimized.)The naive estimate of ∇f 2 is g 2 , with g := (g 1 , ..., g l ) T the estimated gradient.This estimator is biased (see Equation (17) of [40]), however using a bias-corrected version is numerically unstable.With these choices, we then define CANS as the CABS algorithm with (13) replaced by We note that the learning rate α must be less than 2/L with this formalism.The CANS algorithm is included in Appendix B for completeness.For the remainder of the paper we will focus on iCANS, which we introduce next. iCANS The CANS algorithm is inspired by CABS [40], which was designed for applications in deep learning.Therein for each data point the full gradient is evaluated, and noise arises by considering only a minibatch of the full sample.In VHCQAs, however, each individual partial derivative is estimated independently.This gives us the freedom to distribute measurements over the estimation of the partial derivatives more effectively.This is the idea behind iCANS, which is shown in Algorithm 1 and described below. Algorithm 1 Stochastic gradient descent with iCANS1/2.The function iEvaluate(θ, s) evaluates the gradient at θ using s i shots for the i-th derivative via the parameter shift rule (3).This function returns the estimated gradient vector g as well as the vector S whose components are the variances of the estimates of the partial derivatives. 
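The following sketch shows what an iEvaluate-style routine could look like: each partial derivative is estimated with its own shot count s_i via the parameter-shift rule, and the routine returns both the gradient estimate g and the per-component variances S. The cost function, its shot-noise model, and the variance estimator are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

rng = np.random.default_rng(2)

def single_shot_cost(theta):
    """Placeholder: a one-shot estimate of <A>, modeled as the exact value plus unit-scale noise."""
    return np.cos(theta[0]) * np.cos(theta[1]) + rng.normal(scale=1.0)

def i_evaluate(theta, shots_per_param):
    """Estimate the gradient g and per-component variances S with individual shot counts."""
    d = len(theta)
    g = np.zeros(d)
    S = np.zeros(d)
    for i in range(d):
        shift = np.zeros(d)
        shift[i] = np.pi / 2
        s_i = shots_per_param[i]
        # Single-shot parameter-shift samples of the i-th partial derivative.
        samples = np.array([(single_shot_cost(theta + shift) - single_shot_cost(theta - shift)) / 2
                            for _ in range(s_i)])
        g[i] = samples.mean()
        S[i] = samples.var(ddof=1)   # empirical variance of the single-shot derivative estimates
    return g, S

g, S = i_evaluate(np.array([0.3, 1.1]), shots_per_param=[50, 200])
print(g, S)
```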
(Algorithm 1 concludes with the loop update k ← k + 1 and the end of the while loop.) iCANS prioritizes the individual partial derivatives rather than the gradient magnitude as in (11). For this purpose, we define G_i as our lower bound on the gain (i.e., the improvement in the cost function) associated with the change in parameter θ_i for a given optimization step. Furthermore, we define γ_i as the expected gain per shot, i.e., the expectation value of G_i divided by the number of shots s_i suggested for the estimation of the i-th partial derivative. Note that (19) is an adaptation of (11) to our setting. In analogy with the CANS approach (see (18)), we estimate the number of shots s_i that maximizes (19). As with CANS, we again note that this formalism is only valid if α < 2/L. The idea now is to update each parameter with a gradient-descent step, where each partial derivative is estimated with its individual optimal number of shots. However, empirically those parameters that are close to a local optimal value (and hence have a small expected gain) require a large number of shots, while parameters that are far from convergence (and hence usually have a large expected gain) require a small number of shots. We therefore restrict our algorithm to not take more shots for any partial derivative than a cap we will call s_max. We take s_max to be the number of shots needed in order to estimate the partial derivative for the parameter θ_imax, where i_max is the index associated with the highest expected gain per shot, and we impose s_i ≤ s_max for all partial derivatives. We note that the introduction of this cap on the number of shots is a heuristic choice which we find to often be beneficial to shot frugality, but which removes the guarantee that γ_i will be maximized or even positive. In order to preserve this frugality while retaining the guarantee of positive expected gains, one can also introduce a step that verifies that the learning rate to be used is appropriate after the measurements are taken and adapts it if it is not. Motivated by (12), we check the condition in (23) for each component of the gradient. When this condition fails to hold for the i-th partial derivative, we temporarily replace α with the right-hand side of (23) for the update along that direction. Adding in this check results in a more conservative procedure, as it takes smaller steps when needed in order to enforce that γ_i > 0, and thus restores the guarantee that E[G] > 0. Below, we will refer to iCANS without this learning rate check as iCANS1 and with it as iCANS2. The distinction between iCANS1 and iCANS2 is made in Algorithm 1 with the conditional statements on lines 10 and 12.
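A rough sketch of the shot-allocation idea just described: the recommended shot count for each partial derivative grows with its variance S_i and shrinks with its squared gradient estimate g_i², and all counts are clipped at the cap s_max set by the component with the largest expected gain per shot. The prefactor 2Lα/(2 − Lα) mirrors the CABS-style expression discussed earlier, and the expression used for γ_i here is a schematic stand-in for the missing equations rather than a verbatim transcription.

```python
import numpy as np

def recommend_shots(g, S, alpha, L, s_min=2):
    """Per-parameter shot recommendation with a cap at the best gain-per-shot component."""
    prefactor = 2 * L * alpha / (2 - L * alpha)     # assumed CABS-like prefactor; requires alpha < 2/L
    s = np.ceil(prefactor * S / np.maximum(g**2, 1e-12)).astype(int)
    s = np.maximum(s, s_min)

    # Schematic expected gain per shot for each component (stand-in for gamma_i).
    gamma = ((alpha - L * alpha**2 / 2) * g**2 - (L * alpha**2 / (2 * s)) * S) / s
    s_max = s[np.argmax(gamma)]                     # cap set by the most promising component
    return np.minimum(s, s_max)

g = np.array([0.50, 0.05, 0.20])    # placeholder gradient estimates
S = np.array([1.00, 1.00, 0.50])    # placeholder single-shot variances
print(recommend_shots(g, S, alpha=0.1, L=1.0))
```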
Beyond the core components of iCANS given above, both implementations of iCANS also take two more hyperparameters for increased stability.Since iCANS is intended to be deployed on highly noisy problems, we find that it is beneficial to use smoothed quantities for the gradient and variance when estimating γ i and s i .For this reason, we use bias-corrected exponential moving averages χ i and ξ i in place of g i and S i , respectively, when implementing equations (19) and (20).These exponential moving averages introduce a new parameter, µ, which controls the degree of smoothing and is bounded between 0 and 1.Since the update step is independent of this smoothing, we often find it beneficial to choose µ close to 1 to achieve a steady progression of s i 's.Finally, we also add a regularizing parameter b to the denominators of lines 13, 16, and 20 of Algorthim 1 for numerical stability.By multiplying b by µ k and choosing b to be small, the bias from this regularizing parameter begins small and exponentially decays as the algorithm progresses. Implementations In order to compare the performance of iCANS1 and iCANS2 to established methods, we consider two optimization tasks: variational quantum compiling with a fixed input state [14][15][16]18] and a variational quantum eigensolver (VQE) [3] Figure 1: The quantum circuit diagram for the ansatzes we used to construct the unitary operators U (θ) in our implementations.The angles in each rotation gate (denoted as R j , where j denotes the axis being rotated about) are varied independently.Panel a shows the ansatz used in the compiling and Heisenberg spin chain VQE task, and we note that this is the same ansatz used in Ref. [37].Panel b shows the ansatz used when doing the size scaling comparison with the Ising spin chain VQE task. for a Heisenberg spin chain. For our experiments we set the iCANS hyperparameters as α = 0.1, µ = 0.99, and b = 10 −6 , except for the case of the system size scaling comparison.For that case, since the Lipschitz constant L grows linearly with the system size, leaving α = 0.1 leads to α > 2/L for larger systems, which is invalid for iCANS.We therefore chose α = 1/L for the different length Ising spin chains we consider below. For the other algorithms we compare to, we will denote the number of shots per operator measurement as s.We will denote algorithm A with s shots per operator measurement as A-s (e.g., SOFF with s = 1000 is denoted SOFF-1000).We also note that in the figures and tables below we show the analytical cost and energies that one could achieve with the parameters that the optimizers output, i.e., without hardware noise or shot noise.The optimizers did have to contend with finite statistics and, where indicated, hardware noise to find those parameters. In addition to the fixed number of shots they use, the other algorithms we compare to also come with other hyperparameters, which were chosen empirically in an attempt to get the best performance from each.For Adam we used a learning rate of α = 0.1 along with the momentum parameter values of β 1 = 0.9 and β 2 = 0.999.For SPSA, we found that the default parameters were the best among those that we tried, and thus we set A to be a tenth of the total number of allowed iterations, β = 0.602, and γ = 0.101. 
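As a small illustration of the bias-corrected smoothing and decaying regularizer described at the start of this passage (the exact update rule is not spelled out above, so this is an assumed but standard form):

```python
def ema(avg, new_value, mu):
    """Exponential moving average update with smoothing constant mu in (0, 1)."""
    return mu * avg + (1.0 - mu) * new_value

def bias_corrected(avg, mu, k):
    """Correct the bias toward the zero initialisation after k + 1 updates."""
    return avg / (1.0 - mu ** (k + 1))

# chi smooths g_i and xi smooths S_i; the regularizer decays with iteration k, e.g.
# denominator = bias_corrected(chi, mu, k) ** 2 + b * mu ** k   (b = 1e-6, mu = 0.99)
```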
Variational Compiling with a Fixed Input State For our first optimization task, we follow [37] and consider as a benchmark the optimization of the following cost function: where θ * is a vector of fixed, randomly selected angles and θ is the vector of angles to be optimized over.For this problem, we construct the parametrized unitary operator U (θ) with the ansatz described in Fig. 1(a), setting n = 3 qubits and D = 6.(As adding depth and thus more parameters increases the difficulty of the optimization task and amplifies the effect of the noise model, D = 6 was chosen to increase the difficulty of the task although shorter depth ansatzes would work here.)We then simulate the optimization procedure with one hundred different random seeds (each of which generates a unique random θ * and initial point) and a collection of different optimizers.The results for both the case of a noiseless simulator and the case of a simulator using the noise profile of IBM's Melbourne processor [58] are shown in Fig. 2. For the latter, we emphasize that this noise profile reflects the properties of real, currently available quantum hardware.In addition, the average costs obtained for each optimizer are listed in Tables 1 and 2 with the best value found for each total number of shots expended N shown in bold.Furthermore, see Appendix C for the cumulative probability distributions over cost values, which provide more information than the average cost VQE For our second optimization task, we follow [53] in considering the Heisenberg spin chain with wrapped boundary conditions and the Hamiltonian: where the <> bracket denotes nearest-neighbor pairs.For the purpose of our comparison, we fix J = 1 and B = 3 and again consider the ansatz described in Fig. 1(a).Running the comparison with n = 3 qubits in a triangle and D = 6 for the ansatz, we simulate running VQE with one hundred different random seeds and initial points, along with the same set of optimizers as in the benchmark case above.As before, the results for the both a noiseless and a noisy simulator (also using the IBM Melbourne processor's noise profile [58]) are shown in Fig. 3. Again, the average energies obtained for each optimizer are listed in Tables 3 and 4 with the best value found for each total number of shots expended N shown in bold.In addition, see Appendix C for the cumulative probability distributions over energy values, which provide more information than the average energy values. Comparison of Scaling In order to compare the performance of iCANS to that of other optimizers when one scales up the number of qubits, we now consider VQE applied to Ising spin chains of differing lengths with open boundary conditions and the Hamiltonian: where the <> bracket again denotes nearestneighbor pairs.In order to generate enough entanglement in the ground state to require a modest depth, we choose g = 1.5 so that we are near but not at the critical point g = 1.For this problem, we used the ansatz shown in Fig. 1(b) with D = 3 (two repetitions of the block shown in braces), as its performance for this problem was significantly better than the simple ansatz in Fig. 1(a). The results for a noiseless simulator for 4, 6, 8, 10, and 12 qubit Ising spin chains are shown in Fig. 4. 
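Since the displayed Hamiltonians for the Heisenberg and Ising tasks are not reproduced in this text, the following sketch constructs common forms consistent with the description (nearest-neighbour couplings, wrapped versus open boundaries, J = 1, B = 3, g = 1.5); the field direction and overall sign conventions are assumptions, not the paper's equations.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(ops, n):
    """Tensor a {site: 2x2 matrix} dictionary into an n-qubit operator."""
    return reduce(np.kron, [ops.get(q, I2) for q in range(n)])

def heisenberg_ring(n=3, J=1.0, B=3.0):
    """Heisenberg chain with wrapped (periodic) boundaries and a Z field (assumed axis)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n                          # wrapped boundary condition
        H += J * sum(embed({i: P, j: P}, n) for P in (X, Y, Z))
        H += B * embed({i: Z}, n)
    return H

def ising_chain(n=4, g=1.5):
    """Transverse-field Ising chain with open boundaries (signs assumed; critical at g = 1)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):                       # open boundary condition
        H += embed({i: Z, i + 1: Z}, n)
    for i in range(n):
        H += g * embed({i: X}, n)
    return H

# Exact ground-state energies, useful as the reference level for Delta-E plots
E0_heis = np.linalg.eigvalsh(heisenberg_ring()).min()
E0_ising = np.linalg.eigvalsh(ising_chain()).min()
```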
Discussion

Here we report on the behavior of the various optimizers we studied. First we consider SOFF, which is the only optimizer studied here other than iCANS that was formulated specifically for VHQCAs. By leveraging analytical knowledge about the optimization landscape, SOFF's gradient-free method of making single-parameter updates allows it to train quickly in low-noise environments. However, the limited precision when fitting the analytical function with a finite number of shots means that SOFF hits a precision floor and cannot improve past that point. Additionally, hardware noise tends to distort the landscape in such a way that the analytical form no longer provides as good a fit, making SOFF struggle more relative to the other optimizers considered. In the optimization tasks we looked at here, we found that SOFF was often competitive with iCANS shortly before hitting its precision floor, with SOFF-100 sometimes doing better for a brief interval early on. For example, SOFF-100 was the best-performing optimizer for the compilation task with N = 10^3 (noiseless and noisy) and N = 10^4 (noisy only), as well as for the Heisenberg VQE with N = 10^4 (noiseless and noisy).

Adam was originally conceived in the context of machine learning and excels at optimizing in noisy environments. However, in our numerical studies we found that Adam suffered from an instability for the hyperparameters we chose and the number of shots we allowed at each partial-derivative evaluation. This appears to enter later in the optimization when we are working with more shots, and it can be seen in the upturn of the curves in Figs. 2-4. For the case of the noisy compilation task, Adam-100 looks like it might just be reaching that instability at the end of the allowed shot budget, and slightly outperformed iCANS1 to be the best on average. We note that, similar to what was seen with SOFF, Adam was usually competitive with the iCANS methods before it reached the point where it stopped improving.

Unlike SOFF and Adam, SPSA did not seem to hit a point at which it stopped improving with shot budget, for the chosen hyperparameters. We note, though, that SPSA is the most sensitive to perturbations of the hyperparameters among the methods studied here and can become very unstable if they are incorrectly chosen. However, if one hits upon the correct hyperparameters, SPSA can be very effective. While for our cases we did not find SPSA outperforming iCANS, we note that for the noiseless Heisenberg VQE task, SPSA-100 was the most competitive with iCANS. Overall, we find that iCANS performed well on all optimization tasks considered, with either iCANS1 or iCANS2 usually providing the best result for a given total shot budget N. Even when scaling up the system size in the Ising model VQE task (see Fig. 4), we found that iCANS continued to outperform the other optimizers studied. We also note that empirically iCANS1 usually outperformed iCANS2. While iCANS2 provides a benefit by reducing the sensitivity to the input learning rate, so long as the learning rate is chosen well we expect that iCANS1 may tend to perform better.

We remark that while we do not report full results for RAdam [51], we found with preliminary results that it did not seem to provide a substantial improvement over the simpler Adam algorithm for our use cases. Similarly, we found that SOFF with the Anderson acceleration step proposed in [38] did not noticeably improve upon the performance of basic SOFF, and therefore the curves for this method are not shown.
We finally remark about the different performance for the various fixed-shot optimizers with different numbers of shots (e.g., Adam-10 versus Adam-100).This performance difference can be understood as a trade-off between reducing the statistical uncertainty and achieving more itera-tions before hitting the limit on the total number of shots.When few shots are used, many more iterations might be allowed but the update steps are much noisier, usually meaning that the optimizer can perform more quickly early on but then potentially hits an effective floor due to the precision.Increasing the number of shots will allow more precise updates and thus lowers the precision floor (if present) but means that far fewer iterations can be performed.This is the idea at the heart of iCANS.iCANS uses few shots early on and so achieves a period of noisy but fast descent, but then slows down and computes with greater and greater precision to continue making progress.This strategy allows for shot frugality as well as in principle removing such a precision floor for iCANS. Conclusions In order to bring about the promise of VHQCAs solving usefully large and complex problems on NISQ devices, one needs a way to perform the requisite optimizations efficiently.As the ratelimiting step of these optimizations will likely be the number of times one must prepare and measure quantum states, it will be important to have optimizers that are frugal in the number of times physical measurements must be performed on a quantum computer. In this work we introduced two versions of a measurement-frugal, noise-resilient optimizer tailored for VHQCAs.Both of the strategies we propose, iCANS1 and iCANS2, address measurement frugality by dynamically determining the number of measurements needed for each partial derivative of each step in a gradient descent.iCANS1 is the more aggressive version, always taking the same learning rate, while iCANS2 is more cautious and limits the learning rate for steps so that the expected gain is always guaranteed to be positive.Our numerical results indicate that these optimizers may perform comparably or better than other state-of-the-art optimizers.The performance compares especially well in the presence of realistic hardware noise. iCANS has already found use in the very recent VHQCA literature [18].Furthermore, after our article was originally posted, a related study of stochastic gradient descent for VHQCAs found that small shot counts can provide rapid improve-ment in early stages of training [59], which provides further motivation for iCANS. One potential direction for future work is exploring the possibility of extending our frugal adaptive approach to non-gradient methods, such as SPSA. A The Expected Lower Bound on the Gain per Shot Here we repeat the derivation provided by [40] for the lower bound on the expected gain per shot (given in (11)), and extend it to our expression lower bounding the expected gain per shot per partial derivative (19). 
Assuming that the cost function is admits a Taylor series representation about the current point in parameter space, to quadratic order we have In this way, we approximate the gain (the change in the cost function) we expect after the update step, with θ = θ − αg: If the gradients are Lipschitz continuous, we can achieve a lower bound G on this quantity using the Lipschitz constant L: Next we assume that the gradient estimates g have mean E [g] = ∇f (θ) and covariance Σ/s, where s is the number of shots used in the estimate.We then have Plugging this back into (29) then gives us Tr(Σ), (31) which is (11).Dividing both sides by s then gives the expected lower bound on the gain per shot.In order to arrive at (19), we rewrite this expression as: Finally, defining γ i = E [G i ] /s i and replacing ∂ i f and Σ ii with their estimators g i and S i , respectively, gives (19). B CANS Algorithm For the interested reader, we present the algorithm for CANS (Coupled Adaptive Number of Shots) in Algorithm 2, which is an adaptation of the CABS algorithm [40] to the VHQCA setting. C Cumulative probability distributions for 3-qubit implementations Here we show the cumulative distribution plots of the cost values or energies achieved by the optimizers we studied for the compilation task (Fig. 5) and the Heisenberg spin chain VQE task (Fig. 6) for various shot budgets. Algorithm 2 Stochastic gradient descent with CANS.The function Evaluate(θ, s) evaluates the gradient at θ using s measurements for each component of the derivative using the parameter shift rule (3) and returns the estimated gradient vector g as well as the vector S with the variances of the individual estimates of the partial derivatives.Input: Learning rate α, starting point θ 0 , min number of shots per estimation s min , number of shots that can be used in total N , Lipschitz constant L, running average constant µ, bias for gradient norm b 1: initialize: θ ← θ 0 , s tot ← 0, s ← s min , χ ← (0, ..., 0) T , ξ ← 0, k ← 0 2: while s tot < N do Figure 2 : Figure 2: Comparison of performance for the compilation task across one hundred random target states and initial starts.As mentioned in the text, we denote algorithm A with s shots per operator measurement as A-s.Panels a and b show the average cost value attained as a function of the total number of shots (N ) expended for the noiseless and noisy cases, respectively. Figure 3 : Figure 3: Comparison of performance for the Heisenberg spin chain VQE task across one hundred random starts.Panels a and b show the average ∆E value (i.e.energy above the ground state energy) attained as a function of the total number of shots (N ) expended for the noiseless and noisy cases, respectively. Figure 4 : Figure 4: Comparison of performance for the Ising VQE task with different numbers of sites (i.e., qubits) without hardware noise.Each panel shows the average ∆E per site value attained as a function of the total number of shots expended (N ) for each number of qubits.Each curve represents the average over ten random starts. Table 1 : Noiseless Compilation Average Cost Values Table 2 : Noisy Compilation Average Cost Values
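The displayed equations in this appendix did not survive extraction; a sketch of the standard CABS-style chain of steps, consistent with the surrounding prose (and to be checked against the published equation numbers), is:

\[
f(\theta') \approx f(\theta) + \nabla f(\theta)\cdot(\theta'-\theta) + \tfrac{1}{2}(\theta'-\theta)^{T}\nabla^{2}f(\theta)\,(\theta'-\theta),
\]
so that with the update \(\theta' = \theta - \alpha g\) the gain \(G = f(\theta) - f(\theta')\) is approximately
\[
G \approx \alpha\,\nabla f(\theta)\cdot g - \tfrac{\alpha^{2}}{2}\, g^{T}\nabla^{2}f(\theta)\, g .
\]
Lipschitz continuity of the gradient bounds the curvature term, giving
\[
G \;\geq\; \alpha\,\nabla f(\theta)\cdot g - \tfrac{L\alpha^{2}}{2}\,\|g\|^{2}.
\]
Taking expectations with \(\mathbb{E}[g]=\nabla f(\theta)\) and \(\mathrm{Cov}[g]=\Sigma/s\), so that \(\mathbb{E}\,\|g\|^{2}=\|\nabla f(\theta)\|^{2}+\mathrm{Tr}(\Sigma)/s\),
\[
\mathbb{E}[G] \;\geq\; \Big(\alpha-\tfrac{L\alpha^{2}}{2}\Big)\|\nabla f(\theta)\|^{2} - \tfrac{L\alpha^{2}}{2s}\,\mathrm{Tr}(\Sigma),
\]
which is the bound quoted as (11). Restricting to a single component and allowing it its own shot count \(s_i\) gives
\[
\mathbb{E}[G_i] \;\geq\; \Big(\alpha-\tfrac{L\alpha^{2}}{2}\Big)(\partial_i f)^{2} - \tfrac{L\alpha^{2}}{2 s_i}\,\Sigma_{ii},
\qquad
\gamma_i = \mathbb{E}[G_i]/s_i ,
\]
with \(\partial_i f\) and \(\Sigma_{ii}\) replaced by their estimators \(g_i\) and \(S_i\) to obtain (19).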
Higher-Point Positivity We consider the extension of techniques for bounding higher-dimension operators in quantum effective field theories to higher-point operators. Working in the context of theories polynomial in $X=(\partial \phi)^2$, we examine how the techniques of bounding such operators based on causality, analyticity of scattering amplitudes, and unitarity of the spectral representation are all modified for operators beyond $(\partial \phi)^4$. Under weak-coupling assumptions that we clarify, we show using all three methods that in theories in which the coefficient $\lambda_n$ of the $X^n$ term for some $n$ is larger than the other terms in units of the cutoff, $\lambda_n$ must be positive (respectively, negative) for $n$ even (odd), in mostly-plus metric signature. Along the way, we present a first-principles derivation of the propagator numerator for all massive higher-spin bosons in arbitrary dimension. We remark on subtleties and challenges of bounding $P(X)$ theories in greater generality. Finally, we examine the connections among energy conditions, causality, stability, and the involution condition on the Legendre transform relating the Lagrangian and Hamiltonian. Introduction A dramatic development in our knowledge of quantum field theory has been the discovery that not all effective field theories are consistent with ultraviolet completion in quantum gravity. Certain Lagrangians that one can write down possess pathologies that are a priori hidden, but that can be elucidated though careful consideration of consistency conditions that can be formulated in the infrared and that are thought to be obeyed by any reasonable ultraviolet completion. Such infrared conditions include analyticity of scattering amplitudes, quantum mechanical unitarity, and causality of particle propagation [1][2][3][4][5][6][7][8][9][10][11][12][13], as well as self-consistency of black hole entropy in the context of the recent proof of the weak gravity conjecture [14]. Delineating the space of consistent low-energy effective field theories is of great current interest in the context of the swampland program [15][16][17], which seeks to characterize and bound in theory space the possible effective field theories amenable to ultraviolet completion in quantum gravity. Infrared requirements form a powerful set of tools, giving us rigorous positivity bounds that complement intuition from ultraviolet examples. Such self-consistency constraints have been used to bound the couplings of many different higher-dimension operators in scalar field theory [1], gauge theory [1], Einstein-Maxwell theory [5,14], higher-curvature corrections to gravity [3,7,9], and massive gravity [8]. The simplest positivity bound on effective theories applies to the coupling of the (∂φ) 4 operator. In a massless theory of a real scalar φ with a shift symmetry, the first higher-dimension operator that one can add to the kinetic term −∂ µ φ∂ µ φ/2 is the operator (∂φ) 4 = ∂ µ φ∂ µ φ∂ ν φ∂ ν φ. (1) In a theory given by − 1 2 (∂φ) 2 + λ(∂φ) 4 , the forward amplitude for two-to-two φ scattering is A(s) = 16λs 2 . A standard dispersion relation argument [1] then relates the coefficient of s 2 in this forward amplitude at low energies to an integral over the cross section at high energies, which physically must be positive. That is, analyticity of scattering amplitudes guarantees that λ is positive. 
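Since the displayed equations (3)-(5) are not legible in this extraction, here is a reconstruction consistent with the abstract and the surrounding discussion (cutoff-scale factors making the couplings dimensionless are suppressed, so treat the normalization as an assumption):

\[
\mathcal{L} = -\tfrac{1}{2}X + \sum_{i\geq 2}\lambda_i X^{i},
\qquad X = \partial_\mu\phi\,\partial^\mu\phi ,
\]
and the nth-order theory keeps only the first nonnegligible higher-order term,
\[
\mathcal{L} = -\tfrac{1}{2}X + \lambda_n X^{n} + \cdots .
\]
The bound derived in the three ways described below is then
\[
(-1)^{n}\,\lambda_n > 0 ,
\]
i.e., \(\lambda_n > 0\) for even \(n\) and \(\lambda_n < 0\) for odd \(n\), in mostly-plus metric signature.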
Similarly, one can compute the speed of propagation of φ perturbations in a nonzero φ background: one finds that subluminality requires λ > 0 and that if λ < 0 it is straightforward to build causal paradoxes involving superluminal signaling between two bubbles of φ background with a relative boost. A litany of other examples of analyticity and causality bounds focuses on similar four-point interactions, though for more complicated theories and fields involving gauge bosons and gravitons. In this paper, we explore a new direction in the space of positivity bounds: higher-point operators. In particular, we will bound the P (X) theory, whose Lagrangian is simply a polynomial in X = ∂ µ φ∂ µ φ, (2) which in the effective field theory we can write as 1 A case of particular tractability is an nth-order P (X) theory, in which the λ i are very small or zero for i < n for some n > 1, where n is the first nonnegligible higher-order term in the P (X) polynomial: We will show that analyticity of scattering amplitudes and causality of signal propagation imply the same positivity bound on the theory in Eq. (4): We will also find that Eq. (5) comes about as a consequence of unitarity of quantum mechanics in the context of spectral representations for a particular class of ultraviolet completions. This bound represents progress for the program of constraining the allowed space of self-consistent low-energy effective theories, constituting a generalization of the well known (∂φ) 4 bound. Further, the formalism we develop along the way for applying infrared consistency bounds to higherpoint operators is useful in its own right. Considering X n as the first nonnegligible operator in the effective field theory can be motivated physically in several different ways. Using a weak-coupling assumption to guarantee a well-defined counting, we can consider tree-level completions of the X i operators through massive states coupling to (∂φ) i . If there is no coupling of massive states to (∂φ) i for i < n, then the tree-level value of λ i vanishes for i < n. We can then place the positivity bound in Eq. (5) on λ n using the tree-level amplitude. Note that this logic does not contradict the positivity bound on (∂φ) 4 in Ref. [1], since λ 2 could still be generated at loop level, though λ n from the tree-level completion would be parametrically larger in units of the cutoff. Moreover, from the perspective of the effective field theory, the higher-dimension operators in the nth-order P (X) theory in Eq. (4) can be viewed as a sector of a larger theory. For example, taking a complex scalar φ with a Z n symmetry φ → e 2πim/n φ for integer m, the allowed higher-dimension operators are of the form X np ,X np , andX p for integer p, whereX = ∂ µ φ * ∂ µ φ * andX = ∂ µ φ∂ µ φ * . In particular, all operators X i for i < n would be forbidden and the scattering of 2n φ particles at tree level would occur only through the X n contact operator, just as in the nth-order P (X) theory in Eq. (4). This paper is organized as follows. In Sec. 2, we consider the application of analyticity bounds for higher-point amplitudes and derive our bound (5) on the nth-order P (X) theory. Next, in Sec. 3 we find that the bound (5) also follows from demanding the absence of causal paradoxes. In Sec. 4 we consider a particular class of tree-level completions and find that the couplings obey Eq. (5) as a consequence of unitarity of the spectral representation. 
Along the way, we present an elegant derivation of the propagator for higher-spin massive bosons in arbitrary spacetime dimension. We discuss the obstacles, in the form of kinematic singularities, that preclude straightforward generalization of some of these bounds to arbitrary (i.e., not strictly nth-order) P (X) theories in Sec. 5. In Sec. 6 we show that there is a deep relationship between positivity bounds and the involution property of the Legendre transform relating the Lagrangian and Hamiltonian formulations of the mechanics of the P (X) theory. We conclude and discuss future directions in Sec. 7. Bounds from Analyticity In this section, we derive the bound in Eq. (5) through a generalization of the dispersion relation argument that has been previously applied to two-to-two scattering amplitudes [1]. We first discuss formalism for general n-to-n particle scattering, before considering our specific theory of interest and deriving the bounds. The Forward Limit Consider a general effective field theory for which one wishes to bound the couplings of higherdimension operators using analyticity of scattering amplitudes. Fundamentally, such positivity bounds come from the optical theorem, Im A(s) = sσ(s) where A(s) is the forward amplitude, s is the center-of-mass energy of the incoming particles, and σ is the cross section, which is mandated physically to be positive. Taking a four-point operator, kinematics allows only one forward limit (module polarization or other, internal degrees of freedom): working in the convention of all momenta incoming. However, at higher-point, there are multiple forward kinematic configurations, given by the angles that the various momenta make with respect to each other. In particular, considering n-to-n particle scattering, going to forward kinematics so that p n+i = −p i for 1 ≤ i ≤ n, there is a family of forward limits of dimension (D − 2) n−2 . The reason for this counting is as follows. A priori, we choose an angle on the celestial sphere for the direction associated for each of the p i , Momentum is conserved automatically by the forward condition. Moreover, we can use Lorentz invariance to fix two of the directions: one angle is fixed by rotational invariance and another is fixed by boost symmetry, which allows us to take two of the pairs to be back-to-back. Hence, we can fix n − 2 points on the celestial sphere, each of which requires D − 2 angular coordinates in D spacetime dimensions. This large number of possible forward limits means that higher-point amplitudes have significant power to constrain the couplings of higher-point operators, despite the larger number of operators one can write down. Higher-Point Dispersion Relations and Bounds for P (X) Placing positivity bounds using higher-point amplitudes follows a generalization of the argument bounding four-point operators. First, let us define the Mandelstam invariants There are n(2n − 3) independent Mandelstam invariants for the 2n-point amplitude (i.e., n-to-n scattering), taking into account momentum conservation and the on-shell conditions. Analyticity implies that A is an analytic function of the s ij everywhere but for a discrete set of poles and branch cuts. Choosing a particular forward limit, by fixing all (D − 2) n−2 of the angular parameters, A becomes a function of the remaining nonzero s ij . 
In particular, we will choose as our variable for analytic continuation the center-of-mass energy squared, We wish to place a bound on the couplings of the nth-order P (X) theory (4) for even or odd n, where the first nonnegligible λ i coefficient of the X i operator occurs at i = n. Making particular concrete choices for the kinematics will allow us to bound the coefficient λ n . We will find that different choices of kinematics and dispersion relations are needed for n even or odd. At general kinematics, the 2n-point tree-level amplitude for the nth-order P (X) theory is where σ runs over the the (2n)!2 −n different possible groupings of {1, . . . , 2n} into an ordered list of n unordered pairs. Even n If n is even, we choose the following forward kinematics: for all i, 1 ≤ i ≤ n. Then the center-of-mass energy is s = n 2 4 s 12 (11) and the forward amplitude is In the complex s plane, we consider the contour integral where γ is a small contour around the origin. Similarly, we can define where γ is a contour running just above and below the real s axis, plus a boundary contour at infinity. Analyticity implies that A is analytic everywhere except for poles in the s ij where massive states in the ultraviolet completion go on-shell and, at loop level, branch cuts associated with massive states in loops. Now, given the choice of kinematicss in Eq. (10), the only nonzero s ij is s 12 , which is equivalent to a rescaled version of s by Eq. (11). Hence, all nonanalyticities in the complex s plane occur at a set of poles (and branch cuts) on the real s axis. That is, Cauchy's theorem implies that I n = I n . We assume that the boundary integral at infinity vanishes. For a massive theory, this would follow from the Froissart bound |A(s) < |s log D−2 s| at large |s| [18,19]. Even though we are considering a massless theory, it is reasonable to assume some form of polynomial boundedness that forbids the amplitude from diverging too quickly with s at large s; in essence, discarding the boundary integral is equivalent to demanding that the X n term in the action is in fact ultraviolet completed, i.e., forbidding primordial X n terms by demanding that the higher-dimension operator originate from the exchange of states at some scale. Equating I n = I n , we thus have where s 0 is some regulator below which we take the amplitude to be analytic and disc A(s) = For example, if we use counting to restrict to the tree-level scattering amplitude, we can take s 0 to be of order the scale of the ultraviolet completion. In the two-to-two scattering case, the integrals over the positive and negative real s axis are related by the crossing symmetry associated with swapping p 1 and p 3 , i.e., by swapping the s and u = −s − t channels for forward kinematics. For our present calculation involving n-to-n scattering, crossing symmetry implies that the amplitude is invariant under swapping legs n and 2n. With the choice of kinematics in Eq. (10), this is equivalent to swapping legs p i for p i+n for all even i between 2 and n, which has the effect of swapping p 2 ↔ −p 2 while leaving p 1 unchanged, so s 12 ↔ −s 12 and s ↔ −s. Hence, as in the two-to-two case, crossing symmetry implies that with our choice of kinematics A(s) is an even function of s, even in the ultraviolet. We thus have disc A(−s) = −disc A(s) and Using the Schwarz reflection principle A(s * ) = [A(s)] * , we have disc A(s) = 2i Im A(s). 
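Schematically (the precise displayed equations of this subsection are not reproduced above, so the normalizations here are assumptions), the logic for even \(n\) can be summarized as follows. With the chosen kinematics the tree-level forward amplitude reduces to \(A(s) = c\,\lambda_n s^{n}\) for a kinematics-dependent combinatorial constant \(c\), which we take to be positive as the argument requires, so a contour integral around the origin picks out \(\lambda_n\):
\[
I_n \;=\; \frac{1}{2\pi i}\oint_{\gamma} \frac{A(s)}{s^{\,n+1}}\,\mathrm{d}s \;=\; c\,\lambda_n .
\]
Deforming the contour onto the real axis (and dropping the boundary term as discussed), and using \(A(-s)=A(s)\), \(\operatorname{disc}A(-s) = -\operatorname{disc}A(s)\) and \(\operatorname{disc}A(s) = 2i\operatorname{Im}A(s)\), one finds for even \(n\)
\[
I_n \;=\; \frac{2}{\pi}\int_{s_0}^{\infty}\frac{\operatorname{Im}A(s)}{s^{\,n+1}}\,\mathrm{d}s \;\geq\; 0 ,
\]
where positivity of the integrand follows from the multiparticle optical theorem discussed next, yielding \(\lambda_n > 0\) for even \(n\).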
In two-to-two scattering, the optical theorem relates the cross-section to the imaginary part of the forward amplitude. Generalized to an initial multiparticle state |n, s with center-of-mass energy s, the optical theorem implies where the sum is over all intermediate states X, dLIPS X is the Lorentz-invariant phase space measure for the intermediate state, 2 and A(|n, s → X) is the amplitude for the n-particle initial state with center-of-mass energy s going to the final state X. In particular, we note that the right-hand side of Eq. (17) is manifestly positive. Thus, we have a bound on λ n in the nth-order P (X) theory for even n: Odd n For the nth-order P (X) theory where n is odd, we choose the kinematics With these choices of kinematics, we have the center-of-mass energy and the forward amplitude is We can make a further choice of kinematics to set s 1n = s 2n , which we will for brevity call s n , and analytically continue in s n while holding δ = (n − 1) 2 s 12 /4 constant. That is, the center-of-mass energy is s = (n − 1)s n + δ, so analytic continuation in s is equivalent to analytic continuation in s n . 3 Note that for physical kinematics, δ > 0. The forward amplitude is In contrast with Sec. 2.2.1, we define the contour integrals for odd n as for a small contour γ around the origin and for a contour γ running just above and below the real s axis, plus a boundary contour at infinity that we drop as before. Crossing symmetry under swapping legs n and 2n is equivalent under our choice of kinematics to swapping p n ↔ p 2n = −p n , i.e., swapping s n ↔ −s n while holding s 12 (and thus δ) fixed. That is, the forward amplitude, even in the ultraviolet, must be an even function of s n . Equivalently, the full forward amplitude satisfies We therefore have disc Using analyticity to equate I n and I n in Eqs. (23) and (24) and using the Schwarz reflection principle and the optical theorem as before, we obtain a bound on λ n in the nth-order P (X) theory for odd n: Bounds from Causality Next, let us consider how bounds on the P (X) theory can be derived from causality. For now, we will consider an arbitrary P (X) theory, with no assumptions about the relative sizes of the various higher-dimension operators. The equation of motion for this theory is: which is solved by a constant background φ condensate, ∂ µ φ = w µ = constant. We will use bars to denote background vaues of fields, so ∂ µ φ = w µ and X = w 2 . The leading-order action for the fluctuation ϕ = φ −φ can be written as The term in the action zeroth-order in ϕ is a cosmological constant P (w 2 ), which can be dropped, while the term first-order in ϕ is a tadpole, which vanishes becauseφ satisfies the background equations of motion (27). The canonical momentum associated with φ is In order for the background ∂ µ φ = w µ to be able to be causally constructed via Cauchy evolution, p µ , evaluated on the background value of φ, must be causal, i.e.,p 2 ≤ 0. That is, to construct the background condensate in a spacetime that is asymptotically empty, in a well-defined initial value problem, requires timelike or nullp µ . Otherwise, if p µ is spacelike, then one could choose coordinates such that there is a Cauchy slice in which the equation of motion (27), ∂ µ p µ = 0, becomes just a spatial constraint with no time evolution, ∂ i p i = 0. Thus, the equation of motion ∂ µ p µ = 0 implies that the canonical momentum p µ behaves like a conserved fluid current, which causality requires be timelike or null. 
We therefore require that w be causal, so w 2 ≤ 0. Let us consider the question of stability of the w µ condensate background and write w µ = (w 0 , w). First, suppose that w is timelike, so w 2 < 0. We can go to the condensate rest frame, so w = 0. Then we have If P (w 2 ) > 0, there are ghosts in theory, resulting in a quantum mechanical pair-production instability [22]. We thus conclude that P (w 2 ) ≤ 0 if w 2 < 0. If w is null, then we simply have P (0) = −1/2. Hence, stability will guarantee that P (w 2 ) is always nonpositive. Since in the w 2 = 0 case L ϕ is trivial, we hereafter take w to be timelike. We can derive the condition P (w 2 ) ≤ 0 alternatively by imposing the null energy condition. The background energy-momentum tensor is so requiring that T µν µ ν ≥ 0 for all null implies P (w 2 ) ≤ 0. Let us compute the speed of propagation for fluctuations about this background. The equation of motion for ϕ in theφ background is Taking a plane-wave ansatz for ϕ, we have the dispersion relationη µν k µ k ν = 0, that is, Writing k µ = (k 0 , k), the speed of propagation is v = k 0 /| k|, which satisfies wherek = k/| k| and w µ = (w 0 , w). We note that (−vw 0 +k · w) 2 is always nonnegative and can be chosen to be strictly positive for nonzero w by choosing the direction ofk. Moreover, we choose w so that P (w 2 ) is nonzero. It follows that v ≤ 1 if and only if Since stability of the condensate background, which implies P (w 2 ) < 0, is necessary in order to reliably consider fluctuations about that background, we conclude that in order to guarantee v ≤ 1 (see also Ref. [1]). As shown in Refs. [1,5], if v > 1 one can immediately form a causal paradox by highly boosting two bubbles of background condensate relative to each other in an otherwise empty region of space; sending superluminal signals back and forth between the two forms a closed signal trajectory in spacetime. Let us now apply the causality bound (37) to the nth-order P (X) theory, where all the λ i are negligible at leading order for 1 < i < n. By taking w 2 sufficiently small, we guarantee that P (w 2 ) is dominated by the X n term, which we take to be nonzero. We have so since w 2 < 0, λ n > 0 if n is even, Bounds from Unitarity Let us again consider the nth-order P (X) theory in which the first higher-dimension operator with nontrivial coefficient is X n . For such a theory, we can consider a family of tree-level completions of the X s operator that takes the form of some combination of operators O j , where We generate X s whenever there is some part of χ µ 1 ···µ j and χ µ 1 ···µ k that are the same field (up to some extraneous metrics) for j + k = s. The coupling of X s will thus receive contributions that go as g j g k for j + k = s. Of course, in that case the X j operator is also generated via the exchange of a χ µ 1 ···µ j between two of the O j operators and similarly for X k . Thus, in a theory in which the tree-level coefficients λ i for X i are negligible, in units of the cutoff, compared to λ n for 1 < i < n, we must consider a completion in which the g i coefficients vanish for 1 < i < n. In such an nth-order P (X) theory, the X n operator is generated by integrating out χ µ 1 ···µn , joining two copies of O n . 4 Let us consider the structure of our massive states χ µ 1 ···µn . Without loss of generality, we can take χ to be symmetric on its indices, since the interaction with ∂ µ 1 φ · · · ∂ µn φ effectively projects out any nonsymmetric component. 
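Several of the displayed expressions in the causality argument above (the fluctuation action, the dispersion relation and the resulting conditions) did not survive extraction. A hedged reconstruction, written in the condensate rest frame for simplicity, is:
\[
\mathcal{L}_\varphi = P'(\bar X)\,\partial_\mu\varphi\,\partial^\mu\varphi + 2P''(\bar X)\,(w^\mu\partial_\mu\varphi)^2 \equiv \tilde{\eta}^{\mu\nu}\partial_\mu\varphi\,\partial_\nu\varphi,
\qquad
\tilde{\eta}^{\mu\nu} = P'(\bar X)\,\eta^{\mu\nu} + 2P''(\bar X)\,w^\mu w^\nu ,
\]
so a plane wave obeys \(\tilde{\eta}^{\mu\nu}k_\mu k_\nu = 0\), i.e. \(P'(\bar X)\,k^2 + 2P''(\bar X)\,(w\cdot k)^2 = 0\). In the rest frame of a timelike condensate, \(w^\mu=(w^0,\vec 0)\) with \(\bar X = w^2 = -(w^0)^2 < 0\) and \(P'(\bar X)<0\),
\[
v^2 = \frac{k_0^2}{|\vec k|^2} = \frac{|P'(\bar X)|}{\,|P'(\bar X)| + 2P''(\bar X)\,(w^0)^2\,},
\]
so subluminality \(v\le 1\) is equivalent to \(P''(\bar X)\ge 0\). For the nth-order theory, \(P''(\bar X)\approx n(n-1)\lambda_n \bar X^{\,n-2}\), and since \(\bar X<0\) this requires \((-1)^n\lambda_n \ge 0\): \(\lambda_n>0\) for even \(n\) and \(\lambda_n<0\) for odd \(n\), matching the conclusion stated above.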
We can split χ up into its traces and traceless components by defining χ µ 1 ···µn = χ (n) µ 1 ···µn + η (µ 1 µ 2 χ (n−2) where parentheses around subscripts denotes normalized symmetrization, i.e., n! T (µ 1 ···µn) = (T µ 1 ···µn + permutations). We will bound λ n via an argument involving the Källén-Lehmann form of the exact propagator for the χ states. All Massive Bosonic Higher-Spin Propagators in Arbitrary D We now build the propagator numerator for χ (s) µ 1 ···µs . This is a canonical higher-spin state, that is, a symmetric tensorial rank-s representation of the SO(D − 1) little group for a massive state in D dimensions. 5 We require that χ (s) µ 1 ···µs satisfy the Fierz-Pauli conditions [24], so that at leading order in χ (s) in the equations of motion we have Equivalently, the propagator numerator must be transverse and traceless on shell, when k 2 = −m 2 , where m is the mass of χ (s) . We will write the propagator numerator for χ (s) as Π µ 1 ···µsν 1 ···νs . Unitarity implies that, on shell, the propagator numerator can be written as a sum over a tensor product of the physical polarization states, where ε(a) µ 1 ···µs are the unit-normalized spin-s polarization states and a is a label for the different states, with ε(a) µ 1 ···µs ε(b) µ 1 ···µs = δ ab [20]. Hence, the full trace Π In the special case of D = 4, Eq. (53) matches the result of Ref. [26]. 6 For example, the propagator numerator for a massive vector is just Π µν , while the propagator numerators for massive states of spin 2, 3, 4, and 5 are: For the spin-2 case, we see that we have recovered the usual form of the massive graviton propagator numerator in D dimensions [30]. Bounds for P (X) We are now equipped to compute the contribution to the effective operator X n coming from integrating out χ µ 1 ···µn in a theory containing the operator O n = g n χ µ 1 ···µn ∂ µ 1 φ · · · ∂ µn φ. As accounted for in Eq. The ρ (s) (µ 2 ) are the spectral densities, which are nonnegative by unitarity in a theory free of ghosts, since ρ (s) (µ 2 ) can be written as a sum over the norms of the set of intermediate states. The (−1) s factor is present due to our choices of sign conventions and metric signature. Let us now formally integrate out χ µ 1 ···µn , treating the full multiplet in Eq. (41). If we attach two of the O n vertices from Eq. (40) to the exact propagator in Eq. (55) and then compute the effective operator at low energies by sending k to zero, we can calculate the coefficient λ n of X n : Computing the sum, one finds Challenges of More General Bounds Thus far we have focused primarily on nth-order P (X) theories. In this section, we discuss the difficulties inherent to using analyticity of scattering amplitudes to bound more general P (X) theories. For example, let us consider the calculation of the six-point amplitude for three-to-three scattering in the forward limit for the general P (X) theory The three-to-three amplitude is computed from Feynman diagrams of two topologies: a six-point contact diagram and a diagram with φ exchange between two four-point vertices: where "+ permutations" indicates the sum over the other 6! − 1 permutations of the labels {1, . . . , 6}, while "+ other channels" indicates the sum over the other nine ways of dividing the labels into two groups of three. If we choose forward kinematics, p 1 = −p 4 , p 2 = −p 5 , p 3 = −p 6 , then many of the channels have on-shell exchanged momentum; for example, for the 124 channel, the exchanged momentum is p 1 + p 2 + p 4 = p 2 . 
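For reference, since the displayed propagator expressions in the preceding passage did not survive extraction, the familiar lowest-spin cases (standard results, quoted here as a consistency check rather than as the paper's own equations) are, with \(\Pi_{\mu\nu} \equiv \eta_{\mu\nu} + k_\mu k_\nu/m^2\):
\[
\text{spin 1:}\quad \Pi_{\mu\nu} = \eta_{\mu\nu} + \frac{k_\mu k_\nu}{m^2},
\qquad
\text{spin 2:}\quad \Pi_{\mu\nu,\rho\sigma} = \tfrac{1}{2}\left(\Pi_{\mu\rho}\Pi_{\nu\sigma} + \Pi_{\mu\sigma}\Pi_{\nu\rho}\right) - \frac{1}{D-1}\,\Pi_{\mu\nu}\Pi_{\rho\sigma} .
\]
Both are transverse and traceless on shell (\(k^2=-m^2\) in mostly-plus signature) and reduce to sums over unit-normalized polarization tensors, as unitarity requires.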
Thus, the amplitude in Eq. (61) possesses singularities at strictly forward kinematics. These singularities persist even if we make the φ massive: in that case, the denominator of the propagator becomes p 2 + m 2 , where p is the exchanged momentum and m is the φ mass, so when p goes on-shell, the amplitude again is singular. While it is possible to consider almost-forward kinematics and take the forward limit in such a way that the singularity in particular powers of s (e.g., s 2 ) vanishes, it is not clear that such a procedure produces a reliable positivity bound. For example, the optical theorem is independent of the way in which the forward limit is taken, so the limit-dependence that would show up in the residue computed at small s makes the dispersion relation ambiguous. This issue is similar to the subtleties involving the t-channel singularity in gravity amplitudes [1,5,7]. We leave the investigation of these issues and the search for analyticity bounds on more general P (X) theories to future work. The Legendre Transform Since multiple infrared consistency tests point to the same bounds on effective field theory coefficients, it is worthwhile considering whether these bounds are related to other physics principles. In this section, we will show that the positivity bounds we have derived on the P (X) theory are connected with the consistency of the formulation of the mechanics of the theory. In particular, given a theory specified by a Lagrangian L[∂ µ φ, φ], the Hamiltonian of the theory is given by acting on L with the Legendre transform * : The Legendre transform is well defined when L is a convex function of ∂ µ φ. In particular, in a consistent formulation of the mechanics of a system free of constraints, acting with the Legendre transform twice brings us back to the Lagrangian, i.e., the Legendre transform is an involution: Convexity of L with respect to ∂ µ φ implies that the supremum in the Legendre transform occurs so p µ ends up being fixed to its canonical value in Eq. (30), p µ = δL/δ(∂ µ φ). In order to apply the Legendre transform to the Hamiltonian, we treat p µ as an independent variable and require Consistency of the definition of the Legendre transform, which requires L be convex, also implies convexity of H, so the supremum again occurs at a local extremum and we have that is, Substituting this solution back into the definition of H * and assuming that we can write ∂ µ φ as an explicit functional ∂ µ φ[p µ ] of the canonical momentum p µ = δL/δ(∂ µ φ), we have Thus, the involution property of the Legendre transform is guaranteed if p µ [∂ µ φ] is invertible as . See Ref. [12] for further discussion of the connections between this invertibility property and causality. For the P (X) theory, p µ = 2P (X)∂ µ φ as we have previously noted. Thus, the canonical momentum is a mapping from one Lorentzian vector space to another. That is, p µ is invertible for p µ in the image of ∂ µ φ provided this mapping is injective, i.e., the mapping of ∂ µ φ to its image under p µ is a diffeomorphism. Note that the map identifies spacelike, timelike, or null ∂ µ φ with spacelike, timelike, or null p µ , respectively, so these identifications can be considered separately and must each be a diffeomorphism. For some subset Ω ⊂ R n , a map f : Ω → R n is a diffeomorphism from Ω to f (Ω) if df is positive or negative definite on Ω [31]. That is, viewed as a matrix, the Jacobian must be positive or negative definite. 
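The explicit Jacobian is displayed as an equation that is not legible here; restoring it (together with the derivative primes that appear to have been lost elsewhere in this passage) gives, as a hedged reconstruction,
\[
J^{\mu\nu} \;=\; \frac{\delta p^{\mu}}{\delta(\partial_{\nu}\phi)} \;=\; 2P'(X)\,\eta^{\mu\nu} + 4P''(X)\,\partial^{\mu}\phi\,\partial^{\nu}\phi ,
\]
whose eigenvector along \(\partial^{\mu}\phi\) has eigenvalue \(e(X) = 2P'(X) + 4X P''(X)\), the remaining eigenvalues being \(2P'(X)\). The limits referred to in the next passage are then presumably \(P'(X)\to -1/2\) and \(X P''(X)\to 0\) as \(X\to 0\), so that \(e(X)\to -1\).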
Note that for a condensate background, J µν = −η µν , the effective metric for fluctuations ϕ defined in Eq. (29). Now, we know that lim X→0 P (X) = −1/2, regardless of the sign of X, while lim X→0 P (X) = 0. Hence, the involution property holds if J µν is negative definite for all nonzero X. This occurs if and only if all the eigenvalues of J µν are negative. In particular, the eigenvectors of J µν are ∂ µ φ, with eigenvalues e(X) = 2P (X) + 4XP (X), so the involution property holds if e(X) is negative: For a timelike condensate, this is equivalent to saying that the effective metricη µν has the correct signature (i.e., the same signature as η µν ). That is, if we consider the setup of a stable timelike condensate with X < 0 and P (X) < 0, the causality bound in Eq. (37) implies that the condition in Eq. (71) holds, so the Legendre transform is an involution relating the Lagrangian and Hamiltonian. As a final observation, we note that the involution property is related to the weak energy condition. Again taking a timelike condensate w µ as in Sec. 3, the weak energy condition requires that T µν w µ w ν ≥ 0. But from Eq. (32), T µν w µ w ν = P (w 2 )w 2 − 2P (w 2 )(w 2 ) 2 . Comparing with Eq. (70), we notice that where the last inequality follows from Eq. (71). Hence, the weak energy condition, which requires T µν t µ t ν ≥ 0 for all timelike t µ , implies w 2 0 dX e(X) > 0 for w 2 < 0, which is the integral form of the requirement of involution of the Legendre transform. Similarly, the causality bound P (X) ≥ 0 in Eq. (37) implies the dominant energy condition, which stipulates causality of the flux of energy-momentum seen by any inertial observer [1]. Conclusions In this paper, we have extended to higher-point terms the techniques of placing positivity bounds on higher-dimension operators in effective field theories using principles of infrared consistency. In the context of a theory polynomial in X = (∂φ) 2 , we showed that in theories where the first nonnegligible higher-dimension operator is at nth order in X, these infrared consistency bounds imply that λ n > 0 if n is even and λ n < 0 if n is odd, in mostly-plus metric signature. We presented multiple different arguments for these bounds. In particular, we proved the bounds using analyticity of 2n-point scattering amplitudes, as well as another proof using causality and the absence of superluminality in the low-energy theory. In a particular class of tree-level ultraviolet completions, we saw how these bounds arise from unitarity. By considering these lines of argument, we were able to extend useful techniques that will allow higher-point operators to be bounded in other theories. For example, we examined the additional kinematic freedom in the forward limit inherent to higher-point operators. We also exhibited a succinct derivation of the propagator numerators for all massive higher-spin bosons in arbitrary dimension, obtaining their form from symmetries and simple physical constraints alone. Much work remains to be done to map out the space of possible low-energy effective field theories. In Sec. 5, we illustrated the challenges endemic to placing analyticity bounds on more general P (X) theories due to kinematic singularities; these issues are similar in nature to the difficulties in addressing t-channel singularities in gravity theories discussed in Refs. [1,5,7] and the challenge of proving the a-theorem in six dimensions discussed in Ref. [21]. 
Further work on infrared consistency conditions for multipoint operators has the potential to further our understanding of these questions. Finally, elucidating the deep relationships among constraints on effective field theories is an important topic for future study. In this paper, we derived the same constraint from analyticity, unitarity, and causality and also showed how infrared constraints on the P (X) action are related to the well-posedness of the Legendre transform relating the Lagrangian and Hamiltonian formulations of the theory. Infrared constraints such as these complement bounds obtainable from ultraviolet reasoning. A more complete understanding of the connections between ultraviolet and infrared within the swampland program remains a compelling topic for future work.
A single three-parameter tilted fibre Bragg grating sensor to monitor the thermosetting composite curing process Abstract The unique sensing features of the tilted Fibre Bragg Grating (TFBG) as a single three-parameter optical sensor are demonstrated in this work, to monitor the manufacturing process of composite materials produced using Vacuum Assisted Resin Transfer Moulding (VARTM) process. Each TFBG sensor can measure simultaneously and separately strain, temperature and refractive index (RI) of the material where the optical fibre is embedded. A TFBG embedded in a 2 mm glass-fibre/epoxy composite plate was used to measure the thermomechanical variations induced during the curing process. At the same time, the RI measurements, performed with the same TFBG sensor, can estimate the degree of cure of the resin. The TFBG sensor shows to be a valid and promising technology to improve the state of art of sensing and monitoring in composite material manufacturing. Graphical Abstract Introduction Composite materials are widely used in different engineering sectors such as aerospace, aeronautics, automotive, naval, wind turbines and railways [1] since their high strength to weight ratio and anisotropic nature brings several advantages compared to traditional engineering materials. In particular, the use of composite in primary structures has allowed significant weight reduction while maintaining the same mechanical performance in the aeronautic sector. In the last decades, a significant weight reduction has been achieved by replacing solid metallic parts with composite material [2]. Nevertheless, several defects can be introduced due to a poorly designed manufacturing practice. This can lead to unacceptable level of defects and rejection of the part, which poses cost and sustainability issues [3,4]. Therefore, inspection techniques to investigate the internal state of composites during manufacturing have been developed. Since the first monitoring technologies appeared, Optical Fibre (OF) sensing immediately proved to be a valid option [5], and specifically, Fibre Bragg Grating (FBG) sensors are the most used OF sensors for embedded real-time monitoring of composite structures. They provide accurate and reliable remote multi-parameter measurements (temperature, strain, pressure, etc.), low weight, minimal intrusiveness and high installation flexibility, and additionally, they are immune to electromagnetic interference [6]. Since the 2000s, the embedding of OF sensors has been performed for composites manufacturing quality control [7][8][9][10], as it is a fundamental step to obtain the best mechanical performance [11]. Due to dominant heat transfer phenomena and low thermal conductivity through thickness, some defects such as voids, temperature overshoots, residual stresses and part deformation can occur during composite manufacturing process, which can significantly lower the mechanical performances of the components. A significant effort on the optimisation of the composite manufacturing processes has been undertaken to minimise the occurrence of such defects [12][13][14][15][16]. Moreover, the monitoring of a specific manufacturing process stage is necessary to avoid unexpected defects to arise. At the same time, multiple sensors strategy is not desirable as their presence can generate defects or influence the material mechanical performance, and it would increase the cost and, hardware and software complexity. 
Some solutions have been developed by considering the benefits of the OFs using a combination of similar and/or hybrid sensors in the same waveguide able to compensate or perform dual-parameter measurements [17][18][19][20][21][22][23][24]. However, they suffer from several drawbacks which make them unattractive, such as nonlocalized measurements, poor spatial resolution and accuracy, use of an intrusive capsule and time consuming. TFBGs are characterised to have a Bragg grating structure tilted with respect to the optical axis of the OF. This special imposition allows a spectrum composed of several well-defined resonance peaks to be obtained [25]. By exploiting these peaks in the TFBG, after a calibration step, each single TFBG sensor can be used to measure, simultaneously and separately, the variation of strain, temperature and Refractive Index (RI) at the point where it is embedded [26]. Specifically, the RI variation of the resin during its curing stage can be monitored and associated with its degree of cure [27]. However, there are no works proving that the TFBG can perform simultaneous three parameter measurements when embedded in a composite and that a single sensor is sufficient to monitor the thermomechanical and curing state of the material. In the present paper a single three-parameter optical sensor based on a weakly Tilted FBG (TFBG) is demonstrated to monitor the thermomechanical state and the degree of cure of a thermosetting composite during the Vacuum Assisted Resin Transfer Moulding (VARTM) manufacturing process. The simultaneous temperature, strain and RI measurements with a single embedded TFBG during the curing stage of a 2 mm composite, are reported with a section dedicated to the comparison between the degree of cure and the RI trend during the curing time. TFBG sensing theory In a short period tilted grating the periodic and permanent RI modulation of the OF core, is deliberately tilted with respect to the longitudinal axis of the waveguide [28]. The tilt angle (h) is super-imposed during the Bragg grating writing process through tilting parts of the FBG manufacturing machine [25]. This angle enhances the generation of multiple peaks in the TFBG spectrum [29]. Specifically, in this paper, only reflective weakly TFBGs (h < 15 ) are considered since, apart from the Bragg and the cladding resonance peaks, they have a further peak in their spectrum called the Ghost resonance, which allows the measurement of multiple parameters simultaneously [25][26][27][28][29][30][31]. All the resonance peaks in the spectrum are sensitive in different measure to thermomechanical external perturbations, which cause a shifting variation of the nominal peak wavelength. However, the Bragg and Ghost peaks are immune to external RI variations, while the cladding resonances undergo a wavelength shift and amplitude change when this is varying. Considering these aspects and by performing a preliminary calibration, both the Bragg and the Ghost peaks can be used to measure simultaneously and separately the temperature and the strain variation from a reference condition. At the same time, the area generated by the envelope of the lower and upper cladding resonance peaks can be exploited to measure independently the variations of the RI surrounding the TFBG sensor [8,31,32]. 
Once the sensitivity coefficients (K_ε, K_T) of the Bragg and Ghost peaks are obtained from the calibration, their wavelength shifts (Δλ_Bragg, Δλ_Ghost) are used to measure separately the strain (Δε) along the OF axis and the temperature variation (ΔT) using the global thermomechanical sensing matrix K [33]. This technique is based on the peak wavelength shifts acquired through the FBG interrogator, which detects data points with a certain scanning wavelength resolution (swR). Consequently, the swR also sets the sensor thermal resolution, which can be calculated from the ratio between the swR and the difference of the thermal sensitivity coefficients. The measurements were performed within this resolution constraint; neglecting it leads to incorrect calculations. Regarding the surrounding RI, the envelope area of the cladding resonance peaks is calculated using the Delaunay triangulation (D-T) demodulation technique [31]. During the refractometric calibration, the area values are calculated and correlated with each external RI change and normalised with respect to a reference area value. From the correlation points, a fitting function can then be obtained over the external RI range of interest. Hence, the surrounding RI can be obtained by solving the fitting function with respect to the normalised envelope area of the TFBG immersed in a medium. This process and the D-T technique are treated in more detail in [31].

Experimental

In the following, section 3.1 provides details on the reinforcement fibre, the matrix and the TFBG sensor used to perform the measurements inside the composites, while section 3.2 treats the cure kinetics and T_g characterisation. Section 3.3 provides details on the VARTM process and the sensor embedding stage.

Materials and sensors

The 2 mm thick composite plate (12 plies) was manufactured using Interglass™ unidirectional (UD) S-glass fibre plies (220 g/m², 92145) and epoxy resin. The resin system was a low-temperature-curing Hexion Epikote™ 04908 epoxy resin mixed with Epikure™ Hardener 04908 (resin/hardener mixing ratio 100:30 parts by weight) [34]. The vacuum bag setup consisted of a nylon film Wrightlon® 7400, sealant tape Solvay LTS90B, infusion mesh Airtech Greenflow 75, peel ply Airtech Stitch Ply A and release perforated polyolefin foil Wrightlon® WL3700. The TFBG sensor embedded in the composite was manufactured by FORC-Photonics in Fibercore PS1250/1500 OF with a 2° tilt angle, and is 4 mm long with a 10 mm uncoated length. The TFBG signal was acquired using the FBG interrogator NI PXIe-4844, which has a swR = 4 pm and a maximum sample frequency of 10 ± 0.1 Hz. A thin K-type thermocouple (TC, Ø = 0.250 mm) was also embedded close to the TFBG to provide a temperature reference for comparison with the TFBG measurements. The TC has an accuracy of ±1 °C. The integration details are reported in section 3.3. Before embedding, the thermomechanical calibration of the TFBG sensor was performed as in [33]. The strain and temperature sensitivity coefficients found are K_ε,Bragg = 1.255 ± 0.004 pm/µε and K_ε,Ghost = 1.255 ± 0.006 pm/µε, while K_T,Bragg = 8.686 ± 0.012 pm/°C and K_T,Ghost = 9.2 ± 0.014 pm/°C. The RI calibration procedure of the TFBG was performed as described in [35], with the room temperature kept at 20 ± 1.5 °C. Epoxy resins suitable for VARTM processes usually have an RI in the range 1.5-1.56 [36] when uncured, which is expected to increase during the curing stage [27].
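As an illustration of how the two peak shifts are converted into strain and temperature with these calibration coefficients, a minimal sketch is given below. The matrix layout (rows for the Bragg and Ghost peaks, columns for the strain and temperature sensitivities) is assumed, since the sensing matrix itself is not printed above.

```python
import numpy as np

# Calibration coefficients from the text (pm/µε and pm/°C)
K = np.array([[1.255, 8.686],    # Bragg: K_eps, K_T
              [1.255, 9.200]])   # Ghost: K_eps, K_T

def demodulate(d_lambda_bragg_pm, d_lambda_ghost_pm):
    """Recover (strain in µε, temperature change in °C) from the two peak shifts."""
    shifts = np.array([d_lambda_bragg_pm, d_lambda_ghost_pm])
    d_eps, d_T = np.linalg.solve(K, shifts)
    return d_eps, d_T

# Example: a 100 °C rise with no strain gives shifts of 868.6 pm (Bragg) and
# 920 pm (Ghost); inverting recovers (0 µε, 100 °C).
print(demodulate(868.6, 920.0))

# Because both peaks share the same strain sensitivity here, temperature is
# effectively carried by the Ghost-Bragg shift difference; with swR = 4 pm this
# suggests a thermal resolution of roughly 4 / (9.2 - 8.686) ≈ 7.8 °C.
```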
For the resin used in this study, the RI of the uncured resin is around 1.54 at 25 C [34]. Hence, to obtain the best measurement accuracy, the curve branch in the range 1.46-1.7 was fitted by a fifth-order polynomial function with a fitting square error (R 2 ) of 0.9996, where the worst RI accuracy is 1Â10 À3 at 1.5. Cure kinetics and T g characterisation The cure kinetics characterisation of the epoxy system was carried out with a TA Instrument Differential Scanning Calorimeter (DSC) 2500. The DSC used liquid nitrogen with a flow rate of 10 ml/ min. Two isothermal tests at 80 C and 100 C and one dynamic test at 1 C/min were performed. The glass transition temperature evolution was characterised using Modulated Differential Scanning Calorimetry (MDSC), at a 3 C/min ramp rate with a modulation set at 1 cycle/min and an amplitude of 1 C, by observing the evolution of the reversible components of the specific heat. Next to the fully cured and fully uncured samples, four partially cured samples were manufactured by heating uncured resin samples at 1 C/min up to an increasing final temperature followed by a quick cool down at the fastest possible machine rate of about 50 C/ min to stop the reaction from progressing. To validate the cure kinetics model, two additional samples have been partially cured. The cure profiles used dictated a ramp-up to 80 C at 1 C/min followed by an isothermal dwell of 20 min for the first sample and 40 min for the second. After that, the samples have been quickly cooled down to stop the reaction. An MDSC analysis to identify the glass transition temperature of the sample has been subsequently performed. The resin cure kinetics and Di Benedetto equation were fitted to the experimental results as shown in the following section. Degree of cure and Di benedetto equation The degree of cure and the glass transition temperatures trend are obtained from the experimental data. These have been fitted with the following kinetics model proposed by Khoun et al. [37], the model proved to accurately describe similar epoxy resin systems [14]: where a is the degree of cure, a c , a T , are coefficients controlling the transition of the kinetics from chemical to diffusion; C governs the breadth of the transition into the diffusion controlled regime, m and n are reaction orders for the n-th order and autocatalytic terms, A is a pre-exponential Arrhenius factor, E is the activation energy of the Arrhenius functions, T is the absolute temperature, and R is the universal gas constant. Figure 1a shows the fitting of the experimental data with the proposed phenomenological kinetic model. The average relative error of the fitting is about 2%. The heat generated Q by the exothermic reaction can be calculated as follows: Where q r is the resin density, v r the volume resin fraction and H tot is the total enthalpy. The glass transition temperature model to fit the experimental data follows the Di Benedetto equation [38]: here T g1 and T go are the glass transition temperatures of the fully cured and uncured material respectively and k is a fitting parameter governing the convexity of the dependence. Figure 1b illustrates the fitting of the experimental data with the Di Benedetto equation. VARTM process The VARTM manufacturing process was used to produce the samples and consists of an infusion stage followed by a curing stage. The OF was placed parallel to the UD reinforcement fibres layers of the composite to measure longitudinal strain. The mould was a flat aluminium plate (50x50x1 cm). 
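Returning briefly to the cure-kinetics and Di Benedetto models fitted above, the sketch below shows, under stated assumptions, how a Di Benedetto Tg(α) relation and a generic diffusion-limited autocatalytic cure-rate expression of the type proposed by Khoun et al. [37] can be evaluated numerically. All numerical constants here are placeholders for illustration only, not the fitted values of this study.

```python
import numpy as np

def tg_dibenedetto(alpha, Tg0=-20.0, Tg_inf=80.0, lam=0.5):
    """Di Benedetto: (Tg - Tg0)/(Tg_inf - Tg0) = lam*alpha / (1 - (1 - lam)*alpha).
    Placeholder Tg0, Tg_inf (degC) and lam values, not the fitted ones of the paper."""
    return Tg0 + (Tg_inf - Tg0) * lam * alpha / (1.0 - (1.0 - lam) * alpha)

def cure_rate(alpha, T, A=1e5, E=60e3, m=0.6, n=1.6, C=30.0, a_c=0.2, a_T=2e-3):
    """Assumed autocatalytic rate with a diffusion cut-off (placeholder constants).
    T is in kelvin; R is the universal gas constant."""
    R = 8.314
    k = A * np.exp(-E / (R * T))                     # Arrhenius rate constant
    chemical = k * alpha**m * (1.0 - alpha)**n       # autocatalytic term
    diffusion = 1.0 / (1.0 + np.exp(C * (alpha - (a_c + a_T * T))))  # diffusion control
    return chemical * diffusion

# Simple explicit Euler integration of a hypothetical isothermal cure at 80 degC
T_iso, dt = 80.0 + 273.15, 1.0            # kelvin, seconds
alpha = 1e-3
for _ in range(int(3600 * 2 / dt)):       # two hours
    alpha = min(alpha + cure_rate(alpha, T_iso) * dt, 1.0)
print(f"alpha after 2 h ~ {alpha:.2f}, Tg ~ {tg_dibenedetto(alpha):.1f} degC")
```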
The resin was infused at room temperature. The TFBG was embedded using the same translation stage of the TFBG calibration, to hold the TFBG straight and in the centre position of the reinforcement layer inducing a small pre-tensile force to the OF. The TFBG was then spot-glued on the glassfibre foil using small drops of cyanoacrylate glue close to the edges of the composite layer ( Figure 2). Additionally, a TC was placed as close as possible to the glued TFBG sensor by using a special web adhesive which melts once the resin covers it and, in a way that does not interfere with the material surrounding the OF sensor. The TFBG and TC were embedded in the middle plane (6 th reinforcement layer), after this, the preform can be assembled. Figure 3 gives a schematic view of the sample with the sensors embedded. The OFs and the TC wires exiting from the composite were protected with vacuum bag sealant tape. Resin infusion was performed at 50 mbar to avoid the evaporation of volatiles. During resin flow, the TFBG sensor was used to provide information on the flow arrival time whose details are reported in section 4.1. After completing the infusion, the inlet line was closed and the panel was cured in an oven provided with an access hole from which the infusion lines, the OFs and the TCs can be externally connected. The applied curing temperature profile is suggested by the resin manufacturer [34]. As no indications are given by the manufacturer regarding ramp rates, a heating-up of 1 C/min and cooling-down by natural convection were imposed to have a gradual temperature variation. The strain, temperature and RI measurements were performed by processing the TFBG spectra acquired during the curing time and these are reported in the results section. Results and discussion This section reports the outcomes of the TFBG measurements from the spectra acquired during the different stages of the VARTM process. In section 4.1, the TFBG spectra are analysed during the resin infusion stage. The results during the curing stage of the composite plate are addressed in section 4.2. TFBG detection during resin infusion stage During the infusion stage, the TFBG spectra can be used to monitor the resin flow front arrival. At this stage, the TFBG spectra were acquired every second, while the TC was recorded every 3 s. When the resin starts to wet the sensors, the depth between the upper and lower cladding peaks decreases uniformly until they reach a stable condition when the TFBG is fully immersed. In this last condition, the normalised envelope area returns the resin RI using the fitting correlation function obtained from the calibration. A detailed analysis of this behaviour can be found in [35]. Figure 4 shows the TFBG spectra before and after the resin flows in the thin composite. In this case, the required time for the resin to contact the sensor from the start of the infusion was $50 s whilst to have a stable spectrum at the relative area, $9 s were needed. The latter can be considered as the interval needed for the resin flow front to fully cover the TFBG after reaching it. However, this time should not be confused with the absolute speed in a single direction of the composite. In fact, due to the presence of the flow media on top, the flow front propagation is non uniform through the thickness as its permeation from the flow media is superposed to the in-plane flow. 
Moreover, small oscillations can be present in the spectrum due to pores migrating along the OF surface where the TFBG is located, which locally influence its mode coupling. In this context, the TFBG spectral signals may be used to locally identify possible defects deriving from poor or incomplete wetting. This can be done on-line, since the operations required to calculate the envelope area are usually fast enough to be completed within the minimum refresh time of the device used to interrogate the TFBG sensors. Furthermore, as the time interval needed for the single TFBG spectrum to pass from the dry to the fully wet condition can be determined with an accuracy of 0.33 s (depending on the interrogator device), these sensors may be used to investigate the local permeability of a fibre reinforcement layer along the embedding direction. TFBG measurements during curing stage The composite thermomechanical state can be obtained with the TFBG as described in section 2. From this procedure, Δε and ΔT can be obtained from the moment at which the oven was switched on (i.e. 96 min). At this point, the strain variation is assumed to be zero and the temperature is ≈21 °C. Here, the TFBG acquisition frequency is once per minute, while the TC measurements are taken every 3 s. Figure 5 shows the temperature (dashed red line) and strain (black continuous line) profiles measured by the embedded TFBG sensor. Although the TFBG measurement performance is constrained by its thermal resolution (7.8 °C), the temperature trends measured by the TC and the TFBG are similar. Nevertheless, the TFBG thermal resolution generates an average temperature difference between the TC and TFBG temperature trends (red lines in Figure 5) of ≈0.78 °C, while the maximum difference is ≈1.7 °C at 198 min. As a consequence, since strains are calculated in isothermal conditions when the TFBG measures the maximum temperature, a mean and a maximum strain deviation of ≈5 µε and ≈12 µε, respectively, can be calculated from these temperature differences. Considering the Δε and ΔT variations involved in Figure 5, these deviations can be considered negligible. The strain oscillations detected by the TFBG along the entire interval between 200 and 540 min are caused by temperature fluctuations. During the cooling-down step, the TFBG measurements were stopped when the temperature reached 28.8 °C, as lower temperatures were outside the range of the TFBG thermal calibration. At this temperature the strain measured by the TFBG is around −157 µε. However, the lowest detected temperature was 23.7 °C. The additional 5.1 °C correspond to about −23 µε; hence, the maximum Δε expected is around −180 µε. The compressive strain measured is a combination of compressive strain due to matrix-related shrinkage and contraction in the cool-down phase due to the coefficient of thermal expansion (CTE) [15]. The resin RI can be evaluated through the embedded TFBG at any moment of the manufacturing process, simultaneously with the strain and temperature measurements, once the sensor has been immersed by the resin flow. In Figure 6, the envelope area and the resin RI evolution are shown together with the TC temperature profile; both parameters follow the same trend. Both curves were smoothed with fine averaging to remove most of the oscillations due to possible TFBG signal fluctuations caused by high heat transfer, transverse strains, OF bending, power detection accuracy and background noise. 
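As a rough consistency check of the strain deviations quoted earlier in this section, assuming the temperature error propagates to strain through the ratio of the Bragg-peak thermal and strain sensitivities, one obtains values close to the reported ≈5 µε and ≈12 µε:

```python
K_T, K_eps = 8.686, 1.255      # pm/degC and pm/microstrain (Bragg-peak calibration)

for dT in (0.78, 1.7):         # mean and maximum TC-TFBG temperature differences, degC
    print(f"dT = {dT} degC -> strain deviation ~ {K_T * dT / K_eps:.1f} microstrain")
# ~5.4 and ~11.8 microstrain, consistent with the ~5 and ~12 microstrain quoted above
```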
As the RI is obtained from the envelope area through the correlation fitting function, their trend is very similar. This means that both the envelope area and RI can provide information about the resin curing state as is demonstrated in Figure 7 for the RI variation trend. Hence, in an on-line monitoring application of the resin curing process, the envelope area trend provides enough information to determine the resin state without the RI calculation. However, since the RI has a physical meaning, it is preferred here to base the following discussion on its correlation with the resin cure degree. In Figure 7, a and the resin RI curves are compared and three ranges were identified. In the first, the sensor measures the resin's RI monotone increasing due to the crosslinking of the polymeric chains. Though the initial RI variation is severe, the curve slope becomes milder quickly reflecting the slow resin crosslinking reaction occurring at room temperature. In the second range, the RI trend changes when the ramp-up is performed by switching-on the oven with the programmed temperature profile. The RI value detected from the TFBG at the start of the oven ignition is $1.552. This value deviates slightly from the one reported in the manufacturer data sheet for three main reasons: different reference temperature (here, $21 C), the resin was previously mixed with its designated hardener which influences the overall RI, and the resin has already undergone part of the cure stage at room temperature. Switching-on the oven, the RI tends to flatten, and then decreases as the overall expansion due to the CTE of the mostly uncured resin dominates the shrinkage effect due to the crosslinking. Therefore, the RI curve reaches a local minimum point, where the third range starts. Here, the RI trend reverses when the degree of cure is about 0.4 which indicates that the resin system might be approaching its gelation point, and starts to increase as shrinkage related effect starts to dominate over the CTE. The competing effects between shrinkage and CTE in composite manufacturing has been discussed and quantified by means of FEA in [15]. The resin's RI converges into a plateau in the last part of the curing stage (observed also in [27,39]), where the great part of the crosslinking reactions (a ¼ 92%) occurred. The oscillations in the signal are possibly caused by the low crosslinking density occurring in this range, whereby the resin RI changes are too small to be detected from the TFBG with respect to the RI variation caused by the small temperature fluctuations. Hence, the resin RI measurements during its curing (or cladding resonance peaks envelope area), can be used to detect the different cure dynamics even without the help of the a curve. Additionally, a correlation between the resin RI and its degree of cure can be establish [40]. Furthermore, since the RI is sensitive to the temperature variations, its measurement can identify also whether the curing occurs at room temperature or in an oven. Finally, the resin's T g is exceeding the curing temperature at about 330 min (Figure 7), leading to a vitrification of the matrix. Conclusions In this work, the TFBG sensor has been demonstrated to be able to act as a three-parameter OF sensor in monitoring the VARTM manufacturing process of a glass-fibre/epoxy composites. 
The TFBG proved to be useful in detecting the time needed by the resin flow front to reach the TFBG and the envelope area can be also used to obtain flow information as resin arrival time, infusion degree and poor resin wetting. The TFBG simultaneous strain-temperature measurements detected during the curing in the oven, indicated a relevant development of compressive strains during cooldown. The measured temperature matches well the one detected by the TC. However, the two profiles deviated due to the TFBG thermal resolution, which influences also the measured strains. This limitation can be easily overcome using a FBG interrogator with a higher swR. The maximum strain deviation calculated corresponds to 2.7% of the average strain for a short time interval of the curing step, and this value can be considered low. At the same time, The TFBG spectra provide the resin RI variation starting from the infusion step. The comparison of the RI trend with the resin degree of cure obtained from its cure kinetics, showed that the RI measurements can detect the resin cure state throughout the process. The monitoring technique can in future be applied to carbon fibre composites made with Liquid Composite Moulding processes or in an autoclave, and in pre-pregs composites. Furthermore, the TFBGs may be suitable for the composite industry as they can improve the quality and health control of the products by providing information during the manufacturing process and in service. TFBG measurements can be also performed on thicker composites, and can be potentially obtained in real-time, and for all the three dimensions of the composite by embedding more TFBGs. To conclude, a single minimal intrusive TFBG sensor was demonstrated as three-parameter OF sensor embedded in a composite, to monitor the thermomechanical and cure state of the composite during the several steps of its manufacturing process after performing a preliminary calibration of the sensor. This allow to improve the monitoring and sensing technology state of art and raise the concept of structural health monitoring of a product.
5,871.6
2022-02-15T00:00:00.000
[ "Physics" ]
Causality Relationship between Import, Export and Exim Bank Loans: Turkish Economy Export promotion tools aim to increase exports and support the entrepreneur in reaching new foreign markets. The positive impact of incentives, especially on financial issues, on exports both before and after shipment is undeniable. Founded in 1987, Turkish Exim bank is Turkey ’ s official export credit institution. By observing macro-economic balances, Exim bank ensures that exporters, export-oriented production manufacturers and entrepreneurs operating abroad are supported by credit, guarantee and insurance programs to increase their competitiveness. The study aims to examine the causal relationship between imports, exports and Exim bank loans in the Turkish economy. In the study, stationarity with the extended Dickey-Fuller unit root test, long-term relationship with the Johansen co-integration test, and then causality with the Granger test were investigated. The causality relationship was analyzed using import, export and Eximbank loans data for the periods 2003 – 2020. Introduction For developing countries to reach the level of developed countries and to catch the level to compete with them, more than one condition must be met. The most important of these conditions is the industrialization strategies that developing countries will implement. With the decisions of January 24, 1980, which were a turning point in terms of redesigning the Turkish economy, the export-based industrialization strategy was started to be implemented by targeting export-based growth instead of the import substitution strategy implemented since the 1960s, and some institutions were created to eliminate the problems that will be encountered at the implementation stage of these decisions ( [1], p. 22). To increase the competitiveness of exporters in foreign markets, Turkish Exim bank provides export financing in Turkey with credit, guarantee and insurance programs under international rules and principles ( [2], p. 180). In developing countries, Exim bank loans are provided by organizations that support the Central Bank of the Republic of Turkey (CBRT) and non-profit exports. Commercial banks, private equity export credit insurance companies and factoring companies are the only organizations that support finance, as the main purpose is profit. In developed countries, the necessary financing for exports is usually provided by the commercial banking system. Export financing organizations, on the other hand, support the export sector and banks with insurance and guarantee programs, only performs the function of providing a risk-free environment. Import Imports are the value of foreign goods and services bought by a country's households, firms, government agencies, and other organizations in a given period. Exports Exports are goods and services that are produced in one country and sold to buyers in another. Exports, along with imports, make up international trade. Eximbank loan Eximbank loans are lines of credit made available by Export Credit Bank of Turkey (Exim bank) to enhance exports. This credit is made available during the pre-export stage against a written pledge by the exporter to export Turkish-origin goods and services as stipulated by Exim bank. It provides a price advantage over other export loans offered by banks. 
Literature review In the Literature view, a summary of information was given about research that examines the relationship between exports, financial development and economic growth in Turkey in the context of causality. Dodaro [3], examined the relationship between economic growth and exports with the Granger Causality test by using variables between 1967 and 1986 periods. The study found a one-sided causal relationship from economic growth to exports. Bahmani and Domac [4] examined the relationship between economic growth and exports, with the Co-Integration test by using variables between 1923 and 1990 periods. As a result of the research, it is found that there is a decidedly causal relationship between economic growth and exports. Tuncer [5], examined the causal relationship between exports, imports, investments and Gross domestic product (GDP) with the method Toda and Yamamoto by using variables between 1980Q1 and 2000Q3 periods. As a result of the study, a one-sided causality relationship has been found from economic growth to exports. Şimşek [6], tested the export-based growth hypothesis with Error Correction Model, Co-Integration Test and Causality tests by using variables between 1960 and 2002 periods. As a result of the study, the one-sided causality relationship has been found from economic growth to exports. Erdogan [7], examined the relationship between economic growth and exports, with Co-Integration and Causality tests by using variables between 1923 and 2004 periods. As a result of the study, the long-term double-sided causal relationship between economic growth and exports was found at the level of 10% significance. Taştan [8], examined the interaction and causal relationships between export, industrial production and import variables, with Co-Integration and Causality tests by using variables between 1985Q1 and 2009Q3 periods. As a result of the study, a one-sided causality relationship has been found from economic growth to exports. Tıraşoglu [9], examined whether the export-based growth hypothesis is valid in Turkey or not, with Co-Integration and Causality tests by using variables between 1998Q1-2011Q3 periods. As a result of the study, there is a long-term one-sided causal relationship between exports and economic growth. Korkmaz [10], examined the relationship between economic growth and exports, with Co-Integration and Causality tests by using variables between 1998: Q1-2013:Q3 periods. As a result of the study, a one-sided causality relationship has been found from exports to economic growth. Pentecost and Kar [11], examined the relationship between economic growth and exports, with Co-Integration and Causality tests by using variables between 1963 and 1995 periods. As a result of the research, there is a one-sided causal relationship from economic growth to financial development. Al-Yousif [12], studied the causal relationship between financial development and economic growth for 30 developing countries, with both Time Series and Panel Data Analysis tests, by using variables between 1970 and 1999 periods. As a result of the study, there is a double-sided relationship between economic growth and financial development. Ceylan and Durkaya [13], examined the causal relationship between domestic credit volume and economic growth, by taking advantage of Gross domestic product (GDP) and total loans that private banks use domestically by using variables between 1998 and 2008 periods. 
As a result of the research, there is a onesided causality relationship from economic growth to loans. Data set In this study, the data set used were between 2003 and 2019 periods. The source of the data used in the study was taken from the Central Bank of the Republic of Turkey (TCMB) and the official website of the bank Exim bank. This data was created with three different variables which are listed in Table 1. All analyses and tests were performed on these variables by using the EViews11 program. Augmented Dickey-Fuller (ADF) unit root test To obtain econometrically significant relationships between series in time series analysis, it is essential that the analyzed series must be stationary. Unit root tests are usually used to test whether the series has a stationary structure or not. The most commonly used of these tests is the unit root test performed by Dickey-Fuller [14], which assumes that the error term is independent and uniformly distributed. If a time series is stationary, its variance, average, and covariance (with various delays) are the same, no matter when it is measured ( [15], p. 757). Let Y t be any time series, the stationary of a series depends on the following conditions: The relationship between this period value of Series Y t and the value it has in the last period, is as in Eq. (4): If ρ = 1 or γ = 0 is found in this equation, there is a unit root problem. If ρ = 1, the relationship will be as in Eq. (8): This means that the impact of the shock that the series was subjected in the previous period remains in the system as it was. If ρ < 1, it means that the initial effect of shocks in the past continues and that this effect will disappear over time. The main regression patterns used in the Dickey-Fuller test are: Eq. (9), shows a structure with no fixed term and no trend effect. Eq. (10) shows a structure with a fixed term and no trend term, and Eq. (11) shows a structure with a fixed term and no trend effect. In case of correlation between error terms, the extended Dickey-Fuller (ADF) unit root test was developed again by Augmented Dickey-Fuller [16] by including the delayed values of the dependent variable in the model. The proposed models for this test are shown in the following equations: Eq. (12) shows the structure in which there is no fixed term and no trend effect. Eq. (13) shows the structure in which there is only a fixed term, and Eq. (14) shows the structure in which both the fixed term and the trend effect are observed. The stationary test is first performed at the level value. If stationary is not achieved in the level value, the first difference of the Y t series will be taken. If the ΔY t = Y t À Y tÀ1 series becomes stationary, it is denoted by I(1) and the series becomes stationary in the first difference. If stationarity cannot be achieved in the first difference of the series, the second difference will be taken. The process of taking the difference of the series continues until it becomes stationary. In Eqs. (4) and (7), the H 0 : γ=0 (the series aren't stationary) hypothesis in the unit root test was found by Dickey Fuller [14] and tested with the τ (tau) statistic. If the error term is correlated in the Y t series, the extended Dickey Fuller (ADF) test is preferred, and the H 0 hypothesis is rejected if the critical values of MacKinnon [17], correspond to the absolute value of the statistics τ (tau), are greater than τ. 
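Purely for illustration (the study itself uses EViews 11), an ADF check of this kind could be scripted in Python with statsmodels, differencing the series until the unit-root null is rejected; the series name below is a placeholder for one of the study variables.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_until_stationary(series, max_diff=2, regression="c", alpha=0.05):
    """Difference `series` up to `max_diff` times, stopping when ADF rejects a unit root.
    regression: 'c' = intercept, 'ct' = intercept + trend, 'n' = no deterministic terms."""
    x = np.asarray(series, dtype=float)
    for d in range(max_diff + 1):
        stat, pvalue, usedlag, nobs, crit, _ = adfuller(x, regression=regression, autolag="AIC")
        print(f"I({d}): ADF = {stat:.3f}, p = {pvalue:.3f}, 5% critical = {crit['5%']:.3f}")
        if pvalue < alpha:          # unit-root null rejected -> stationary at this order
            return d
        x = np.diff(x)              # take one more difference and test again
    return None                     # still non-stationary after max_diff differences

# e.g. adf_until_stationary(imports); adf_until_stationary(eximbank_loans, max_diff=2)
```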
If the ADF test statistic value is more negative than the MacKinnon [17] critical values at various significance levels, it is decided that there is a unit root in the series; in other words, the series are not stationary. In this study, the stability of the series was analyzed using the extended Dickey-Fuller (ADF) unit Root Test. As we can see in Table 2, Import variables were found stationary in the intercept model in the first difference I(1), Export variables were found stationary in nonintercept and trendless model in the first difference I(1); while Eximbank loans variables were found stationary in intercept model in the second difference I(2). Johansen cointegration test To test whether non-stationary series converge to equilibrium over a long period, the cointegration test examines whether there is a long-term relationship between the series or not. But since this test does not provide information about the direction of the relationship, causality tests are used to determine the direction of the relationship. There are two Tests in Johansen's cointegration analysis. These are trace and max. Trace hypothesis test H0: r ≤ r0, H1: r ≥ r0 + 1. Max hypothesis test H0: r = r0, H1: r = r0 + 1. If r = 0 there is not cointegration vector. The series were analyzed using the Johansen cointegration test and the results were shown in Table 3. In Table 3, the r = 0 hypothesis, shows that there is no cointegration relationship between the variables; the r ≥ 1 hypothesis, is an alternative hypothesis which shows that there is at least one cointegration relationship; the r ≥ 2 hypothesis is an alternative hypothesis that shows that there are at least two cointegration relations: According to the Johansen test output, both the Trace test statistic value and the Maximum Eigen test statistic value were greater than the table critical value of 5%. Therefore, the zero hypothesis of r = 0 can be rejected for both test values. In other words, Export, Gross domestic product (GDP), and Loan variables are cointegrated. Granger causality test The Granger causality test examines the relationship between series based on estimating past and present values. According to Granger, if past information about X t helps to obtain estimates. On the other hand, if Y t 's past values allow X t to be estimated, the Y t series is the granger cause of X t . If X t causes Y t and Y t causes X t , there is a bilateral causality relationship. An error correction model is used to determine the direction of the causality relationship, if the series is co-integrated. But if the series is not co-integrated, standard Granger or Sims tests are used to determine the direction of the causality relationship ( [18], pp. 213-228). Determination of appropriate lag length Accurate determination of the number of lag lengths in the Granger causality test is very important for the application to give healthy results, because this test is sensitive to the number of lag lengths. To find the appropriate lag length numbers for the Granger causality test, the Vector autoregression (VAR) model is estimated. Here a generic VAR model is estimated primarily to determine the appropriate number of lag length. Then, the number of lag length, will be determined by Akaike information criteria and by the LM test. 
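Stepping back to the Johansen test just described, a comparable trace/maximum-eigenvalue check outside EViews could be sketched as follows (the data array is hypothetical, and the deterministic-term and lag settings must mirror the study's specification); it prints the statistics against their 5% critical values as in Table 3.

```python
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# data: array of shape (T, 3) with columns exports, imports, Eximbank loans (hypothetical)
def johansen_summary(data, det_order=0, k_ar_diff=1):
    res = coint_johansen(data, det_order=det_order, k_ar_diff=k_ar_diff)
    for r in range(data.shape[1]):
        trace, trace_cv = res.lr1[r], res.cvt[r, 1]   # column 1 holds the 5% critical value
        maxeig, max_cv = res.lr2[r], res.cvm[r, 1]
        print(f"H0: r <= {r}: trace = {trace:.2f} (cv {trace_cv:.2f}),"
              f" max-eig = {maxeig:.2f} (cv {max_cv:.2f})")
    # Reject H0 when the statistic exceeds the critical value, as in Table 3.
```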
For the VAR model, the appropriate lag length was obtained by the LogL (log likelihood), LR (sequential modified LR test statistic), FPE (final prediction error), AIC (Akaike information criterion), SC (Schwarz information criterion) and HQ (Hannan-Quinn information criterion) criteria. The model with the largest LogL and LR values and the smallest FPE, AIC, SC and HQ values was selected to determine the appropriate lag length. As seen from Table 4, the sequential modified LR test statistic (LR), final prediction error (FPE), Akaike information criterion (AIC), Schwarz information criterion (SC) and Hannan-Quinn information criterion (HQ) all indicate an appropriate lag length of 1. According to this information, the lag length is taken as 1. Figure 1 presents the stability check of the VAR(1) model: since the autoregressive characteristic roots all lie inside the unit circle, the VAR(1) model used in the study satisfies the stationarity condition. Subsequently, the autocorrelation LM test applied at the selected lag showed that there was no residual autocorrelation. The series were then analyzed using the Granger causality test. As can be seen from Table 5, there is no causal relationship from Eximbank loans to exports (p = 0.2485 > 0.05), from imports to exports (p = 0.1140 > 0.05), from exports to Eximbank loans (p = 0.3826 > 0.05), from imports to Eximbank loans (p = 0.0839 > 0.05), from Eximbank loans to imports (p = 0.98035 > 0.05), or from exports to imports (p = 0.8944 > 0.05). According to the results shown in Table 5, it was determined that there is no causal relationship between the Eximbank loans, import and export variables at the 1 and 5% significance levels. Conclusion To decipher the causal relationship between the import, export and Eximbank loan variables in the Turkish economy, three different variables were used in the study. All variables used in the study are time series, because they depend on time, so the stationarity of the variables was tested by the ADF test. As a result of the test, stationarity was achieved by taking first-order differences of the import and export variables and second-order differences of the Eximbank loans variable. To test whether the non-stationary series converge to equilibrium over a long period or not, the series were analyzed using the Johansen cointegration test, and the results revealed that the Export, GDP, and Loan variables were cointegrated. Then the series were analyzed using the Granger causality test, and according to the results, it was determined that there was no causal relationship between the Eximbank loans, import and export variables at the 1 and 5% significance levels. Looking at the literature review, a summary was given of research that examines the relationship between exports, financial development and economic growth in Turkey in the context of causality. In the study of Ceylan and Durkaya [13], a one-sided causality relationship from economic growth to loans was found. The studies of Dodaro [3], Bahmani and Domac [4], Tuncer [5], Şimşek [6] and Taştan [8] found a causal relationship from economic growth to exports. Erdogan [7] found a causality relationship between economic growth and exports at the 10% significance level. Tıraşoğlu [9] and Korkmaz [10] found a causal relationship between exports and economic growth. Pentecost and Kar [11] and Al-Yousif [12] found causal relationships from economic growth to financial development. 
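For completeness, the lag selection and the pairwise Granger tests summarized in Tables 4 and 5 above could be reproduced with a script of the following form (variable names are hypothetical; the study itself used EViews 11).

```python
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

# df: DataFrame with columns 'export', 'import', 'eximbank' (the stationary, differenced series)
def lag_selection(df, maxlags=4):
    sel = VAR(df).select_order(maxlags=maxlags)
    print(sel.summary())            # AIC, BIC (SC), FPE and HQIC for each candidate lag
    return sel.aic                  # lag chosen by Akaike's criterion

def pairwise_granger(df, lag=1, alpha=0.05):
    cols = list(df.columns)
    for caused in cols:
        for causing in cols:
            if caused == causing:
                continue
            res = grangercausalitytests(df[[caused, causing]], maxlag=lag)
            p = res[lag][0]["ssr_ftest"][1]   # F-test p-value at the chosen lag
            verdict = "Granger-causes" if p < alpha else "does not Granger-cause"
            print(f"{causing} {verdict} {caused} (p = {p:.4f})")
```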
But in this study, it was determined that there were no causal relationship between Eximbank loans, Import and Export variables at 1 and 5% significance levels. Turkey's export target in 2023, is to set at 500 billion USD. Looking at the export figures at the end of 2015, Turkey must increase exports by an average of 16.5% each year to reach the 2023 target. To achieve this increase, it is necessary to ensure the high growth of the economy, accelerate R&D investments, diversify exports, reach new markets, and provide the necessary regulations and facilities for exporting companies to compete with exporters in other countries. Eximbank loans provide a price advantage over other export loans offered by banks. It has a strong financial structure. Because of this financial structure, it supports exports at a high rate. To achieve the export potential that the country has, also in international markets, it should implement new and effective credit/insurance programs under international treaties and the restrictions of the institutions to which it is affiliated. Author details Yüksel Akay Ünvan* and Ulviyya Nahmatli Ankara Yıldırım Beyazıt University, Turkey, Ankara *Address all correspondence to<EMAIL_ADDRESS>© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
4,121.6
2021-12-31T00:00:00.000
[ "Economics", "Business" ]
A Comparison between Bayesian and Frequentist methods in Financial Volatility with Applications to Foreign Exchange Rates In this paper, a comparison is provided for volatility estimation in Bayesian and frequentist settings. We compare the predictive performance of these two approaches under the generalized autoregressive conditional heteroscedasticity (GARCH) model. Our results indicate that the frequentist estimation provides better predictive potential than the Bayesian approach. The finding is contrary to some of the work in this line of research. To illustrate our finding, we used the six major foreign exchange rate datasets. Introduction In the last few decades, volatility in financial time series has been of a key interest to both academics and practitioners as uncertainty is at the heart of financial decisions. Volatility plays a critical role in pricing derivatives, calculating measures of risk, and hedging. Since the gold standard abandonment in 1971, asset prices and stock markets began to broadly change and searching for predictive volatility modeling has been one of the major areas in time series analysis. Early work on volatility includes the ARCH (autoregressive conditional heteroscedasticity) of Engle (1982) and the GARCH (generalized autoregressive conditional heteroscedasticity) of Bollerslev (1986), which have become the benchmark models for estimating the volatility. ARCH/GARCH and their extended implementations have been proven to be a successful tool in modeling the conditional variance of financial time series data. A few examples are as follow. Wang et al. (2010) investigated volatility on Shanghai Stock Exchange with high-frequency intraday data. Huang et al. (2012) investigated the performance of GARCH models in option pricing. More recently, Jahufer (2015) has used GARCH models to examine Sri Lanka stock market using non-parametric specification test. The traditional frequentist approach uses the (conditional) maximum likelihood estimation (MLE) technique to estimate the parameters in the GARCH or GARCH-type models. We briefly describe this method in the next section and one can refer to Fan and Yao (2005) for more details. An-other technique that has gained momentum in recent years is the Bayesian approach, which takes into account prior information to estimate the posterior distribution. Nakatsuma (1999) developed three Bayesian methods: Markov chain Monte Carlo, Laplace approximation and quadrature formula to estimate the parameters of the ARMA-GARCH model. Bauwens (1998) explained how a Gibbs sampler can be implemented to perform the inferences on Bayesian GARCH models. Vrontos (2012) proposed a full Bayesian analysis of GARCH and Exponential-GARCH (EGARCH) model on parameter estimation, model selection, and volatility prediction. The Bayesian method has been an alternative way to model datasets in many different fields. The comparison of GARCH models under frequentist and Bayesian has garnered some attention in research. Nakatsuma (1996) conducted a study which focuses on this comparison. Based on a small sample Monte Carlo experiment, they found that the Bayesian approach performs better than the frequentist approach when comparing the mean square errors of the posterior mean in the ARMA-GARCH models. Hoogerheide (2012) examined density prediction of stock index returns us-ing GARCH models under both frequentist and Bayesian estimation. 
They showed that there is no significant difference between the qualities of the whole density forecasts, while Bayesian estimation exhibits better left-tail forecast accuracy. More recently, Sigauke (2016) modeled the Johannesburg Stock Exchange (JSE) using the Bayesian and frequentist approaches and concluded that the Bayesian Autoregressive Moving Average-Generalized Autoregressive Conditional Heteroskedasticity (BARMA-GARCH-t) model provided a better fit for the data than the standard ARMA-GARCH-t model. In a more general setting, studies have been conducted to compare the Bayesian and frequentist methods. Wagenmakers et al. (2008) advocate the use of Bayesian inference in the field of psychology. Samaniego (2010) gives a comparison of the Bayesian and frequentist approaches to estimation. Albers et al. (2018) outline the ramifications of using frequentist and Bayesian analyses. In our work, we show that the traditional frequentist approach renders better predictive performance than the Bayesian approach. The rest of the paper is organized as follows. Section 2 introduces the GARCH model along with the maximum likelihood estimation and Bayesian methodologies. Section 3 describes the results and Section 4 provides the discussion. Methods Let {x_t : t ∈ Z} be a stochastic process adapted to the filtration {F_t : t ∈ Z}, where F_t = σ({x_s : s ≤ t}) is the sigma-field generated by {x_s : s ≤ t}. Following Geweke (1993), we assume x_t = ε_t (w_t)^(1/2) σ_t, (1) where ε_t are innovations and ε_t | F_{t−1} either follows a standard normal distribution or a t-distribution with ν degrees of freedom. Although the mean, µ, can be time dependent in practice and modeled separately, we fix this value to be zero. In this work, we are primarily concerned with σ_t, the volatility, in time series economics. A plethora of works have been devoted to modeling this latent variable in the last thirty years and the work is still ongoing. As mentioned previously, the pioneering work on volatility is the ARCH/GARCH model of Engle (1982) and Bollerslev (1986). The GARCH model of order (1,1) (or GARCH(1,1)) assumes σ_t^2 = α_0 + α_1 x_{t−1}^2 + β σ_{t−1}^2. Our main focus is this GARCH(1,1), examining the predictability of σ_t^2 under two cases for w_t: (1) fixed w_t = ν/(ν − 2) and (2) w_t ~ Inv-Gamma(ν/2, ν/2). The details are given in the following subsections. In practice, if y_t is a stock price then the log-return series x_t is defined as x_t = log(y_t) − log(y_{t−1}). This measures the relative change in the stock price. The above form can also be written as log(y_t) − log(y_{t−1}) = log(1 + (y_t − y_{t−1})/y_{t−1}) ≈ (y_t − y_{t−1})/y_{t−1}. Many financial studies use the return series x_t instead of the price series y_t for several reasons. First, the returns are scale-free. Second, they have more attractive statistical properties than the price series and, third, they are time-additive. The reader can refer to Tsay (2010) for more details and elaboration. Frequentist GARCH Estimation In traditional frequentist statistics, the parameters are fixed unknown constants. Under this framework, we fix w_t = ν/(ν − 2) in equation (1), where ε_t follow N(0, 1) or t_ν. Under a standard normal distribution, the likelihood function of x = (x_1, ..., x_T)' is the product of the conditional normal densities of x_t given F_{t−1}, and under a t-distribution with ν degrees of freedom it is the corresponding product of conditional Student-t densities. The maximum likelihood (ML) estimators are the maximizers of these functions. Note that σ_t^2 is a function of the unknown parameters α_0, α_1 and β, and it depends on the past squared returns and the past squared volatilities. 
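As a minimal illustration of this frequentist estimation, the sketch below fits a Gaussian GARCH(1,1) by numerically maximizing the conditional log-likelihood. It deliberately ignores the Student-t and scale-mixture refinements discussed above and is not the rugarch implementation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_sigma2(params, x):
    """Conditional variance recursion sigma2_t = a0 + a1*x_{t-1}^2 + b*sigma2_{t-1}."""
    a0, a1, b = params
    sigma2 = np.empty_like(x)
    sigma2[0] = np.var(x)                      # a common initialisation choice
    for t in range(1, len(x)):
        sigma2[t] = a0 + a1 * x[t - 1] ** 2 + b * sigma2[t - 1]
    return sigma2

def neg_loglik(params, x):
    a0, a1, b = params
    if a0 <= 0 or a1 < 0 or b < 0 or a1 + b >= 1:   # positivity / stationarity constraints
        return np.inf
    s2 = garch11_sigma2(params, x)
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + x ** 2 / s2)

def fit_garch11(x):
    x = np.asarray(x, dtype=float)
    res = minimize(neg_loglik, x0=np.array([1e-6, 0.05, 0.90]), args=(x,),
                   method="Nelder-Mead", options={"maxiter": 5000, "xatol": 1e-10})
    return res.x  # (alpha0_hat, alpha1_hat, beta_hat)

# usage: returns = np.diff(np.log(prices)); a0, a1, b = fit_garch11(returns)
```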
In addition, the likelihood is conditioned on (x_1^2, x_2^2, ..., x_p^2) and (σ_1^2, σ_2^2, ..., σ_p^2). The reader is referred to Fan and Yao (2005) for more details. In our work, we used nonlinear optimization under the augmented Lagrange method, as implemented in the R solver solnp of Ghalanos (2011) used by rugarch of Ghalanos (2016). Bayesian GARCH Estimation To describe the Bayesian framework, we follow Geweke (1993). Let w = (w_1, ..., w_T)' and α = (α_0, α_1)', and regroup the unknown parameters as θ = (α, β, ν)'. Upon defining the T × T diagonal matrix Λ = Λ(θ, w) = diag({w_t σ_t^2(θ)}_{t=1,...,T}), the likelihood function of (θ, w) under the normal distribution is L(θ, w | x) ∝ (det Λ)^(−1/2) exp(−x'Λ^(−1)x / 2). The parameters (θ, w) are random variables which are characterized by a prior density, denoted by p(θ, w). Inferences are made based on the posterior density defined by p(θ, w | x) = L(θ, w | x) p(θ, w) / p(x). (8) After observing the data, the posterior distribution gives a probabilistic description of the knowledge about the model parameters. Following Ardia (2010), we take truncated normal prior distributions for the GARCH parameters α and β, p(α) ∝ φ_2(α | µ_α, Σ_α) I[α ≥ 0] and p(β) ∝ φ_1(β | µ_β, Σ_β) I[β ≥ 0], where φ_d is the d-dimensional normal density, µ. and Σ. are the hyperparameters, and I[·] is the indicator function. Assuming that the w_t are independent and identically distributed as the inverse gamma with parameters (ν/2, ν/2), the prior distribution of the vector w given ν is p(w | ν) = Π_{t=1}^{T} p_IG(w_t | ν/2, ν/2). The prior distribution of ν is chosen as the translated exponential with λ > 0 and δ ≥ 2: p(ν) = λ exp[−λ(ν − δ)] I[ν > δ]. The mass of this prior is mostly concentrated near δ when λ is large and, hence, the degrees of freedom can be constrained in this manner. Deschamps (2006) points out that this prior density is useful in two ways. Bounding the degrees of freedom away from two may potentially be important from a numerical perspective to avoid a rapid divergence of the conditional variance. Next, the normality of the errors can be estimated while allowing the prior to remain reasonably constrained, which may allow for better convergence of the sampler. Assuming prior independence among the parameters, the joint prior distribution is then p(θ, w) = p(α) p(β) p(w | ν) p(ν). (9) There is no closed form for the joint posterior distribution in (8) and no conjugate prior exists for this joint posterior density. Hence, we resort to the Markov chain Monte Carlo (MCMC) method to approximate the posterior distribution by simulation. The MCMC sampling technique was initially introduced by Metropolis (1953) and was later generalized by Hastings (1970). The basic idea of MCMC sampling is the creation of a Markov chain (θ^(0), w^(0)), ..., (θ^(k), w^(k)) in the parameter space. Under some regularity conditions, as k goes to infinity, the asymptotic distribution of (θ^(k), w^(k)) is the posterior (8). To implement the MCMC sampling technique, we used the Metropolis-Hastings (MH) algorithm. The details can be found in Chib (1995). This algorithm updates the GARCH parameters in blocks, with one block for α and one block for β, while the degrees-of-freedom parameter is sampled through an optimized rejection method from the translated exponential density defined earlier. This process is implemented in the MCMC sampler of the R package bayesGARCH, which uses the approach of Ardia (2008). For p(α), we specify two cases for the variance-covariance matrix Σ_α: a diffuse prior with diagonal variances equal to 1000 and a tighter prior with diagonal variances equal to 0.01. Similarly, the variance for p(β) was set to 1000 and 0.01. Both prior means µ_α and µ_β were set to 0. 
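The paper relies on the tailored block MH sampler of the bayesGARCH package; the far simpler random-walk Metropolis sketch below (Gaussian innovations, loose zero-mean normal priors truncated to the positive region, hypothetical tuning constants) is included only to illustrate the mechanics of drawing from a posterior of the form p(θ | x) ∝ L(θ | x) p(θ).

```python
import numpy as np

def log_posterior(theta, x, prior_var=1000.0):
    """Log of L(theta|x)p(theta) for a Gaussian GARCH(1,1) with truncated normal priors."""
    a0, a1, b = theta
    if a0 <= 0 or a1 < 0 or b < 0 or a1 + b >= 1:
        return -np.inf                              # outside the prior support
    s2 = np.empty_like(x)
    s2[0] = np.var(x)
    for t in range(1, len(x)):
        s2[t] = a0 + a1 * x[t - 1] ** 2 + b * s2[t - 1]
    loglik = -0.5 * np.sum(np.log(2 * np.pi * s2) + x ** 2 / s2)
    logprior = -0.5 * np.sum(np.asarray(theta) ** 2) / prior_var   # zero-mean normal, truncated
    return loglik + logprior

def rw_metropolis(x, n_iter=20000, step=np.array([1e-6, 0.02, 0.02]), seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([1e-6, 0.05, 0.90])            # starting values (hypothetical)
    logp = log_posterior(theta, x)
    draws = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(3)    # random-walk proposal
        logp_prop = log_posterior(prop, x)
        if np.log(rng.uniform()) < logp_prop - logp:    # MH accept/reject step
            theta, logp = prop, logp_prop
        draws[i] = theta
    return draws[n_iter // 2:]                      # discard the first half as burn-in

# posterior means: rw_metropolis(returns).mean(axis=0); plug into the GARCH recursion to forecast
```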
Model Assessment In the frequentist setting, we assumed xt = σt with having the mean 0 and standard deviation 1. Therefore, ( 2 ) = [ ( 2 | −1 )] = [ ( 2 2 | −1 )] = 2 , where, in practice, Ft denotes the past financial information up to time t. This is also true under the Bayesian setting because ϵ t and w t are independent, and E(wt) = −2 . Using this fact and the fact that the true squared volatility σ t 2 is unknown when we deal with the actual datasets, we have used the squared series as a proxy for the squared volatility. Hence, we measure the mean square error (MSE) and the mean absolute deviance error (MADE) by where a t = |̂2 − 2 |. As another measure of accuracy, we've used the directional accuracy (DA), which is defined by: ℎ The DA gives the average direction of the forecast volatility by measuring the correctness of the turning point forecasts. To test for significance in forecasting accuracy, we carried out the Diebold and Mariano (DM) test proposed by Diebold and Mariano (1995). The underlying hypotheses associated with this test are where z row and z column are the squared deviance a t 2 (and absolute deviance a t ) from the models in the row and the column, respectively. Hence, the null hypothesis indicates the "equal accuracy" between the two approaches. In large samples, the DM statistic is the spectral density of the loss differential at frequency 0, and ( ) = [( − μ)( − − μ)] is the auto-covariance function at τ. Results In this section, we compare the predictive potentials of the GARCH(1,1) model under the frequentist and Bayesian methods using six daily exchange rates. We consider the daily exchange rates of six major currencies against US dollars. These currencies are Euro (EUR), Japanese yen (JPY), Pound sterling (GBP), Australian dollar (AUD), Swiss franc (CHF), and Canadian dollar (CAD). We analyze the most traded pairs of currencies, commonly called the Majors. The Majors are EUR/USD, GBP/USD, USD/JPY, AUD/USD, USD/CAD, and USD/CHF. Except for the EUR/USD pair, Several numerical summaries for the datasets are given in Table 1. It is noticeable that the skewness and kurtosis of AUD/USD are very high. This indicates that the distribution of the return series may be right-skewed and have fat tails. The fat-tail can also be noted from other datasets except for EUR/USD. We've conducted some preliminary analyses of the datasets. Table 2 shows the results from Ljung-Box test based on squared return series. Except for EUR/USD when Q(1) = 2.103, the results indicate significant serial dependence. The time series plots of the return series are shown in Figure 1. It can be seen that volatility clustering are present in the datasets. Also, the variability increases in the USD/CAD dataset. These may indicate that the datasets may not be stationary. Figure 2 shows the autocorrelation function (ACF) for the squared return series for each dataset. It is evident that the squared series seems to be serially correlated indicating a possible dependence at a higher moment. For each dataset, the in-sample data consist of the first 70% of the dataset to fit the model and the out-of-sample data contain the last 30% to test the model. In practice, in-sample measures do not mean much since we are interested in predictive nature of the model. Table 3 gives the comparison based on these three measures for both in-sample and out-of-sample periods. Under the out-of-sample measures, the best measure that corresponds to the method is printed in bold for each dataset. 
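To make the assessment criteria of this section explicit, the sketch below computes MSE, MADE and DA with the squared returns as the volatility proxy, together with a basic one-step Diebold-Mariano statistic; this is a simplified version of the test described above, with no small-sample correction.

```python
import numpy as np
from scipy import stats

def mse_made_da(sigma2_hat, x):
    proxy = x ** 2                                   # squared returns as the volatility proxy
    a = np.abs(sigma2_hat - proxy)
    mse, made = np.mean(a ** 2), np.mean(a)
    # directional accuracy: fraction of correctly forecast up/down moves of the proxy
    da = np.mean(np.sign(np.diff(sigma2_hat)) == np.sign(np.diff(proxy)))
    return mse, made, da

def diebold_mariano(loss1, loss2, h=1):
    """DM test on the loss differential d_t = loss1_t - loss2_t (H0: equal accuracy)."""
    d = np.asarray(loss1) - np.asarray(loss2)
    n = len(d)
    d_bar = d.mean()
    gamma = [np.cov(d[k:], d[:n - k], bias=True)[0, 1] for k in range(h)]
    var_d = (gamma[0] + 2 * sum(gamma[1:])) / n      # long-run variance up to lag h-1
    dm = d_bar / np.sqrt(var_d)
    pvalue = 2 * (1 - stats.norm.cdf(abs(dm)))       # asymptotic N(0,1) under H0
    return dm, pvalue

# e.g. dm, p = diebold_mariano((s2_freq - x**2)**2, (s2_bayes - x**2)**2)
```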
The results indicate that the frequentist approaches are generally better than the Bayesian approaches. 4. Conclusion Our main interest in this study was to compare the frequentist and Bayesian estimation approaches using the GARCH(1,1) as the basis model. In contrast to some of the existing literature, we have found that the frequentist method provides better predictive potential than the Bayesian method. We considered six foreign exchange rate datasets. We computed MSE, MADE and DA to compare the model outcomes, and the out-of-sample results indicate that the frequentist approach performed better. We also carried out the DM test to assess the significance of these results. We have observed that, in general, the frequentist approach provides more accurate predictions than the Bayesian approach. Finally, the current study is limited to the GARCH(1,1) as the basis model; however, one can use other basis models, such as the Exponential GARCH or Integrated GARCH models, as well.
3,444
2021-02-24T00:00:00.000
[ "Economics" ]
Study on Bending Beam Delayed Cracking of Ultra High-strength DP Steel for Automotive Use Automobile lightweight project can effectively reduce energy consumption and exhaust emission, but it will reduce automobile safety. Therefore, using a large number of high-strength or ultra-high-strength steels in automobiles is an effective way to give consideration to both lightweight and safety. Delayed cracking is a symptom-free cracking phenomenon, which is extremely destructive. It is necessary to evaluate the delayed cracking performance of ultra-high strength steel before it is used. In this paper, the delayed cracking properties of three ultra-high strength dual-phase steels DP780, DP980 and DP1180 for automobile are studied by bending beam delayed cracking test. The experimental results show that the delayed cracking resistance of DP780, DP980 and DP1180 steel becomes worse in turn, and DP1180 steel has the worst delayed cracking resistance. In addition, the content and morphology of martensite will affect the delayed cracking performance. The morphology of martensite in the tested steel is lathy, and the higher the martensite content, the worse the delayed cracking resistance of the steel. pollution caused by automobile exhaust is becoming more and more serious, so the requirements for energy saving and emission reduction of automobiles are getting higher and higher. Automobile lightweight project can effectively reduce energy consumption and exhaust emission, but it will reduce automobile safety. Therefore, using a large number of high-strength or ultra-high-strength steels in automobiles is an effective way to give consideration to both lightweight and safety. For example, in Honda's third generation fit body structure, the utilization rate of steel with tensile strength exceeding 1000MPa reaches 10%, among which the utilization rate of steel with 1500MPa grade reaches 2%. However, with the improvement of steel strength, especially when its tensile strength exceeds 1200MPa, delayed cracking is easy to occur [1][2][3] . Delayed cracking is a phenomenon of sudden brittle failure of materials under static stress after a certain time, and it is an environmental embrittlement caused by the interaction of materials, environment and stress [4][5][6] . Delayed cracking is a symptom-free cracking phenomenon, which is extremely destructive, so it is necessary to evaluate the delayed cracking performance of ultra-high strength steel before it is used. In this paper, the delayed cracking properties of DP780, DP980 and DP1180 Dual Phase steels for ultra-high strength vehicles are studied by bending beam delayed cracking test. The microstructure of the tested steels and the fracture morphology of the samples after bending beam test are observed by scanning electron microscope and the relationship among delayed cracking properties, strength and microstructure of the tested steels is divided, which provides a reference for the evaluation of delayed cracking properties of ultra-high strength DP steels for vehicles. test materials and methods The grades of three kinds of ultra-high strength DP steels for vehicles are DP780, DP980 and DP1180 respectively. The specific chemical composition of the test steel is shown in Table 1. Sample the side of the test steel along the rolling direction, inlay, grind and polish the sample along the rolling direction, etch and dry it with 4% nitric alcohol, and observe the microstructure under scanning electron microscope. 
In the delayed cracking test of curved beams, 5%HCl aqueous solution was used as corrosion medium to simulate the test environment, and the ambient temperature was room temperature. The prestress of the test steel plate was added to three stress states of 1.0GPa, 1.5GPa and 2.0GPa respectively. There are 15 samples of each steel under each prestress, and the sample size is 2×20×150(mm), as shown in Figure 1. The sample after loading and fixing is shown in Figure 2. Finally, the sample was put into 5%HCl aqueous solution for testing. During the experiment, pay attention to close observation within 2 hours, observe once every 10 minutes within 2-4 hours, and finally extend the observation time interval gradually. The fracture time of the sample is taken as the time when the sample is found to be completely broken during observation. microstructure The microstructure of ultra-high strength DP steel for automobile is shown in fig.3, the main microstructure of the three test steels is martensite (M)+ ferrite (F). It can be seen from fig.3a that the microstructure of DP780 steel is mainly martensite (M)+ ferrite (F), in which the gray flat area is ferrite (F), and the convex gravel-like structure distributed on the flat ferrite structure is martensite (M). Martensite in DP980 steel is distributed on ferrite in island shape, as shown in fig.2c. The microstructure of DP1180 steel (as shown in fig.3b) is mainly composed of lath martensite matrix and a small amount of grain boundary ferrite, in which lath region is martensite structure and concave polygonal region is ferrite structure. It can also be found from fig.3 that with the increase of strength grade of the test steel, the martensite content in the microstructure gradually increases, while the ferrite content gradually decreases. The martensite content in DP1180 steel is the highest and ferrite content is the lowest. In addition, the morphology of martensite in the test steel changed from crushed stone (DP780 steel) to island (DP980 steel) and finally to strip (DP1180 steel). Table 2 shows the average cracking time of three kinds of ultra-high strength DP steels for vehicles after bending beam delayed cracking test in 5%HCl solution for 1000 hours. It can be seen from table 2 that all samples of DP780 have no fracture phenomenon and no crack under three pre-stresses of 1.0GPa, 1.5GPa and 2.0GPa. Under three kinds of prestress conditions, DP980 has few fracture specimens, only two specimens have fracture under 1.0GPa and 1.5GPa prestress, and five specimens have fracture under 2.0GPa prestress. At first, DP1180 steel cracked, and all specimens with prestress of 2.0GPa broke. However, when the prestress is 2.0GPa, the fracture time is quite different, the shortest is about 10 hours, and the longest is over 500 hours. Under the prestress of 1.5GPa, the samples of DP1180 steel did not break completely, the minimum time was more than 10 hours, and the maximum time was more than 500 hours, the big gap may be caused by different materials of DP1180 samples in the same batch. There are only three samples of DP1180 fracture under the prestress of 1.0GPa, and the average fracture time is over 333 hours, and the maximum fracture time is over 400 hours. It can be found from table 2 that among the three DP steels, DP780 steel has the best delayed cracking resistance, followed by DP980 steel, while DP1180 steel has the worst delayed cracking resistance. 
In addition, it can be found that DP steels with three strength grades break within 48 hours (about 8 hours), and the fracture is flat, which belongs to brittle fracture and is basically characterized by delayed fracture, as shown in Figure 4. Fig.4 shows the fracture morphology of DP1180 steel after bending beam test under 2.0GPa prestress, from fig.4a, it can be found that when the sample is broken, no obvious corrosion phenomenon is found, no pitting pits appear, the fracture is smooth, no obvious plastic deformation is found, and it is brittle fracture. Fig.4b shows the microscopic fracture under scanning electron microscope, from which it can be found that the fracture is mainly cleavage fracture, and local intergranular fracture is brittle fracture. Corrosion products are rarely found in fracture, which should belong to hydrogen-induced delayed cracking fracture. Figure 5. Fig.5 shows the fracture morphology of DP1180 steel after bending beam test under 1.0GPa prestress, from fig.5a, it can be found that when the sample is broken, a large number of corrosion pits appear, and corrosion fracture is dominant, with few flat fracture. Fig.5b shows the microscopic fracture under scanning electron microscope, from the figure, it can be found that there are a lot of corrosion products in the fracture, so that the fracture mechanism can not be seen clearly, from the low-power SEM photograph, it is probably cleavage fracture, which should belong to stress corrosion cracking fracture. 6 shows the beam bending test results of DP980 and DP1180 steel in 5%HCl solution. It can be seen from the figure that the fracture time of DP1180 steel is much shorter than that of DP980 steel, and most samples (more than 10) of DP980 steel will not break within 1000 hours. In addition, it can be found that the fracture time of DP980 steel and DP1180 steel is concentrated at both ends, that is, short-time fracture, which is basically within 48 hours, and long-time fracture, which is close to 1000 hours, even does not appear after the test. In DP1180 steel, the fracture time of individual samples (3 samples) is shorter, and the average fracture time of these samples is more than 350 hours. The fracture morphology of these two kinds of fractured samples with fracture time is also obviously different. Short-time fracture (within 48 hours), with flat fracture, belongs to brittle fracture, and is basically characterized by delayed fracture. Long-term fracture (fracture time is much longer than 48 hours) shows a large number of corrosion pits, mainly corrosion fracture, and few flat fracture. It is generally believed that the existence of martensite will seriously reduce the delayed cracking resistance of steel [7] , so the more martensite, the worse the delayed cracking resistance. It can also be found from fig.3 that with the increase of strength grade of the test steel, the martensite content in the microstructure gradually increases, while the ferrite content gradually decreases. The martensite content in DP1180 steel is the highest and ferrite content is the lowest. In addition, the martensite morphology of three DP steels is also different. The martensite morphology of DP780 steel is crushed stone, which is on ferrite matrix respectively. In DP780 steel, martensite is used as strengthening phase, ferrite is used as main bearing phase, which bears stress. The morphology of martensite in DP980 steel is island, and the area of single martensite increases on ferrite matrix. 
In DP980 steel, martensite and ferrite are both main load-bearing phases and carry the stress together. The martensite in DP1180 steel is lath-shaped and forms the matrix, with a small amount of ferrite distributed at the grain boundaries of the martensite matrix. In DP1180 steel, martensite bears the stress while ferrite acts as a toughening phase; the lath martensite serves as the main load-bearing phase, and its presence seriously reduces the delayed cracking resistance of the steel. Therefore, among the three DP steels, DP780 steel has the best resistance to delayed cracking, followed by DP980 steel, while DP1180 steel has the worst resistance to delayed cracking. Conclusion (1) The bending-beam test results show that the delayed cracking resistance of DP780, DP980 and DP1180 steel deteriorates in turn, with DP1180 steel having the worst delayed cracking resistance. (2) The microstructure has an important influence on the delayed cracking performance of the test steels; both the content and the morphology of the martensite affect the delayed cracking performance. When the martensite morphology is lath-shaped and the martensite content is higher, the delayed cracking resistance of the steel is worse.
Coordinated Control of Intelligent Fuzzy Traffic Signal Based on Edge Computing Distribution With the development of Internet of Things infrastructures and intelligent traffic systems, the traffic congestion that results from the continuous complexity of urban road networks and traffic saturation has a new solution. In this research, we propose a traffic signal control scenario based on edge computing. We also propose a chemical reaction–cooperative particle swarm optimization (CRO-CPSO) algorithm so that flexible traffic control is sunk to the edge. To implement short-term real-time vehicle waiting time prediction as a collaborative judgment of CRO-CPSO, we suggest a traffic flow prediction system based on fuzzy logic. In addition, we introduce a co-factor (collaborative factor) set based on offline learning to take into account the experiential characteristics of intersections in urban road networks for the generation of strategies by the algorithm. Furthermore, the real case of Changsha County is simulated on the SUMO simulation platform. The issue of traffic flow saturation is improved by our method. Compared with other methods, our algorithm enhances the proportion of vehicles that reach their destinations on time by 13.03%, which maximizes the driving experience for drivers. Meanwhile, our algorithm reduces the driving times of vehicles by 25.34%, thus alleviating traffic jams. Introduction With the rapid increase in the number of vehicles and a large and complex road network, delays, traffic accidents and environmental pollution due to vehicle queuing caused by traffic congestion have created an urgent need for traffic control strategies. How to improve traffic efficiency while reducing the traffic accident rate by using efficacious control measures has attracted the attention of both academia and industry [1][2][3][4]. Traffic signal control can effectively alleviate saturated traffic conditions and improve the utilization of the road network. In recent years, many studies have focused on traffic signal control. The earliest traffic control methods relied on hand signals [2]. In order to alleviate the economic losses caused by traffic congestion in Toronto, Hewton [1] proposed online optimization of traffic control signals through computers, which promoted the application of computer technology in the field of traffic management. For reasons of safety, Inose [3] coordinated the timing of traffic signals according to the traffic flow on a road. With the continuous expansion of traffic scale, automatic traffic control strategies [4] have gradually formed a system. In the era of cloud computing, artificial intelligence (AI) algorithms continue to develop, and new concepts such as intelligent transportation systems (ITSs) have been put forward [5]. Wang et al. [6] and Wu et al. [7] designed adaptive traffic signal control strategies by using deep reinforcement learning (DRL) of multi-agent cooperation to deal with large and complex road networks. Shao [8] took the weight of special vehicles into extra consideration when setting the state and reward function, and had an appropriate preference for special vehicles in traffic signal control. With the emergence of the Internet of Things and the approach proposed in this paper, it is possible to successfully address the issue of coordinated traffic scheduling for vast, intricate road networks, increasing traffic efficiency while enhancing the driving experience.
Concretely, our main contributions are as follows: Initially, we propose a traffic signal control scenario based on edge computing, and then we propose a new swarm intelligence algorithm: chemical reaction-cooperative particle swarm optimization (CRO-CPSO). We change the generation method of the traditional swarm intelligence strategy and make full use of the idle computing power at the edge to generate a local strategy so that the flexible traffic control sinks to the edge. The CRO-CPSO algorithm can effectively adapt to large-scale urban road network structures and complex and dynamic traffic conditions. Secondly, we propose a traffic flow prediction system based on fuzzy logic and maintain the global traffic flow state table. We implement short-term real-time vehicle waiting time prediction as a collaborative judgment of CRO-CPSO, which effectively responds to the uncertainty of road traffic and achieves global coordination evaluation of the strategy. Then, we introduce a co-factor (collaborative factor) set based on offline learning to take the experiential characteristics of intersections in an urban road network into account when generating strategies. The co-factor set integrates the potential influence of historical traffic flow of adjacent sections into the strategy generation, which increases the convergence of the algorithm and effectively adapts to the complexity of traffic, thus enhancing coordination in the regulation of large road network structures. Finally, we examine the real case of Changsha County using the traffic simulator SUMO and compare other optimization algorithms (the evolutionary algorithm GA and the swarm intelligence algorithm PSO) to prove the effectiveness and advantages of CRO-CPSO. The remainder of this paper is organized as follows. In Section 2, we describe our optimization methodologies, including an overall model diagram. In Section 3, we introduce the experimental methods and results. The conclusions and prospects for future work are detailed in Section 4. Method In this section, the specific details of the edge swarm intelligence ATSC environment (the scene of traffic signal control), the global traffic flow prediction (involved in collaborative judgment), and the CRO-CPSO model (the core of our solver technique) are introduced. Edge Swarm Intelligence ATSC (Adaptive Traffic Signal Control) Environment This subsection describes the structure and the data flow of the edge swarm intelligence ATSC environment. Figure 1 shows the edge swarm intelligence ATSC environment, which includes cloud servers, MEC servers, traffic signal controllers, RSUs (roadside units) and vehicle units. The cloud layer carries on the global overall coordination. Cloud servers aggregate and generate the global co-factor set and predict the global traffic flow so as to make a collaborative judgment on the strategy of CRO-CPSO. In the MEC layer, the local strategy generation of CRO-CPSO is carried out, and the local co-factor set is trained off-line. MEC servers are deployed in their distribution area to achieve instantaneous data transmission with the traffic signal controllers. Vehicle units communicate with each other via V2V and exchange information with RSUs via V2I. The information is fed to MEC servers which communicate with each other via I2I. After aggregation and preprocessing by MEC servers, local road pheromone information is generated and summarized for cloud servers. 
The cloud servers perform fuzzy logic processing on the road pheromone information to obtain the short-term real-time vehicle waiting time prediction that serves as a collaborative judgment on the strategy of CRO-CPSO. As shown in Figure 1, our algorithm is mainly executed in the MEC layer and the cloud layer. Vehicle units transmit the road network data to the MEC layer. The MEC layer preprocesses the road network data to generate local road pheromone information and uploads it to the cloud layer. The MEC layer also trains on the road network data to generate a local co-factor set, which is uploaded to the cloud layer. The cloud layer takes the road pheromone information as the input of the fuzzy logic system and obtains the short-term real-time vehicle waiting time prediction. The cloud layer performs offline training on the road pheromone information and aggregates the local co-factor sets to generate the global co-factor set. The cloud layer transmits the global co-factor set to the MEC layer. The MEC layer takes the road pheromone information and the global co-factor set as the input for the swarm intelligence algorithm and obtains the local strategy of CRO-CPSO as the output. The cloud layer makes a collaborative judgment on the local strategy of CRO-CPSO through the short-term real-time vehicle waiting time prediction. Finally, the MEC layer transmits the optimal strategy of CRO-CPSO to the user layer. The Global Traffic Flow Prediction This subsection describes the pheromones in the global traffic flow prediction, the input and output fuzzy memberships of the fuzzy logic system, and finally the update strategy for the states of vehicle units. We designed a method to aggregate and process edge-distributed road pheromone information which can be used for traffic flow prediction.
Short-term real-time vehicle waiting time prediction is used as the collaborative judgement on local traffic signal control strategies and provides support for local coordinated adjustment. In the edge-distributed ATSC environment, the underlying data constitute the inherent data set for a road, such as road structure names, identifications, geographical location and its graph topology. The dynamic data set of the road includes real-time vehicle speed and positional information and predictive vehicle routes. The data are transmitted to MEC servers via vehicle units and RSUs and are then further filtered and processed. The MEC servers obtain real-time speed and positional information from vehicle units and obtain inherent road data sets from RSUs. Then, the short-term real-time vehicle waiting time is predicted from these data. Aggregating and generating global traffic flow predictions is a way to directly reflect the real-time state information of vehicle units in highly dynamic road networks. The pheromones used in global traffic flow prediction mainly include VRA (the current road name of the vehicle unit), VST (the time of the vehicle unit switching state), VRN (the number of remaining roads for the vehicle unit) and so on. VRA can be obtained by analyzing the real-time location and speed of a vehicle unit and the road map topology via V2I. When a vehicle unit runs on a road section at normal speed, VRA is the name of this road section. When the real-time speed of the vehicle unit is lower than the normal speed, VRA is "waiting_state". The state of the vehicle unit mainly includes two states: moving and waiting. VST is mainly divided into two types: the remaining time from the moving state to the waiting state, and the remaining time from the waiting state to the moving state. VST is mainly composed of the experienced travel time, the intersection delay caused by traffic signal control and the delay caused by traffic congestion. VST can be calculated from the real-time vehicle unit speed, the road map topology obtained via V2I and the real-time queue at the intersection obtained via I2I. VRN is the prediction of the remaining path of the vehicle unit, which can be obtained via V2I. The first step is to calculate the experienced travel time. The speed information of vehicle units on the road network is updated frequently; thus, we calculate the real-time spatial average speed and use it to calculate the experienced travel time. The real-time spatial average speed is the average speed of k samples within a time interval t, which can be calculated by Equation (1). Then, we divide the length of the road section travelled in time interval t by the real-time spatial average speed to obtain the experienced travel time, as shown in Equation (2), where v^m_Vσ is the instantaneous speed of the m-th sample of vehicle unit Vσ and L^t_Vσ is the length of the road section travelled by vehicle unit Vσ in time interval t. Then, we calculate the delay caused by traffic congestion. The traffic flow is a continuous cycle, and the subjective factors of drivers are uncontrollable. Any slight change in the traffic cycle may result in a mismatch between the calculated and optimal cycle. Thus we use fuzzy logic [33] in each successive cycle to overcome this mismatch. We construct the fuzzy system for the waiting time of vehicle units by establishing relations between the inputs and outputs of the fuzzy system using if-then rules.
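Equations (1) and (2) are simple to prototype. The sketch below assumes the spatial average speed is a plain arithmetic mean of the k sampled instantaneous speeds; the function and variable names are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def space_mean_speed(sampled_speeds):
    """Real-time spatial average speed over k sampled instantaneous speeds
    (a plain arithmetic mean is assumed here; Equation (1) gives the exact form)."""
    return float(np.asarray(sampled_speeds, dtype=float).mean())

def experienced_travel_time(section_length_m, sampled_speeds):
    """Experienced travel time on a road section: the section length travelled in
    interval t divided by the spatial average speed (Equation (2))."""
    v_avg = space_mean_speed(sampled_speeds)
    return section_length_m / max(v_avg, 1e-6)  # guard against a zero speed

# Example: 5 speed samples (m/s) on a 400 m section
print(experienced_travel_time(400.0, [11.2, 10.5, 12.0, 9.8, 10.9]))
```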
The proposed fuzzy logic system consists of four inputs: VQL (the queue length of vehicle units), IWJ (the judgement of whether to wait at the intersection), GPD (the duration of the green phase) and VP (the priority of vehicle units). The VQL is the remaining queue length of vehicle units in the traffic flow. It contains three membership functions named Zero, Short and Long that range from 0 to 25 vehicle units, as illustrated in Figure 2a. The IWJ is the judgement of whether a vehicle unit is waiting at the intersection. It contains three membership functions named Ahead of Time, Ordinary and Delayed that range from 0% to 100%, as illustrated in Figure 2b. The GPD is the duration of the green phase of the traffic light. It contains three membership functions named Short, Medium and Long that range from 0 to 60 s, as illustrated in Figure 2c. The VP is the priority of vehicle units on the road network. It contains three membership functions named Optimal, Suboptimal and Ordinary that range from 0% to 100%, as illustrated in Figure 2d. The output of the fuzzy logic system, WT (the waiting time of a vehicle unit), is used to identify the delay caused by traffic congestion. It contains five membership functions called RU (Road Unobstructed), LC (Light Congestion), MC (Moderate Congestion), SC (Severe Congestion) and TU (Traffic Tie-Up) that range from 0 to 100 s, as illustrated in Figure 3. The proposed fuzzy logic system comprises forty-five fuzzy rules; some of these rules are shown in Table 1, below.
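As an illustration of how such a fuzzy predictor can be wired together, the sketch below implements two of the forty-five rules with assumed triangular and shoulder membership breakpoints (the authoritative shapes are those plotted in Figures 2 and 3) and centroid defuzzification.

```python
import numpy as np

def ramp_up(x, a, b):
    """Right-shoulder membership: 0 below a, rising linearly to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Assumed breakpoints for two of the four inputs and for the output WT.
vql_long  = lambda q: ramp_up(q, 10.0, 20.0)        # queue length (0-25 vehicles)
gpd_short = lambda g: 1.0 - ramp_up(g, 10.0, 30.0)  # green phase duration (0-60 s)

wt_axis = np.linspace(0.0, 100.0, 201)               # WT universe (0-100 s)
wt_severe = tri(wt_axis, 50.0, 75.0, 100.0)
wt_light  = tri(wt_axis, 0.0, 25.0, 50.0)

def predict_wt(queue_len, green_dur):
    """Two illustrative Mamdani rules (the full system uses forty-five):
      IF VQL is Long AND GPD is Short THEN WT is Severe Congestion,
      otherwise (complement weight)    THEN WT is Light Congestion.
    Defuzzification by centroid."""
    w_severe = min(vql_long(queue_len), gpd_short(green_dur))
    w_light = 1.0 - w_severe
    agg = np.maximum(np.minimum(w_severe, wt_severe),
                     np.minimum(w_light, wt_light))
    return float((wt_axis * agg).sum() / agg.sum())

print(predict_wt(queue_len=18, green_dur=10))  # long queue, short green -> long wait
```

With a long queue and a short green phase the aggregated output mass sits in the congested part of the WT universe, so the centroid yields a long predicted waiting time.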
The state of the vehicle unit needs to be updated constantly to ensure a real-time and accurate representation of the current traffic status. The update strategy for the state of vehicle units is shown in Algorithm 1. As T_g increases, VST decreases synchronously. When VST is non-zero, the vehicle unit maintains its current state. When VST is zero, we need to judge whether the intersection of the current road section is the destination of the vehicle unit. When the vehicle unit has not reached its final destination, it will continue to drive; thus, it is necessary to analyze the change of state of the vehicle unit. If the vehicle unit is in "waiting_state", it ends the "waiting_state" and continues driving on the next road section; VST is reset, and the experienced travel time of the next road section and the intersection delay caused by traffic signal control are added, as shown in Equation (3). If the vehicle unit was previously in the driving state, it has arrived at the intersection, and we need to judge whether it has to wait for the traffic light. If it does not need to wait, it passes through the intersection directly and continues driving on the next road section; VST is reset and calculated as shown in Equation (3). When the vehicle unit needs to wait for the traffic light, its VRA becomes "waiting_state", VST is reset and WT is added, as shown in Equation (4). When the vehicle unit arrives at its destination, we no longer predict its real-time state, thus saving computing resources. Here t_delay is the intersection delay caused by traffic signal control. Algorithm 1. The update strategy of the state of a vehicle unit: 1: VST = VST - T_g; 2: if VST > 0 then keep the current state; 3: else if the destination has been reached then remove the vehicle unit from the predicting cycle; 4: else if VRA = "waiting_state" or WT == 0 then update VRA to the name of the next road section, reset VST and set VST = VST + T^t_e,i + t_delay (Equation (3)); 5: else set VRA = "waiting_state", reset VST and set VST = VST + WT (Equation (4)); 6: end if. Table 1. Some example rules of the fuzzy logic system (columns: Rule No., Rule).
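A compact restatement of Algorithm 1 in code may be useful; the sketch below follows the prose description, and the field and callback names (vra, vst, must_wait and so on) are placeholders rather than the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleUnit:
    vra: str                                   # current road section, or "waiting_state"
    vst: float                                 # remaining time before the next state switch (s)
    route: list = field(default_factory=list)  # remaining road sections (VRN)

def update_vehicle_state(v, dt, travel_time, t_delay, wt, must_wait, at_destination):
    """One prediction step in the spirit of Algorithm 1."""
    v.vst = max(v.vst - dt, 0.0)               # VST decreases with the elapsed time
    if v.vst > 0.0:
        return v                                # keep the current state
    if at_destination(v):
        return None                             # stop predicting; free resources
    if v.vra == "waiting_state" or not must_wait(v):
        # leave the waiting state, or pass the intersection directly:
        v.vra = v.route.pop(0)                  # next road section
        v.vst = travel_time(v.vra) + t_delay(v.vra)   # Equation (3)
    else:
        # arrived at the intersection and has to wait for the light:
        v.vra = "waiting_state"
        v.vst = wt(v)                           # Equation (4)
    return v

# Toy usage with placeholder callbacks
v = VehicleUnit(vra="edge_12", vst=0.0, route=["edge_13", "edge_14"])
v = update_vehicle_state(
    v, dt=1.0,
    travel_time=lambda sec: 35.0,       # experienced travel time of the next section
    t_delay=lambda sec: 5.0,            # signal-control delay at the intersection
    wt=lambda veh: 20.0,                # fuzzy-predicted waiting time
    must_wait=lambda veh: True,         # the light is red for this movement
    at_destination=lambda veh: not veh.route)
print(v)                                # -> waiting_state with VST = 20 s
```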
CRO-CPSO Model This subsection describes the fitness function, the co-factor set, the solution encoding and finally the optimization procedure of our proposed CRO-CPSO algorithm. Research [29][30][31][32] on traffic signal control optimization has shown that swarm intelligence algorithms can outperform traditional methods in many cases. However, swarm intelligence algorithms have slow convergence speeds when dealing with multi-constraint optimization problems and cannot be well adapted to the current problem of large-scale and complex road networks. Thus, this paper proposes a distributed adaptive cooperative chemical reaction–cooperative particle swarm optimization (CRO-CPSO) algorithm. CRO-CPSO generates and iterates the local strategy at the edge via a distributed structure. It avoids the exponential growth of the solution space when confronted with large-scale road networks. The distributed structure also promotes cooperative control among traffic signal lights in the surrounding area. We use energy exchange as an indicator scheme to achieve an adaptive combination of local search and global exploration in the solution space. We also use a co-factor set to realize cooperative and coordinated control actions among adjacent edges and offline learning with historical traffic flow data. When a swarm intelligence algorithm is used to solve traffic signal control optimization, the scale of the solution space increases exponentially with the continuous expansion of the road network and the increasing complexity of intersections. With the traditional centralized method, even if a powerful centralized cloud computing server is available, the computing cost and overhead are high; thus, it cannot adapt to large-scale road networks. With the development of edge computing, the computing resources deployed at the edge have wide application prospects. In our proposed CRO-CPSO, the local strategy for each intersection is generated using the computing resources of the edge servers deployed at the intersections. This realizes the parallel utilization of distributed computing resources and reduces the computing load of the cloud server. The co-factor set is maintained on the adjacent edge servers to realize offline learning of historical traffic data and cooperative control of adjacent traffic signals. The traffic signal control optimization problem in this paper is a typical objective optimization problem. The solution space of the signal control set is divided into two parts: the green phase sequence S = (s1, s2, ..., s_NES) of each traffic light at the intersection and the green phase durations S' = (s'1, s'2, ..., s'_NES) of the traffic lights at the intersection, where N_ES is the number of traffic lights at the intersection. Considering information on various events during the simulation, the fitness function of CRO-CPSO is shown in Equation (5), with Equations (8)-(10) as its constraints. The main objective of Equation (5) is to enhance the driver's driving experience and reduce driving time. We achieve the incentive effect by increasing the number of vehicles that arrive at the destination within the reward time T_Π and giving the reward score D_score. We set the decision factor for the reward score (χ) so that a vehicle that runs out of time does not receive D_score. Equation (8) ensures that each vehicle unit passes through any intersection at most once. Equation (9) ensures that vehicle units with higher priority pass first while waiting at the intersection. Equation (10) ensures that only one traffic light is green at each intersection at any given time. Here D_score is the reward score, T_Π is the maximum time to ensure the drivers' driving experience, N_V is the set of all vehicle units, Θ_Vσ is the set of road sections of vehicle unit Vσ and Ξ_Vσ is the set of waiting states of vehicle unit Vσ; the count of the times vehicle unit Vσ arrives at intersection i appears in Equation (8), Φ is the set of all intersections, φ_i,j is the road section starting from intersection i and ending at intersection j, and Ψ is the set of all road sections. In Equation (9), φ_i,j(Vσ) is the queuing sequence of vehicle unit Vσ waiting for the green light in road section φ_i,j when it arrives at intersection j, and VP is the priority of the vehicle unit. In Equation (10), Ω_φi,j is the identifier of the state of the green traffic light in road section φ_i,j and Φ→j is the set of all intersections leading to intersection j. In this paper, we use a co-factor set to realize the cooperative and coordinated control actions among adjacent edges and offline learning with historical traffic flow data. The co-factor set objectively reflects the congestion of each road section and the potential "Attacking Traffic Flow" (after passing the intersection, the vehicle units enter the next road section, thus increasing its congestion level and impacting its traffic density) among road sections. As the experience set of the global road sections, the co-factor set reflects both the past traffic flow information of each intersection and the associated influence among road sections. Thus, we can deepen the degree of cooperation among edge servers in the global control of traffic lights by introducing the co-factor set. It enables edge servers to consider cooperation among servers when generating local strategies. The joint reward feedback for each edge server further iterates the co-factor set, so as to take both experience and timeliness into account. As the first step, we calculate the joint reward feedback for each edge server. We take the average waiting time of vehicle units within the coverage area of an edge server as the evaluation index. WT_κ,t is the average waiting time of vehicle units within the coverage area of edge server κ in time interval t, as shown in Equation (11). r_κ,local is the local reward feedback for edge server κ, as shown in Equation (12). Adjacent edge servers have correlated traffic flows; thus, a spatial attenuation factor is introduced so that the joint reward feedback can reflect the gains of the adjacent environment. r_κ,joint is the joint reward feedback for edge server κ, as shown in Equation (13). ρ_κ,t is the traffic density within the coverage area of edge server κ in time interval t, as shown in Equation (14).
In Equation (11), N_V^κ is the set of all vehicle units within the coverage area of edge server κ, N_WT^κ is the set of waiting times WT of the vehicle units within the coverage area of edge server κ and N_ES is the set of all the edge servers. In Equation (13), 1/|d_κ↔Kκ| is the spatial attenuation factor and K_κ is the set of the adjacent edge servers of edge server κ. Due to the fast spatial attenuation, the set of adjacent edge servers only considers a two-layer road network structure. In Equation (14), Ψ_κ is the set of all the road sections within the coverage area of edge server κ, N_φ,t^κ is the number of vehicle units on road section φ within the coverage area of edge server κ in time interval t and L_φ^κ is the length of road section φ within the coverage area of edge server κ. Then we construct the co-factor set A as a K × K matrix, as shown in Equation (15). The co-factor set is acquired through offline learning and iterative updating with the immediate joint reward feedback, as shown in Equation (16), where d is the benchmark distance, d_ij is the distance between edge server i and edge server j, and α and γ are the attenuation factors. The swarm intelligence algorithm can be used to solve the multi-objective optimization problem of traffic signal control. In the traditional swarm intelligence algorithm, each iteration of the solution space applies a local search algorithm, which requires a lot of computing resources and reduces the convergence efficiency. Edge servers only have limited computing resources and storage capacities. Thus, we adopt the optimization framework of the chemical reaction optimization algorithm [34] to accelerate the convergence of traffic signal control optimization strategies. We adopt the PSO [35] algorithm, with its excellent inter-individual coordination, and introduce the co-factor set under the CRO framework to realize local cooperative scheduling and global control among traffic lights in the multi-dimensional problem of large-scale traffic light control. By configuring two necessary attributes, molecular potential energy (PE) and molecular kinetic energy (KE), the algorithm can avoid falling into local optima too early and converge faster. PE represents the stability of the solution space and is defined as the reciprocal of vehicle travel time, as shown in Equation (17). When PE increases, the solution space tends to be stable; thus, global exploration is stopped and local searching is performed. KE makes the solution space tend to be dynamic: when KE is high, global exploration continues to avoid falling into local optima too early. Therefore, local searching is carried out only when KE has attenuated to a threshold value and PE tends to be stable, which greatly improves the convergence efficiency. Thus, the algorithm can be deployed on edge servers with limited computing resources. The pseudocode of CRO-CPSO is shown in Algorithm 2. The input of Algorithm 2 is the road pheromone information and the global co-factor set; its output is the local traffic light regulation strategy. The solution space S = (s1, s2, ..., s_NES) is the green phase sequence of each traffic light at the intersection, where s1, s2, ..., s_NES is an array of different integers ranging from 1 to N_ES.
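Before walking through the four solution-space operators, the sketch below illustrates how the two molecular attributes could gate the search as just described: global exploration continues while KE is high, and local search takes over once KE has decayed below a threshold and PE (the reciprocal of vehicle travel time, Equation (17)) has stabilized. The threshold, the decay factor, the stand-in operators and the toy objective are assumptions for illustration only.

```python
from dataclasses import dataclass
import random

@dataclass
class Molecule:
    solution: list       # green-phase sequence (a permutation of 1..N_ES)
    pe: float = 0.0      # potential energy: 1 / vehicle travel time (Eq. (17))
    ke: float = 1.0      # kinetic energy, decays with every operation

def local_search(sol):
    """Stand-in for a Monomolecular Collision: swap two positions (small change)."""
    s = list(sol)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def global_exploration(sol):
    """Stand-in for Decomposition/Synthesis: reverse a random segment (large change)."""
    s = list(sol)
    i, j = sorted(random.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def evaluate_pe(solution, travel_time):
    """PE is the reciprocal of the travel time produced by this control solution."""
    return 1.0 / max(travel_time(solution), 1e-9)

def step(mol, travel_time, ke_threshold=0.2, decay=0.9, pe_tol=1e-3):
    """One CRO-CPSO-style step: explore while KE is high, otherwise search locally."""
    old_pe = mol.pe
    candidate = (global_exploration(mol.solution) if mol.ke > ke_threshold
                 else local_search(mol.solution))
    new_pe = evaluate_pe(candidate, travel_time)
    if new_pe >= old_pe:                      # keep improvements
        mol.solution, mol.pe = candidate, new_pe
    mol.ke *= decay                           # energy decay / exchange
    return mol, abs(mol.pe - old_pe) < pe_tol # second value: PE has stabilized

random.seed(0)
dummy_travel_time = lambda sol: 600.0 - sum(i * s for i, s in enumerate(sol))  # toy objective
mol = Molecule(solution=[1, 2, 3, 4, 5, 6])
mol.pe = evaluate_pe(mol.solution, dummy_travel_time)
for _ in range(20):
    mol, stable = step(mol, dummy_travel_time)
print(mol.solution, mol.pe)
```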
In this paper, we use Monomolecular Decomposition and Polymolecular Synthesis to run the global exploration of the solution space. When the solution space tends to be stable, we use Monomolecular Collision and Polymolecular Collision to run the local search in the solution space. Figure 4 shows the updating scheme for a solution space S of dimension 6. When the solution space satisfies the condition of Self-Collision, a Monomolecular Collision occurs. As shown in Figure 4a, the solution space is slightly changed. We use the method of Swapping Two-Domain Spaces to ensure that the spatial arrangement of S is not repetitive. After the Collision, the KE of S decays and the PE of S is updated. When the solution space satisfies the condition of Self-Decomposition, a Monomolecular Decomposition occurs. As shown in Figure 4b, the solution space is changed considerably. A breakpoint is selected for S, dividing it into two parts [first_half, second_half]; S1 retains [first_half] and S2 retains [second_half]. Then, the remaining parts of S1 and S2 are generated while ensuring that the arrangement of the solution spaces is not repetitive. The KE between the solution spaces is redistributed and the PE of the solution spaces is updated. When the solution space satisfies the condition of Intergroup-Collision, a Polymolecular Collision occurs. As shown in Figure 4c, the solution space is slightly changed. In the Polymolecular Collision, we use Conflict-Detection to map the conflicting values so as to ensure the non-repeatability of the arrangement between S1 and S2. In the example of Conflict-Detection in Figure 4c, we can see two sets of mappings 2 → 5 → 4 and 3 → 1. Thus, if there are two 2s in the solution space after the Polymolecular Collision, 2 is converted to 4, and so on, until there is no conflict. After the Collision, the KE between the solution spaces is redistributed and the PE of the solution spaces is updated. When the solution space satisfies the condition of Intergroup-Synthesis, a Polymolecular Synthesis occurs. As shown in Figure 4d, the solution space is changed considerably. A synthesis point is chosen for the two solution spaces; then the [first_half] of S1 and the [second_half] of S2 are combined to produce a new solution space S with great diversity. In the Polymolecular Synthesis, we also use Conflict-Detection to map the conflicting values so as to ensure the non-repeatability of the arrangement of S. After the synthesis, the KE of the solution spaces is aggregated and the PE of the solution space is updated.
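Because the phase-sequence part of the solution is a permutation, the Polymolecular operators must repair duplicated entries. The sketch below shows one way the Conflict-Detection mapping of Figure 4c,d can be realized, essentially the repair step of a partially mapped crossover; the function name and the toy sequences are illustrative, not the paper's exact example.

```python
def pmx_like_collision(s1, s2, lo, hi):
    """Exchange the segment [lo, hi) between two phase-sequence solutions and
    repair duplicates by following the Conflict-Detection mapping chain
    (e.g. 2 -> 5 -> 4 in Figure 4c) so each result stays a valid permutation."""
    def build(child_src, seg_src):
        child = list(child_src)
        child[lo:hi] = seg_src[lo:hi]                      # copy the segment
        mapping = {seg_src[i]: child_src[i] for i in range(lo, hi)}
        for i in list(range(0, lo)) + list(range(hi, len(child))):
            g = child[i]
            while g in mapping:                            # follow the mapping chain
                g = mapping[g]
            child[i] = g
        return child
    return build(s1, s2), build(s2, s1)

s1 = [1, 2, 3, 4, 5, 6]
s2 = [3, 5, 1, 2, 6, 4]
print(pmx_like_collision(s1, s2, 1, 4))   # both outputs are valid permutations
```

Both returned sequences remain valid permutations, so the KE redistribution and PE update described above can be applied to them directly.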
The solution space S' = (s'1, s'2, ..., s'_NES) is the duration of the green phase of each traffic light at an intersection, where s'1, s'2, ..., s'_NES is an array of integers ranging from 0 to 60 s. When edge server i generates the durations of the green phases of the traffic lights, we introduce the co-factor set A into the solution space. We focus on the k-neighbour road sections that are directly related to the current intersection Φ_i in updating the solution space. The co-factor A_ij reflects the potential "Attacking Traffic Flow" (including the potential traffic flow of other road sections extending from intersection Φ_j) at intersection Φ_i in the direction of road section φ_j→i. Thus, when A_ij accounts for a large proportion of A_k, it indicates that road section φ_j→i may become congested, and experience-based coordinated regulation can be implemented to alleviate the potential congestion of that road section: the green phase of road section φ_j→i is extended in an appropriate proportion within the green cycle, while the green phases of the other road sections with smaller proportions in A_k are appropriately compressed. The ratio of extension to compression is determined by the weight ratio; it is also affected by the experiential factor, owing to the empirical lag of the co-factor set, as shown in Equation (18). We introduce the idea of particle swarm optimization into the iteration of S' = (s'1, s'2, ..., s'_NES). Each potential solution to the problem is the position of a particle, and the particles are updated iteratively on a population scale. The particles are initialized before the iteration begins, and the fitness of each particle is calculated using Equation (5) as its initial E^m_ij and Q^m_ij. Then the circular heuristic search process is started: v^m_ij+1 and x^m_ij+1 (the particle position x^m_ij+1 is rounded in the updating process) are updated iteratively and the fitness is calculated. If the fitness is better than E^m_ij, E^m_ij is updated; if the fitness is better than Q^m_ij, Q^m_ij is updated. In each iteration the particle updates its position according to Equation (18), where β is the experiential factor, rounding is applied to the updated position and v^m_ij+1 is the velocity of the particle, as shown in Equation (19), where E^m_ij is the individual extremum, Q^m_ij is the global extremum, c1 and c2 are learning factors, U(0, 1) is a uniform random value in [0, 1] and ω is the inertia weight, as shown in Equation (20), where ω_max is the maximum of the inertia weight, ω_min is the minimum of the inertia weight and P_max is the maximum number of iterations.
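Equations (18)-(20) follow the standard PSO template. The sketch below shows one particle update for the green-phase durations, assuming a linearly decreasing inertia weight from ω_max to ω_min over P_max iterations; the constants are illustrative, and the co-factor-based extension/compression term of Equation (18) is omitted for brevity.

```python
import random

def inertia_weight(it, p_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight over the iterations (Eq. (20));
    the w_max / w_min values are illustrative."""
    return w_max - (w_max - w_min) * it / p_max

def pso_step(x, v, p_best, g_best, it, p_max, c1=2.0, c2=2.0, lo=0, hi=60):
    """One velocity/position update of a particle encoding green-phase durations
    (Eqs. (18)-(19)); positions are rounded and clamped to the 0-60 s range."""
    w = inertia_weight(it, p_max)
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, p_best, g_best):
        vel = w * vi + c1 * random.random() * (pi - xi) + c2 * random.random() * (gi - xi)
        pos = int(round(xi + vel))            # rounding processing of Eq. (18)
        new_v.append(vel)
        new_x.append(min(max(pos, lo), hi))   # keep within [0, 60] s
    return new_x, new_v

x, v = [20, 35, 15, 40], [0.0, 0.0, 0.0, 0.0]
p_best, g_best = [25, 30, 20, 45], [30, 28, 18, 50]
print(pso_step(x, v, p_best, g_best, it=10, p_max=100))
```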
Experimental Setup This section presents the experimental framework followed to assess the performance of our method. We first describe the specific road network scenario generated for this paper. Then, we present the detailed simulation parameters. SUMO [36] is a well-known traffic simulator. It provides an open source, microscopic, multi-modal traffic simulation environment so as to realize the simulation of traffic signal scheduling of large road network structures. We imported digital maps from Open-StreetMap (OSM) [37]. The digital map was then transformed and combed through to provide a valid SUMO network using the netconvert script provided in the SUMO package. We generated an actual road network scenario from the real digital map, as shown in Figure 5. The physical location is Changsha County, Changsha City, Hunan Province, China, which has 64 road intersections. We choose this area because Changsha County has a regular road network structure which can represent the general road network structure found in the central urban area of China and because examining a real case has more practical significance in the context of solving road congestion problems. As for traffic density, this paper tested three different density levels (low density: 300 vehicles, medium density: 500 vehicles, high density: 1000 vehicles) to consider the road traffic flow situations in different periods of time. In the CRO-CPSO algorithm, we set the swarm (population) size to 30 particles and the number of iterations to 100 steps. We set other parameters of the algorithm based on a small area of Changsha County (with 64 traffic lights and 100 vehicles). The detailed simulation parameters are shown in Table 2. Additionally, we implemented two algorithms, GA [29] and PSO [35], in order to establish comparisons against our CRO-CPSO algorithm. The GA algorithm is a classical algorithm used in evolutionary computation, which mainly imitates the survival of the fittest in nature to carry out natural evolutions. The PSO algorithm is a classical algorithm used in swarm intelligence, which mainly searches for optimizations through a heuristic search process. GA and PSO have good applications in solving nonlinear programming problems, so we selected them as comparison algorithms. Experiment and Analysis This section presents the results and analyses from several viewpoints. First, we studied and analyzed the influence of parameters on the performance of the algorithm. Then we conducted comparative experiments. Finally, we analyzed the scalability of the algorithm. Performance Analysis of Algorithms Before carrying out the comparative experiment, we first analyzed the performance of the CRO-CPSO algorithm with different swarm sizes and maximum numbers of iterations through a series of experiments. We set the parameter values in the following experiment by this investigation. Considering a small-scale case of Changsha County (with 64 traffic lights and 100 vehicles), we tried different configuration combinations and plotted the traces of the progress of the best fitness values, as shown in Figure 6. These traces corresponded to the configuration combinations of swarm sizes with 10, 20 and 30 particles and maximum iterations of 50, 100 and 200 steps. The swarm size will affect the diversity of the population and the maximum number of iterations will affect the inertia weight; thus, we mainly studied the parameter values through their different configuration combinations. 
As shown in Figure 6, for all configuration combinations of swarm sizes and maximum numbers of iterations, our CRO-CPSO algorithm can converge within the interval of 50 to 100 iterations. The algorithm achieved the best performance results under the configuration combination of a swarm size of 30 particles and a maximum iteration of 100 steps. We found that when the maximum iterations was 50 steps, due to the small number of iterations, even if the swarm size was large, it was easier to converge to the local optimal value (between 7.2 × 10^4 and 7.25 × 10^4). When the swarm size was 10 particles, the small swarm size limited the evolutionary diversity of the population. Thus, even when the maximum iterations was 200 steps, the population could only converge to a low local optimal value (7.37 × 10^4). Therefore, when choosing a configuration combination, one needs to consider both computational cost and optimal fitness value. We found that when the swarm size was 30 particles and the maximum iterations was 100 steps, the population could converge to an approximately optimal value. Although the population achieved a higher fitness value under the configuration combination of a swarm size with 30 particles and maximum iterations of 200 steps, the small increase in fitness value (0.015 × 10^4) was not matched by the expensive computation cost (3000 function evaluations). Therefore, we opted to set the swarm size as 30 particles and the maximum iterations as 100 steps in our experimentation.
In Figure 7, we plotted the fitness distribution of the whole population in the optimization process of the CRO-CPSO algorithm. Specifically, the plot mainly illustrates the operation of the algorithm in the case of Changsha County (with 64 traffic lights and 300 vehicles). We can see that in the early stages the particles were diverse and with low and fluctuating fitness regions (between 1 and 3). Then they gradually converged to a higher fitness with constant iterations. Thus, all the particles showed ideal convergence and robustness in spite of the differences among them. In this optimization process, 277 vehicles out of 300 arrived at the destination within maximum time T Π (92.3% of the vehicles have high-quality driving experience), as shown in Figure 8. We found that under the baseline control strategy, 37 vehicles did not have high-quality driving experience (i.e., they were unable to reach the destination within T Π ), and the journey time of all vehicles was generally high. CRO-CPSO introduces the co-factor set; thus, the influence of potential traffic flow on road conditions is considered. The collaborative effect will not only ensure the driving experience of vehicles but also appropriately extend the waiting time in unobstructed road sections and shorten the waiting time in crowded road sections so as to achieve the effect of alleviating traffic congestion. We introduced the reward score D score and T Π as the incentive mechanism in the calculation of the fitness function so that the convergence of the algorithm takes into account the driving experience of vehicles. It can be seen from Figure 8 that under the CRO-CPSO control strategy, the overall driving time of 300 vehicles is reduced by 11,937 s (about 199 min) compared with the baseline control strategy, thus reflecting the effectiveness of this algorithm in improving the traffic efficiency of the urban road network. We found that due to traffic light control, the driving times of a few vehicles with low driving times increased slightly (i.e., waiting time in unobstructed road sections was extended), and the driving times of some vehicles with high driving times was sharply reduced (i.e., waiting time in crowded road sections was shortened), which verifies the coordinated control of the co-factor set for each intersection of the urban road network. We found that the number of vehicles with low-quality driving experience decreased by 14 (i.e., a 37.8% reduction), and no single vehicle had too excessive a driving time (the longest vehicle driving time under the baseline strategy was also reduced from 1698 s to 1384 s after optimization). As each iteration of the algorithm will take the driving experience of the vehicles as the incentive factor, it can well avoid the extreme phenomenon of shortening the waiting time of the majority of vehicles by seriously affecting the driving experience of a very few vehicles. Thus, the CRO-CPSO algorithm takes into account the driving experience of almost all vehicles while ensuring the optimum efficiency.
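The journey-time statistics discussed above are collected from the SUMO runs; the sketch below shows how such per-vehicle measurements can be gathered through SUMO's TraCI Python interface. The configuration file name and the 900 s on-time window are placeholders, and the accumulation logic is deliberately simplified.

```python
import traci

# Placeholder config; a real run would point at the Changsha County scenario.
traci.start(["sumo", "-c", "changsha.sumocfg"])

depart_time, journey_time = {}, {}
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    now = traci.simulation.getTime()
    for vid in traci.simulation.getDepartedIDList():
        depart_time[vid] = now
    for vid in traci.simulation.getArrivedIDList():
        journey_time[vid] = now - depart_time.pop(vid, now)
traci.close()

on_time = sum(t <= 900 for t in journey_time.values())   # illustrative reward window
print(len(journey_time), "vehicles finished;", on_time, "within the reward time")
```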
Comparative Experimental Analysis In this section, we present a test and comparison of three algorithms, our proposed CRO-CPSO, GA and PSO, in the case of Changsha County. Compared with GA, we expected to prove the advantages of using particle swarm optimization (PSO) over evolutionary algorithms in urban traffic control problems. By comparison with the traditional PSO algorithm, we expected to prove that the introduction of prior knowledge (short-term traffic prediction and co-factor set) in the swarm intelligence algorithm could improve the performance of road cooperative regulation. In this case, we set three traffic flow densities (100 vehicles, 300 vehicles and 500 vehicles) to simulate different congestion conditions on the urban road network. Table 3 contains the best fitness values for CRO-CPSO, GA and PSO for different traffic flow densities in Changsha County. We found that with the increase in traffic flow density, the best fitness values of the three algorithms all showed a nonlinear increase in different proportions. Through an analysis of the traffic flow data set, we found that although the number of vehicles increased proportionally, the track of each vehicle was randomly generated, which affected the waiting time of each vehicle. The number of vehicles increased greatly; thus, the best fitness values for the algorithm showed an overall upward trend. We found that the best fitness values for CRO-CPSO were better than those for GA and PSO under different traffic flow densities. We also found an interesting feature. The best fitness values for GA were better than those for PSO at low traffic flow densities (100 vehicles, 300 vehicles). However, when the urban road network entered a high-congestion state (500 vehicles), the best fitness value for PSO exceeded that for GA. Therefore, the PSO algorithm is more suitable than the GA algorithm when dealing with the congestion typical of a complex urban road network. We also found that with the increase in traffic flow density, the performance improvements of CRO-CPSO with regard to traffic control were 2.70% (100 vehicles), 4.10% (300 vehicles) and 13.03% (500 vehicles), respectively. CRO-CPSO introduces co-factor sets and makes collaborative decisions based on fuzzy logic.
When congestion occurs in urban road networks, the coordinated regulation can evacuate potentially congested road sections in a targeted way (that is, the green time of traffic lights can be appropriately extended/compressed in a coordinated way). Therefore, when regulating urban road networks with high congestion, our proposed CRO-CPSO shows a better performance than the other algorithms. In Table 3, bold indicates the best fitness value, which is that of our CRO-CPSO. In Figure 9, we can see the distribution of the journey times of vehicles under the control of the three algorithms with different traffic flow densities on the urban road network. We found that under the three traffic flow densities, the median journey time of vehicles regulated by CRO-CPSO was the lowest, which reflects the excellent performance of our proposed CRO-CPSO in improving the traffic efficiency of the urban road network. The box height of CRO-CPSO was the lowest, which reflects the low fluctuation in the journeys of vehicles. As the traffic flow density reached 500 vehicles, congestion began to occur on the road network (both the maximums for the journey times of vehicles and the numbers of outliers in GA and PSO increase greatly). However, CRO-CPSO can still significantly reduce the journey time of vehicles (25.34%) while maintaining fewer outliers. Therefore, our proposed CRO-CPSO gives full play to the synergy, which not only improves the traffic efficiency of urban road networks, but also fully considers the driving experience of vehicles during road network regulation. Scalability Analysis This section is mainly concerned with the scalability of our proposed CRO-CPSO. We focus on the scale of the road network structure and the scalability of CRO-CPSO synergies in a large-scale road network. In this analysis, there were 2577 intersections and 3497 sections in the simulation of a large-scale urban road network. As shown in Figure 10, when the road network scale is greatly expanded, CRO-CPSO can still achieve good convergence and excellent regulation performance. We found that in the last two iterations, the best fitness value was greatly improved. Through analysis, we found that only 3.50% of the vehicles did not arrive at their destinations on time (failing to obtain D score). That is, the regulation strategy optimized by CRO-CPSO was almost able to guarantee the driving experience of all vehicles, a result close to the optimal regulation strategy.
Thus, our proposed CRO-CPSO has very good scalability and can well adapt to super-large-scale road network structures. Conclusions In this study, we designed a traffic signal regulation model based on edge computing for traffic congestion in large urban road networks. We used a traffic flow prediction system based on fuzzy logic to predict short-term real-time vehicle waiting times. The co-factor set was generated by offline learning, and the potential influence of historical traffic flow in adjacent sections was fully considered. We proposed the CRO-CPSO algorithm to effectively adapt to large-scale road network structures and complex traffic conditions. We used SUMO, a well-known microscopic traffic simulator, to evaluate our solution. We tested CRO-CPSO in a modern urban road network scenario of Changsha County and compared it with the classical algorithms GA and PSO. A series of analyses were carried out from different viewpoints, from which the following conclusions can be extracted. Our CRO-CPSO algorithm performs successfully in the generation of optimized scheduling strategies for large-scale realistic traffic scenarios. For all the instances, our proposal obtained robust results which were better than those of the other two algorithms compared: the GA and PSO algorithms. Our suggested algorithm can enhance driving experience and shorten journey times. The performance of CRO-CPSO improved by 2.70% (100 vehicles), 4.10% (300 vehicles) and 13.03% (500 vehicles) with different traffic flow densities. CRO-CPSO can also shorten the journey times of vehicles by 25.34% under conditions of high road congestion. All of this means real improvement in city traffic conditions. Additionally, CRO-CPSO has good scalability. When the scenario was extended to 2577 intersections and 3497 sections, the scheduling strategy could still ensure the driving experience of all drivers and was close to the optimal solution for scheduling. Thus, our method can well adapt to different road network sizes and traffic densities. In future work, we will further improve the training parameters of the co-factor set to expand the coordination range for road scheduling. We will further optimize the rule setting according to fuzzy logic. We will also consider introducing machine learning methods to predict traffic flow in real time and consider pedestrians and other multi-objective factors in traffic control. A future study will also focus on the emergency treatment of traffic emergency faults so as to further fit real urban road scenarios.
D_score: Reward score
T_Π: The maximum time to ensure the drivers' driving experience
Θ_(N_σ): Set of road sections of vehicle unit V_σ
Ξ_(N_σ): Set of waiting states of vehicle unit V_σ
χ: Decision factor for the reward score
Number of times vehicle unit V_σ arrives at intersection i
Φ: Set of all intersections
φ_(i,j): Vector of the road section starting from intersection i and ending at intersection j
Ψ: Set of all road sections
The queuing sequence of vehicle unit V_σ waiting for the green light in road section φ_(i,j)
Ω_(φ_(i,j)): The identifier of the state of the green traffic light in road section φ_(i,j)
Φ_(→j): Set of all intersections leading to intersection j
Learning factors
ω: Inertia weight
ω_max: Maximum of the inertia weight
ω_min: Minimum of the inertia weight
P_max: Maximum number of iterations
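The symbols above include the standard particle swarm machinery (inertia weight ω bounded by ω_max and ω_min, learning factors, and a maximum iteration count P_max). Purely as an illustration, the following is a minimal sketch of a canonical PSO velocity/position update with a linearly decreasing inertia weight; it does not reproduce the actual CRO-CPSO co-evolutionary update, and the parameter values (w_max = 0.9, w_min = 0.4, c1 = c2 = 2.0) are assumptions, not values from the paper.

```python
import random

def inertia_weight(t, p_max, w_max=0.9, w_min=0.4):
    """Linearly decay omega from w_max to w_min over p_max iterations (assumed scheme)."""
    return w_max - (w_max - w_min) * t / p_max

def pso_step(position, velocity, personal_best, global_best, t, p_max,
             c1=2.0, c2=2.0):
    """One canonical velocity/position update for a single particle.
    Each dimension could encode, e.g., a green-time offset for one intersection."""
    w = inertia_weight(t, p_max)
    new_position, new_velocity = [], []
    for x, v, pb, gb in zip(position, velocity, personal_best, global_best):
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_velocity.append(v_new)
        new_position.append(x + v_new)
    return new_position, new_velocity
```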
12,832.4
2022-08-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Tau accumulation in the retina promotes early neuronal dysfunction and precedes brain pathology in a mouse model of Alzheimer’s disease Background Tau is an axon-enriched protein that binds to and stabilizes microtubules, and hence plays a crucial role in neuronal function. In Alzheimer’s disease (AD), pathological tau accumulation correlates with cognitive decline. Substantial visual deficits are found in individuals affected by AD including a preferential loss of retinal ganglion cells (RGCs), the neurons that convey visual information from the retina to the brain. At present, however, the mechanisms that underlie vision changes in these patients are poorly understood. Here, we asked whether tau plays a role in early retinal pathology and neuronal dysfunction in AD. Methods Alterations in tau protein and gene expression, phosphorylation, and localization were investigated by western blots, qPCR, and immunohistochemistry in the retina and visual pathways of triple transgenic mice (3xTg) harboring mutations in the genes encoding presenilin 1 (PS1M146 V), amyloid precursor protein (APPSwe), and tau (MAPTP301L). Anterograde axonal transport was assessed by intraocular injection of the cholera toxin beta subunit followed by quantification of tracer accumulation in the contralateral superior colliculus. RGC survival was analyzed on whole-mounted retinas using cell-specific markers. Reduction of tau expression was achieved following intravitreal injection of targeted siRNA. Results Our data demonstrate an age-related increase in endogenous retinal tau characterized by epitope-specific hypo- and hyper-phosphorylation in 3xTg mice. Retinal tau accumulation was observed as early as three months of age, prior to the reported onset of behavioral deficits, and preceded tau aggregation in the brain. Intriguingly, tau build up occurred in RGC soma and dendrites, while tau in RGC axons in the optic nerve was depleted. Tau phosphorylation changes and missorting correlated with substantial defects in anterograde axonal transport that preceded RGC death. Importantly, targeted siRNA-mediated knockdown of endogenous tau improved anterograde transport along RGC axons. Conclusions Our study reveals profound tau pathology in the visual system leading to early retinal neuron damage in a mouse model of AD. Importantly, we show that tau accumulation promotes anterograde axonal transport impairment in vivo, and identify this response as an early feature of neuronal dysfunction that precedes cell death in the AD retina. These findings provide the first proof-of-concept that a global strategy to reduce tau accumulation is beneficial to improve axonal transport and mitigate functional deficits in AD and tauopathies. Background Tau, a member of the microtubule-associated protein family, plays a crucial role in many neurodegenerative diseases including Alzheimer's disease (AD), corticobasal dementia, frontotemporal lobar degeneration, progressive supranuclear palsy, and glaucoma [1][2][3][4]. These disorders share similar features including abnormal tau phosphorylation [5][6][7], protein aggregation [7,8], neurofibrillary tangle formation [9,10], and neurotoxicity [4,[11][12][13]. Tau dysfunction has been well described in AD, the principal cause of dementia worldwide [14,15], and occurs several decades before the appearance of cognitive deficits [16,17]. 
At present, little is known about the early sequence of events leading to tau pathology in AD, highlighting the need to elucidate the interplay of molecular and cellular changes during the pre-symptomatic stages of the disease. The retina is an integral part of the central nervous system (CNS) and has long been considered a window to the brain. The signal produced by light-sensitive photoreceptors is transmitted to bipolar cells and then to retinal ganglion cells (RGCs), which send information via their axons in the optic nerve to visual centers in the brain [18]. As an integral part of the CNS, it is no surprise that the retina is affected by the same neurodegenerative processes that disturb brain function [19]. Indeed, visual deficits are common and significant in AD [20]. Impaired contrast sensitivity, reduced visual acuity and abnormal motion perception are found in AD, and these correlate tightly with the severity of cognitive and behavioral defects [21][22][23][24][25][26][27][28][29][30][31]. For example, 50% of AD patients presented with profound loss of pattern and spatial vision, including contrast sensitivity [32]. Approximately 50% of AD patients and 33% of individuals diagnosed with mild cognitive impairment have substantial visual motion perception deficits [33]. A study on individuals with AD-related senile dementia showed that 44% had important deficits in visual sensitivity measured by automated perimetry [34]. Morphological and additional functional impairments have also been described in the retina of AD individuals suffering from AD including preferential RGC loss and thinning of the retinal nerve fiber layer [35][36][37][38], abnormal electroretinogram response [39], and reduced blood flow [40,41]. Similar to the brain, tau inclusions and amyloid beta (Aβ) deposition have been described in the retina of AD patients and in animal models of the disease [42][43][44][45][46]. Transgenic mice carrying the human P301S tau mutation contain tau aggregates in the retina [46], and display RGC functional deficits, increased susceptibility to excitotoxic damage, and altered neurotrophic factor signaling [47,48]. We recently reported key pathological changes of endogenous tau in glaucoma, an optic neuropathy characterized by selective RGC death and the leading cause of irreversible blindness worldwide [4]. Ocular hypertension, a major risk factor in glaucoma, triggered substantial tau changes reminiscent of AD including abnormal phosphorylation, missorting, and neurotoxicity [4]. Collectively, these findings suggest an association between tau alterations and retinal dysfunction, notably linked to RGC damage. At present, a detailed characterization of the biochemical changes and cellular distribution of endogenous retinal tau and its impact on RGC function and survival during the early pre-symptomatic and prodromal stages of AD is lacking. To address this, we utilized the triple transgenic (3xTg) line [13]. The rationale for the choice of this AD mouse model was three-fold. First, the presence of mutations in the genes encoding presenilin 1 (PS1), amyloid precursor protein (APP), and tau (MAPT) have been identified as causing familial AD (PS1M146V, APPSWE) or tauopathies including frontotemporal dementia and parkinsonism linked to chromosome 17 (MAPTP301L) [49]. 
Second, unlike other models, the 3xTg mice develop the two cardinal features of AD, namely accumulation of Aβ plaques and neurofibrillary tangles composed of tau, thus phenocopying critical pathological aspects of the disease [13]. Third, the 3xTg mice have been well-characterized regarding the appearance of brain lesions and cognitive deficits, thus providing a timeframe for the characterization of pathological changes in the visual system. Our data demonstrate that, as early as 3 months of age and prior to the onset of reported cognitive defects [50], abnormally phosphorylated tau accumulates in the retina of 3xTg mice preceding its aggregation in the brain. Tau accumulation was primarily observed in RGC soma, dendrites and intraretinal axons, while tau in optic nerve axons was markedly reduced. Importantly, tau phosphorylation and missorting resulted in striking defects in anterograde axonal transport and age-dependent RGC neurodegeneration. Our study identifies novel alterations of endogenous retinal tau protein and neuronal dysfunction in the early stages of AD, thus offering the possibility of exploiting tau to modulate disease susceptibility and onset. Experimental animals The 3xTg mice bearing the human mutations in the genes encoding presenilin 1 (PS1M146V), amyloid precursor protein (APPSwe), and tau (MAPTP301L) [51], tau knockout mice (strain Mapt-tm1[EGFP]Klt/J), and age-matched littermate wild-type controls were purchased from Jackson Laboratories (Bar Harbor, ME) and maintained in our animal facility. Experiments were performed using 3 or 6 month-old female mice because they exhibit a more severe disease phenotype than their age-matched male counterparts [52]. All animal procedures were approved by the Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM) Animal Care Committee (approved protocol #N14024ADPs), and followed the guidelines of the Canadian Council on Animal Care. Retina and optic nerve immunohistochemistry Animals were perfused with 4% paraformaldehyde and the eyes and optic nerves were rapidly dissected. Tissue was embedded in optimal cutting temperature compound (Tissue-Tek, Miles Laboratories, Elkhart, IN), and retinal (16 μm) or optic nerve (12 μm) cryosections were collected onto Superfrost Plus microscope slides (Thermo-Fisher Scientific). The following primary antibodies were added to retinal or optic nerve sections in blocking solution and incubated overnight at 4°C as described [53]: total tau (K9JA, 2 μg/ml, Dako), tubulin isoform βIII (TUJ1, 2.5 μg/ml; Sigma-Aldrich), or neurofilament H (NF-H, 20 μg/ml, Sternberger Monoclonals Inc., Lutherville, MA). For whole-mounted retinas, tissue was permeabilized overnight at 4°C in blocking solution, rinsed and incubated for 5 days at 4°C in the following primary antibodies: total tau (K9JA, 2 μg/ ml, Dako), RNA-binding protein with multiple splicing (RBPMS, 1:1000, PhosphoSolutions, Aurora, CO), or NF-H (20 μg/ml, Sternberger Monoclonals Inc.). Sections or whole retinas were washed and incubated with secondary donkey anti-rabbit or anti-mouse Alexa Fluor 594 and 488 (2 μg/ml, Life Technologies, Eugene, OR). Fluorescent labeling was observed using a Zeiss Axio Observer (Carl Zeiss, Canada) or a Leica SP5 confocal microscope (Leica Microsystems Inc., Concord, ON). All retinal and optic nerve images were acquired under identical conditions using the same illumination intensity, time exposure, and magnification, with careful attention to avoid signal saturation and/or bleaching. 
The areas sampled were selected using an unbiased stereological sampling method as described (http://www.stereology.info). Reverse transcription and quantitative real-time PCR (qPCR) Total RNA was isolated from individual retinas using the RNEasy Mini kit (Qiagen Inc., Valencia, CA). cDNAs were generated from 1 μg of total RNA using the QuantiTect Reverse Transcription Kit (Qiagen Inc.). Real-time PCR was performed using TaqMan probes and primers that target exon 5, expressed by all tau isoforms (pan-tau, catalog # Rn01495715), exon 4a specific to big tau (catalog # Rn01495711), or β-actin RNA as control (catalog # 4331182) (Applied Biosystems, Waltham, MA). Amplification was performed using the 7900HT Fast Real-Time PCR System (Applied Biosystems) with the following cycle conditions: 95°C for 15 s, 60°C for 1 min, 72°C for 1 min. Reactions were run in triplicates for each sample and the 2^(−ΔΔCt) formula was used for the calculation of differential gene expression. Axonal transport measurement Anterograde axonal transport was assessed by injection of cholera toxin β subunit (CTβ) conjugated to Alexa Fluor 488 (Molecular Probes, Life Technologies, Eugene, OR) as described previously [54,55]. CTβ is a reliable marker of active transport and has been consistently used to assess RGC anterograde transport to the superior colliculus [56,57]. CTβ (1% diluted in sterile PBS, total volume: 1 μl) was injected intravitreally using a custom-made sharpened microneedle generated from a borosilicate glass capillary tube (5 μl, World Precision Instruments, Sarasota, FL) as previously described by us [58]. Briefly, the glass capillary was pulled using a two-stage needle puller (PC-10, Narishige International, Amityville, NY) to produce thin microneedles of 6 cm in length. Under a dissecting microscope, a sharp blade was used to carefully create an opening at the tip of the microneedle. The resulting opening had an elliptical shape with major and minor axis diameters of approximately 190 μm and 70 μm, respectively. The tip of the microneedle was then beveled using a micropipette beveling system until the tip was sharp and the edges were flat and smooth [58]. For intravitreal injections, animals were anesthetized with isoflurane (2%, 0.8 L/min). The upper eyelid was gently retracted and the tip of the glass microneedle positioned at a 45° angle relative to the ocular surface. Light pressure was exerted to insert the microneedle through the conjunctiva, sclera and retina into the vitreous cavity. The tracer was injected and the needle was retracted slowly to avoid reflux. This route of administration avoided injury to ocular structures, including the iris and the lens. A small drop of antibiotic was applied topically after the surgery (Vigamox, 0.5%, Alcon Canada Inc., Mississauga, ON), and there were no signs of post-operative infection or inflammation. Animals were perfused transcardially with 4% paraformaldehyde three days after CTβ administration; brains were removed and embedded in optimal cutting temperature compound (Tissue-Tek, Miles Laboratories). Serial coronal cryosections of the entire superior colliculus from each animal were obtained using a cryostat (50-μm thickness). Seven sections per superior colliculus, from rostral to caudal, were selected using an unbiased stereological sampling method.
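The qPCR quantification described above uses the 2^(−ΔΔCt) method with β-actin as the reference gene. As an illustration only, here is a minimal sketch of that calculation; the Ct values are hypothetical and not taken from the study.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-expression calculation used in the
# qPCR analysis above (reference gene: beta-actin). Ct values are hypothetical.

def delta_delta_ct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Return the fold change of the target gene in the sample vs. the control."""
    d_ct_sample = ct_target_sample - ct_ref_sample     # delta-Ct in the sample (e.g. 3xTg retina)
    d_ct_control = ct_target_control - ct_ref_control  # delta-Ct in the control (e.g. WT retina)
    dd_ct = d_ct_sample - d_ct_control                 # delta-delta-Ct
    return 2 ** (-dd_ct)

# Example with made-up triplicate means:
fold_change = delta_delta_ct(ct_target_sample=24.1, ct_ref_sample=18.0,
                             ct_target_control=24.0, ct_ref_control=17.9)
print(f"tau mRNA fold change vs. control: {fold_change:.2f}")
```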
Sections were photographed digitally using the Zeiss Axio Observer fluorescent microscope with Apotome (Carl Zeiss) and the area of the CTβ signal in each section was measured using the Imaris MeasurementPro module (Bitplane, South Windsor, CT). The total CTβ signal in each superior colliculus was calculated using the formula: total CTβ area = ΣCTβ section area/ssf x asf x tsf [59]. The section sampling fraction (ssf ) was the number of sections analyzed over the total number of sections obtained from each superior colliculus (7/36), the area sampling fraction (asf ) was the area sampled divided by the total area (1), and the thickness sampling fraction (tsf ) was the section thickness sampled divided by the total section thickness (1). This analysis provided a representative value of CTβpositive area which was then multiplied by the width of the entire superior colliculus to yield the total CTβ volume. To confirm CTβ uptake by RGCs, whole retinas were incubated overnight at 4°C with goat IgG against the RGC-specific marker brain-specific homeobox/POU domain protein 3a (Brn3a, 0.27 μg/ml, Santa Cruz Biotechnology, Santa Cruz, CA) followed by secondary Alexa Fluor 594 anti-goat IgG (1 μg/ml; Jackson Immu-noResearch Laboratories, West Grove, PA). Retinas were rinsed, mounted, and the total number of CTβ-positive and Brn3a-positive neurons was quantified by independent random stereological sampling. Quantification of RGC soma Mice were subjected to transcardial perfusion with 4% paraformaldehyde and retinas were dissected out and fixed for an additional 15 min. Free-floating retinas were blocked overnight at 4°C in 10% normal goat serum, 2% bovine serum albumin, 0.5% Triton X-100 in PBS, and incubated with the RGC-specific marker RBPMS (1:1000, PhosphoSolutions) for 5 days at 4°C. Retinas were then incubated with Alexa 594-coupled secondary antibody (2 μg/ml, Life Technologies) for 4 h at room temperature, washed, mounted with the nerve fiber layer side up, and visualized with a Zeiss Axio Observer (Carl Zeiss). RBPMS-labeled RGCs were counted within three square areas at distances of 0.25, 0.625 and 1 mm from the optic disc in each of the four retinal quadrants for a total of twelve retinal areas as described by us [4]. Statistical analyses Data analysis and statistics were performed using Graph-Pad Instat software (GraphPad Software Inc., San Diego, CA) by a Student's t-test as indicated in the legends. Results Tau protein accumulation in the retina precedes build up in the brain Visual deficits and pathological changes have been described in the retinas of AD patients [21,31]. Therefore, we first asked whether the level of endogenous retinal tau was altered in 3xTg mice and, if so, whether it correlated with tau changes in the brain. For this purpose, retinal and brain protein samples from 3-and 6-month-old 3xTg mice were analyzed and compared to those from agematched wild-type controls. These time points were selected because they precede the appearance of reported behavioral and cognitive defects in this model [50,60]. Western blots of soluble retinal homogenates using an antibody against total tau (K9JA), which binds the microtubule binding domain of the protein irrespective of its phosphorylation state [61,62], revealed the presence of four predominant tau isoforms of 37-kDa, 50-kDa, 55-kDa and 100-kDa (Fig. 1a). The 100-kDa band most likely corresponded to big tau, a high molecular weight tau isoform detected only in retinal and peripheral neurons [63][64][65]. 
Densitometry analysis showed that all four tau isoforms increased in 3-month-old 3xTg mouse retinas relative to age-matched wild-type controls (Fig. 1c). Analysis of cortical and hippocampal homogenates revealed the presence of four tau isoforms of 37-kDa, 50-kDa, 55-kDa, and 60-kDa (Fig. 1b). In contrast to retina, however, the levels of all brain tau isoforms in 3-month-old 3xTg mice were similar to those in age-matched controls (Fig. 1d). No signal was detected in retinal or brain samples from tau null mice, thus confirming the specificity of the K9JA antibody in these tissues (Fig. 1a, b). Western blot and densitometry analyses revealed an increase of the 55-kDa tau isoform in both retina and brain samples from 6-month-old 3xTg mice, while all other isoforms remained unchanged (Fig. 1e-h). These results demonstrate that tau protein increases in transgenic retinas early in the disease process and precedes tau accumulation in the brain. Phosphorylation can lead to changes in the conformation of tau protein during the course of AD, which can play a key role in pathological tau accumulation and cleavage [74,75]. The conformation of tau in 3xTg retinas was investigated using the antibodies MC-1 and ALZ-50, which recognize the early folding back of the N-terminus on the microtubule domain in a hairclip configuration linked with tau aggregation [74,76,77]. No changes in conformation-dependent epitopes recognized by MC-1 or ALZ-50 were detected in 3- or 6-month-old 3xTg retinas relative to age-matched controls (Fig. 3). Together, these data demonstrate increased tau phosphorylation at AT8 epitopes in the early pre-symptomatic stages, whereas increased phosphorylation at PS199 appears at a later phase of the disease, in the absence of conformational changes. We conclude that retinal tau undergoes complex age-related and epitope-specific changes in AD. Tau accumulates in the RGC somatodendritic compartment and intraretinal axons The cellular distribution of tau in the retina was investigated by immunohistochemistry using the K9JA antibody against total tau. In agreement with previous reports [4,78], a low basal level of tau expression was found in all retinal layers except the outer nuclear layer (Fig. 4a, c). Consistent with our biochemical findings, retinal tau increased in 3- and 6-month-old 3xTg mice and its accumulation was more pronounced in the inner plexiform layer (IPL), where RGC dendrites are located (Fig. 4b, d). Labeling with an antibody against tubulin isoform βIII (TUJ1), an RGC-specific marker that strongly labels the soma and dendrites of these neurons [79,80], confirmed localization of tau within the somatodendritic compartment (Fig. 4e-g). Further analysis of tau distribution on flat-mounted retinas using RBPMS, which selectively labels RGC soma [81,82], confirmed tau accumulation in RGC bodies in 3xTg retinas relative to controls (Fig. 4h-m). A similar distribution of tau was observed in 6-month-old transgenic mice, with more robust tau build-up at this age (Fig. 4n-s). Changes in tau expression in the intraretinal RGC axons of flat-mounted retinas were further investigated by co-localization of tau with the axonal marker neurofilament-H (NF-H). In wild-type retinas, low basal levels of tau protein were detected in RGC intraretinal axons, visualized with NF-H (Fig. 5a-c, g-i). In contrast, a pronounced increase of tau signal in NF-H-positive axons was detected in 3xTg retinas at both 3 and 6 months of age (Fig. 5d-f, j-l).
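The group comparisons in these results rest on densitometric band intensities normalized to a reference signal and compared with a Student's t-test. The following minimal sketch illustrates that kind of comparison; the intensity values and the resulting fold change are hypothetical, not results from the paper.

```python
# Minimal sketch of a densitometric comparison: normalized band intensities for
# the two genotypes are averaged, a fold change is computed, and the groups are
# compared with a Student's t-test. All intensity values below are hypothetical.

from statistics import mean
from scipy import stats

wt_55kda = [1.00, 0.92, 1.08, 0.99]        # normalized 55-kDa tau signal, WT retinas
tg_55kda = [1.35, 1.48, 1.39, 1.42, 1.36]  # normalized 55-kDa tau signal, 3xTg retinas

fold_change = mean(tg_55kda) / mean(wt_55kda)
t_stat, p_value = stats.ttest_ind(tg_55kda, wt_55kda)
print(f"fold change: {fold_change:.2f}, p = {p_value:.3f}")
```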
To establish whether tau accumulation in the retina resulted from increased gene expression, real-time qPCR was performed using primers that recognized all tau isoforms (pan-tau) or big tau [63][64][65]. No significant changes in tau mRNA levels were detected in AD retinas compared to controls (Fig. 5m). These data indicate that tau accrues early in the AD retina, predominantly in RGC dendrites and intraretinal axons, and that this accumulation is not the result of increased gene expression. Tau is depleted from RGC axons in 3xTg optic nerves In normal physiological conditions, tau is enriched in axons with low levels found in dendrites and soma [83]. In AD and other tauopathies, tau detaches from axonal microtubules and accumulates in the somatodendritic compartment of affected neurons [84]. To assess whether the distribution of tau in RGC axons within the optic nerve was altered in 3xTg mice, we carried out confocal imaging of nerve cross sections co-labeled with tau and the axonal marker NF-H. In control optic nerves, tau was enriched in RGC axons visualized with NF-H (Fig. 6a-c, m-o). In contrast, 3xTg optic nerves from both 3- and 6-month-old mice displayed a striking reduction in axonal tau protein (Fig. 6g-i, s-u). High magnification confocal images of individual optic nerve fascicles revealed that many NF-H-positive axons were still detected in transgenic animals in spite of a marked reduction of tau protein levels (Fig. 6j-l, v-x), indicating that the decrease in tau was not the result of axonal degeneration. Western blot analysis of optic nerve homogenates confirmed a visible reduction of tau protein in 3- and 6-month-old 3xTg mice (Fig. 6y, Y', z, Z'). Taken together, our results demonstrate that tau is markedly reduced in RGC axons within the optic nerve early in the course of the disease.

Fig. 1 Tau protein accrues in the retina and precedes accumulation in the brain. a Representative western blots of soluble retinal extracts from 3-month-old 3xTg mice and wild-type (WT) age-matched controls probed with an antibody against total tau (K9JA) revealed the presence of four tau isoforms: 37-kDa, 50-kDa, 55-kDa and 100-kDa. All retinal tau isoforms were detected in both wild-type and 3xTg retinas. No signal was detected in tau knockout mice, validating the specificity of the K9JA antibody. b Western blot analysis of brain homogenates from 3-month-old 3xTg mice and age-matched controls revealed four isoforms of 37-kDa, 50-kDa, 55-kDa and 65-kDa, while no signal was detected in brains from tau knockout mice. c Densitometry analysis showed a 1.8-, 1.6-, 2- and 2-fold increase in the 37-kDa, 50-kDa, 55-kDa, and 100-kDa tau variants, respectively, in retinal samples from 3xTg mice (n = 10) compared to WT controls (n = 8) (Student's t-test, * = p < 0.05). d Quantitative analysis revealed no changes in brain tau levels between 3xTg mice (n = 10) and WT controls (n = 10) (Student's t-test, n.s.: not significant, p > 0.05). e, f Western blots using retinal (e) or brain (f) samples from 6-month-old 3xTg mice demonstrated that only the 55-kDa tau form increased relative to age-matched WT controls. g, h Densitometry analysis indicated a ~1.4-fold increase in the 55-kDa tau isoform in the retina (g) and brain (h) of 3xTg mice compared to controls (3xTg: n = 5, WT: n = 4, Student's t-test, * = p < 0.05). Vertical lines represent non-consecutive samples from the same gel.
Anterograde transport impairment in 3xTg RGCs precedes neuronal death RGCs, like other long projecting neurons, rely heavily on axonal transport for proper function. Anterograde transport impairment is recognized as an early sign of RGC damage and dysfunction [55]. Therefore, we examined whether the ability of RGCs to transport the anterograde tracer CTβ to terminals in the superior colliculus, the primary target of RGCs in the rodent brain [85][86][87], was altered in 3xTg mice. CTβ is an excellent reagent to study axonal transport because of its high sensitivity, ability to effectively move anterogradely or retrogradely from the injection site, dependency on intact microtubules thus serving as readout of active transport, capacity to label the entire neuron including extremely fine terminals, restriction from labeling fibers of passage, and demonstrated efficacy to label axonal tracts in the visual system following ocular administration [54][55][56][88][89][90][91][92][93][94][95][96]. Alexa Fluor 488-conjugated CTβ was injected intravitreally and its accumulation in the contralateral superior colliculus was quantified using unbiased stereological sampling [54]. Mutant APP, PS1 and tau proteins are expressed throughout development in 3xTg mice, thus we first aimed to establish whether there were developmental alterations in axonal transport in young mice (21 days), a week after eye opening. Our data demonstrate that there was no difference in the amount of brain CTβ between wild-type and 3xTg at 21 days of age (Fig. 7a, b). In contrast, a substantial reduction in the CTβ-labeled volume was observed in 3and 6-month-old 3xTg mice relative to age-matched controls ( Fig. 7c-f). Quantification of total CTβ volume confirmed a 57% reduction in the superior colliculi of 3xTg mice suggesting major deficits in the ability of RGCs to transport cargos to their targets (Fig. 7g). These findings indicate that deficits in anterograde axonal transport in 3xTg mice are not of developmental origin, but rather reflect pathological changes detected early in the course of the disease. To rule out that anterograde transport deficits were caused by the inability of 3xTg RGCs to uptake CTβ, we examined CTβ-injected retinas with Brn3a, a selective marker of RGC nuclei [97]. Abundant cytoplasmic CTβ within RGCs was observed at 24 h after tracer injection in both wild-type and transgenic retinas (Fig. 7h, i). Quantification of Brn3a-positive RGCs containing CTβ revealed a similar number of neurons in both 3xTg and control mice (Fig. 7j), thus confirming effective tracer uptake. To determine whether anterograde transport deficits reflected neuronal death, RGC density was quantified in 3-and 6-month-old transgenic retinas using the cell-specific marker RBPMS. A similar RGC density was found in 3xTg and wild-type mice at 3 months of age, confirming the absence of significant cell death at a time when major transport deficits are already apparent (Fig. 7k-o). In 6-month-old transgenic retinas, only a modest reduction in RGC density was detected relative to controls (~15%, Fig. 7m-o), therefore the substantial axonal transport loss at this age cannot be ascribed solely to retinal neuron death. Collectively, these results demonstrate that major deficits in axonal transport along RGC axons are a relatively early feature of neuronal dysfunction in AD pathology that precedes cell death. 
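The total CTβ-positive volume used in these comparisons follows the stereological sampling formula given in the Methods: the summed CTβ-positive area of the sampled sections is corrected by the sampling fractions (ssf, asf, tsf) and then multiplied by the width of the superior colliculus. A minimal sketch of that estimate, with hypothetical section areas and an assumed colliculus width:

```python
# Minimal sketch of the stereological estimate of CTβ-labeled volume described
# in the Methods. The section areas and colliculus width below are hypothetical.

def total_ctb_volume(section_areas_mm2, n_sections_sampled=7, n_sections_total=36,
                     asf=1.0, tsf=1.0, colliculus_width_mm=1.8):
    ssf = n_sections_sampled / n_sections_total        # section sampling fraction (7/36)
    total_area = sum(section_areas_mm2) / (ssf * asf * tsf)  # corrected CTβ-positive area
    return total_area * colliculus_width_mm            # representative volume in mm^3

areas = [0.42, 0.51, 0.48, 0.55, 0.47, 0.39, 0.31]     # CTβ-positive area per sampled section
print(f"estimated CTβ volume: {total_ctb_volume(areas):.2f} mm^3")
```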
Fig. 2 Retinal tau undergoes epitope-specific and age-dependent phosphorylation changes in AD. a-c Western blot analyses of retinal homogenates probed with the phospho-tau specific antibodies AT8, PS199, and PHF1 revealed alterations in tau phosphorylation in 3xTg relative to wild-type (WT) mice. d-f Densitometry analysis of phospho-tau relative to total tau revealed increased tau phosphorylation on S202/T205 (AT8) and reduced phospho-S199 (PS199) in 3xTg retinas relative to controls, while no change was detected on S396/S404 (PHF1) (3xTg: n = 5, WT: n = 4, Student's t-test, * = p < 0.05, ** = p < 0.01). g-i Western blot analysis of tau phosphorylation in 6-month-old 3xTg retinas showed alterations on epitopes AT8 and PS199, but not PHF1, relative to controls. j-l Quantitative analysis confirmed decreased tau phosphorylation on S202/T205, increased phosphorylation on S199, and no change on PHF1 (3xTg: n = 5, WT: n = 4, Student's t-test, * = p < 0.05, ** = p < 0.01, *** = p < 0.001, n.s.: not significant p > 0.05). Vertical lines represent non-consecutive samples from the same gel.

Fig. 3 Lack of conformational tau changes in AD retinas. a, b Western blot analysis of retinal extracts from 3-month-old 3xTg mice probed with MC-1 or ALZ-50 antibodies demonstrated lack of changes in transgenic mice relative to controls. c, d Densitometry confirmed the lack of variations in tau conformation-dependent markers (3-month-old 3xTg: n = 5, 3-month-old WT: n = 4). e-h Western blots of retinal homogenates and densitometry analyses from 6-month-old mice also revealed absence of tau conformational changes between transgenic and control mice (6-month-old 3xTg: n = 5, 6-month-old WT: n = 4, Student's t-test, n.s.: not significant p > 0.05). Vertical lines represent non-consecutive samples from the same gel.

Selective tau knockdown improves RGC axonal transport To investigate whether tau accumulation underlies the axonal transport deficits in 3xTg RGCs, we sought to decrease tau protein levels using siRNA followed by analysis of CTβ transport. First, we assessed the ability of a targeted siRNA against tau (siTau) to reduce retinal tau protein levels. We previously demonstrated that siRNA delivered by intravitreal injection is rapidly taken up by RGCs [98]. Western blot analysis of retinal extracts from 3-month-old 3xTg eyes that received siTau showed a significant reduction in tau protein relative to age-matched transgenic mice that received a control non-targeting siRNA (siCtl) (Fig. 8a). Quantitative analysis confirmed that siTau induced a 27%, 42% and 50% reduction of the 50-kDa, 55-kDa and 100-kDa tau isoforms, respectively, relative to control siRNA-treated eyes (Fig. 8b). Next, we investigated whether siRNA-mediated tau knockdown improved RGC axonal transport. For this purpose, siTau was injected intraocularly once a week for a total of three weeks. The multiple injection regimen was selected based on our previous findings that siRNA-mediated protein knockdown in the retina is transient [98]. Four days after the last siTau injection, CTβ was administered in the eye and the amount of the tracer in the superior colliculus was quantified three days later. Visualization of caudal-to-rostral sections from the superior colliculus of siTau-treated 3xTg mice showed a marked increase in CTβ relative to transgenic mice that received control siCtl (Fig. 8c, d).
Quantitative analysis confirmed that tau knockdown promoted a significant increase in anterograde axonal transport compared to control animals (20%, Fig. 8e). Our results demonstrate that attenuation of retinal tau levels improves axonal transport, suggesting that early tau accumulation in the retina impairs axonal function in AD. Discussion Data presented here in a well-characterized mouse model of AD reveal profound alterations in tau within the retina and the visual pathways leading to neuronal dysfunction in vivo. First, we demonstrate that retinal tau accumulation in 3xTg mice occurs early and precedes pathological changes in the brain. Second, our data show that retinal tau undergoes age-related and epitope-specific changes in phosphorylation, which are independent of conformational modifications. Third, we found that tau build up occurs in the somatodendritic compartment and intraretinal axons of RGCs, whereas tau is depleted from optic nerve axons. Lastly, our results demonstrate that tau accumulation leads to substantial deficits in anterograde transport along 3xTg RGC axons, and that tau knockdown improves axonal transport. Collectively, this study reveals early and profound alterations in retinal tau leading to axonal dysfunction suggesting a role for pathological tau in visual deficits associated with AD. Accumulation of pathological tau is a hallmark of AD and other tauopathies [1,66,99,100]. Our data using the 3xTg mouse model, demonstrate early accumulation of tau in the retina prior to the onset of reported cognitive defects [50]. This finding is consistent with previous studies demonstrating increased retinal tau levels in murine models of tauopathies [45,46,48,78]. The increase in retinal tau reported here was considerably more pronounced in younger mice than in older individuals. The expression of mutant tau in 3xTg mice is under the control of the Thy1 promoter [51], and Thy1 transcriptional activity has been shown to remain constant in the adult retina [81]. Therefore, it is unlikely that the marked age-dependent increase in retinal tau reported here is due to changes in Thy1 promoter activity. This is further supported by our finding that tau protein upregulation is not the result of increased gene expression. Instead, our results reveal a robust retinal response early in the course of the disease, suggesting that the imbalance in tau levels in younger individuals might increase the risk of neuronal dysfunction and subsequent neurodegeneration at later stages of the disease. We also demonstrate that tau accumulation in the retina precedes tau build up in the brain. Importantly, even in older mutant mice, which accrue tau in both retinas and brain, the relative increase of tau was consistently higher in retinal than brain samples. These results identify the retina as a highly sensitive system that reflects early tau protein accumulation in AD. Phosphorylation is a critical post-translational modification of tau during development and in pathological conditions [101]. Inclusions of phosphorylated tau are found in most tauopathies and correlate with severity of disease [73], however, virtually nothing is known about alterations in tau phosphorylation in the AD retina. We found that while tau residues S202 and T205 were highly phosphorylated (AT8), there was a net decrease in the phosphorylation of S199 relative to total tau levels in young mice (PS199). 
Intriguingly, this pattern was reversed in older animals, which displayed increased S199 phosphorylation and decreased phospho-S202/T205, indicative of age-dependent tau modifications at these residues. Although tau hyperphosphorylation has received much attention, accumulating data indicate that oxidative stress, excitotoxicity and starvation induce tau hypophosphorylation [102][103][104]. Tau dephosphorylation has also been reported during ischemia, hypoxia and glucose deprivation in animal models and in human brain tissue [105][106][107][108], indicative of a potential pathological role. Of interest, the alterations in tau phosphorylation observed in 3xTg retinas were different from those we reported in a model of ocular hypertension glaucoma, in which S396/S404 residues were hyperphosphorylated, S199 hypophosphorylated, and S202/T205 remained unchanged [4]. Collectively, our observations demonstrate disease-specific changes in retinal tau phosphorylation on key residues. Tau is an axon-enriched protein and its abnormal localization to compartments other than the axon, such as soma and dendrites, strongly correlates with neuronal pathology and cognitive decline [109]. RGCs are highly polarized neurons: their soma, dendrites and initial nonmyelinated axonal segments are within the retina, whereas the distal myelinated axons are in the optic nerve outside the eye [110]. Our data demonstrate that tau accumulated in RGC dendrites and intraretinal axons, while it was depleted from optic nerve axons in 3xTg mice. This abnormal tau distribution is consistent with pathological changes observed in diseases affecting RGCs such as glaucoma, in which tau accumulates in RGC soma and dendrites leading to neuronal death [4]. Despite the clear redistribution of tau from RGC axons to soma, there was no net increase of tau detected biochemically from 3 to 6 months of age. It is possible that the intracellular redistribution observed in RGCs does not faithfully reflect the global retinal changes detected by western blot analysis. We previously demonstrated that tau is present in retinal cells other than RGCs [4], and tau has also been shown to exist in the extracellular space [111], which could account for this discrepancy. Nonetheless, the pathological properties of tau do not stem only from its accumulation but also from post-translational modifications, most notably phosphorylation. Our data show that the phosphorylation pattern of tau changed dramatically at 6 months relative to younger mice, which might contribute to RGC dysfunction and death.

Fig. 4 Tau accumulates in the somatodendritic compartment of RGCs. a-d Retinal immunohistochemistry using antibodies against total tau (K9JA) revealed marked tau upregulation in 3- and 6-month-old 3xTg retinas relative to age-matched wild-type (WT) controls. e-g Co-staining of tau with TUJ1, an RGC-specific marker, demonstrated tau accumulation in 3xTg RGC dendrites and somata (arrowheads) (n = 5/group). h-s Whole-mount retinal preparations from 3- and 6-month-old mice show co-localization of tau and RBPMS, a selective marker of RGC soma, demonstrating age-dependent accumulation of tau in RGC soma (n = 5/group). Scale bars: A-D = 50 μm, E-G = 25 μm, H-S = 50 μm. ONL: outer nuclear layer, OPL: outer plexiform layer, INL: inner nuclear layer, IPL: inner plexiform layer, GCL: ganglion cell layer.

Higher magnification images demonstrate co-localization of tau in NF-H-positive RGC axons. g-l In contrast, optic nerves from age-matched 3xTg animals show marked reduction of tau, which was not due to axonal loss as co-staining with NF-H confirmed an abundance of RGC axons. m-x Tau expression in 3xTg optic nerve axons was much reduced in older mice (6 months) as demonstrated by the marked loss of tau labeling in the optic nerve sections despite robust NF-H staining (n = 5/group). Scale bars: a-c, g-i, m-o, s-u = 10 μm (1000× magnification); d-f, j-l, p-r, v-x = 4 μm (2000× magnification). y, z Western blot analysis of tau expression in optic nerve protein homogenates from 3- and 6-month-old mice confirmed the loss of axonal tau in transgenic animals relative to controls (3-month-old 3xTg: n = 6, 3-month-old WT: n = 9, 6-month-old 3xTg: n = 4, 6-month-old WT: n = 4, Student's t-test, * = p < 0.05, n.s.: not significant p > 0.05). Vertical lines represent non-consecutive samples from the same gel.

Fig. 7 Anterograde transport impairment in 3xTg RGCs precedes neuronal death. a-f Unbiased stereological rostral-to-caudal sampling of the superior colliculus after CTβ injection showed no changes in CTβ labeling in 21-day-old wild-type or 3xTg mice. In contrast, a marked reduction of CTβ labeling was observed in both 3- and 6-month-old 3xTg mice relative to controls. g Quantification of the total CTβ-positive region in the superior colliculus demonstrated a striking loss of anterograde transport in transgenic mice compared to age-matched wild-type (WT) controls (3-month-old 3xTg: n = 5, 3-month-old WT: n = 3, 6-month-old 3xTg: n = 5, 6-month-old WT: n = 5, Student's t-test, * = p < 0.05). h, i Co-labeling of CTβ (green) with the RGC-specific marker Brn3a (red) confirmed effective CTβ uptake by RGCs in both 3xTg and WT retinas. j Quantitative analysis confirmed that there was no difference in the number of CTβ- and Brn3a-positive RGCs between 3xTg and WT retinas at 3 or 6 months of age (3-month-old 3xTg: n = 3, 3-month-old WT: n = 3, 6-month-old 3xTg: n = 3, 6-month-old WT: n = 4). k-m Flat-mounted retinas labeled with the RGC-specific marker RBPMS were used to quantify RGC density (survival). o Quantitative analysis of RGC density demonstrated absence of cell death in 3-month-old 3xTg mice, while only a modest loss was observed in 6-month-old transgenic animals relative to age-matched WT controls (3-month-old 3xTg: n = 5, 3-month-old WT: n = 6, 6-month-old 3xTg: n = 5, 6-month-old WT: n = 5, Student's t-test, *p < 0.05, n.s.: not significant p > 0.05). Scale bars: A-D = 500 μm, F-G = 25 μm, K-N = 7.5 μm.

Fig. 8 siRNA-mediated tau deletion improves axonal transport. a, b Western blot analysis of retinal homogenates from transgenic eyes treated with short interference RNA (siRNA). Eyes injected with siRNA against tau (siTau) showed a significant reduction in tau protein (50-kDa, 55-kDa, and 100-kDa) while a control siRNA (siCtl) had no effect (siCtl: n = 3; siTau: n = 3; Student's t-test, * = p < 0.05). c, d Unbiased rostrocaudal stereological sampling of the superior colliculus after tracer injection shows increased CTβ in the brains of 3xTg mice that received siTau relative to siCtl-treated control mice. e Quantitative analysis of the total CTβ volume shows a significant increase of anterograde transport (20%) in siTau-treated mice compared to siCtl-treated controls (siTau: n = 5, siCtl: n = 6, Student's t-test, * = p < 0.05). Scale bars: C, D = 500 μm.
The lack of changes in tau mRNA levels ruled out transcriptional regulation as a mechanism for tau accumulation in AD retinas. In physiological conditions, tau protein is produced in the cell body and readily sorted to the axon [112], hence the mislocalization of tau in 3xTg RGCs points to the existence of alterations in the sorting mechanisms that control the normal distribution of tau in different neuronal compartments. Changes in tau phosphorylation might reduce its affinity for axonal microtubules and increase it for dendritic microtubules, as shown in cultured spinal cord neurons [113]. Alternatively, retinal tau accumulation might result from impaired degradation in proximal RGC compartments due to defective autophagy or proteasomal pathways and/or altered protein turn over [114,115]. Future work will be essential to establish the mechanisms driving tau accumulation and missorting in the visual system during the course of AD. Tau is best known for its role in assembling and stabilizing axonal microtubule networks [116]. In vitro studies have demonstrated that tau can regulate axonal transport primarily by modulating the function of kinesin motor proteins, which mediate anterograde movement [11,117,118]. For example, tau overexpression in cultured cells dramatically impairs the anterograde transport of a variety of cargos [119][120][121][122][123]. Previous in vivo studies, however, have yielded controversial results with some reporting reduced anterograde transport in mice overexpressing tau while others showed little or no change [124][125][126]. Our data using CTβ accumulation in the brain, a readout of active microtubule-dependent transport [88], demonstrate early and substantial deficits in transport along RGC axons in 3xTg mice, which is consistent with previous findings in a model of frontotemporal dementia [47]. Importantly, axonal transport dysfunction was not the result of cell death because transport deficits were detected in young 3xTg mice prior to RGC loss. To test whether tau accumulation in the retina contributed to axonal transport impairment, we used a siRNA strategy based on the ability to selectively attenuate tau, without completely inhibiting it, and our observation that siRNAs are readily taken up by RGCs when injected into the vitreous space [4,98]. Although this siRNA approach only partially decreased tau levels in the retina, we observed an improvement of axonal transport in 3xTg RGCs (20%) providing strong proof-ofprinciple for: 1) a detrimental role of tau accumulation on the regulation of anterograde axonal transport in vivo, and 2) early tau-dependent RGC dysfunction preceding overt neurodegeneration in AD. The decrease of tau burden in the retina appears to have a widespread beneficial effect on the overall health of RGCs leading to improvements in axonal transport and functionality. The mechanism by which pathological tau disrupts anterograde transport in RGCs is currently unknown, but might involve tau detachment and microtubule network destabilization or excessive binding of tau to microtubules resulting in the displacement of kinesin motors [117,118,127]. Independent of the mode of action, our findings provide the first demonstration that a global strategy to reduce retinal tau using siRNA is an effective approach to improve axonal transport and attenuate neuronal dysfunction in AD. Conclusions Substantial visual deficits have been documented in Alzheimer's disease patients; however, the molecular basis of this impairment is poorly understood. 
This study reveals early and profound alterations in retinal tau including abnormal accumulation, phosphorylation, and missorting. These pathological changes cause substantial retinal neuron dysfunction and subsequent death, suggesting a prominent role for pathological tau in visual defects. The eye is the most accessible part of the CNS and the transparent ocular structures allow swift visualization of the retina. Retinal tau is a promising target to detect early pathological changes and to further understand fundamental mechanisms of neuronal damage in AD and tauopathies.
9,567.6
2017-08-03T00:00:00.000
[ "Biology" ]
The Large Phenotypic Spectrum of Fabry Disease Requires Graduated Diagnosis and Personalized Therapy: A Meta-Analysis Can Help to Differentiate Missense Mutations Fabry disease is caused by mutations in the GLA gene and is characterized by a large genotypic and phenotypic spectrum. Missense mutations pose a special problem for graduating diagnosis and choosing a cost-effective therapy. Some mutants retain enzymatic activity, but are less stable than the wild type protein. These mutants can be stabilized by small molecules which are defined as pharmacological chaperones. The first chaperone to reach clinical trial is 1-deoxygalactonojirimycin, but others have been tested in vitro. Residual activity of GLA mutants has been measured in the presence or absence of pharmacological chaperones by several authors. Data obtained from transfected cells correlate with those obtained in cells derived from patients, regardless of whether 1-deoxygalactonojirimycin was present or not. The extent to which missense mutations respond to 1-deoxygalactonojirimycin is variable and a reference table of the results obtained by independent groups that is provided with this paper can facilitate the choice of eligible patients. A review of other pharmacological chaperones is provided as well. Frequent mutations can have residual activity as low as one-fourth of normal enzyme in vitro. The reference table with residual activity of the mutants facilitates the identification of non-pathological variants. Introduction Fabry disease (FD, OMIM #301500) is a rare pathology, but accounts for 8.8% of the patients affected by inherited disorders of metabolism [1] and is the second most common lysosomal storage disorder [2]. FD is caused by those mutations in the GLA gene that result in a deficiency of the protein product, lysosomal α-galactosidase (AGAL Uniprot: AGAL_HUMAN P06280; EC: 3.2.1. 22), and the accumulation of its substrates. The real incidence of FD is difficult to establish. It was estimated at 1 in 100,000 [3]. FD. A new approach with pharmacological chaperones (PC) has been proposed and a small molecular weight molecule is on the verge of being approved with the commercial name of Galafold™. This drug is an iminosugar, which closely resembles the natural product of AGAL galactose, and has been known by different names, 1-deoxygalactonojirimycin (DGJ), migalastat, AMIGAL, AT1001. DGJ inhibits reversibly AGAL at nanomolar concentrations, but stabilizes the wild type enzyme in vitro against thermal [42] and chemical induced denaturation [43] too. DGJ can be used in synergy with ERT either co-administrating both drugs intravenously or one orally (DGJ) and the other intravenously (recombinant enzyme). DGJ prolongs the half-life of AGAL in vivo, both in mouse models and in humans and leads to an improved clearance of Gb3 [44][45][46]. DGJ can be used for a stand-alone oral therapy of FD for specific missense genotypes. The efficacy of DGJ was tested in vitro, ex vivo, in cells derived from patients, and in vivo. Oral administration of DGJ reduces Gb3 in kidney, heart and skin of Fabry transgenic mice carrying the responsive human mutation R301Q [47]. When administered with an oral dose of 150 mg, it was well tolerated, increased AGAL activity [48] and decreased plasma lyso-Gb3 [47] in the majority of the patients with responsive GLA mutations. Interestingly, the best results are obtained when an intermittent regimen is used. 
The results of a clinical trial phase 3 study carried out on males and females affected by FD has been recently published. Patients received 150 mg of Galafold™ or placebo every other day. The study began with six months of double-blind administration and proceeded with 6 + 12 months of open-label administration. Although the authors conclude their abstract stating quite cautiously that "the percentage of patients who had a response at 6 months did not differ significantly between the migalastat (DGJ) group and the placebo group", promising results are shown. A reduction of the number of Gb3 inclusions per kidney interstitial capillary as well as a reduction of plasma lyso-Gb3 were observed [49]. More than 700 variants have been reported in HGMD for the GLA gene so far and, differently from other lysosomal disorders such as Gaucher, there are not prevalent mutations, on the contrary most are usually found only in a single family. The number of missense mutations, 467 described so far, is a surprisingly high value for a medium size protein, such as AGAL. In order to appreciate this finding it should be considered that more than 70,000 missense mutations affecting proteins associated to human diseases have been reported, with seven variants per protein on average. The large number of missense mutations poses several problems for making a diagnosis and initiating the most appropriate therapy. Recently, it was proposed to use residual activity measured in vitro to classify mutations. We wish to contribute to the evaluation of such a proposal with the first meta-analysis of the residual activity of GLA missense mutations measured by several independent research groups employing different protocols, either ex vivo, in cells derived from patients, or in vitro, in transiently transfected cells. Results covering 317 of missense mutants, mostly cases reported in HGMD and associated to FD, were collected. Data were obtained in the absence or in the presence of DGJ. For this reason, our analysis provides an independent perspective on the amenability to pharmacological chaperones. In addition to this we reviewed other small molecules that were reported to have a stabilizing effect on some GLA missense mutations in vitro and might be developed to act in synergy or as an alternative to DGJ. Meta-Analysis of Data Reporting Residual Activity and Responsiveness to DGJ of GLA Missense Mutations Several independent groups have tested the effect of DGJ on AGAL mutants, administering the drug to cells derived from patients, or most frequently, to HEK293 or COS cell transiently transfected with expression plasmids. The enhancement of enzyme levels and that of the total enzyme activity is monitored in the cells extracts and is regarded as a proof of the stabilization of the mutant in the cell by DGJ. Residual activity is normalized by the total amount of protein in the cell and should not be confused with specific activity, which is normalized by the amount of AGAL. Residual activity is influenced by the stability of the mutant in the cell and by its specific activity. In general, a fixed concentration of DGJ was used, usually 20 µM, in some cases, however, IC 50 was determined and the optimal concentration was used. The results gathered from literature are reported in Supplementary File S1 and the methods employed in each study are summarized in Table 1. In vitro results are robust and do not depend on the type of recipient cells used for transfection ( Figure 1). 
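The agreement between measurement systems discussed in this meta-analysis is quantified with Pearson correlations (for example, residual activities of the same mutants measured in transfected cells versus patient-derived cells, with or without DGJ). A minimal sketch of that analysis follows; the residual-activity values are hypothetical placeholders, not entries from Supplementary File S1.

```python
# Minimal sketch of the correlation analysis used in this meta-analysis:
# residual activities (% of wild type) for the same GLA mutants measured in two
# systems are compared with a Pearson correlation. Values below are hypothetical.

from scipy.stats import pearsonr

residual_in_vitro = [42.0, 36.0, 7.0, 1.5, 25.0, 55.0, 3.0, 18.0]  # transfected cells
residual_ex_vivo  = [42.3, 35.9, 7.2, 1.2, 20.0, 48.0, 2.0, 15.0]  # patient-derived cells

r, p = pearsonr(residual_in_vitro, residual_ex_vivo)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```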
On the other hand, residual activity measured ex vivo varies among individuals and cell types. A few examples of the levels measured in white blood cells are provided with the average, standard deviation and number of individuals: E66Q 42.3 ± 12.5 (n = 9) [62]; A143T 35.9 ± 7.2 (n = 4), R112H 7.2 ± 7.0 (n = 5), R301Q 7.3 ± 2.7 (n = 6), R356W 1.2 ± 1.9 (n = 4) (Supplementary File S1). Figure 2 shows the average residual activity measured in lymphoblasts or in fibroblasts harboring the same mutation. A moderate yet statistically significant correlation of the data is observed only in the presence of DGJ. However, the residual activities in vitro and ex vivo correlate (Figure 3A, r = 0.7, p < 0.01; Figure 3B, r = 0.7, p < 0.01). Thus, it can be concluded that tests in vitro can generally recapitulate the residual activity ex vivo and responsiveness to DGJ. When this manuscript had been completed, we became aware of a recent publication that reports residual activity of AGAL mutants expressed in HEK293 cells and tested with DGJ 10 µM [63]. These data correlate with those reported in Supplementary File S1 (−DGJ r = 0.8, p < 0.000001; +DGJ r = 0.7, p < 0.000001). Tests in vitro have a limitation because they cannot account for the effect of exonic mutations on splicing. In fact, mutants are encoded by plasmids that do not contain introns. It is interesting to analyze the case of the mutations affecting a site of splicing and corresponding to G183, represented by red symbols in Figure 3. Substitution of GLY by SER results in a mutant that does not retain activity in cells derived from patients and does not recover activity with DGJ. The same mutant recovers activity with DGJ in vitro. We can hypothesize that the drug has a stabilizing effect on the protein, but cannot correct the effect on splicing. On the other hand, G183A and G183D are responsive to the drug both in vitro and ex vivo, suggesting that these mutations mainly affect the protein, but not the splicing. K213M might be another example where splicing could play a role in explaining the in vitro and ex vivo differences. Mutations not occurring at splicing sites can also affect the maturation of RNA. We suspect that this might be the case for G128E, green symbol in Figure 3, because very low residual activity was measured by several authors in cells derived from patients, either in the presence or in the absence of DGJ, whereas the mutation retains residual activity and is responsive to DGJ in vitro. Although a putative consensus for an exonic splicing enhancer including the triplet 128 was found, further experiments are needed to confirm the influence of the mutation on RNA processing. We report the score obtained by a position-specific substitution matrix (PSSM), which measures whether the mutation was tolerated during the evolution of homologous proteins, in Supplementary File S2. Mutations affecting the active site, as expected, have no residual activity and do not respond to DGJ, whereas mutations occurring at non-conserved sites tend to be responsive. Predictions were obtained with Web-based Polyphen2 using HumDiv or HumVar as the training set and are reported in Supplementary File S2. Both sets use disease mutations in UniprotKB as positive controls, but differ for the negative control set.
HumDiv uses differences between human proteins and their closely related mammalian homologs, whereas HumVar uses common human SNPs (MAF > 1%) without annotated involvement in disease. HumVar-trained model is suitable to distinguish mild mutations, the HumDiv-trained model, considers also mild mutations as deleterious one. Although, on average, the residual activity of mutations that are predicted as probably damaging with both training sets is very different from the residual activity of mutations predicted as benign, exceptions can be observed in particular with HumVar-trained model ( Table 2). Other in silico approaches based on the structural features of AGAL, some from our group [64], have been attempted [65,66], to predict the severity of FD genotypes. We believe that data obtained in vitro should always be preferred whenever available. In Supplementary File S2, all missense variants of GLA described in ExAC [67] are reported. ExAC summarizes exome sequencing data from a wide variety of large-scale sequencing projects. Variants reported in this database, in particular those observed with higher frequency, are likely to be non-pathogenic. The mean residual activity measured in vitro for a subset of ExAC variants, i.e., those observed in more than one male is reported in Table 2. The number of hemizygous individuals reported in ExAC, the PSSM score, Polyphen2 prediction and the reference found in HGMD are reported in the same table. R118C is the variant with the lowest residual activity, only 24.5% of wild type when tested in transiently transfected cells. It is relatively frequent in the European population, but it is predicted as deleterious both by Polyphen_Humvar and Polyphen_Humdiv. Oliveira and colleagues reviewed the clinical, biochemical and histopathology data obtained from 22 individual carriers and reached the conclusion that it "does not segregate with FD manifestations at least in a highly-penetrant Mendelian fashion", but might be a risk factor for stroke [73]. In accordance with this, low levels of lyso-Gb3, a biomarker of FD, were measured in the carriers [2]. R118C is considered amenable to DGJ according to galafold amenability table [63]. R118C was tested with DGJ and with Rosiglitazone by Lukas et al. [74]. Although in terms of activity fold increase, the effect of mono-therapy with either drug was small, the combinatorial effect was significantly higher. A143T has an average residual activity of approximately 39.7% of wild type. Brand and coworkers [75] analyzed 15 females and 10 males carrying this mutation. They observed that female and male A143T carriers showed less organ involvement in comparison to FD patients with other missense mutations and those suffering from stroke/TIA showed no further FD-typical organ manifestations. They came to the conclusion that "A143T seems not to be causal for FD, but rather a genetic variant of unknown significance or a genetic modifier". A143T is considered amenable for the therapy with DGJ according to the galafold amenability table. E66Q has specific activity, Vmax, and affinity for the artificial substrate 4-methylumbelliferylα-galactopyranoside, Km, similar to those of wild type, but residual activity in transfected cells is approximately one half of the wild type, possibly because the stability at neutral pH is reduced [52]. The mutation is relatively frequent in East Asian population. 
Sakuraba and coworkers measured the activity in 20 Japanese or Korean male carriers with renal and cardiovascular disorders and found 13% to 26% of the normal mean values for plasma and 24% to 65% of the normal mean values for white blood cells, but the lyso-Gb3 levels were as low as those of healthy controls and no inclusion bodies were found [62]. Hu and co-workers found that the mutation segregated with renal disease in a very large Chinese family, but they did not measure the accumulation of the substrate or of lyso-Gb3 in the same patients [76]. The involvement in cardiovascular disease has also been suspected, but no accumulation of Gb3 was found in the heart of a patient carrying E66Q [77]. The association between E66Q and the risk of cerebral small-vessel occlusion is debated [78,79]. In conclusion, the pathogenicity of E66Q is still a vexata quaestio. E66Q is considered non-amenable for DGJ according to the Galafold amenability table, but an increase in activity upon drug administration was measured by other authors [52,55,57,80]. D313Y was first associated with a classic phenotype [71]. Subsequent clinical and biochemical data indicated that D313Y should be considered a variant [81]. Cardiac, nephrological, neurological, laboratory and quality of life data were collected from carriers of D313Y with a 4-year follow-up, and the results indicated that the mutation is non-pathological. Very low levels of lyso-Gb3 were found [2]. The opinion that D313Y is a non-pathological variant is supported by the fact that its frequency of 0.4% in the non-Finnish European population is much higher than the prevalence of FD in the same population. D313Y is considered amenable for DGJ. For other mutations such as S126G and N139S, the clinical picture did not include specific signs of FD [69,70]. Both mutations are considered amenable for DGJ. This survey would suggest that a residual activity higher than 25% can indicate a non-pathological variant. Nonetheless, when considering administration of a therapy, if any, clinicians should be aware of the fact that the severity of the disease depends not only on the damage caused to the protein itself by the mutation, but also on other factors, which regrettably have not yet been clarified. It should be considered that the phenotype can differ even among the members of the same family [82] and that the residual activity in plasma or in white blood cells can vary widely in people carrying the same mutation [62]. N215S is a mutation affecting glycosylation of the AGAL enzyme [83]. It presents with a relatively high residual activity, >25% of normal. It is considered a distinct sub-type of FD because of its elevated prevalence compared to non-N215S FD cases and its late-onset occurrence [60,84]. Interestingly, this variant has never been shown to cause a classic phenotype; rather, it is believed to cause a specific cardiac phenotype. Other cardiac-prone mutations might exist, e.g., the so-called IVS4 + 919G > A splice mutation highly prevalent in the Taiwanese population. A clinical trial investigating the long-term clinical course of N215S patients is currently ongoing (clinicaltrials.gov, NCT01429597). In this case, the diagnostic and prognostic value of the biomarker lyso-Gb3 can be appreciated: although apparently all genetically identified patients were found, Gb3 was demonstrated to be normal in a large fraction of them. Meehan et al.
[85] showed that N215S was present in a patient with renal manifestations and is thus suggested to cause mainly cardiac and renal symptoms. N215S is amenable for DGJ. Future Perspectives for Therapy DGJ is a promising drug, but it might not be the ideal drug yet. DGJ inhibits AGAL at nanomolar concentrations and stabilizes it at micromolar concentrations. Therefore, continuous exposure to the drug can promote AGAL levels, but not AGAL intracellular activity. Reduction of Gb3 concentration was not observed in fibroblasts derived from patients carrying the mutations R301Q or L300P and incubated with DGJ for 10 days, but was observed if a seven-day incubation with the drug was followed by a three-day wash-out [55]. The discovery of DGJ was the result of an educated guess, and not of a methodical screening [86]. In fact, DGJ is a glycomimetic with a six-atom ring and closely resembles galactose, which is the natural product and inhibitor, and also the first chaperone described for AGAL [87]. A more systematic search was started with the aim of finding other drugs that might have a better ratio between the stabilizing and the inhibitory effect. Most of the molecules considered so far are glycomimetics like DGJ itself. DGJ is an amine and is positively charged at neutral pH. In order to facilitate its diffusion through membranes, alkylation was proposed [88]. Contrary to what was observed for analogous iminosugars active on other lysosomal glycosidases, alkyl-DGJ derivatives had a lower affinity for AGAL and apparently a lower chaperoning potential, probably because one important hydrogen bond, the one established between the heterocyclic NH proton and D170 of AGAL, is lost. By contrast, aryl DGJ derivatives (1-deoxygalactonojirimycin-arylthioureas), which form a hydrogen bond between the aryl-N'H thiourea proton and D231 of AGAL, act as reversible inhibitors and chaperones. When tested at 30 µM concentration on Q279E or R301Q mutants, the best candidate, namely N'-p-methoxyphenyl-DGJ-arylthiourea, had a sevenfold higher chaperoning activity than DGJ at its optimal concentration [89]. Iminosugars characterized by a smaller, five-atom ring system have also been described [90,91]. 2,5-dideoxy-2,5-imino-D-altritol (DIA) inhibited AGAL, stabilized it against thermal denaturation and acted as a chaperone when tested on Fabry R301Q lymphoblasts, although at a concentration 20 times higher than the optimal one for DGJ. The effect on Gb3 accumulation was not tested. One derivative of DIA possessing an aminomethyl group showed a chaperoning effect higher than DGJ when administered to an N215S patient lymphocyte cell line at a high concentration (100 µM) [92]. DGJ binds and inhibits AGAL both at neutral pH, where binding is required, and at acidic pH, where it is not [42]. It would be useful to find molecules that bind and stabilize AGAL mutants when they are in the neutral environment of the endoplasmic reticulum, but dissociate when the protein reaches the lysosome. This point was specifically addressed by incorporating an orthoester segment into DGJ [93]. Glycomimetics require precise dosing, whereas non-carbohydrate mimetics might offer a larger therapeutic window and an improved therapeutic index. In order to look for chemically diverse drugs, a library of 230,000 diverse compounds was screened, but no inhibitors or activators of AGAL with an IC50 below 50 µM were identified.
Unfortunately, the screening procedure relied only on an enzymatic assay carried out at pH 5.9, and not on assays at neutral pH or on assays based on AGAL stabilization [94]. So far, reversible inhibitors of AGAL that act as PCs have been described. The association between the two effects, inhibition and stabilization, is avoidable because active sites are not the only targets for chaperones. Allosteric ligands might act as pharmacological chaperones, and might be more effective than reversible inhibitors, since they would perform their stabilizing action without competing with the natural substrate. Looking for allosteric PCs is difficult because they do not chemically resemble known substrates. Large libraries of structurally diverse compounds should be tested, and preliminary screening in silico might be useful. Allosteric ligands do not bind the active site, but one of the many pockets occurring on the surface of a protein. Therefore, it is difficult to restrict the area where binding is allowed, as required by structure-based virtual screening. A recent screening, carried out on 10,000 molecules, showed that it is possible to find molecules that, at least in silico, preferentially bind an allosteric site rather than the active site. The two sites are located at opposite sides of the catalytic domain of AGAL [95]. The PCs that have been described previously are specific ligands of AGAL. They are effective on missense mutations that cause destabilization of the enzyme and, ultimately, its early degradation. Other small molecules that do not physically interact with AGAL, but affect proteostasis, can be considered for the treatment of these cases as well. Proteostasis regulators can be used in synergy with specific PCs, potentiating their action or allowing lower dosages. The aforementioned Rosiglitazone, a peroxisome proliferator-activated receptor gamma (PPARγ) agonist, rearranges global cellular ubiquitination by inhibiting the ubiquitin-proteasome system (UPS). It displayed the highest beneficial effect on mutations with a significant residual activity (e.g., R118C and T385A) and was even more effective in combination with a PC [60]. Mechanistic studies are required to explain why other ubiquitination inhibitors such as Pyr-41 failed to increase AGAL activity. This finding might be ascribed to intense adverse effects of cellular ubiquitination inhibition caused by Pyr-41 and associated toxicological aspects. Ambroxol, a mucolytic agent used in the treatment of respiratory diseases, was identified as an enhancer of AGAL activity. The compound had formerly been demonstrated to act as a PC on mutant glucocerebrosidase in Gaucher disease. Even though its mechanism of action is not known, it was demonstrated to increase the cellular AGAL level and the activity of most DGJ-amenable mutations (E59K, A73V, A143T, A156V, I232T, R301G, R301Q, R356W and R363H), indicating an impact on AGAL proteostasis. Ambroxol was, however, not effective as a monotherapy, but only in combination with a PC, galactose or DGJ [60]. The synergistic effect of N'-p-methoxyphenyl-DGJ-arylthiourea with two proteostasis regulators, 4-phenylbutyric acid and celastrol, has been assessed. The latter compound was not effective, but 4-phenylbutyric acid at 0.1 mM concentration was able to enhance the chaperoning activity of the aryl-thiourea (20 µM) on fibroblasts harboring Q279E [89].
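For context on the IC50 cut-off used in such screens, the sketch below fits a four-parameter logistic dose-response curve to hypothetical inhibition data in Python; the concentrations, inhibition values, and the 50 µM cut-off check are illustrative assumptions, not data from the studies cited above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical screening data: inhibitor concentration (µM) vs. % enzyme inhibition.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
inhibition = np.array([2.0, 5.0, 14.0, 35.0, 62.0, 84.0, 95.0])

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (ic50 / x) ** hill)

popt, _ = curve_fit(four_pl, conc, inhibition,
                    p0=[0.0, 100.0, 5.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = popt
print(f"Fitted IC50 = {ic50:.1f} µM (Hill slope {hill:.2f})")
# A screening cut-off like the one mentioned above would keep compounds with IC50 < 50 µM.
print("passes 50 µM cut-off:", ic50 < 50.0)
```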
The effects of lactacystin (2 µM, a proteasome inhibitor) and kifunensine (0.2 mM, an inhibitor of ER α-mannosidase I) on the processing of some mutants, assessed as the amount of protein, were tested by Fan and coworkers [52]. They found that some mutants responded to both drugs (F113L, N215S, and M296I), one responded only to lactacystin (E66Q), others only to kifunensine (M72V, I91T, A97V, R112H, L166V, and Q279E), and others had low or no response to either (A20P, A156V, M296V, R356W, G373D, G373S, E59K, and P146S). All mutants that are responsive to kifunensine or lactacystin are also responsive to DGJ, while A156V, M296V, R356W, and E59K are responsive only to DGJ. These results suggest that a different cocktail of drugs might be ideal for specific AGAL mutations. Methods Residual activities in Supplementary File S1 were obtained from the literature. In those cases where the authors did not report the normalized percentage values, the activity of the mutant in the presence of DGJ was divided by the activity of wild-type AGAL and multiplied by 100 (+DGJ/wild × 100). The reference wild-type activity, measured in the absence of DGJ, was obtained for each mutation from the appropriate paper. IC50 values are reported when available. Pearson correlation coefficients and two-tailed p-values were calculated as described by Lowry [96]. PSSM values were calculated as described [97,98]. Active site residues were identified with DrosteP [99]. Predictions were obtained with web-based Polyphen2 using HumDiv or HumVar as the training set under default conditions [100].
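As a worked illustration of the normalization and correlation analysis just described, the following Python sketch applies the +DGJ/wild × 100 formula and computes a Pearson coefficient with a two-tailed p-value via scipy; all activity values are hypothetical placeholders, not values from Supplementary File S1.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical raw activities (arbitrary units) for a few mutants.
wild_type_activity = 100.0
mutant_plus_dgj = np.array([42.0, 36.0, 7.0, 1.5])   # activity measured with DGJ

# Normalization as in the Methods: (+DGJ / wild type) x 100.
residual_percent = mutant_plus_dgj / wild_type_activity * 100.0

# Hypothetical ex vivo residual activities (% of normal) for the same mutants.
ex_vivo_percent = np.array([40.0, 30.0, 9.0, 2.0])

# pearsonr returns the coefficient and a two-tailed p-value.
r, p_two_tailed = pearsonr(residual_percent, ex_vivo_percent)
print(f"Pearson r = {r:.2f}, two-tailed p = {p_two_tailed:.3g}")
```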
Proof of Concept Study for Increasing Tenascin-C-Targeted Drug Delivery to Tumors Previously Subjected to Therapy: X-Irradiation Increases Tumor Uptake Simple Summary We hypothesized that an agent recognizing a specific factor, which is involved in tissue injury repair, could achieve the goal of delivering an additional antitumor agent to tumors during tissue repair after initial anticancer therapy. To demonstrate our concept, the present study employed tenascin-C (TNC) as the target molecule and radiation as the initial therapy. Increased TNC expression was observed in tumors after radiation exposure in a pancreatic cancer mouse model. Of our three anti-TNC antibodies, the antibody 3–6 showed statistically significantly higher uptake in irradiated compared with non-irradiated tumors in biodistribution and single-photon emission computed tomography with computed tomography studies. This finding strongly supports our concept. Our proposed therapeutic strategy could result in better outcomes for patients with treatment-refractory cancer. Abstract In treatment-refractory cancers, tumor tissues damaged by therapy initiate the repair response; therefore, tumor tissues must be exposed to an additional burden before successful repair. We hypothesized that an agent recognizing a molecule that responds to anticancer treatment-induced tissue injury could deliver an additional antitumor agent, including a radionuclide, to damaged cancer tissues during repair. We selected the extracellular matrix glycoprotein tenascin-C (TNC) as such a molecule, and three antibodies recognizing human and murine TNC were employed to evaluate X-irradiation-induced changes in TNC uptake by subcutaneous tumors. TNC expression was assessed by immunohistochemical staining of BxPC-3 tumors treated with or without X-irradiation (30 Gy) for 7 days. Antibodies against TNC (3–6, 12–2–7, TDEAR2) and a control antibody were radiolabeled with 111In and injected into nude mice bearing BxPC-3 tumors 7 days after X-irradiation, and temporal uptake was monitored for an additional 4 days by biodistribution and single-photon emission computed tomography with computed tomography (SPECT/CT) studies. Intratumoral distribution was analyzed by autoradiography. The immunohistochemical signal for TNC expression was faint in nontreated tumors but increased and expanded with time until day 7 after X-irradiation. Biodistribution studies revealed increased tumor uptake of all three 111In-labeled antibodies and the control antibody. However, a statistically significant increase in uptake was evident only for 111In-labeled 3–6 (35% injected dose (ID)/g for 30 Gy vs. 15% ID/g for 0 Gy at day 1, p < 0.01), whereas limited changes in 111In-labeled TDEAR2, 12–2–7, and control antibody were observed (several % ID/g for 0 and 30 Gy). Serial SPECT/CT imaging with 111In-labeled 3–6 or control antibody provided consistent results. Autoradiography revealed noticeably stronger signals in irradiated tumors injected with 111In-labeled 3–6 compared with each of the nonirradiated tumors and the control antibody. The signals were observed in TNC-expressing stroma. Markedly increased uptake of 111In-labeled 3–6 in irradiated tumors supports our concept that an agent, such as an antibody, that recognizes a molecule involved in tissue injury repair, such as TNC, could enhance drug delivery to tumor tissues that have undergone therapy.
The combination of antibody 3–6 coupled to a tumoricidal drug and conventional therapy has the potential to achieve better outcomes for patients with refractory cancer. Introduction Continuous advances in cancer therapy have led to improved survival of patients with many types of cancer [1]. Despite such advances, however, the prognosis of patients with a treatment-refractory cancer, as is often the case for pancreatic cancer, remains poor [1,2]. Most patients with refractory cancer receive multimodal therapy consisting of chemotherapy and radiation [3]. Although the outcome of patients with refractory cancer is unpredictable [2,3], anticancer treatments clearly cause damage to cancer tissues, suggesting that the cancer tissues initiate a physiological response to treatment-induced injury. Therefore, we hypothesized that an agent recognizing a molecule associated with that response could also mediate the delivery of anticancer drugs and radionuclides and thereby provide additional therapeutic benefit. Tenascin-C (TNC) is an extracellular matrix glycoprotein that participates in cell adhesion, growth, migration, and differentiation [4][5][6]. TNC is expressed at a low level in healthy adult tissues, yet it is upregulated substantially and specifically in response to tissue injury [5,6]. The upregulation of TNC plays a role in tissue repair in damaged tissues but can also promote the growth, differentiation, vascularization, cell adhesion, invasion capacity, and metastatic potential of tumors [5,6]. TNC is a hexameric glycoprotein [5] that provides many potential binding sites for anticancer agents such as antibodies. Therefore, TNC is an attractive target molecule for testing our hypothesis that a drug delivery mechanism targeting a tissue injury-responsive factor could increase the overall efficacy of an anticancer regimen. We developed several antibodies against TNC, including three that recognize both human and murine TNC; these antibodies were named 3-6 [7], 12-2-7 [8], and TDEAR2 [8], as shown in Figure 1. As a tumor model, a BxPC-3 pancreatic cancer xenograft tumor model was selected because BxPC-3 tumor tissues produce only small amounts of TNC in control/nontreated animals, yet substantial amounts are produced after X-ray irradiation, so this approach was appropriate to test our hypothesis. The three antibodies were radiolabeled with 111In, and changes in the uptake of the radiolabeled antibodies were evaluated in nude mice bearing tumors that had been previously subjected to X-irradiation (or were not irradiated, as a control). Analysis of TNC Expression in Tumors Treated with X-Irradiation Immunohistochemical staining of sections of nonirradiated BxPC-3 tumors revealed only faint TNC intensity in the stroma and none in tumor cells (Figure 2). This lack of expression in tumor cells was confirmed by cell-binding assays, in which there was no binding of 111In-labeled antibody 3-6 to BxPC-3 cells in vitro (Figure 3). In tumors exposed to X-rays (30 Gy), the TNC-stained area as well as the staining intensity in the stroma increased progressively until day 7 post-exposure (Figure 2). Therefore, we chose day 7 post-irradiation as the starting point for further experimentation with irradiated samples. Figure 1. TNC contains epidermal growth factor (EGF)-like repeats and fibronectin type III (FNIII) domains. Alternative splicing occurs between the fifth and sixth FNIII domains. The known binding sites of the three antibodies are denoted by solid lines under the domains. The antibodies 3-6 and 12-2-7 recognize the EGF-like repeats, and TDEAR2 binds to the alternative splicing region.
Figure 2. Immunohistochemical staining for TNC in BxPC-3 tumors. Paraffin-embedded sections were stained with anti-TNC antibody 3-6 (n = 3 per group). Shown are representative images of tumors that had been irradiated with X-rays (30 Gy) or not irradiated. Biodistribution of 111In-Labeled Antibodies At 7 days after irradiation with 30 Gy, each individual 111In-labeled antibody was injected into mice bearing tumors, and the biodistribution of each antibody was evaluated after 30 min and on days 1, 2, and 4 post-injection (Figure 4). Figure 5 presents data showing the temporal changes in 111In-labeled antibody uptake in nonirradiated tumors and those irradiated with 30 Gy. In nonirradiated tumors, uptake of 111In-labeled 3-6 and TDEAR2 was greater than that of the 111In-labeled control antibody (p < 0.01), whereas the uptake of 111In-labeled 12-2-7 did not differ significantly from that of the control antibody. Although uptake of each of the four antibodies by tumors irradiated with 30 Gy was greater than that by nonirradiated tumors, the differences were significant for tumors of mice injected with 3-6, TDEAR2, or control antibody (p < 0.01 or p < 0.05), whereas no significant difference was observed with 12-2-7 (Figure 5). Tumor uptake of 111In-labeled 3-6 increased markedly, i.e., 35% injected dose per gram (ID/g), at day 1 post-injection (Figure 5), which was more than 2-fold greater than for nontreated tumors (Figure 5). Tables 1-4 show the biodistribution of the four 111In-labeled antibodies in normal organs. Although there were several statistically significant differences between the nonirradiated and 30 Gy groups for each antibody among the various organs, the other differences were marginal (Tables 1-4).
The statistical differences were found in bone uptake for the control antibody (Table 1), in the spleen for 3-6 (Table 2), the pancreas and kidney for 12-2-7 (Table 3), and the liver for TDEAR2 (Table 4). Although the exact reasons are unclear, the differences in tumor uptake might affect uptake by these organs. In Tables 1-4, data are expressed as the mean ± SD of % ID/g; * p < 0.05, ** p < 0.01, when compared with the 0 Gy group counterpart.
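For readers unfamiliar with the unit, %ID/g values such as those in Tables 1-4 are derived from gamma-counter measurements; a minimal Python sketch, with hypothetical counts and assuming decay correction to a common reference time, is shown below.

```python
# Percent injected dose per gram (%ID/g) from gamma-counter data.
# All numbers below are hypothetical; in practice, counts are decay-corrected
# to a common reference time before this calculation.
injected_dose_counts = 1.2e6   # counts of a standard representing the full injected dose
tissue_counts = 4.1e4          # decay-corrected counts measured in the excised tissue
tissue_weight_g = 0.12         # wet weight of the tissue in grams

percent_id_per_g = tissue_counts / injected_dose_counts / tissue_weight_g * 100.0
print(f"{percent_id_per_g:.1f} %ID/g")   # -> 28.5 %ID/g for these example numbers
```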
Single-Photon Emission Computed Tomography and Computed Tomography (SPECT/CT) with 111In-Labeled Antibodies SPECT/CT imaging of mice injected with 111In-labeled control antibody or antibody 3-6 was conducted to confirm the results of the biodistribution study. Figure 6 presents serial SPECT/CT images after 30 min and on days 1, 2, 3, and 4 post-injection of each labeled antibody. At 30 min post-injection, the radioactivity of both 111In-labeled antibodies in the blood pool was very high, whereas that in tumors was low. At day 1, the uptake of 111In-labeled 3-6 in tumors irradiated with 30 Gy had increased markedly compared with the 30 min time point and was substantially greater than that for the nonirradiated tumor and for tumors of mice injected with the 111In-labeled control antibody. Although tumor uptake of 111In-labeled 3-6 decreased to approximately half on day 2 and later, it remained higher compared with the nonirradiated tumor and the control antibody. There was no unexpectedly high uptake in other organs and tissues. These findings are consistent with those of the biodistribution study mentioned above. Autoradiography On day 1 post-injection of 111In-labeled antibody 3-6 or control antibody, nonirradiated and 30 Gy-irradiated tumors were excised and sectioned for autoradiography. Only a slight radiation signal was evident in each of the nonirradiated and irradiated tumors from mice injected with the labeled control antibody (Figure 7A). In the irradiated tumors of mice injected with 111In-labeled 3-6, the area encompassed by the strong signal was much greater than that for the nonirradiated tumor (Figure 7A). There was no significant difference in signal intensity between 0 Gy and 30 Gy in the control antibody group, whereas there was a significant difference in the antibody 3-6 group (p < 0.01), as shown in Figure 7B. The signal intensity for 111In-labeled 3-6 was also significantly higher than that for the control (p < 0.01, Figure 7B).
The area with the strong radioactivity signal for the 111In-labeled antibody 3-6 also showed intense staining for TNC in adjacent sections, and the area with the weak signal showed low staining (Figure 8). Immunohistochemistry to Compare Antibodies 3-6 and 12-2-7 As a separate experiment, immunohistochemical staining of adjacent sections with antibodies 3-6 and 12-2-7, which bind to the EGF (epidermal growth factor)-like repeats (Figure 1), was carried out. Although both antibodies stained stroma but not tumor cells, the staining pattern of the stroma differed; namely, antibody 3-6 stained TNC throughout the stroma, whereas antibody 12-2-7 stained only part of the stroma (Figure 9). Discussion We hypothesized that an antibody recognizing a molecule associated with tissue injury repair after antitumor therapy could deliver an additional tumoricidal agent, such as a radionuclide, to cancer tissues. To test this hypothesis, we selected TNC as a target molecule and employed three antibodies (3-6, 12-2-7, and TDEAR2) recognizing human and murine TNC [7,8]. These antibodies were labeled with 111In, and temporal changes in the uptake of each antibody were evaluated in nude mice bearing BxPC-3 tumors exposed to X-rays, which induce TNC expression. The biodistribution studies revealed markedly increased tumor uptake of 111In-labeled antibody 3-6 with statistical significance (35% ID/g for 30 Gy vs. 15% ID/g for 0 Gy at day 1, p < 0.01). SPECT/CT imaging and autoradiographic studies provided consistent results. These findings demonstrate that an anti-TNC antibody could deliver a radionuclide to tumors, supporting our hypothesis. Our proposed therapeutic strategy with the anti-TNC antibody coupled with an antitumor agent has three advantages. First, it targets intratumoral regions responding to damage induced by initial cancer therapy, as shown in the present study, providing an additional burden before successful repair. This strategy could also circumvent the problem of resistance to therapy.
Second, the strategy reduces stromal barriers within the tumor microenvironment, as such barriers can inhibit the penetration of antitumor agents, especially high-molecular-weight agents, into tumors [9]. More intratumoral stroma is formed by anticancer therapy; TNC is induced and plays a role in stroma formation [5,6,10]. Our antibody 3-6 targets upregulated TNC and could inhibit stroma formation. Third, although antibodies generally accumulate in tumor tissues at a relatively slow rate [11], our antibody 3-6 accumulated rapidly in the tumors that had undergone therapy, indicating that radiolabeled 3-6 can deposit higher radiation doses in tumors. Taken together, the therapeutic strategy with antibody 3-6 conjugated to a tumoricidal agent, including radionuclides, has the potential to provide better outcomes when combined with conventional therapy. Interestingly, the present study revealed a difference in tumor uptake among the three anti-TNC antibodies 3-6, 12-2-7, and TDEAR2. TDEAR2 recognizes a region in TNC derived from an alternatively spliced pre-mRNA, suggesting that the majority of TNC that is upregulated upon exposure of tumors to X-rays does not contain this region. Previous studies showed that the upregulation of TNC in response to a toxin or hapten yields splice variants of TNC [7,12]. The splicing of TNC pre-mRNA underlies the observed spatiotemporal expression of TNC, which is associated with distinct cellular processes [13]. Our data suggest that TNC pre-mRNA is perhaps spliced in tumors after X-ray exposure, and to date, no other studies have shown this. Additional studies might provide new insights into the complexity of TNC functions during tissue repair. Although antibodies 3-6 and 12-2-7 recognize EGF-like repeats of TNC [7,8], the staining patterns for these two antibodies differed in cancer tissues, suggesting that the two recognize different epitopes.
There are several glycosylation sites in the EGF-like repeats region [5], so this particular post-translational modification might affect the recognition of TNC by 3-6 and 12-2-7, leading to the different rates of uptake of the two antibodies. Further epitope analysis could reveal why antibody 3-6 was taken up more avidly by injured tumors, enabling optimization of our therapeutic strategy. Our study has several limitations. First, the stroma of nonirradiated BxPC-3 tumors expressed only a small amount of TNC, whereas TNC is highly expressed in the tumor stroma of many epithelial malignancies including pancreatic cancer [8,10]. Therefore, it will be necessary to evaluate changes in the uptake of anti-TNC antibodies in tumors that express a high level of TNC under the untreated condition. Second, upregulation of TNC expression is induced by antitumor drugs as well as by radiation [5,14]. X-rays achieve uniform distribution of radiation in cancer tissues, whereas chemotherapy and nuclear-medicine therapy result in heterogeneous distribution of drugs and radionuclides, respectively. Therefore, it will be necessary to evaluate to what extent tumor uptake changes after chemotherapy and/or radionuclide therapy to clarify what types of therapy could be combined with our proposed antibody-mediated treatment strategy. In conclusion, the present study demonstrates that antibody 3-6 can deliver an additional radionuclide burden to BxPC-3 tumors previously exposed to X-rays. This supports our concept that an antibody recognizing a specific factor, such as TNC, which is involved in tissue injury repair, could achieve the goal of delivering an additional antitumor agent to tumors during tissue repair after initial cancer therapy. A combination of antibody 3-6 with conventional cancer therapy could result in better outcomes for patients with treatment-refractory cancer.
Cells The human pancreatic cancer-cell line BxPC-3 and the human melanoma-cell line A374 were obtained from ATCC (Manassas, VA, USA). The cells were maintained in RPMI1640 medium (Wako Pure Chemical Industries, Osaka, Japan) supplemented with 10% fetal bovine serum (Sigma) in a humidified incubator maintained at 37 °C with 5% CO2. Cell Binding of 111In-Labeled Anti-TNC 3-6 The binding of 111In-labeled anti-TNC 3-6 to BxPC-3 cells was assayed as previously described [15]. Briefly, 3-4 days after seeding, BxPC-3 cells were detached and suspended in phosphate-buffered saline with 1% BSA (Sigma, St. Louis, MO, USA) at various densities ranging from 3.9 × 10⁴ to 1.0 × 10⁷ cells (n = 3 per number of cells). Each suspension was incubated with 111In-labeled anti-tenascin-C (TNC) 3-6 on ice for 60 min. After washing the cells, the radioactivity bound to the cells was measured using a gamma counter. Mouse Model of Subcutaneous Tumors The protocol for the animal experiments was approved by the Animal Care and Use Committee of the National Institute of Radiological Sciences (code 07-1064-23, 25 September 2017), and all animal experiments were conducted following the institutional guidelines regarding animal care and handling. BALB/c-nu/nu male mice (5 weeks old, CLEA Japan, Tokyo, Japan) were maintained under specific pathogen-free conditions. Mice (n = 165) were inoculated subcutaneously with BxPC-3 cells (4 × 10⁶) in the left thigh under isoflurane anesthesia. Immunohistochemistry with Anti-TNC Antibodies When subcutaneous tumors reached a diameter of approximately 8 mm, tumors were irradiated with 30 Gy of X-rays at a rate of 3.9 Gy/min with a TITAN-320 X-ray generator (Shimadzu, Kyoto, Japan). Other parts of the mouse body were covered with a brass shield. On post-exposure days 1, 3, and 7, tumors (n = 3 per time point) were sampled, fixed in 10% (v/v) neutral buffered formalin and embedded in paraffin for sectioning. Nontreated tumors were used as controls. Sections (thickness, 1 µm) were immunostained with antibody 3-6 (diluted 1:200) followed by horseradish peroxidase-conjugated anti-rat immunoglobulin from a kit (BD, Franklin Lakes, NJ, USA). Nuclei were counterstained with hematoxylin. Biodistribution of 111In-Labeled Antibodies When subcutaneous tumors reached a diameter of approximately 8 mm, tumors were irradiated with 0 or 30 Gy of X-rays. On post-exposure day 7, mice (body weight, 22.1 ± 2.5 g) were intravenously injected with 37 kBq of an 111In-labeled antibody (3-6, 12-2-7, TDEAR2, or control antibody). The total injected protein dose was adjusted to 20 µg per mouse by adding the corresponding intact antibody. At 30 min post-injection, as well as on days 1, 2, and 4 post-injection, mice (n = 5 per time point) were euthanized by isoflurane inhalation, and blood was obtained from the heart. Tumors and major organs were removed and weighed, and radioactivity was measured using a gamma counter. The data are expressed as the percentage of injected dose per gram of tissue (% ID/g). SPECT/CT with 111In-Labeled Antibodies The BxPC-3 xenograft model mice (26.6 ± 0.5 g, n = 1 per group) were injected with approximately 1.85 MBq of 111In-labeled antibody 3-6 or control antibody via a tail vein 7 days after irradiation with 0 or 30 Gy of X-rays. The injected antibody dose was adjusted to 50 µg per mouse by adding the corresponding intact antibody.
At 30 min post-injection, as well as on days 1, 2, 3, and 4 post-injection, the mice were anesthetized with isoflurane and imaged with a VECTor/CT preclinical SPECT/CT imaging system equipped with a multi-pinhole collimator (MILabs, Utrecht, The Netherlands). The SPECT scan time was 15 min for the 30 min and day 1 time points, 20 min for day 2, 25 min for day 3, and 30 min for day 4. SPECT images were reconstructed using a pixel-based ordered-subsets expectation-maximization algorithm with two subsets and eight iterations on a 0.8 mm voxel grid without correction for attenuation. CT data were acquired using an X-ray source set at a peak voltage of 60 kV and 615 µA after the SPECT scan, and the images were reconstructed using a filtered back-projection algorithm for the cone beam. Images were merged using PMOD software (ver. 3.4; PMOD Technology, Zürich, Switzerland). Autoradiography On day 1 post-injection of 111In-labeled antibody 3-6 or control antibody (1.85 MBq, 50 µg protein), tumors (n = 1 per group) were excised and frozen in Tissue-Tek O.C.T. compound (Sakura Finetek, Tokyo, Japan). Frozen sections (thickness, 20 µm) were fixed with 10% neutral buffered formalin, washed, and dried. The dried sections were exposed to an imaging plate (Fuji Film, Tokyo, Japan), and the imaging plate was scanned with an FLA-7000 image plate reader (Fuji Film). After reading, the sections were stained with hematoxylin and eosin (H&E). Signal intensity in six sections from each group was quantified with ImageJ (ver. 1.5.3, National Institutes of Health, Bethesda, MD, USA). Statistical Analysis Biodistribution data are expressed as the mean ± SD. The data were analyzed with two-way analysis of variance and the Sidak multiple comparison test using Prism 7 software (GraphPad Software, La Jolla, CA, USA). Signal intensity data are expressed as the mean ± SD and were analyzed with one-way ANOVA with the multiple comparison test using Prism. The criterion for statistical significance was p < 0.05. Conclusions Our anti-TNC antibody 3-6 labeled with 111In was taken up markedly more by tumors irradiated with X-rays than by nonirradiated tumors, and more than a control antibody. A drug delivery strategy targeting a molecule that responds to antitumor therapy, such as TNC, has the potential to provide better outcomes when combined with conventional therapy for refractory cancer. Author Contributions: A.S., experimental design, data collection, analysis, interpretation, and writing (original draft, review and editing); A.B.T., research design, data design, collection, analysis, interpretation, and writing (original draft, review and editing); H.S., data collection; K.T., writing (original draft, review and editing); M.K., antibody preparation, data interpretation, and writing of the manuscript; T.H., data interpretation and writing (original draft, review and editing). All authors have read and agreed to the published version of the manuscript.
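To make the statistical workflow described above concrete, the following Python sketch runs a two-way ANOVA on hypothetical %ID/g data with statsmodels and applies a Sidak correction to illustrative pairwise p-values; it approximates, rather than reproduces, the Prism procedure, and all numbers are placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests

# Hypothetical %ID/g data with two factors: irradiation dose and time point.
df = pd.DataFrame({
    "uptake": [14.8, 15.3, 16.1, 34.2, 35.8, 36.4,
               12.0, 12.9, 13.5, 17.8, 18.4, 19.1],
    "dose":   ["0Gy"] * 3 + ["30Gy"] * 3 + ["0Gy"] * 3 + ["30Gy"] * 3,
    "day":    ["d1"] * 6 + ["d4"] * 6,
})

# Two-way ANOVA (dose x time), analogous to the analysis described above.
model = smf.ols("uptake ~ C(dose) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))

# Sidak correction applied to a set of illustrative pairwise p-values.
raw_p = [0.003, 0.021, 0.310]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="sidak")
print(p_adj, reject)
```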
Multiple Mechanistic Action of Brevinin-1FL Peptide against Oxidative Stress Effects in an Acute Inflammatory Model of Carrageenan-Induced Damage Amphibian skin is acknowledged to contain an antioxidant system composed of various gene-encoded antioxidant peptides, which exert significant effects on host defense. Nevertheless, the characterization of such peptides is still in its infancy. Here, we report the antioxidant properties and underlying mechanism of a new antioxidant peptide, brevinin-1FL, identified from the skin of the frog Fejervarya limnocharis. The cDNA sequence encoding brevinin-1FL was successfully cloned from the total cDNA of F. limnocharis and shown to contain 222 bp. The deduced mature peptide sequence of brevinin-1FL was FWERCSRWLLN. Functional analysis revealed that brevinin-1FL could concentration-dependently scavenge ABTS+, DPPH, NO, and hydroxyl radicals and alleviate iron oxidation. In addition, brevinin-1FL was found to show neuroprotective activity by reducing the contents of MDA and ROS as well as the loss of mitochondrial membrane potential, increasing endogenous antioxidant enzyme activity, and suppressing H2O2-induced death, apoptosis, and cycle arrest in PC12 cells, effects that were associated with its regulation of the AKT/MAPK/NF-κB signaling pathways. Moreover, brevinin-1FL relieved paw edema, decreased the levels of TNF-α, IL-1β, IL-6, MPO, and malondialdehyde (MDA), and restored catalase (CAT) and superoxide dismutase (SOD) activity as well as glutathione (GSH) contents in mice injected with carrageenan. Together, these findings indicate that brevinin-1FL as an antioxidant has potent therapeutic potential for diseases induced by oxidative damage. Meanwhile, this study will help us further comprehend the biological functions of amphibian skin and the mechanism by which antioxidants protect cells from oxidative stress. Introduction Free radicals including the superoxide anion radical, peroxyl radical, and hydroxyl radical are unstable yet indispensable intermediates of aerobic metabolism during respiration in organisms. In general, they are highly reactive with other groups or substances in the body due to their unpaired electrons. Consequently, free radicals can trigger a cascade of damaging reactions such as lipid peroxidation as well as protein and DNA oxidation, affecting cellular signaling and culminating in cell damage and death [1,2]. ROS are effectively eliminated by the antioxidant defense system, including nonenzymatic factors and antioxidant enzymes, under normal conditions. However, under pathological conditions, the homeostasis between the generation and scavenging of ROS is broken in vivo, which is generally acknowledged to be involved in health disorders such as diabetes mellitus, cancer, atherosclerosis, aging, and neurodegenerative and inflammatory diseases [2,3]. Thus, effectively scavenging excessive free radicals or preventing their generation has become an important strategy for the prevention and treatment of such diseases. Many synthetic antioxidants, such as vitamin C, butylated hydroxyanisole, and propyl gallate, are used for retarding lipid peroxidation. However, their potential health hazards and low stability limit their further medical application [4]. Hence, the isolation and identification of natural antioxidants that can scavenge free radicals and protect cells from oxidative damage have been of great interest among researchers [5].
Amphibians have evolved an efficient antioxidant defense system in their skin to antagonize nonbiological injuries from ROS in their environments. Antioxidant peptides are important components of the amphibian skin antioxidant system, and more than 40 peptides with free radical scavenging functions have been identified by purification, proteomic analysis, or cDNA trapping from R. catesbeiana [6,7]. Fejervarya limnocharis is a medium-sized frog (30-60 mm) distributed throughout East, Southeast, and South Asia [8,9]. Apart from a lectin-like peptide inhibiting HIV-1 entry previously identified by us, no bioactive peptide has been reported from this species [10]. In this work, we first characterized brevinin-1FL from the skin of the frog F. limnocharis, which showed potent antioxidant activity in vitro. Then, we explored the protective effects of brevinin-1FL against H2O2-induced ROS generation, oxidative stress, and cytotoxic effects in PC12 cells to evaluate its pharmaceutical potential. Finally, we conducted animal experiments to test its antioxidant and anti-inflammatory activities in vivo. To the best of our knowledge, this is the first report of an antioxidant peptide from F. limnocharis. Animals and Ethical Statement. Male and female adult F. limnocharis frogs (n = 3) were obtained from the countryside of Guangzhou, Guangdong Province, China (23.12°N, 113.28°E); this is not a protected species, so no specific permissions were required. After collection, the frogs were humanely euthanized using CO2, and the skin was subsequently sheared and stored in liquid nitrogen until use. Kunming mice (4 weeks, 18-20 g) were bought from the Laboratory Animal Center of Southern Medical University (Guangdong, China) and maintained in plastic cages under standard conditions at 25 ± 2 °C and 55 ± 10% humidity, with free access to food and water on a 12 h light/dark rhythm. The Animal Care and Use Ethics Committee of Southern Medical University (no. L2018254) authorized all protocols and procedures involving live animals, which were implemented in accordance with the international regulations for animal research. Molecular Cloning and Characterization of cDNA Encoding Brevinin-1FL. The skin mRNA and double-strand cDNA were prepared as previously reported by us [10]. The chemical parameters of brevinin-1FL were analyzed with the ExPASy Bioinformatics Resource Portal (http://www.expasy.org/tools/). The assembled sequences were aligned with ClustalW (http://embnet.vital-it.ch/software/ClustalW.html) on the basis of the similarity of brevinin-1FL with previously reported antimicrobial peptides (AMPs) from different amphibian species. Peptide Internalization Analysis. To determine whether brevinin-1FL can enter cells to exert antioxidant effects, PC12 cells (1 × 10⁵ cells/well) were grown in a 24-well plate overnight and then incubated with FITC-labeled brevinin-1FL at final concentrations of 2, 4, and 8 μM at 37 °C for 6 h before the cells were collected and analyzed by flow cytometry (BD FACSCanto II, MA, USA). To identify the effects of heparin on peptide internalization, 8 μM brevinin-1FL was preincubated with 10, 20, or 40 μg/mL heparin for 30 min, and then the mixture was incubated with PC12 cells for 6 h before flow cytometry analysis.
To identify the effects of the cellular energy state on internalization, PC12 cells were preincubated with 10, 20, or 40 μM sodium azide (NaN3) or 12.5, 25, or 50 mM ammonium chloride (NH4Cl) for 30 min and then incubated with 8 μM FITC-labeled brevinin-1FL for another 6 h before flow cytometry analysis. To examine the effects of time and temperature on internalization, PC12 cells were incubated with 8 μM FITC-labeled brevinin-1FL for 1 h or 6 h at 4 °C or 37 °C. To identify the effects of H2O2 on internalization, PC12 cells were incubated with 8 μM FITC-labeled brevinin-1FL in the presence of 0.25 mM H2O2 for 6 h before flow cytometry analysis. All experiments were performed in triplicate. Antioxidant Activity Measurement in Vitro. The ABTS radical scavenging activity was measured with a commercial kit according to the manufacturer's instructions (Beyotime, Shanghai, China). Briefly, 10 μL brevinin-1FL (0-20 μM) and 200 μL ABTS working solution were mixed in a 96-well plate and kept at room temperature for 21 min. 10 μL of distilled water was used as the negative control. The absorbance was measured at 734 nm with a microplate reader (Infinite M1000 Pro, Tecan Company, Switzerland). The ABTS radical scavenging activity was computed as follows: ABTS scavenging activity (%) = (Ablank − Asample)/Ablank × 100, where Ablank is the absorbance of the ABTS solution with distilled water and Asample is the absorbance in the presence of brevinin-1FL. All samples were analyzed in triplicate and averaged. The 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical-scavenging activity was measured with the method described previously by us [11]. A 10 μL aliquot of brevinin-1FL (0-20 μM) was mixed with a 100 μL methanolic solution of DPPH radical at a final concentration of 0.2 mM before being shaken vigorously and left to stand at room temperature for 21 min in the dark. The absorbance was determined at 517 nm with a microplate reader. All samples were analyzed in triplicate and averaged. The NO scavenging activity of brevinin-1FL was measured with the Griess reagent. In short, 2.5 mM sodium nitroprusside was incubated with brevinin-1FL (0-40 μM) in a 96-well plate for 1 h at room temperature. NO generation was quantified with the Griess reagent, and the scavenging rate was calculated as the decrease in NO production compared to the group without brevinin-1FL treatment. The reducing power of brevinin-1FL was evaluated with the ferric reducing antioxidant power (FRAP) method as previously reported by us with minor modification [11]. The activated FRAP working reagent (180 μL) was loaded into a 96-well plate and incubated with 5 μL of brevinin-1FL (0-40 μM) or distilled water for 5 min at 37 °C. The absorbance at 593 nm was measured with a microplate reader. The total antioxidant capacity was calculated from the standard curve and expressed as the corresponding concentration of FeSO4 solution. 2.6. Cell Viability Measurement. PC12 cells were cultured in RPMI-1640 medium (Gibco, Chicago, USA) containing 10% fetal bovine serum (Gibco, Chicago, USA) and 100 U/mL penicillin-streptomycin at 37 °C in a 5% CO2 incubator. Cell viability was determined using an MTT assay kit as previously reported by us with small modification [11]. In short, PC12 cells (1 × 10⁴ cells/well) were plated in 96-well plates and grown overnight.
Next, PC12 cells were incubated with H2O2 (0, 0.125, 0.5, and 1 mM) for 6 h, with brevinin-1FL (0-40 μM) for 24 h, or with brevinin-1FL (0, 2, 4, and 8 μM) for 30 min before further coincubation with H2O2 for 6 h at 37 °C, respectively. 10 μL of MTT solution was loaded into each well, and the plates were incubated in the dark for another 4 h. The culture supernatants were then carefully discarded, and 200 μL of DMSO was applied to dissolve the formazan crystals in each well. The absorbance at 570 nm was determined. 2.7. LDH Release Assay. The LDH release assay was carried out with the LDH kit (Beyotime, Shanghai, China) according to the manufacturer's manual. In brief, PC12 cells were preincubated with brevinin-1FL (0, 2, 4, and 8 μM) for 30 min prior to coincubation with H2O2 for another 6 h at 37 °C. After centrifugation at 2,000 rpm for 5 min, 100 μL of the supernatant was mixed with 60 μL of substrate solution in the dark for 30 min, and the absorbance at 490 nm was determined with the microplate reader (Infinite M1000 Pro, Tecan Company, Switzerland). The LDH release rate (%) was calculated from Asample and Ablank, where Ablank is the absorbance of cell-free culture medium and Asample is the absorbance of the cell supernatant. All experiments were conducted in triplicate. 2.8. Cell Morphology Assessment. PC12 cells at a density of 5 × 10⁵ cells/well were plated into 6-well plates and allowed to grow overnight before being pretreated with brevinin-1FL (0, 2, 4, and 8 μM) for 30 min prior to exposure to 0.25 mM H2O2 for 6 h. Morphological observation was carried out with an inverted phase contrast microscope (CKX41, Olympus, Tokyo, Japan) at 100× magnification. About 4-5 single-plane photographs per well were acquired. 2.9. Antioxidant Capacity Measurement. The SOD and CAT activities as well as the GSH and MDA contents in PC12 cells treated with brevinin-1FL were measured to further characterize its antioxidant capacities. In brief, PC12 cells (5 × 10⁵ cells/well) were seeded into 6-well plates and allowed to grow overnight. Next, cells were incubated with brevinin-1FL (0, 2, 4, and 8 μM) for 30 min prior to incubation with 0.25 mM H2O2 for another 6 h. After washing three times with cold PBS, the cells were harvested by centrifugation and lysed on ice. The supernatant was transferred to fresh tubes and stored on ice or frozen at -80 °C before the measurement of MDA, SOD, CAT, and GSH using commercial assay kits (S0131, S0101, S0051, and S0052; Beyotime Institute of Biotechnology, Shanghai, China), respectively. The assay for SOD activity was based on the ability of SOD to inhibit water-soluble tetrazolium salt (WST-8) reduction by superoxide. Briefly, 20 μL of the supernatant was mixed with 160 μL of WST-8/enzyme working solution and 20 μL of reaction solution and then incubated at 37 °C for 30 min. Finally, the absorbance at 450 nm was determined with the microplate reader. One unit of SOD activity was defined as the amount of protein showing a 50% inhibitory effect on WST-8 reduction. The CAT activity was detected by a chromogenic substrate method. Briefly, 10 μL of the supernatant was treated with excess H2O2 for 5 min. The remaining H2O2, coupled with a chromogenic substrate, was catalyzed with peroxidase to generate red N-4-antipyryl-3-chloro-5-sulfonate-p-benzoquinonemonoimine. The absorbance at 520 nm was detected using a microplate reader. One unit of CAT activity was defined as the amount of enzyme catalyzing the decomposition of 1 μmol of H2O2 per mg per min at 25 °C.
The total GSH level was measured with the enzymatic recycling method, based on the fact that GSH can be oxidized by 5,5-dithiobis-2-nitrobenzoic acid (DTNB) to generate yellow 2-nitro-5-thiobenzoic acid and reduced by NADPH in the presence of glutathione reductase. 10 μL of the supernatant was mixed with 150 μL of DTNB solution for 5 min at room temperature and then incubated with 50 μL of reaction solution containing NADPH and glutathione reductase for another 5 min. Finally, the GSH concentration was determined by measuring the absorbance at 412 nm. The intracellular level of GSH was calculated based on the cellular protein concentration. The concentration of MDA, an indicator of lipid peroxidation in cells, was measured using the thiobarbituric acid (TBA) method. In brief, 100 μL of the supernatant was mixed with 200 μL of the TBA reagent in a boiling water bath for 15 min. After centrifugation at 2,000 rpm at room temperature for 10 min, the supernatant was measured at a wavelength of 530 nm with a microplate reader. The MDA level was expressed as nmol/mg of protein. All experiments were repeated three times. 2.10. Cell Cycle and Apoptosis Measurement. The effects of brevinin-1FL on the cell cycle distribution and apoptosis of PC12 cells stimulated by H2O2 were measured with flow cytometry (BD FACSCanto II, MA, USA). In brief, PC12 cells were seeded into 6-well plates at a density of 5 × 10^5 cells/well and allowed to grow overnight. The medium was discarded, and cells were subsequently incubated with brevinin-1FL (2, 4, and 8 μM) for 30 min before 0.25 mM H2O2 was added. After fixation in cold 70% ethanol, the cells were collected by centrifugation and stained with propidium iodide and RNase (Beyotime, Shanghai, China) for 30 min at 37°C to measure the cell cycle distribution. For the apoptosis assay, after being grown in normal medium and treated with H2O2 and brevinin-1FL (2, 4, and 8 μM) for 6 h, the cells were collected and stained with propidium iodide and Annexin V-FITC at room temperature for 30 min in the dark according to the manufacturer's manual (Beyotime, Shanghai, China). All stained cells were analyzed by flow cytometry with a minimum of 10,000 cells. All experiments were performed in triplicate. The data were analyzed using FlowJo (ver. 7.6). 2.11. ROS Detection. To evaluate the effect of brevinin-1FL on ROS generation in PC12 cells, DCFH-DA was used to examine the ROS level in differentially treated PC12 cells according to the manufacturer's manual (Sigma-Aldrich; Darmstadt, Germany). In brief, 5 × 10^5 PC12 cells were incubated with brevinin-1FL (0, 2, 4, and 8 μM) for 30 min before 0.25 mM H2O2 was added. After 6 h of incubation, the cells were collected and gently washed with PBS. Then, cells were incubated with DCFH-DA for 30 min in the dark at 37°C, washed three times, and measured by flow cytometry (BD FACSCanto II, Mansfield, MA, USA). There were three samples in each group for the ROS detection assay. 2.12. Mitochondrial Membrane Potential (ΔΨm) Assay. ΔΨm was determined with the JC-1 detection kit according to the manufacturer's manual (Beyotime, Shanghai, China). Briefly, the treated cells were collected and incubated with 500 μL of JC-1 working solution at 37°C with 5% CO2 for 20 min. Finally, the cells were washed before observation under a fluorescence microscope at 400× magnification, and about 5-10 single-plane images per randomly selected area were acquired.
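As a small worked sketch of how analyte levels such as the GSH and MDA measurements above are converted to nmol per mg of protein, the Python snippet below fits a linear standard curve and normalizes by the lysate protein concentration; every numerical value in it is a placeholder, not data from this study.

import numpy as np

# Convert a plate absorbance into nmol of analyte per mg of protein:
# (1) fit a linear standard curve, (2) interpolate the sample, (3) divide by protein.
# All concentrations and absorbances are illustrative placeholders.

std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])    # standards, nmol/mL
std_abs  = np.array([0.05, 0.12, 0.20, 0.36, 0.70])  # their absorbances

slope, intercept = np.polyfit(std_abs, std_conc, 1)  # concentration as a function of absorbance

sample_abs = 0.28            # lysate absorbance (placeholder)
protein_mg_per_ml = 1.6      # protein concentration of the same lysate (placeholder)

analyte_nmol_per_ml = slope * sample_abs + intercept
print(f"{analyte_nmol_per_ml / protein_mg_per_ml:.2f} nmol/mg protein")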
2.13. Western Blot Analysis. Western blots were carried out according to the method reported previously by us with minor modifications [11]. In brief, cells were harvested and lysed with RIPA lysis buffer containing protease and phosphatase inhibitors (FDbio, Hangzhou, China) at 4°C for 15 min to obtain the samples for western blot analysis. Primary antibodies against phospho-AKT, AKT, phospho-ERK, ERK, phospho-JNK, JNK, phospho-p38, p38, p65, Bax, Bcl-2, PARP, cleaved PARP, caspase 3, cleaved caspase 3, and β-actin (4°C, 16 h, 1:2000; Cell Signaling Technology, Massachusetts, USA) and HRP-conjugated secondary antibodies (26°C, 1 h, 1:2000; Cell Signaling Technology, Massachusetts, USA) were used for western blot analysis. The band densities were quantified using ImageJ software, and all experiments were performed in triplicate. Cells treated with medium alone were used as the negative control. 2.14. Carrageenan-Induced Paw Edema Assay. The anti-inflammatory and antioxidant activities were evaluated with the paw edema induced by carrageenan, based on the method previously reported by us with minor modifications [12]. In short, the mouse paw volume up to the ankle joint was measured with a plethysmometer (Taimeng PV-200 7500, Chengdu, China) as the baseline value before the mice were given an intraperitoneal injection of brevinin-1FL (10 mg/kg), saline, or indomethacin (10 mg/kg). After 1 h, the plantar side of the right hind paw was injected with 50 μL of 1% carrageenan suspended in saline, or with saline, to induce swelling and edema. The paw volume was then determined at 0, 1, 2, 3, 4, and 5 h with a plethysmometer. The animals were euthanized with an overdose of pentobarbital (200 mg/kg intraperitoneally) 5 h after injection, and the right hind paws of all mice were surgically removed to prepare samples for the measurement of inflammatory factors (IL-1β, IL-6, and TNF-α), oxidative stress-related indicators (SOD and CAT activity, GSH level, and MDA content), and myeloperoxidase (MPO) activity, and for histological analysis. 2.15. Statistical Analysis. All experiments were repeated at least three times. All data were analyzed using GraphPad Prism software version 5.03 (GraphPad Software, CA, USA) and expressed as the mean ± SD. One-way ANOVA with Tukey's multiple comparison posttest was used for multiple-group analysis. A value of p < 0.05 was considered to represent a statistically significant difference. Identification and Characterization of Brevinin-1FL. The cDNA sequence encoding brevinin-1FL was cloned from the skin of F. limnocharis by a PCR-based method. As displayed in Figure 1, the cDNA of brevinin-1FL was 222 bp in length and its deduced precursor contained 58 amino acid residues, which possessed the classic sequence characteristics of amphibian defensive peptides, generally consisting of a predicted 22-residue signal peptide and an N-terminal acidic interval domain separated from the C-terminal mature peptide by a well-known KR protease cleavage site (Figure 1(a)). According to NCBI Basic Local Alignment Search Tool analysis, the deduced precursor shared high sequence identities with those of AMPs grouped into the brevinin-1 family. However, the amino acid sequence of its mature peptide, named brevinin-1FL (FWERCSRWLLN), lacked similarity with any reported AMPs (Figure 1(b)). Brevinin-1FL had a theoretical pI of 9.24 with a net charge of +3, and its aliphatic index was 104.33. Its relative molecular mass was measured as 1509.82 Da.
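To illustrate the statistical procedure stated in Section 2.15 (one-way ANOVA followed by Tukey's multiple comparison posttest at p < 0.05), the Python sketch below runs the same tests on three synthetic triplicate groups; the group values are placeholders, not data from this study.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One-way ANOVA across three groups, then Tukey's posttest for pairwise comparisons.
# The triplicate values are synthetic placeholders.

control  = np.array([100.0, 98.5, 101.2])
h2o2     = np.array([55.3, 58.1, 52.9])
h2o2_pep = np.array([78.4, 81.0, 76.2])

f_stat, p_value = stats.f_oneway(control, h2o2, h2o2_pep)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, h2o2, h2o2_pep])
groups = ["control"] * 3 + ["H2O2"] * 3 + ["H2O2 + peptide"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))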
3.2. Antioxidant Activity of Brevinin-1FL in Vitro. ABTS and DPPH radical-scavenging assays were carried out to measure the antioxidant activity of brevinin-1FL. As illustrated in Figures 2(a) and 2(b), brevinin-1FL exhibited dose-dependent ABTS and DPPH radical-scavenging power across the measured concentrations (0, 2.5, 5, 10, and 20 μM). Brevinin-1FL eliminated approximately 64.20% of ABTS and 23.03% of DPPH at 21 min. NO, an important physiological mediator, has neurotoxic and proapoptotic effects when it is excessively generated [13]. As described in Figure 2(c), brevinin-1FL scavenged NO in a dose-dependent manner. The total antioxidant activity of brevinin-1FL was also examined with the FRAP method. As shown in Figure 2(d), the ferric reducing ability of brevinin-1FL increased with increasing concentration. In addition, the antioxidant activity of brevinin-1FL was also tested with the DNA protection assay. Hydroxyl radical produced from the Fenton reaction can induce single-strand breaks in supercoiled plasmid DNA and the final formation of open circular DNA after incubation with plasmid DNA; the results of this assay are also shown in Figure 2. Internalization of Brevinin-1FL into PC12 Cells via Endocytosis. Compared with the control group, brevinin-1FL was rapidly internalized into PC12 cells, as shown by the increasing fluorescence of cells treated with FITC-labeled brevinin-1FL in a dose-dependent manner (Figure 3(a)). Anionic heparin sulfate is an important component of the cell membrane and the extracellular matrix associated with the first step of the interaction between the membrane and a cell-penetrating peptide [14]. Thus, we further investigated the effects of heparin sulfate on the cellular internalization of brevinin-1FL. As displayed in Figure 3(b), the uptake of 8 μM FITC-labeled brevinin-1FL was reduced by about 0.1%, 10.8%, and 15.6% after 1 h of cotreatment with 10, 20, and 40 μg/mL heparin, respectively. Next, we examined whether the internalization of brevinin-1FL required energy. Both NH4Cl and NaN3 are endocytic inhibitors because they increase the pH of acidic endocytic vesicles and abolish cellular ATP production, respectively [15]. As shown in Figures 3(c) and 3(d), preincubation of PC12 cells with NaN3 or NH4Cl for 1 h markedly inhibited the cellular uptake of brevinin-1FL in a concentration-dependent manner. As a further confirmation, the effect of temperature on the cellular endocytosis of brevinin-1FL was also examined. As shown in Figure 3(e), the cellular uptake of brevinin-1FL at 37°C was significantly higher than that at 4°C after 1 h or 6 h of incubation, suggesting an energy-dependent and temperature-sensitive endocytosis of brevinin-1FL into PC12 cells (Figure S2). Notably, there was no obvious difference between the two incubation times at the same temperature, indicating that brevinin-1FL is rapidly internalized into PC12 cells. The influence of 0.25 mM H2O2 on the cellular penetration of brevinin-1FL was also tested; coincubation with H2O2 in PC12 cells for 6 h did not significantly influence the cellular uptake of brevinin-1FL. All these results indicate that brevinin-1FL can be internalized into PC12 cells via endocytosis (Figure 3(f)). Effect of Brevinin-1FL on Oxidative Stress-induced Cell Death. The cytotoxicity of a series of H2O2 concentrations towards PC12 cells was first measured in order to assess the protective effect of brevinin-1FL against oxidative stress-induced cell death, as shown by the MTT results in Figure 4 (Figures 4(b) and 4(c)).
Lactate dehydrogenase (LDH) release was investigated to evaluate the protective mechanism of brevinin-1FL against oxidative stress-induced cell death. As shown in Figure 4(d), brevinin-1FL reduced the LDH release of H2O2-treated cells in a dose-dependent manner. Notably, after PC12 cells were treated with 0.25 mM H2O2 for 6 h, the discrepancy between the proliferation inhibition rate and the inhibition rate of LDH release indicated that mechanisms other than necrosis contribute to H2O2-induced cell death (Figures 4(c) and 4(d)). In line with the MTT results, morphological observation showed that unstimulated PC12 cells grew better than H2O2-treated cells, with a dendritic shape and even distribution, while treatment with brevinin-1FL reversed the H2O2-induced changes (Figure 4(e)). Effect of Brevinin-1FL on Intracellular SOD and CAT Activity and MDA and GSH Content. MDA is a peroxidation product generated under oxidative stress, while SOD, CAT, and GSH are cellular antioxidants. They are generally applied as pharmacodynamic indicators in research on antioxidant drugs [1]. The levels of these antioxidants, which were reduced in H2O2-treated PC12 cells compared with the control group, were restored by brevinin-1FL pretreatment. Effect of Brevinin-1FL on H2O2-induced ROS Production and Mitochondrial Membrane Potential. ROS play a critical role in the regulation of cell proliferation and survival and mediate oxidative damage to lipids, DNA, and proteins, thereby playing a key pathogenic role in neurodegenerative diseases [1,2]. Hence, the effect of brevinin-1FL on H2O2-induced total intracellular ROS accumulation in PC12 cells was examined. As shown in Figure 6(a), ROS content was markedly increased after H2O2 treatment, but this increase was significantly inhibited by pretreatment with brevinin-1FL in a concentration-dependent manner. ROS production is closely associated with the mitochondrial membrane potential (ΔΨm), which can be used to evaluate mitochondrial function and to test the protective effect of antioxidant peptides on cells under oxidative stress; the corresponding fluorescence is shown in Figure 6(b). ROS can modulate the AKT/MAPK/NF-κB signaling pathways, which are involved in apoptosis and inflammatory responses [1,2]. Therefore, western blotting was carried out to examine the effects of brevinin-1FL on their activation in H2O2-treated PC12 cells. In contrast, the contents of p65 as well as phosphorylated ERK and AKT in the cytoplasm were significantly downregulated, which suggests that the AKT/MAPK/NF-κB pathways are responsible for H2O2-induced apoptosis in PC12 cells. Nevertheless, brevinin-1FL reversed the changes induced by H2O2, demonstrating that the protective effects of brevinin-1FL in H2O2-treated PC12 cells are associated with its regulation of the AKT/MAPK/NF-κB signaling pathways. Effect of Brevinin-1FL on Apoptosis and Cell Cycle Arrest. Excessive ROS generation can cause cell cycle arrest and apoptosis in PC12 cells owing to damage to mitochondrial function [11,16]. As displayed in Figure 7(a), the number of apoptotic cells was markedly increased in H2O2-treated cells compared with the control, but was reduced by brevinin-1FL in a concentration-dependent manner; the apoptotic inhibition rates in the presence of 2, 4, and 8 μM brevinin-1FL were around 24.30%, 35.23%, and 70.15%, respectively.
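The exact formula behind the reported apoptotic inhibition rates is not given in the text; the sketch below shows one plausible convention (the treated group compared with the H2O2-only group, corrected for the untreated baseline), so both the formula and the apoptosis fractions should be read as assumptions.

# One plausible definition of the apoptotic inhibition rate; the convention and the
# apoptosis percentages below are assumptions/placeholders, not values from the study.

def inhibition_rate(apoptosis_treated, apoptosis_h2o2, apoptosis_control):
    return (apoptosis_h2o2 - apoptosis_treated) / (apoptosis_h2o2 - apoptosis_control) * 100.0

control, h2o2 = 5.0, 35.0                      # % apoptotic cells (placeholders)
for dose_um, treated in [(2, 27.5), (4, 24.0), (8, 14.0)]:
    print(f"{dose_um} uM: inhibition ~ {inhibition_rate(treated, h2o2, control):.1f}%")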
Furthermore, H2O2 markedly increased the number of cells accumulating in the G0/G1 phase, which was accompanied by a reduced number of cells in the S phase in PC12 cells, whereas brevinin-1FL concentration-dependently ameliorated this effect of H2O2 (Figure 7(b)). In agreement, exposure to H2O2 for 6 h resulted in an obvious increase in Bax expression and a decrease in Bcl-2 expression. However, the addition of 8 μM brevinin-1FL to the cells significantly counteracted these H2O2-induced changes. At the same time, brevinin-1FL significantly reduced the PARP and caspase 3 cleavage upregulated by H2O2 (Figures 7(c) and 7(d)). Overall, the present data demonstrate that brevinin-1FL effectively inhibits H2O2-induced cell cycle arrest and apoptosis, consequently increasing the viability of PC12 cells. Antioxidant and Anti-Inflammatory Activity of Brevinin-1FL in Vivo. Carrageenan-induced acute inflammation is known to be related to the accumulation of ROS, lipid peroxidation, and the impairment of antioxidant defense activities [17,18]. We therefore assessed the anti-inflammatory and antioxidant abilities of brevinin-1FL in mouse paws injected with carrageenan, as previously reported by us [12]. As illustrated in Figures 8(a) and 8(b), the paw edema volume was markedly increased, by 1.81-fold, after carrageenan injection in comparison with the control group. However, brevinin-1FL alleviated the paw swelling induced by carrageenan. In agreement, treatment with brevinin-1FL also suppressed the activity of MPO, an indicator of neutrophil migration, in carrageenan-injected paw tissues (Figure 8(c)). MDA is a mediator of inflammatory processes and a marker of cellular injury triggered by ROS and oxidative stress, while endogenous antioxidant enzymes such as SOD and CAT, as well as GSH, contribute greatly to eradicating the damaging effects of ROS and oxidative stress [19]. Carrageenan administration reduced cellular SOD and CAT activities and the GSH level and increased the MDA concentration compared with the control (Figures 8(d)-8(g)). The expression changes of IL-1β, TNF-α, and IL-6 were examined in the serum and tissue of mice after carrageenan administration. As shown in Figure 8(h) and Figure S3, these carrageenan-induced changes were reversed by brevinin-1FL and indomethacin. Consistently, histopathological analysis showed that brevinin-1FL significantly attenuated carrageenan-induced leukocyte infiltration (Figure 8(i)). Notably, compared with indomethacin, brevinin-1FL (10 mg/kg) had a greater inhibitory effect on the carrageenan-induced increase in MDA content. Together, these data underscore that brevinin-1FL possesses antioxidant and anti-inflammatory activities. Discussion Oxidative stress, the overproduction of ROS in cells and tissues, reflects an imbalance between antioxidants and free radicals and leads to damage of cellular biomacromolecules such as DNA, proteins, and lipids, consequently causing many human health disorders including neurological illness, inflammation, diabetes, cancer, and atherosclerosis [1,2]. Hence, natural and synthetic antioxidants protecting normal cells from oxidant-derived damage may have therapeutic potential and are increasingly recognized as a pivotal direction for the prevention and treatment of oxidation-associated diseases [1,3,5]. Some bioactive peptides identified from animals, especially from amphibian skin, have been found to display antioxidant and anti-inflammatory activities [5,6].
However, no antioxidant peptide has been identified from F. limnocharis. Here, for the first time, we identify an antioxidant peptide from this tropical frog and explore its antioxidant effects and underlying mechanism in H2O2-treated PC12 cells and carrageenan-stimulated mouse paws. Many antioxidant peptides with different structures have been identified from amphibians, and the presence of critical residues of proline, leucine, phenylalanine, methionine, free cysteine, tyrosine, or tryptophan is responsible for their antioxidant activity [20][21][22][23][24][25][26]. In particular, cysteine, which contains a reducing thiol group, confers more potent antioxidant capability than the other amino acids listed above [27]. The primary sequence of brevinin-1FL contains one free cysteine and five hydrophobic amino acids (two leucine, two tyrosine, and one phenylalanine residues) (Figure 1(a)). Compared with cathelicidin-OA1, antioxidin-I, and antioxidin-RL, brevinin-1FL contains a higher proportion of hydrophobic amino acids and shows stronger antioxidant activity, judging from their radical-scavenging capabilities or protective effects on oxidatively damaged cells in vitro [21,28,29]. This result further demonstrates that peptide sequence strongly influences antioxidant activity. Remarkably, the evolution and formation of antioxidant peptides in amphibians may be associated with long-term exposure to sunshine and intense ultraviolet radiation [22,24]. The F. limnocharis frogs captured from tropical Guangdong Province live at low altitude (23.12°N, 113.28°E) with long and strong sunlight radiation. Moreover, these frogs generally live near pond ditches with little protection from sunlight, making them prone to sunlight exposure [30]. Thus, the present study supports the conclusion, reported by other researchers, that the evolution of antioxidant peptides is associated with sunlight radiation [22,31,32]. Some brevinin-1-like peptides from frog skin secretions, such as LFB, brevinin-1OS, and their N-terminal derivatives, show potent antimicrobial activities [33,34]. Although it shares a similar precursor structure with these peptides, consisting of an N-terminal signal peptide followed by an acidic spacer region and a C-terminal mature peptide, brevinin-1FL, like antioxidin-I and salamandrin-I, does not show antibacterial activity against S. aureus ATCC 25923, E. coli ATCC 25922, or P. aeruginosa ATCC 27853 (data not shown) [24,35]. Moreover, mature brevinin-1FL lacks structural similarity with any reported AMP, which is consistent with our experimental antibacterial results and further supports the conclusion that a balance between hydrophobicity, positive charge, and degree of α-helicity is crucial for maintaining the antimicrobial activity of a peptide (Figure 1(b)) [34]. Thus, the presence of an antioxidant peptide in the skin of this frog might reflect adaptation to its specific environment [36]. Antioxidants help maintain homeostasis and protect cells and tissues from oxidative stress-induced disorders by removing excessive free radicals [1,22]. MDA is a cell-damaging peroxidation product of biomacromolecules on the surface of cell membranes generated under oxidative stress and can gradually lead to damage and dysfunction of intracellular proteins [37].
The hydroxyl radical is regarded as a DNA-damaging agent of physiological significance and can contribute to carcinogenesis or to the pathogenesis of neurodegenerative diseases such as Alzheimer's and Parkinson's disease [3]. SOD, CAT, and GST are key endogenous antioxidants that can reduce oxidant levels and provide a first line of defense against their potentially damaging effects [19,38]. In this study, brevinin-1FL showed the ability to scavenge free radicals such as NO, hydroxyl radicals, ABTS+, and DPPH and to reduce Fe3+ in vitro (Figure 2), suggesting that it is an antioxidant peptide. The nervous system has high oxygen utilization, a large amount of polyunsaturated fatty acids, and a low content of antioxidants, which makes it very susceptible to oxidative assault [3]. The PC12 cell line is particularly vulnerable to changes in O2 concentration and is usually used as a cellular model to study neuronal sensitivity to oxidative stress [39]. In the present study, differentiated rat PC12 cells were subjected to H2O2 exposure for 6 h to mimic an in vitro neuronal model of oxidative injury. In agreement, H2O2 significantly decreased the viability of PC12 cells by increasing ROS accumulation and MDA content and decreasing the levels of endogenous antioxidants and ΔΨm (Figures 5 and 6), which reflects neuronal sensitivity to oxidative damage. However, brevinin-1FL can be internalized into PC12 cells via endocytosis (Figure 3) and successfully reverses the intracellular effects induced by H2O2, consequently decreasing the cycle arrest, apoptosis, and necrosis of PC12 cells caused
7,761.6
2022-09-05T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Presence of Phospholipid-Neutral Lipid Complex Structures in Atherosclerotic Lesions as Detected by a Novel Monoclonal Antibody* A novel monoclonal antibody (ASH1a/256C) that recognizes atherosclerotic lesions in human and Watanabe heritable hyperlipidemic (WHHL) rabbit aortae is described. When 123I-labeled ASH1a/256C antibody is injected intravenously into WHHL rabbits, it associates specifically with fatty streaks on the aorta. The antigen recognized by the antibody is lipid, based on extraction with chloroform and methanol from WHHL rabbit tissues. The antigen, purified by high performance liquid chromatography, was shown to be phosphatidylcholine (PC), which contains unsaturated fatty acyl groups based on analyses utilizing1H and 13C nuclear magnetic resonance, Fourier transfer-infrared spectrum, and mass spectrometry. The antibody did not react with other classes of phospholipids or neutral lipids when tested using an enzyme-linked immunosorbent assay. When PC was mixed with either cholesterol, cholesteryl ester, or triacylglycerol, however, the reactivity of the antibody to PC increased up to 8-fold. Homogenates of aorta tissue obtained from normal and WHHL rabbits were fractionated using sucrose density gradient ultracentrifugation in which neutral lipid droplets, cellular membranes, and proteins are separated. The phospholipid content in cellular membrane fractions from WHHL rabbits was twice as high as that of normal rabbits, and there was an enormous difference in the antigenic activity in these fractions. The content of cholesterol in the cellular membrane fraction of WHHL rabbits was approximately 50 times higher than that of normal rabbits. Addition of neutral lipids to the cellular membrane fraction of normal rabbit markedly increased the antigenic activity. Atheromatous lesions in thickened WHHL rabbit aortic intima that were rich in lipid droplets were stained positively with ASH1a/256C immunohistochemically. These results strongly suggest that PC-neutral lipid complex domains are formed in atherosclerotic lesions. Intracellular and extracellular accumulation of neutral lipids in the arterial intima is a typical feature of atherosclerotic lesions. In the early stages of atherosclerosis, foam cells that accumulate cholesteryl ester (CE) 1 droplets in their cytosol are formed from macrophages and smooth muscle cells (1)(2)(3). Several types of scavenger receptors, which are capable of binding and taking up modified low density lipoproteins (LDL), have been shown to play crucial roles in foam cell formation (4,5). In advanced lesions, neutral lipids are also accumulated in the extracellular space, and cholesterol crystals can form (1, 3, 6 -8). Neutral lipids may be deposited in the extracellular spaces when foam cells eventually die either by necrosis or apoptosis (9). However, little is known of the mechanisms of extracellular deposition of neutral lipids or the fate of foam cells. Furthermore, it is not known whether lipid accumulation affects cellular responses in the lesions. Multiple factors are closely involved in the formation of these lesions, including lipoprotein metabolism, smooth muscle cell proliferation, endothelial cell malfunction, formation of modified LDLs, and accumulation of foam cells (4, 5, 10 -12). To establish useful tools for the investigation of the mechanisms of atherogenesis, a series of monoclonal antibodies using homogenates of human atheroma as immunogen has been raised. 
Through characterization of these anti-atheroma antibodies, the presence of vitronectin (13,14), oxidized phosphatidylcholine (PC) (15,16), and cross-linked proteins (17) in human and rabbit atherosclerotic lesions have been demonstrated. In this study, a monoclonal antibody was selected that bound to fatty streaks using an in vitro artery wall binding assay. Strips of aorta from Watanabe heritable hyperlipidemic (WHHL) rabbits were incubated with hybridoma culture media followed by a 125 I-labeled second antibody. The antibodies that bound to the surface of fatty streak but not to the normal endothelium were selected. The monoclonal antibody ASH1a/ 256C (a murine monoclonal antibody against surface of human atheroma), which bound atherosclerotic lesions in vivo and immunohistochemically, recognized PC containing polyunsaturated fatty acyl groups (PUFA). The content of PC in atherosclerotic lesions was at most twice that of normals, although the antigenicity of the lesion homogenates was more than eight times higher than that of the normal aortae. The reactivity of this antibody to PC was greatly increased in the presence of neutral lipids, suggesting that certain complex structures of PC and neutral lipids are present in atherosclerotic lesions. MATERIALS AND METHODS Preparation of Monoclonal Antibody-Atherosclerotic areas of human abdominal aorta were cut into pieces and homogenized with a Polytron® homogenizer in SVE solution (0.25 M sucrose, 1 mM EDTA, 1% ethanol, pH 7.4). After centrifugation at 220 ϫ g for 10 min at 4°C, the supernatant was recovered and used as immunogen. BALB/c mice (8 weeks old, female) were immunized three times with the homogenate of human atheroma over a period of 3 months (18). Spleens were removed from the immunized mice 3 days after the final injection. The spleen cells were fused with the murine myeloma cell line P3/U1 using polyethylene glycol-4000 and cultured in HY medium (DMEM: NCTC109 medium ϭ 8:1 containing 1 mM sodium pyruvate, 5 g/ml insulin, 0.16 mg/ml oxaloacetate, 7% fetal calf serum) containing hypoxanthine, aminopterin, and thymidine (19). Antibody titers in the culture medium of hybridomas were tested by enzyme-linked immnosorbent assay (ELISA) and an in vitro binding assay to WHHL aorta. Hybridomas showing anti-atheroma reactivity were cloned by limiting dilution procedure twice. To select anti-atheroma antibodies, homogenates of human atheroma, WHHL rabbit atheroma, and normal aorta obtained from control rabbits as well as human and rabbit sera were used as antigens for ELISA. For those clones that were positive to homogenates of human and rabbit atheroma and negative to the other antigens, immunohistochemical staining of frozen sections (4 -6 m) of WHHL rabbit aorta and human atheroma were performed. Then the in vitro binding assay to WHHL aorta was performed for the selected clones that stained atherosclerotic lesions immunohistochemically. Strips of WHHL rabbit aorta (4 ϫ 15 mm) were incubated with the culture medium of the selected hybridoma clones followed by 125 I-labeled goat anti-mouse Ig(GϩM) (New England Nuclear Co.). After rinsing the strips with phosphate-buffered saline (PBS) five times, autoradiography was performed. The ascites obtained from mice bearing P3 U1 myeloma, which was not hybridized with any cells, was used as control. One of the clones that showed positive spots corresponding to areas of fatty streaks was isolated and was named ASH1a/256C. 
The antibody produced by this clone was partially purified from ascites of mice bearing the hybridoma using ammonium sulfate precipitation. Its immunoglobulin class was IgM. During investigating this antibody, the hybridoma clone has been recloned five times so far, and no change in the reactivity of the antibody has been observed. Purification of Antigen Recognized by ASH1a/256C-Aorta or kidney from WHHL rabbits were cut into pieces and homogenized using Polytron® homogenizer as described previously (14). After removed of cellular debris by centrifugation at 220 ϫ g for 10 min, the supernatant was collected. Lipids were extracted from the homogenate using the method of Bligh and Dyer (21). The lipid extracts were dried under an argon gas stream and then applied onto a silica gel column to separate phospholipids from neutral lipids. After washing the column with chloroform followed by chloroform:methanol (9:1) to remove neutral lipids, polar lipids including those with antigenic activity were eluted with chloroform:methanol:water (6:4:1). The eluate was then fractionated using straight phase high performance liquid chromatography (HPLC) (column: LiChrosorb Si60, 4 ϫ 250 mm, Merck, Germany) by gradient elution with hexane:2-propanol:water (44:55:1 to 33:55:12). The flow rate was 0.5 ml/min. The antigenic activity, which was eluted at 45 min, was separated completely from neutral lipids and glycolipids by this purification step. The antigen recovered from the HPLC was rechromatographed on the same column with another solvent system chloroform:methanol:water (4:5:1) at a flow rate of 0.2 ml/min. The antigenic activity was eluted as a single peak at 23 min. Two-dimensional Thin Layer Chromatography-The purified antigen and PC standard (4 g each) were spotted onto a silica gel TLC plate. The plates were developed with hexane:diethylether (1:1) followed by chloroform:methanol:water (6:4:1) in the same direction. The plate was then developed in the direction perpendicular to the first run with chloroform:methanol:acetic acid:acetone:water (6:2:4:2:1). The samples were visualized by spraying molybdophospholic acid onto the plate (23). Structural Analyses-Proton NMR spectra of the purified antigen (2.7 mg) and sn-1-palmitoyl-2-linoleoyl PC (2 mg) dissolved in (CD 3 ) 2 SO were obtained using a GSX-400 spectrometer (Jeol) with 512-pulse scanning at 400 MHz (24). Two-dimensional cross-relaxation spectra (NMR-COSY) were obtained using 256-pulse scanning at 400 MHz. Proton chemical shifts were indicated in ppm downfield from tetramethylsilane. 13 C NMR spectrum was obtained using 61,440-pulse scanning at 100 MHz using the same spectrometer. Carbon chemical shifts were indicated in ppm with reference to the internal solvent (CD 3 ) 2 SO. Fourier transfer infrared spectra of the antigen were obtained using a Fourier transfer infrared spectrum 8000 spectrometer (Jasco, Japan). Fast atom bombardment mass spectrometry of the antigen were obtained using JMS-SX102 A (Jeol), triethanolamine as the matrix. Liquid chromatography-linked mass spectra of the antigen were examined using JMS-LX2000 spectrometer (Jeol) with a Hiber LiChroCART RP-18 column (4 ϫ 250 mm; 7 m; Merck) under the same conditions as described above. Measurement of Antigenic Activity-Reactivity of ASH1a/256C to various materials was determined by ELISA. Aquaous samples, such as homogenates of atheroma, were coated onto 96-well microtiter plates (Falcon number 3912) that had been pretreated with 2% glutaraldehyde for 2 h. 
After incubating the plates at 37°C for 1 h, the surfaces of the microtiter wells were blocked by incubating with Tris-buffered saline containing 2% skimmed milk. The plates were incubated with ASH1a/ 256C antibody diluted with Tris-buffered saline containing 2% skimmed milk followed by alkaline phosphatase-conjugated goat antimurine Ig(GϩM) antibody (Tago Inc., AMI3705). After washing extensively with Tris-buffered saline containing 0.05% Tween 20, the plates were incubated with p-nitrophenylphosphate (1 mg/ml) dissolved in 1 M diethanolamine-HCl buffer, pH 9.8 at 37°C for the appropriate time periods. The absorbance at 405 nm was measured photometrically using an ELISA plate reader (Bio-Rad). When the antigenic activities of the lipids were tested, their methanol solutions were placed into microtiter wells without the pretreatment with glutaraldehyde. The plates were incubated at 37°C for 5-10 min to remove the methanol, after which the surfaces of the microtiter wells were blocked with 0.3 M sucrose. Density Gradient Ultracentrifugation of Atheromatous Lipids-Homogenates of atherosclerotic lesions from WHHL and normal rabbit aorta (6 mg of protein) were fractionated using a sucrose density gradient ultracentrifugation according to the method described previously (20). Briefly, a linear gradient of SVE solutions containing 53 to 0% sucrose was layered on top of the homogenates containing 64% sucrose. After centrifugation at 89,000 ϫ g for 75 min at 4°C using RPS-27 rotor (Hitachi), samples were collected from each ml from the bottom to the top of the gradient. Histochemical Study of WHHL Rabbit Aorta-Frozen sections (4 -6 m) of WHHL rabbit fatty streaks were obtained and fixed with 10% neutral formalin immediately after autopsy. The sections were incubated with ASH1a/256C (ascites) followed by fluorescein isothiocyanate-conjugated goat anti-mouse Ig(GϩM) (Organin Teknika Corp., Durham, NC). The adjacent WHHL section was stained with 0.1% oil-red O in 60% 2-propanol for 10 min, and the section was counter stained with Mayer's hematoxylin for 5 min after washing off any excess oil-red O with 2-propanol. Other Analytical Methods-The amounts of total cholesterol were measured by a cholesterol oxidase method using the Cholestase-V kit (Nissui, Co.) (25,26). Levels of phospholipids were determined by measuring phosphorus in organic extracts using malachite green according to the method of Zhou and Arther (27). Protein concentrations were measured by the Bradford method using the Bio-Rad protein assay kit with BSA as the standard (28). A New Antibody That Binds Specifically to Atheromatous Lesions-In an attempt to obtain monoclonal antibodies against atherosclerotic lesions, hybridoma clones from mice immunized with homogenates of fatty streak lesions of human atheroma were prepared. Anti-atheroma clones were selected by ELISA using homogenates of atheroma from humans and WHHL rabbits for initial screening, followed by immunohistochemical staining using frozen sections of WHHL rabbit aorta. Then candidate clones were further tested using a binding assay to WHHL rabbit aorta strips. Clones reactive to materials in human and rabbit sera proteins were omitted. A clone was finally established after these selections and was named ASH1a/256C (atheroma, surface, human). 123 I-Labeled ASH1a/256C antibody was injected intravenously into normal and WHHL rabbits. 
These rabbits were sacrificed 48 h after injection, and the distribution of the labeled antibody in isolated aortas was visualized by autoradiography (Fig. 1). Fatty streaks were observed in the WHHL rabbit aorta but not in the normal rabbit aorta. Lesion formation was prominent in the aortic arch and at the points of vessel branching. The radioactivity was co-localized with the atherosclerotic plaques in the WHHL rabbit aorta. In contrast, the area that was free of visible lesions in the WHHL aorta and the aorta from normal rabbit were negative. The antibody reacted strongly to atheromatous homogenates from human and WHHL rabbits but did not react to homogenates from normal rabbits (Fig. 2). Furthermore, this antibody also bound to atheromatous lesions in WHHL rabbit aortae as shown by an in vitro binding assay (see "Materials and Methods"). These results show that this monoclonal antibody recognizes atherosclerotic lesions both in vivo and in vitro. Antigen Purification-The antigen of ASH1a/256C was effectively extracted with chloroform and methanol from homogenates of rabbit aorta with the residual fractions having no antigenicity, suggesting that the antigen is likely to be lipid (Fig. 2). The reactivity of the antibody to the lipid extracts from the WHHL rabbit aorta was 8-fold greater than that from the same amount (10 g of protein) of normal rabbit aorta homogenate. When the same amount of phospholipid extracted from either the WHHL aorta or normal rabbit aorta was used as antigen, the antigenic activity in WHHL extract was 3.9-fold higher than the extract from normal rabbit by phospholipid basis (data not shown). When the antigenicity of homogenates of several tissues to ASH1a/256C was examined by ELISA, kidney and xanthoma as well as aorta from WHHL rabbits showed strong activities (data not shown). The lipid extracts obtained from atheroma and kidney of WHHL rabbits were fractionated by silica gel column chromatography followed by HPLC. The identity of the antigens obtained from atheroma and kidney was investigated by the following experiments. First, when the partially purified antigens were analyzed by TLC immunostaining using ASH1a/ 256C, both samples showed single bands with the same Rf values (Fig. 3). Second, the antigens from these tissues were eluted from the HPLC column with the same retention times were coated onto microtiter plates. Then ASH1a/256C was added to the plates to carry out the ELISA assay as described under "Materials and Methods" to measure their antigenicity. (data not shown). Finally, the same molecular mass numbers were obtained for these antigens by liquid chromatographymass spectrometry analysis (data not shown). Therfore, the antigens in atheroma and kidney could be identical. The antigen was purified from both the aorta and kidney of WHHL rabbits. The antigen purified from the kidney was used to perform structural analyses (see below), because the quantity of the antigen purified from rabbit aorta was very limited. Fig. 4 shows data from the antigen purification from kidney, and the profiles were almost the same as those of aorta. Initially, a step-wise elution from a silica gel column was performed to remove large amount of neutral lipids (Fig. 4A). The antigen was eluted in fraction III (chloroform:methanol:water, 6:4:1), whereas fractions I and II had no reactivity. Triacylglycerol and CE were mostly recovered in fraction I (data not shown). 
Fraction III was then applied to a straight phase HPLC with a gradient elution using hexane:2-propanol:water (44:55:1 to 33:55:12). The antigenic activity was eluted at 44 min as a single peak (Fig. 4B). This fraction was further purified on the same HPLC column using a different solvent system (Fig. 4C). The purified antigen, which was eluted at 23 min as a single peak by the second HPLC, showed a single spot on two-dimensional TLC (data not shown). Structural Analyses of the Antigen-The antigen purified from WHHL rabbit kidney underwent a number of structural analyses. No signal corresponding to either ketone, aldehyde, acid anhydride, or free carboxylic acid was observed by Fourier transfer infrared spectrum of the antigen; however, the spectrum did suggest the presence of ester bonds (CϭO; 1735 cm Ϫ1 ) (data not shown). The presence of two ester bonds (CϭO; 172 ppm) was confirmed by a 13 C NMR spectrum (Fig. 5A). Signals corresponding to two CϭC double bonds (127 and 129 ppm) were also observed in the 13 C NMR spectrum. Furthermore, one-dimensional and two-dimensional NMR analysis (NMR-COSY) of the antigen was performed to identify its molecular structure. The signals marked in alphabets in the 1 H NMR spectrum of the antigen were identified as described in the legend of Fig. 5. The spectrum of the antigen was found to be very similar to that of sn-1-palmitoyl-2-linoleoyl PC (Fig. 5, C and D). One particular signal (d ϭ 3.1 ppm; 9H, marked with asterisks in Fig. 5 (C and D), corresponds to signal e) did not interact with any other signal, suggesting that there is no proton in close proximity to the nine hydrogen atoms in the FIG. 3. Immunological identity of the antigens from atherosclerotic aorta and kidney of WHHL rabbits. Partially purified antigens were prepared as described under "Materials and Methods" and the legend of Fig. 4. The antigens from atherosclerotic aorta (lanes 1 and 2) and kidney (lanes 3 and 4) of WHHL rabbits were developed on a TLC plate with hexane:diethylether (1:1) followed by chloroform: methanol:water (6:4:1). Then the antigen was detected with ASH1a/ 256C as described under "Materials and Methods. " FIG. 4. Purification of the ASH1a/256C antigen. A, the lipid extract from WHHL rabbit kidney was applied to a silica gel column (bed volume, 15 ml), which was equilibrated with chloroform. The sample was eluted with chloroform (fraction I), then with chloroform:methanol (9:1) (fraction II), and finally with chloroform:methanol:water (6:4:1) (fraction III). Eluate was collected from each 10 ml. Neutral lipids were mostly eluted in fraction I. Antigenic activity (closed circles) was measured by ELISA. B, fraction III recovered in A was applied onto a straight phase HPLC column (first separation). The chromatography was carried out as a gradient elution with the following solvent system: hexane:2-propanol:water (44:55:1 to 33:55:12). The flow rate was 0.5 ml/min. The eluate was collected each minute. C, The partially purified antigen recovered in B was then applied to the same silica gel HPLC column as B but eluted isocratically with the solvent system chloroform: methanol:water (4:5:1). The flow rate was 0.2 ml/min. The eluate was collected each minute. FIG . 5. NMR analyses of the purified antigen. A, 13 C NMR spectrum of the purified antigen in (CD 3 ) 2 SO. B, 1 H NMR spectrum of the purified antigen in (CD 3 ) 2 SO. 
C and D, two-dimensional cross-relaxation spectra (NMR-COSY) of the purified antigen (2.7 mg) (C) and authentic sn-1-palmitoyl-2-linoleoyl PC (D) in (CD 3 ) 2 SO are shown. The signals at 3.1 ppm (asterisk) that do not interact any other peak were identified as antigen, as is the case with the N-trimethylamino group of the authentic sn-1-palmitoyl-2-linoleoyl PC. These results strongly suggest that the antigen is PC. Analyses of the antigen by fast atom bombardment mass spectrometry showed several peaks ranging from m/z ϭ 756 -808. One of the peaks (m/z ϭ 758) corresponds to palmitoyllinoleoyl PC. The molecular species of the antigenic PC were separated by reverse phase HPLC (Fig. 6). Several antigenic peaks appeared, and a major antigenic peak at 21 min and a large peak at 29 min were identified by liquid chromatographymass spectrometry as palmitoyl-linoleoyl PC and stearoyl-linoleoyl PC, respectively. It appears that the antigenic PC consists of several molecular species with different combinations of fatty acids. The possibility that certain compounds other than PC are present in the purified antigen is very unlikely for two reasons: first, the antigen was purified to homogeneity by twodimensional TLC by which most of the phospholipid classes were separated, and, second, all of the signals (apart from one corresponding to the N-trimethylamino group in the NMR-COSY analysis) interacted with other signals. Therefore all of the signals were related to one structure. These results confirm that the monoclonal antibody ASH1a/256C recognizes PC molecules containing PUFA. Specificity of the Antigen Recognition-To investigate specificity of ASH1a/256C to recognize PC, reactivity of the antibody to various phospholipids, neutral lipids, and PC-related com-pounds was examined by ELISA (Table I, experiment 1). The antibody did not react to phosphatidylethanolamine, monomethyl phosphatidylethanolamine, or dimethyl phosphatidylethanolamine, indicating that the binding was specific for the choline-containing head group. Because these three phospholipids were prepared from egg PC by a head exchange reaction, their fatty acid compositions are essentially the same (palmitic acid, 50%; oleic acid, 25%, palmitoleic acid; linoleic acid, 16%; stearic acid, 8%). Other phospholipids such as phosphatidylserine and phosphatidylinositol had no reactivity with the antibody. All the neutral lipids tested were also negative. Platelet-activating factor and sphingomyelin were not antigenic, although they share the choline head group. It seems that not only the choline head group but certain combinations of acyl groups are necessary for antigen recognition. Reactivity of the antibody to various molecular species of PC was examined using chemically synthesized PCs (Table I, recognition, but the number of PUFAs is not the determinant of the specificity of the antibody. When PC containing PUFA are incubated with metal ion, various peroxidation products including 9-CHO PC and 5-CHO PC are formed (16). ASH1a/256C failed to bind with the aldehyde-contianing oxidized products of PC (Table I, experiment 3). Preincubation of the antibody solution with 9-CHO PC, 5-CHO PC, egg lysoPC, or platelet-activating factor did not decrease the reactivity of the antibody to bind with 1-stearoyl-2-linleoyl PC (data not shown). sn-1-Stearoyl-2-linoleoyl PC was incubated with ferrous ion and ascorbate, and the change of the antigenicity of the PC during the peroxidation reaction was determined (Table II). 
The antigenicity for FOH1a/DLH3, which recognizes oxidized PC, appeared strongly after 3 h of oxidation, whereas the reactivity of ASH1a/256C to the PC decreased. These results indicate that the antibody does not bind with oxidized products of PC. The Effect of Neutral Lipids on the Antigenicity of PC-As mentioned above, the antigenic activity was effectively extracted with chloroform and methanol from homogenates from rabbit tissues. Recovery of the antigenic activity was, however, reduced significantly during the purification of the antigen. The final yield of antigen activity was approximately 6.6%. It is noteworthy that the specific activity of the antigen was normalized by the amount of phosphorus decreased during the purification. A possibility to be considered is that there may be activators of antigen-antibody interaction in the homogenates. One of the major characteristics of atherosclerotic lesions is accumulation of neutral lipids; to see whether neutral lipids enhance the antigenicity of PC, the reactivity of ASH1a/256C to PC in the presence of neutral lipids was measured using ELISA. Addition of cholesterol, CE, or triacylglycerol markedly enhanced its reactivity to sn-1-stearoyl-2-linoleoyl PC (Fig. 7), whereas the neutral lipids themselves were not reactive to the antibody (Table I). These results show that neutral lipids are capable of increasing the binding of the antibody to PC. LDL, a huge particle containing phospholipids, neutral lipids, and apolipoprotein B, was not found to be a good antigen. When human LDL, copper-oxidized LDL, or high density lipoprotein were coated onto microtiter plates, no reactivity was observed with the antibody ASH1a/256C (data not shown). The lipid droplets in aorta homogenates were separated from cellular membranes and proteins by sucrose density gradient ultracentrifugation. As shown in Fig. 8B, the antigenic activity in WHHL rabbit atheroma separated into two peaks, the top fractions and the middle fractions. These fractions correspond to lipid droplets and cellular membranes, respectively. The distribution of the antigenic activity corresponds to the ELISA was performed on antigens (6.5 nmol each/well) as described under "Materials and Methods." The results are expressed as relative reactivity to egg PC (experiment 1) or to sn-1-linoleoyl-2-stearoyl PC (experiment 2). The absorbance obtained for egg PC (experiment 1) and sn-1-linoleoyl-2-stearoyl PC (experiments 2 and 3) were 0.96, 1.08, and 1.08, respectively. II Effect of peroxidation of PC on its antigenicity to ASH1a/256C sn-1-Stearoyl-2-linoleoyl PC (0.4 mM) was incubated with ferrous sulfate (40 M) and ascorbate (0.4 mM) in PBS at 37°C for indicated periods. Oxidized lipids extracted from the reaction mixture were mixed with BSA and then placed onto microtiter wells as antigen (6.5 nmol PC/well). ELISA was performed using ASH1a/256C and FOH1a/DLH3 as described under "Materials and Methods." The results are expressed as relative reactivity to the highest values obtained by these antibodies. The absorbance obtained with ASH1a/256C (0 min) and FOH1a/DLH3 (3 h) were 0.685 and 0.579, respectively. Note that the reaction with ASH1a/256C was less effective under this experimental condition than the data in Table I, because the antigen suspended in PBS as lipid-BSA mixture was coated onto micrototier wells without glutaraldehydepretreatment. The decrease in the ELISA reaction of ASH1a/256C during the oxidation of PC was equally observed under the other experimental conditions. 
amounts of both phospholipid and cholesterol. In the case of normal rabbit aorta, there was no antigenic activity, although phospholipids were localized in fraction 7. The cholesterol content in fraction 7 was about 1/100 of that of the corresponding fraction of WHHL rabbit (Fig. 8A). These results suggest that antigenicity in rabbit aorta is greatly affected by cholesterol accumulation in the tissue. To confirm the effect of cholesterol on the reactivity of PC in atherosclerotic lesions, an aliquot of cholesterol was added to each fraction obtained from normal rabbit aorta by sucrose density gradient centrifugation (Fig. 9). The ASH1a/256C antibody strongly reacted to the top and middle fractions following the addition of cholesterol, especially to fraction 6, which contained cellular membrane phospholipids. Similar enhanced antigenicity was also observed by addition of either cholesteryl oleate or triolein (data not shown). These results show that addition of neutral lipids to the normal vessel wall increases the antigenicity of PC, as observed in WHHL rabbit atheroma. Immunohistochemical Analysis-Serial sections of WHHL rabbit atherosclerotic aorta were stained with ASH1a/256C and oil-red O to study the localization of antigenic PCs and lipid deposits. Large intracellular lipid droplets related to foam cells and small lipid droplets in the extracellular matrix were observed when stained with oil-red O (Fig. 10A). ASH1a/256C stained the area where small lipid droplets were profusely deposited (Fig. 10B), whereas the antibody did not stain the endothelium and the media. These immunohistochemical observations, together with the other results, strongly suggest that the ASH1a/256C antibody does not recognize normal cellular membranes but rather certain structures of PCs complexed with neutral lipids that are formed in atherosclerotic lesions. FIG. 8. Separation of the antigenic materials in atherosclerotic aorta by sucrose density gradient centrifugation. Homogenates of aortas from WHHL and normal rabbits were fractionated using sucrose density gradient centrifugation. The antigenic activity in each fraction was measured by ELISA (horizontal bars). Amounts of phospholipids (open circles) and total cholesterol (closed circles) were measured following lipid extraction with chloroform and methanol. DISCUSSION This paper describes the preparation of a novel monoclonal antibody that recognizes fatty streaks in human atherosclerotic aorta. This antibody was selected by reactivity to homogenates of atheroma using ELISA and to atheromatous plaques in aortic strips using an in vitro binding assay. The antibody also recognized atherosclerotic lesions of WHHL aorta in vivo. The antigen is a lipophilic compound, based on the effective extraction from WHHL rabbit aortae by use of organic solvents. The antigen was purified by repetitive HPLC to a single spot on two-dimensional TLC. From extensive spectrometric analyses the purified antigen was identified as PC. Other phospholipids and neutral lipids were inactive. By reverse phase HPLC, the purified antigen was shown to contain several antigenic molecular species of PC. One major antigenic species was confirmed to be sn-1-palmitoyl-2-linoleoyl PC, by comparison with authentic PC and by use of liquid chromatography-mass spectrometry. Judging from the reactivity of the antibody to various molecular species of PC and PC analogs, it was concluded that the choline head group is necessary for antigen recognition and that at least one PUFA is also required.
It is intriguing that the monoclonal antibody that recognizes PC binds to atherosclerotic lesions in in vivo and in vitro binding assays, despite PC, a major component of cellular membranes, having a ubiquitous distribution in whole animal tissues. It is possible that the microenvironments of PC molecules in normal aorta and atherosclerotic lesions are different. The current data indicate that PC mixed with neutral lipids such as cholesterol was highly reactive with the antibody, although the neutral lipids themselves were not antigenic. Fractionation of aortic homogenates by density gradient centrifugation showed that fractions rich in both phospholipids and neutral lipids were antigenic, and, furthermore, addition of neutral lipids to the PC-rich fraction from normal aorta markedly increased its antigenicity. From these observations, it is proposed that the monoclonal antibody ASH1a/256C is likely to recognize particular conformations or packing structures of PC molecules that are formed in the presence of high concentrations of neutral lipids. In atherosclerotic lesions there are a number of foam cells that accumulate neutral lipids as cytoplasmic and lysosomal droplets (1)(2)(3). Immunohistochemical studies showed that the ASH1a/256C antigen present in atherosclerotic lesions of WHHL aorta was preferentially found in areas rich in small lipid droplets but not in areas rich in oil-red O-positive foam cells, suggesting that the lipid droplets in foam cells are not the putative antigenic PC-neutral lipid complexes. It is known that lipid droplets in the extracellular space are smaller in size than those in foam cells (8,29,30). Smaller lipid droplets contain mainly free cholesterol rather than CE (31)(32)(33)(34)(35), whereas intracellular lipid droplets consist mainly of cholesteryl oleate, which forms liquid crystal structures (6). It is possible that the lipids accumulated in the extracellular space may form certain types of phospholipid-neutral lipid mixed structures. Chao et al. (30) reported that in rabbit atherosclerotic lesions the lipid droplets deposited in the extracellular space were enriched with cholesterol and sphingomyelin. The small lipid droplets accumulated extracellularly may be liposome-like multilamellar vesicles consisting of phospholipids and unesterified cholesterol (31,32). It has been shown that cell death either by necrosis or by apoptosis is frequently seen in atherosclerotic lesions (9,33,34). Lysosomal hydrolysis of CE in foam cells during the development of atherosclerosis increases the intracellular free cholesterol:phospholipid ratio, which causes damage to the cells (35)(36)(37). When lipid-laden foam cells die during necrosis, the cytosolic lipid droplets are released into extracellular spaces. Lipid droplets may interact with phospholipids derived from fragmented membranes to form a new complex structure. In the extracellular space, the molar ratio between free cholesterol and phospholipids changes during the development of atherosclerosis (37). It is thought that free cholesterol-derived cell death may produce extracellular deposits of lipid droplets that are rich in free cholesterol. When foam cells derived from J774 murine macrophages in culture were maintained for a week, the cells that eventually died left traces of cellular materials such as fragmented membranes, attached focal adhesions, and small lipid droplets.
The scenario suggested above is also supported by our recent experiments showing that ASH1a/256C reacts to the small lipid droplets left after the foam cells die in culture.² The possibility that antigenic PC-neutral lipid complexes form without the lipids first being accumulated in macrophages and smooth muscle cells cannot be ruled out. From a series of extensive electron microscopic studies, Guyton and co-workers (2,3,7,8,29) proposed that free cholesterol-rich particles in the extracellular space could be formed without prior accumulation of lipids in foam cells. This group has shown that extracellular lipid vesicles accumulate in early lesions prior to the appearance of lipid-laden foam cells.

A number of monoclonal antibodies that recognize atherosclerotic materials have been prepared by many investigators; however, few of them have succeeded in identifying their antigenic materials. An anti-oxidized LDL monoclonal antibody, FOH1a/DLH3, that recognizes foam cells was previously obtained (15). Its antigen was identified as oxidized products of PC including 9-CHO PC (16). The specificity of ASH1a/256C is clearly different from that of FOH1a/DLH3: the former does not bind to OxPC or oxidized LDL, and the latter does not recognize native PC species. Another monoclonal antibody recognizing atherosclerotic lesions prepared in a previous study, EMR1a/212D, specifically stained extracellular regions of atherosclerotic intima from WHHL rabbits in immunohistochemical studies (18). That antibody was shown to recognize rabbit vitronectin (13), and, using it, accumulation of subtypes of vitronectin with small molecular masses was demonstrated (14). The antibody reported in the present study is unique in that it recognizes unusual structures formed by complexes of common lipids. Further study is needed to understand the physical properties of the putative antigenic PC-neutral lipid complex in the lesions. Finally, this antibody can bind to atherosclerotic lesions in vivo; thus, applications for immuno-diagnosis and drug delivery systems may be possible in the future.
A 117 Line 2D Digital Image Correlation Code Written in MATLAB : Digital Image Correlation (DIC) has become a popular tool in many fields to determine the displacements and deformations experienced by an object from images captured of the object. Although there are several publications which explain DIC in its entirety while still catering to newcomers to the concept, these publications neglect to discuss how the theory presented is implemented in practice. This gap in literature, which this paper aims to address, makes it di ffi cult to gain a working knowledge of DIC, which is necessary in order to contribute towards its development. The paper attempts to address this by presenting the theory of a 2D, subset-based DIC framework that is predominantly consistent with state-of-the-art techniques, and discussing its implementation as a modular MATLAB code. The correlation aspect of this code is validated, showing that it performs on par with well-established DIC algorithms and thus is su ffi ciently reliable for practical use. This paper, therefore, serves as an educational resource to bridge the gap between the theory of DIC and its practical implementation. Furthermore, although the code is designed as an educational resource, its validation combined with its modularity makes it attractive as a starting point to develop the capabilities of DIC. Introduction Digital image correlation (DIC) determines the displacements and deformations at multiple points spanning the surface of an object (full-field displacements and deformations) from images captured of the object.It is type of a full-field, non-contact optical technique and these techniques are categorised as either interferometric or non-interferometric.The interferometric techniques, such as Electronic Speckle Pattern Interferometry and Moiré Interferometry, require a coherent light source and need to be isolated from vibrations [1].As such, their utilisation is in the confines of a laboratory.In contrast, non-interferometric techniques, DIC and the grid method require simple incoherent light and are more robust with regards to ambient vibrations and light variations [2].Thus, non-interferometric techniques are more attractive due to their less stringent requirements and are mostly used in open literature.DIC allows for a more straightforward setup compared to the grid method as it only requires a random, irregular pattern on the surface of the object instead of a regular grid. These advantages of DIC over other full-field, non-contact optical techniques, along with the decreasing cost and increasing performance of digital cameras, has led to widespread use of DIC in various fields.Some applications of DIC include: (i) performing human pulse monitoring [3,4]; (ii) analysing the stick-slip behaviour of tyre tread [5]; (iii) determining the mechanical properties of biological tissue [6][7][8]; (iv) in situ health monitoring of structures and components [9][10][11]; (v) analysing vibration of components [12,13]; and (vi) remote sensing applications [14][15][16][17].However, DIC has received the most attention, and thus development, for applications in experimental solid mechanics.As such, this paper will predominantly focus on DIC in the context of experimental solid mechanics applications. 
In the field of experimental solid mechanics, measuring the displacement and deformation experienced by a specimen, as a result of an applied load, is essential to quantify its mechanical properties.As such, DIC is advantageous for three reasons: Firstly, its full-field nature allows more complex constitutive equations to be used to determine more than one material property at a time, using methods such as the virtual fields method [18][19][20] and the finite element model updating method [21].Secondly, the non-contact nature of DIC avoids altering mechanical properties of the materials being tested, such as in the case of determining the material properties of biological tissue [6][7][8] and hyper-elastic materials [22].Lastly, DIC allows the specimen to be exposed to harsh environments, such as high-temperature applications, while still being able to take measurements, provided the specimen is visible [23]. When DIC was first introduced by Peters and Ranson in 1982 [24], it used a simple cross-correlation criterion with a zero-order shape function (SF) and could not account for the deformation of the specimen or variations in ambient light.Between 1983 and 1989, Sutton and his colleagues improved the technique by introducing the first-order SF [25], the normalised cross-correlation criterion which is more robust against light variations [26], the Newton-Raphson (NR) optimisation method [27] and bi-cubic b-spline interpolation [28].The two-dimensional (2D) DIC technique was extended to three dimensions (3D or stereovision DIC) in 1993 by Luo et al. [29] and to digital volume correlation (DVC) in 1999 by Bay et al. [30] using X-Ray tomography-computed images. The most significant contributions to the current state-of-the-art DIC technique, as identified by Pan [2], occurred during the 21st century.In 2000, Schreier et al. [31] proved that bi-quintic b-spline interpolation is the best interpolation method for accurate sub-pixel displacements.In the same year, Lu and Cary [32] introduced the second-order SF to account for more complex deformations.In 2004, Baker and Matthews [33] proposed the inverse compositional Gauss-Newton (IC-GN) optimisation method using the sum of squared difference correlation criterion which is more efficient than the NR method.However, Tong showed in 2005 [34] that the zero-mean normalised sum of squared difference (ZNSSD) correlation criterion is the most reliable and so Pan et al. [35] adapted the IC-GN method to use the ZNSSD criterion in 2013.Finally, Gao et al. [36] introduced the second-order SF to the IC-GN method in 2015.The IC-GN method is considered to be the state-of-the-art optimisation method because it has been shown to be theoretically equivalent to the NR method [33] while offering improved accuracy, robustness to noise and computational efficiency in practice [37]. The DIC process is complicated, comprising of several intricate elements, including correlation, camera calibration, transformation of displacements between the device and real-world coordinates and strain computation.Successful application of DIC requires an understanding of all these elements and thus newcomers to the field need to overcome a difficult learning curve.To this end, there are several papers which give a comprehensive breakdown of the theory involved in the DIC process, such as the papers by Pan et al. [35], Gao et al. [36] and Blaber et al. 
[38]. However, in order to contribute towards the development of DIC, a deep understanding of the DIC process and its elements is required. It is incredibly time-consuming to gain this working knowledge due to a lack of publications that directly bridge the gap between the theory and its implementation in code. More specifically, papers either do not provide code that details the implementation of the theory in practice [35,36] or the code that they provide is too complex to be beneficial as a learning resource [38].

This paper aims to bridge the gap between the theory and implementation of DIC. It does this by firstly presenting the theory for a 2D, subset-based DIC framework that is predominantly consistent with current state-of-the-art practices. Thereafter the implementation of the theory of the framework as the provided 117 line MATLAB code is discussed. Lastly the correlation aspect of the code is validated using the DIC Challenge image sets documented by Reu et al. [39]. More specifically, its results are discussed in parallel with those obtained using the commercial software package by LaVision (DaVis) and the open-source software Ncorr [38], or with the results documented in the DIC Challenge paper [39], in order to draw conclusions.

The framework, referred to as the ADIC2D framework, is implemented using MATLAB because its simple syntax does not distract the reader from the mathematics of the code. Additionally, its built-in functions are used to simplify the code and improve its efficiency. The code is modular, allowing readers to progressively build up their understanding of the code so that recognising the connection between the theory and code is straightforward. Moreover, this modularity allows for rapid adaptation of the code thereby encouraging readers to develop the capabilities of DIC.

Framework Theory

DIC consists of four processes: calibration, correlation, displacement transformation and strain computation. Calibration involves determining the parameters of the camera model which relates the location of a point on an object in the real world to the location of the corresponding point in an image taken of the object. Correlation calculates how portions of the object, captured in the image set, displace throughout the image set. Displacement transformation then uses the parameters determined by calibration to transform the pixel displacements determined by correlation to metric displacements in the real world. Finally strain computation determines the strain fields experienced by the specimen from the displacement fields.

Calibration

Calibration determines the parameters of the camera model. ADIC2D uses the pinhole camera model to transform the location of a point in the real world to the idealised location of the point in the image. Then, a radial distortion model is used to relate the idealised location of this point to its actual distorted location, as illustrated in Figure 1.
Homogeneous Coordinates

The pinhole camera model works with homogeneous coordinates as these allow rotation, translation, scaling and perspective projection to be applied using matrix multiplication. An n-element vector, which represents a point in n-dimensional space, is converted to homogeneous coordinates by appending a scaling variable of unity to the end of the vector. Converting back from homogeneous coordinates involves dividing each element of the vector by the last element, the scaling variable, before removing the last element. Homogeneous coordinate vectors are indicated by underlining the variable name. For more information on homogeneous coordinates, refer to the work of Bloomenthal and Rokne [40].

Pinhole Camera Model

The pinhole camera model relates the location of a point in the world coordinate system (CS) to its corresponding idealised location in the sensor CS. The 3D world CS is defined such that its x-y plane is coincident with the surface of the specimen under consideration: 2D DIC is limited to determining displacements that occur within this x-y plane. The 2D sensor CS is defined such that its x-y plane is coincident with the plane of the charge-coupled device which captures light rays incident upon its surface as an image.
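To make this bookkeeping concrete, the short MATLAB sketch below (illustrative values only; it is not part of the published ADIC2D code) performs the round trip between Cartesian and homogeneous coordinates described above.

```matlab
% Round trip between Cartesian and homogeneous coordinates (illustrative).
xw  = [12.5; -3.0; 0.0];        % a point on the specimen surface in the world CS (mm)
xwh = [xw; 1];                  % to homogeneous coordinates: append a scaling element of 1

% Homogeneous vectors represent the same point up to an arbitrary scale,
% which is removed when converting back.
ywh = 2.7 * xwh;                % arbitrary rescaling of the homogeneous vector
yw  = ywh(1:end-1) / ywh(end);  % divide by the last element, then drop it

assert(norm(yw - xw) < 1e-12)   % the original point is recovered
```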
Let the homogeneous coordinates in the world and sensor CS be xw = xw ŷw ẑw 1 , respectively.Note that the circumflex indicates that the coordinates are ideal (undistorted).The pinhole camera model is given as [41]: where matrices V and K contain the extrinsic and intrinsic camera parameters respectively.The extrinsic camera parameters define a rotation matrix R and a translation vector T which define the position and orientation of the world CS relative to the position and orientation of the camera.Thus, the extrinsic camera parameters change if the relative position or orientation between the specimen and camera change. In contrast, the intrinsic camera parameters remain unchanged because they are only dependent on the camera system.The parameters ξ x and ξ y perform scaling from metric units to units of pixels.This paper uses millimetres as the metric units.The parameters c x and c y apply translation such that the origin of the sensor CS is at the top left of the image as shown in Figure 1.The parameter c s converts from an orthogonal CS to a skewed sensor CS.Here, c s = 0 since an orthogonal sensor CS is assumed.The parameter α is an arbitrary scaling variable of the homogeneous coordinates which is factored out.For more information on the pinhole camera model refer to the work of Zhang [41] and Heikkila et al. [42]. Radial Distortion Model According to Tsai [43] and Wei et al. [44], the difference between the ideal and actual image can be well accounted for by using only a radial distortion model.Radial distortion is caused by the lens system having different magnification levels depending on where the light ray passes through the lenses.The image experiences either an increase (pincushion distortion) or decrease (barrel distortion) in magnification with increasing distance from the optical axis.The radial distortion model requires that xs be converted to normalised ideal image coordinates, xn = xn ŷn T , using the inverse of the intrinsic parameter matrix as This equation includes a matrix to convert from homogeneous coordinates to Cartesian coordinates. xn is related to the normalised, distorted image coordinates, x n = x n y n T , as [41] x where κ 1 and κ 2 are the unit-less radial distortion parameters that quantify the severity of the distortion. x n is converted to distorted coordinates in the distorted sensor CS, x = x y T , as 2.1.4.Calibration Process Calibration determines the extrinsic, intrinsic and radial distortion parameters using images taken of a calibration plate.A calibration plate is an object with a flat surface having a high contrast regular pattern which contains distinctive, point-like features called calibration targets (CTs).It is used to define a set of 3D coordinates in the world CS and a corresponding set of distorted, 2D coordinates in the distorted sensor CS. The 3D coordinates of these CTs in the world CS are predefined.In fact, they lie on the x-y plane of the world CS and define its position and orientation.The set of corresponding distorted, 2D coordinates in the sensor CS can be determined by locating the CTs in an image taken of the calibration plate.These two sets of 3D and 2D coordinates are used to solve for the parameters of the camera model, which describe the relationship between the two.This is done in two steps. The first step determines initial estimates for the extrinsic and intrinsic camera parameters using the closed form solution method proposed by Zhang [41].The initial estimate of the radial distortion parameters is set to zero. 
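As a concrete, hedged illustration of the camera model described above, the sketch below maps a single world-CS point to distorted sensor coordinates. All parameter values (R, T, K, the distortion coefficients) are invented for illustration; only the structure, pinhole projection followed by normalisation, radial distortion and a return to sensor coordinates, follows the text.

```matlab
% Pinhole projection plus radial distortion for one world-CS point (assumed values).
xw   = [10; 5; 0];                        % point on the specimen surface, world CS (mm)
R    = eye(3);   T = [0; 0; 500];         % extrinsics: camera 500 mm in front of the specimen
K    = [2000 0 640; 0 2000 512; 0 0 1];   % intrinsics: scaling and principal point, c_s = 0
kap1 = -0.2;     kap2 = 0.05;             % radial distortion parameters

xs_h = K * [R T] * [xw; 1];               % pinhole model, Equation (1) (homogeneous)
xs   = xs_h(1:2) / xs_h(3);               % de-homogenise: ideal (undistorted) pixel coords

xn   = K \ [xs; 1];   xn = xn(1:2);       % normalised ideal image coordinates (inverse of K)
r2   = sum(xn.^2);                        % squared radial distance from the optical axis
xnd  = xn * (1 + kap1*r2 + kap2*r2^2);    % radial distortion, Equation (3)
xd_h = K * [xnd; 1];                      % back to the distorted sensor CS
xd   = xd_h(1:2) / xd_h(3)                % distorted pixel coordinates
```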
The second step works with two sets of CTs in the distorted sensor CS: the true CTs, x_true = [x_true, y_true]^T, obtained directly from the calibration images, and the calculated CTs, obtained by transforming the known CTs of the world CS to the distorted sensor CS using the camera model and the current estimate of the calibration parameters. The difference between the true and calculated CTs is quantified as the total projection error, E_proj, summed over the L calibration images and the M CTs per calibration image. The second step uses iterative non-linear least-squares optimisation to solve for the calibration parameters which minimise E_proj. Note that multiple calibration images are used in order to form an over-determined system of equations. This makes the calibration process less sensitive to noise inherent in the images. For more information on the calibration process refer to the work of Zhang [41] and Heikkila et al. [42].

The last process in calibration corrects T for the thickness of the calibration plate, ρ, such that the x-y plane of the world CS is coincident with the surface of the specimen under consideration. The corrected translation vector, T_spec, that replaces T in Equation (1) is determined by Equation (6), where T and R are the translation vector and rotation matrix determined by the above calibration process.

Correlation

Correlation considers two images: a reference image, F, representing the specimen at time t = 0, and a deformed image, G, representing the specimen at time t = 1. F is broken up into subsets which are groups of neighbouring pixels. Conceptually, correlation attempts to determine how a reference subset (f) must displace and deform such that it matches a corresponding subset, the investigated subset (g), in G. In practice, however, f remains unchanged while its pixel centre positions (hereafter referred to as pixel positions) are displaced and deformed according to W, a predefined SF, resulting in the query points of the investigated subset. The investigated subset is obtained by sampling the deformed image at these query points. To better understand this, some details of correlation need to be explained.

Correlation operates in the distorted sensor CS, as illustrated in Figure 2: f's centre position, x_o = [x_o, y_o]^T, is displaced by u and v in the x- and y-directions, respectively, to obtain g's centre position, x_d = [x_d, y_d]^T. The ith pixel position of f, given by x_i = [x_i, y_i]^T, is based on x_o and the distance from x_o to x_i, Δx_i = [Δx_i, Δy_i]^T.
Similarly, the corresponding ith query point of g is based on x_o and the distance from x_o to the query point (Equation (8)). This distance is defined relative to x_o because x_d is unknown prior to correlation. u and v are a special case of Δx_i and Δy_i for the pixel at the centre of the investigated subset. The query-point offsets are determined using W, which modifies Δx_i according to a given displacement and deformation quantified by the shape function parameters (SFPs), P. Each pixel of the investigated subset, g_i, is populated by sampling the light intensity of the deformed image at the corresponding query point. However, images are discrete and so interpolation must be used to obtain these light intensities of G at non-integer locations. As such, F and G are treated as functions which return the light intensity at a location in the image. For G this involves interpolation. The pixels of f and g are populated by sampling these functions (Equation (10)). The similarity between f and g is quantified by the correlation criterion. Correlation aims to find the SFPs which define an investigated subset which closely matches the reference subset.

Correlation Criterion

The two most popular types are the ZNSSD and zero-mean normalised cross-correlation (ZNCC) criteria, which are robust against offset and scaling changes in light intensity. The ZNSSD criterion, which has a range of {C_ZNSSD ∈ R | 0 ≤ C_ZNSSD ≤ 4}, where 0 indicates a perfect match, is calculated as in Equation (11), where I is the number of pixels contained within a subset and the normalisation functions of subsets f and g are the square roots of the sums of the squared deviations of f_i and g_i from their respective subset means. Similarly, the ZNCC criterion has a range of -1 to 1, where 1 indicates a perfect match. Pan et al. [45] proved that these two criteria are related through Equation (13). The more computationally efficient ZNSSD criterion is evaluated within ADIC2D; however, it is reported as the ZNCC coefficient, using Equation (13), because its range is more intuitive. For more information on correlation criteria refer to the work of Pan et al. [45].
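Since the criterion equations themselves are not reproduced above, the sketch below spells out the standard zero-mean normalised forms that the description implies and checks the relation between them reported by Pan et al. [45]; the subset data are synthetic.

```matlab
% ZNSSD and ZNCC for two synthetic subsets, and the relation between them.
rng(1);
f = rand(21);                        % reference subset light intensities
g = 0.8*f + 0.1 + 0.02*randn(21);    % scaled, offset and noisy investigated subset

fb = mean(f(:));               gb = mean(g(:));
fn = sqrt(sum((f(:)-fb).^2));  gn = sqrt(sum((g(:)-gb).^2));   % normalisation terms

Cznssd = sum(((f(:)-fb)./fn - (g(:)-gb)./gn).^2);   % 0 indicates a perfect match
Czncc  = sum((f(:)-fb).*(g(:)-gb)) / (fn*gn);       % 1 indicates a perfect match

fprintf('ZNSSD = %.4f, ZNCC = %.4f, 1 - ZNSSD/2 = %.4f\n', Cznssd, Czncc, 1 - Cznssd/2)
```

The last two printed values agree, which is why the cheaper ZNSSD criterion can be minimised while the more intuitive ZNCC coefficient is reported.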
Shape Function

The most common SFs are the zero (W_SF0), first (W_SF1) and second-order (W_SF2) SFs, expressed as in Equation (14) [32], where u and v represent the displacement of x_o in the x- and y-directions respectively, and their derivatives (subscript x and y) define the deformation with respect to the reference subset. Specifically, u_x, u_xx, v_y and v_yy represent elongation while u_y, v_x, u_yy, v_xx, u_xy and v_xy represent shearing of the subset. Higher order SFs, containing higher order displacement derivatives, allow for more complex deformation as shown in Figure 3. This enables higher order SFs to more reliably track subsets in complex displacement fields. The elements of P, for each SF order, are stored as in Equation (15).

Interpolation

Interpolation determines the value at a query point in an image by fitting an equation to the surrounding light intensity data and evaluating the equation at that point. Polynomial interpolation and b-spline interpolation, shown in Figure 4 for the one-dimensional case, are the most popular types for DIC. Polynomial interpolation fits a local polynomial equation of order n to a window of data of size n + 1, as shown in grey in Figure 4b for cubic polynomial interpolation. The resulting interpolation equation is a piecewise polynomial where only the central portion of each local polynomial equation is used. The interpolation equation is C0 and C1 continuous for linear and cubic polynomial interpolation, respectively. Refer to the work of Keys [46] for more information on cubic polynomial interpolation. In contrast, b-spline interpolation builds up an interpolation equation from locally supported basis functions. More specifically, a basis function is defined at each data point and the coefficients of all these basis functions are determined simultaneously from the data. This is done such that the summation of the basis functions forms the interpolation equation, as shown in Figure 4c. For cubic b-spline, the interpolation equation is C2 continuous. Refer to the work of Hou et al. [47] for an in-depth discussion of bi-cubic b-spline interpolation. The interpolation method should be as exact as possible in order for correlation to determine sub-pixel displacements reliably and efficiently because interpolation is the most time consuming part of correlation for iterative, sub-pixel DIC [48].
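To tie the SF and interpolation together, the following sketch warps a subset's pixel offsets with a first-order SF and samples a synthetic deformed image at the resulting query points using MATLAB's griddedInterpolant with the spline option (the same function the ADIC2D implementation uses). The image, subset location and SFP values are all assumed.

```matlab
% Warp subset pixel offsets with a first-order SF and sample the deformed image.
G  = conv2(rand(200), ones(5)/25, 'same');              % synthetic deformed image
Gi = griddedInterpolant({1:200, 1:200}, G, 'spline');   % spline interpolant of G

SubSize = 21;   half = (SubSize-1)/2;
[dy, dx] = ndgrid(-half:half, -half:half);              % pixel offsets from the subset centre
dxi = [dx(:), dy(:)];                                   % one row per subset pixel

xo = [100, 100];                                        % reference subset centre (px)
P  = [0.40, 1e-3, 0, -0.25, 0, 2e-3];                   % [u ux uy v vx vy], assumed values

% First-order SF: warped offset = [1+ux, uy; vx, 1+vy]*dxi' + [u; v]
warped = ([1+P(2), P(3); P(5), 1+P(6)] * dxi')' + [P(1), P(4)];
xi     = xo + warped;                                   % query points in the deformed image

g = Gi(xi(:,2), xi(:,1));                               % investigated subset light intensities
```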
Gaussian Filtering

High order interpolation methods, such as bi-cubic b-spline interpolation, are sensitive to high frequency noise contained in the images [49]. A Gaussian low-pass filter is used to attenuate the high frequency noise of each image of the image set in order to reduce the bias of the displacement results caused by the interpolation method. Gaussian filtering convolves a 2D Gaussian point-spread function with the image. The Gaussian function consists of a window size, β (in pixels), and standard deviation, σ_g, to determine a weighted average light intensity at each pixel position in the filtered image from a window of pixels in the unfiltered image. The Gaussian point-spread function is scaled such that it sums to 1. Although interpolation is only required for G, all the images of the image set (including F) need to be filtered such that the light intensity patterns of the subsets, considered by the correlation criterion, are directly comparable. Although the variance of the displacement results is independent of the interpolation method, it is dependent on the image detail, which is reduced by smoothing [50]. Therefore β and σ_g should be chosen to reduce bias while not significantly increasing variance. For more information on Gaussian filtering refer to Pan's work [49].

Optimisation Method

The optimisation problem aims to minimise the correlation criterion (Equation (11)) by using the IC-GN method to iteratively solve for the optimal SFPs. An illustration of this process is shown in Figure 5. Substituting Equation (10) into Equation (11) results in an expression in terms of F and G. In addition, Equation (11) is modified to include an iterative improvement estimate, ΔP. Normally, iterative updating uses the forward additive implementation in which both ΔP and P are applied to the investigated subset as P + ΔP. However, for the inverse compositional implementation ΔP is applied to the reference subset and the current estimate of P is applied to the investigated subset. Thus, the objective function is given by Equation (16). Taking the first-order Taylor series expansion of Equation (16) in terms of ΔP gives Equation (17), where ∇f_i is the light intensity gradient of f and ∂W_i/∂P is the Jacobian of the SF at each pixel position; the expressions for ∂W_i/∂P for the zero, first and second-order SFs are given in [32,33]. Setting Equation (17) to zero and taking the derivative with respect to ΔP gives the first-order, least-squares solution. Rearranging to make ΔP the subject of the equation yields Equation (19), where H is the Hessian given by Equation (20) and the remaining terms, within the summation, of Equation (19) form the Jacobian, J.

Stopping Criterion

Iterations stop once the change in the SFPs, ‖ΔP‖, falls below a specified threshold referred to as the stopping criterion value [35]. The expressions for ‖ΔP‖ for the different SF orders are given in [36] and involve the furthest distance from x_o within the subset.
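Returning to Equations (17) to (20), the quantities that do not depend on the SFPs can be computed once per subset. The sketch below does this for a first-order SF on a synthetic, Gaussian pre-filtered reference image; the subset location, filter settings and image are assumed, and imgaussfilt requires the Image Processing Toolbox.

```matlab
% Pre-computation for the IC-GN method with a first-order SF (assumed data).
F = conv2(rand(200), ones(5)/25, 'same');      % synthetic reference image
F = imgaussfilt(F, 0.6, 'FilterSize', 5);      % Gaussian low-pass filter (sigma_g, beta)

[Fx, Fy] = gradient(F);                        % light intensity gradients of F
SubSize  = 21;   half = (SubSize-1)/2;   xo = [100, 100];
rows = xo(2)-half : xo(2)+half;   cols = xo(1)-half : xo(1)+half;

[dy, dx] = ndgrid(-half:half, -half:half);     % pixel offsets within the subset
fx = Fx(rows, cols);   fy = Fy(rows, cols);    % gradients over the reference subset

% For P = [u ux uy v vx vy], grad(f_i)*dW_i/dP has the entries
% [fx, fx*dx, fx*dy, fy, fy*dx, fy*dy]; stacking them over all pixels gives J.
J = [fx(:), fx(:).*dx(:), fx(:).*dy(:), fy(:), fy(:).*dx(:), fy(:).*dy(:)];
H = J' * J;                                    % 6x6 Hessian of Equation (20), constant over iterations
```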
H is independent of the SFPs and remains constant during iterations.Thus, Equation ( 20) can be pre-computed before iterations begin. Note that since ∆P is applied to the reference subset, each iteration solves for a set of SFPs which if applied to the reference subset would improve the correlation criterion.However, instead of applying ∆P to the reference subset it is used to improve the estimate of the SFPs of the investigated subset. More specifically, the updated SFPs of the investigated subset, P update , are obtained by composing the inverted iterative improvement, ∆P, with the current estimate, P, as where ω is a function which populates a square matrix with the values of the SFPs as [36] ω where A 1 through A 18 are and The optimisation method is computationally efficient because before iterations begin the following are computed: (i) H and its inverse; (ii) the interpolation coefficients of G; and (iii) the image gradients of F using the Prewitt gradient operator.Each iteration step involves evaluating W (Equation ( 14)) using the current estimate of P to obtain ∆x i , which is used by Equation (8) to compute x i , interpolating G at x i in order to compute g, g and g and finally computing ∆P using Equation (19).For each iteration P is updated using Equation (21).Iterations continue until the stopping criterion deems that P is a solution.The correlation coefficient is then computed using Equation (11) substituted into Equation (13) and u and v are obtained from the SFPs. Displacement Transformation Displacement transformation maps u and v from the distorted sensor CS to the world CS.First, the position of the investigated subset, x d , is determined as An exact analytical solution for the inverse of Equation ( 3) does not exist because it requires determining the roots of a polynomial of degree greater than four [51].As such distortion is removed from the reference and investigated subset positions using non-linear, least-squares optimisation. The resulting undistorted sensor coordinates of the subset before, xo = xo ŷo T , and after deformation, xd = xd ŷd T , are transformed to the world CS using the inverse of the pinhole camera model as The corrected translation vector determined by Equation ( 6) is used in Equation ( 25 Strain Computation Strains are computed from the gradients of the displacements determined using Equation (26).A method of smoothing displacements before differentiation is recommended because these displacements contain noise which is amplified by differentiation.The method of point-wise least-squares proposed by Pan et al. [52] fits a planar surface to a window of displacement data using linear, least-squares optimisation with the subset of interest located at the centre of the window.The resulting equation for the planar surface is differentiated to determine the displacement gradients for the subset of interest.This is done for each subset and these displacement gradients are used to calculate the strains. Framework Implementation The ADIC2D framework, provided in Appendix A, is called from the command prompt as "ProcData = ADIC2D(FileNames, Mask, GaussFilt, StepSize, SubSize, SubShape, SFOrder, RefStrat, StopCritVal, WorldCTs, ImgCTs, rho)" requiring input variables as defined in Table 1 and providing an output variable as a structured array containing data for each analysed image d and subset q as detailed in Table 2. 
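A hypothetical invocation consistent with the call signature above is shown below. It assumes the Appendix A code is on the MATLAB path and that the listed image files exist; the argument values, the 'Square' string for SubShape and the [standard deviation, window size] ordering of GaussFilt are illustrative assumptions. Setting WorldCTs, ImgCTs and rho to zero skips the coordinate-system transformation, as is done in the validation section.

```matlab
% Hypothetical ADIC2D call (requires the Appendix A code and an image set on disk).
FileNames  = {'img_000.tif', 'img_001.tif', 'img_002.tif'};   % image set, all the same size
Mask       = true(1024, 1280);        % logical matrix, same size as the images: analyse everything
GaussFilt  = [0.6, 5];                % Gaussian filter: standard deviation and window size (px)
StepSize   = 10;   SubSize = 31;      % subset spacing and subset size (px)
SubShape   = 'Square';  SFOrder = 1;  % square subsets with a first-order SF
RefStrat   = 0;   StopCritVal = 1e-4; % absolute reference strategy, stopping criterion value
WorldCTs   = 0;   ImgCTs = 0;  rho = 0;  % skip calibration / CS transformation

ProcData = ADIC2D(FileNames, Mask, GaussFilt, StepSize, SubSize, SubShape, ...
                  SFOrder, RefStrat, StopCritVal, WorldCTs, ImgCTs, rho);

u = ProcData(3).P(1,:);   v = ProcData(3).P(7,:);   % pixel displacements for the third image
```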
ADIC2D Function ADIC2D is the main function and is outlined in Table 3.Its purpose is to set up the DIC problem and call the appropriate subroutines.ADIC2D defines variables on a per image and subset basis to allow for complete flexibility in assigning Xos, SubSize, SubShape and SFOrder, i.e., on a per subset basis.Although ADIC2D is capable of this, it assigns the same SubSize, SubShape and SFOrder to each subset (in line 8 based on the inputs) since this is the most common use case.Output variables are pre-assigned in line 8 to allow for the collection of input data used and efficient storage of computed variables.Note that the SFPs are stored in a vector P which corresponds to the size of the second-order SFP vector in Equation (15).Thus, the second-order SFPs of P, not used by the specified SF order, remain zero. Table 1.Description of the required input variables for the ADIC2D framework. Variable Variable Description FileNames Cell array of character vectors containing the image file names of the image set d. All images need to be the same size. Mask Logical matrix, which is the same size as the images, indicating which pixels should not be analysed during correlation. WorldCTs Location of CTs in the world CS defined according to MATLAB's estimateCameraParameters function. ImgCTs Location of CTs in the sensor CS defined according to MATLAB's estimateCameraParameters function.rho Calibration plate thickness in millimetres. Table 2. Accessing the output variables for image d (contained in ProcData(d)) and subset number q. ImgName Image name. ImgFilt(b) Standard deviation (b = 1) and window size (b = 2) for the Gaussian filter respectively in pixels. Xos(b,q) Reference subset position in the distorted sensor CS (b = 1 for x o and b = 2 for y o ). Xow(b,q) Reference subset position in the world CS (b = 1 for xo w and b = 2 for ŷo w ).P(b,q) SFPs (b = 1 for u and b = 7 for v). Uw(b,q) Displacement in the world CS (b = 1 for ûw and b = 2 for vw ). Iter(q) Number of iterations until stopping criterion is satisfied (maximum of 100 iterations). Line Numbers Task Performed Lines 2-4 Compute image names, number of images and size of the first image; Lines 5-6 Create regularly spaced reference subset positions, Xos; Line 7 Remove subsets containing invalid pixels which are defined by Mask; Line 8 Pre-assign ProcData structure; Line 9 Call subroutine ImgCorr to perform image correlation; Line 10 Call subroutine CSTrans to perform transformation from the distorted sensor CS to the world CS; ADIC2D calls the subroutine ImgCorr to perform the image correlation as presented above.ImgCorr's input variables are n (the total number of images in the set), the pre-assigned variables in ProcData, FileNames, RefStrat and StopCritVal.The output variables are P, C, Iter and StopVal which are stored in ProcData.The computed SFPs are then passed to CSTrans to transform displacements to the world CS.CSTrans's input variables are n, ProcData, WorldCTs, ImgCTs and rho.The output variables are Xow, Uw and MATLAB's CamParams (containing the intrinsic, extrinsic, and radial distortion parameters) which are stored in ProcData.Note that within the subroutines ProcData is shortened to PD. The presented framework assumes a constant, regularly spaced Xos defined using StepSize and SubSize.Subsets which contain pixels that Mask indicates should not be analysed are removed. 
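The last two sentences can be pictured with a short sketch that lays out a regular grid of subset centres from StepSize and SubSize and prunes it with Mask. The image size, masked region and the decision to test only the subset-centre pixel (rather than the full subset footprint) are assumptions made for brevity.

```matlab
% Regular grid of reference subset positions, pruned by a mask (illustrative).
imSize   = [1024, 1280];                 % [rows, cols] of the images
SubSize  = 31;   StepSize = 10;   half = (SubSize-1)/2;

xs = (1+half) : StepSize : (imSize(2)-half);     % candidate centre x positions (columns)
ys = (1+half) : StepSize : (imSize(1)-half);     % candidate centre y positions (rows)
[Xg, Yg] = meshgrid(xs, ys);
Xos = [Xg(:)'; Yg(:)'];                          % 2 x q matrix of subset centres

Mask = true(imSize);   Mask(1:200, 1:300) = false;   % region excluded from the analysis

valid = Mask(sub2ind(imSize, Xos(2,:), Xos(1,:)));   % keep subsets whose centre pixel is valid
Xos   = Xos(:, valid);
```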
Correlation Implementation Correlation is performed using five subroutines: (i) ImgCorr, which performs the correlation on an image bases, i.e., between F and G; (ii) SubCorr, which performs the correlation on a subset basis; (iii) SFExpressions, which defines anonymous functions based on the SF order; (iv) SubShapeExtract, which determines input data for SubCorr based on the subset shape, size and position; and (v) PCM, which determines initial estimates for the displacement SFPs. SubCorr's input variables are the interpolation coefficients, f i , ∇f i , SubSize, SFOrder, Xos, ∆x i , initial estimates for P and StopCritVal.Note that throughout Section 3.2 variables with subscript i refer to the full set of this variable for a subset (i.e., ∇f i refers to ∇f i ∀ i ∈ I).SubCorr's output variables are P, C, Iter and StopVal.SFExpressions's input variable is SFOrder with outputs as anonymous functions to compute W, ∇f i ∂W i ∂P and P .Moreover, two functions are included to compute ω (given in Equation ( 22)) and to extract the SFPs from ω. The framework considers two subset shapes, square and circular, which are commonly employed in subset based DIC.For circular subsets SubSize defines the diameter of the subset.SubShapeExtract is used to determine f i , ∇f i and ∆x i for a subset based on the inputs SubSize, SubShape, Xos, F, ∇F and SubExtract.∇F is the light intensity gradient of the entire reference image and SubExtract is an anonymous function, defined in line 2 of ImgCorr, which extracts a square subset from a matrix based on the position and size of the subset.PCM returns u and v based on inputs F, G, SubSize, Xos (passed as two vectors as required by arrayfun) and SubExtract. Furthermore, two reference strategies are considered, namely, an absolute and an incremental strategy.The absolute strategy defines the first image as F (i.e., FileNames(1)), whereas the incremental strategy defines the previous image as F (FileNames(d-1)).The incremental strategy handles large deformations between images more reliably; however, if total displacements are required, it suffers from accumulative errors.The variable RefStrat is set to 0 or 1 for the absolute or incremental strategy respectively.Alternate reference strategies may be set by modifying line 8 in ImgCorr. Moreover, ADIC2D considers the zero, first and second-order SFs, as outlined in Section 2.2.2.Set SFOrder to 0, 1 or 2 for the zero, first and second-order SFs, respectively. ImgCorr Function ImgCorr uses two nested for-loops as summarised in Table 4.The outer loop cycles through the image set, whereas the inner loop cycles through the subsets.ImgCorr reads the appropriate image pairs F and G from the image set, depending on the chosen reference strategy, and filters both using MATLAB's imgaussfilt function.Alternate image filters can be employed by modifying line 5 and 9. Bi-cubic b-spline interpolation coefficients are computed using MATLAB's griddedInterpolant function.Alternate interpolation methods can be set by either modifying line 6 by replacing 'spline' with 'linear' or 'cubic', or replacing it with an alternate interpolation algorithm, such as MATLAB's spapi function for higher order spline interpolation.griddedInterpolant was used for computational efficiency. For an incremental strategy, Xos is displaced using the displacement SFPs from the previous correlation run, to track the same light intensity patterns within the reference subsets.These displacements SFPs are rounded, as suggested by Zhou et al. 
[53], such that the pixel positions of the reference subset have integer values and avoid the need for interpolating the reference subset.Correlation of each subset requires SFP initial estimates.For the first run, ADIC2D uses a Phase Correlation Method (PCM) to determine initial estimates.Subsequent correlation runs use the previous correlation run's SFPs as an initial estimate.However, PCM is used for every run in the incremental strategy, as it allows for better stability if large displacements are expected.PCM can be used between each run by replacing line 15 with line 13.Moreover, alternate initial estimate strategies can be implemented by changing line 13.The PCM algorithm is discussed in Section 3.2.5. The inner loop correlates each subset by using SubShapeExtract to determine the data for a subset while SubCorr uses this data to perform correlation of the subset.The loop can be implemented using parallel processing to reduce computation time by changing line 18 to a parfor-loop.However, during a parfor-loop the outputs of SubCorr cannot be saved directly to a structure variable.It is for this reason that they are saved to the temporary storage variables (initiated in line 17) during the loop and assigned to PD thereafter. SubShapeExtract Function SubShapeExtract returns the data sets of f i , ∇f i and ∆x i for a subset based on its intended shape, size and position, as outlined in Table 5.Note that these output data sets are in the form of vertical vectors.Alternative subset shapes can be added to this function provided they produce the same output data sets. For a square subset SubExtract is used to extract the appropriate elements from the input matrices (F and ∇F) which correspond to the pixels of the subset.∆x i is determined in line 7 according to SubSize. For circular subsets the same process is followed.This results in temporary data sets f i , ∇f i and ∆x i which correspond to a square subset of size equal to the diameter of the intended circular subset.A mask identifying which elements, of these data sets, fall within the radius of the intended circular subset is computed in line 13 using ∆x i .This mask is used to extract the appropriate elements from the temporary data sets of the square subset resulting in the appropriate data sets for the circular subset.Compute ∆x i using SubSize; Line 13 Determine mask of elements that fall within the circular subset; Line 14-16 Use mask to extract appropriate data for circular subset; Line 17 end switch SubCorr Function SubCorr is at the heart of ADIC2D and performs the subset-based correlation, as summarised in Table 6.It follows the theoretical framework presented in Section 2.2.Initialise flag ← 0, iter ← 0 and ∆P ← 1 ; Line 7 while flag = 0, do Line 8 Compute ∆x i Equation ( 14), using estimates of P Line 9 Compute g using interpolation coefficients; Line 10 Compute normalisation values g and g; Line 11 Compute ∆P using Equation ( 23 SFExpressions Function SFExpressions returns five anonymous functions based on the SF order specified and is outlined in Table 7. W, defines Equation ( 14), dFdWdP defines ∇f i ∂W i ∂P , SFPVec2Mat defines Equation ( 22), Mat2SFPVec extracts P from SFPVec2Mat and StopCrit defines Equation (23).Additional SFs, such as higher order polynomials, can be added after line 20 provided they are consistent with the outputs of SFExpressions. 
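The update that SubCorr applies each iteration (Equations (21) and (22)) can be sketched for a first-order SF by writing the warp as a 3x3 matrix, so that composing the current estimate with the inverted improvement is a single matrix operation. The SFP values below are assumed, and the convergence check is a simplified stand-in for the stopping criterion of Equation (23).

```matlab
% Inverse-compositional update of the SFPs for a first-order SF (assumed values).
Wmat = @(p) [1+p(2), p(3),   p(1);
             p(5),   1+p(6), p(4);
             0,      0,      1   ];            % 3x3 warp matrix for P = [u ux uy v vx vy]

P  = [0.40, 1e-3, 0, -0.25, 0, 2e-3];          % current estimate of the investigated subset's SFPs
dP = [0.05, 2e-4, -1e-4, 0.02, 1e-4, 0];       % iterative improvement found for the reference subset

Wnew = Wmat(P) / Wmat(dP);                     % compose W(P) with the inverse of W(dP)
Pnew = [Wnew(1,3), Wnew(1,1)-1, Wnew(1,2), ...
        Wnew(2,3), Wnew(2,1),   Wnew(2,2)-1]   % extract the updated SFPs

if norm(dP) < 1e-4, disp('stopping criterion satisfied'); end   % simplified check
```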
Line Numbers Task Performed Line 2 switch SFOrder Line 3-8 case SFOrder = 0, do assign functions for zero-order SF; Line 9-14 case SFOrder = 1, do assign functions for first-order SF; Line 15-20 case SFOrder = 2, do assign functions for second-order SF; Line 21 end switch 3.2.5.PCM Function PCM performs correlation using the zero-order SF in the frequency domain to obtain initial displacement estimates.The algorithm is summarised in Table 8.PCM is efficient; however, it is limited to integer pixel displacements and can only use square subsets.Moreover, PCM is only capable of determining a reliable initial estimate if the displacement is less than half of SubSize.For more information on PCM, refer to the work of Foroosh et al. [54]. Line Numbers Task Performed Line 2 Compute normalised cross-power spectrum in the frequency domain; Line 3 Convert back to spatial domain; Line 4 Find index of the maximum correlation coefficient; Line 5 Compute index vector which relates indices of the correlation coefficient matrix to the displacements they correspond to; Line 6-7 Obtain displacements using index of the maximum correlation coefficient; CSTrans Function CSTrans performs CS and displacement transformations from the distorted sensor CS to the world CS as outlined in Table 9. CSTrans uses MATLAB's image calibration toolbox to determine calibration parameters according to Section 2.1 which are used to perform the transformations detailed in Section 2.3.Note that the extrinsic calibration parameters, extracted in line 8, are based on the final set of CTs in the sensor CS (ImgCTs(:,:,end)).Alternate calibration algorithms may be implemented by replacing lines 13 and 14. Validation ADIC2D was validated using the 2D DIC Challenge image sets that were created using TexGen [55] or Fourier methods [56] as documented by Reu et al. [39].Homoscedastic Gaussian noise was applied to each image set to simulate camera noise.As stated by Reu et al. [39], "image noise is specified as one standard deviation of the grey level applied independently to each pixel".The respective noise levels are listed in Table 10.Samples 1-3 contain rigid body translations to assess the performance of the ADIC2D framework in the "ultimate error regime" [57].This type of analysis aims to highlight the errors caused by contrast and noise, in the absence of complex displacement fields, interacting with the numerical processes of correlation [39,58].Sample 14 contains a sinusoidal displacement field with increasing frequency.This type of analysis aims to highlight the compromise between noise suppression and spatial resolution (SR) [39].CS transformations were not performed during the validation process, by setting WorldCTs = 0, ImageCTs = 0 and rho = 0.A stopping criterion of StopCritVal = 10 −4 , limited to 100 iterations per subset (line 12 in SubCorr), was used.The Gaussian image filter was set to FiltSize = 5 as this offers the best compromise between reducing bias and avoiding increasing variance [49].FiltSigma is specified on a per sample basis. Quantifying Error Bias, variance, root-mean square error (RMSE) and SR were used to quantify errors.Bias refers to the mean of the absolute error (MAE u , MAE v ) between the correlated and true values, while variance refers to the standard deviation of the absolute error (σ u , σ v ).These are computed as where u calc q and v calc q are the correlated, u true q and v true q the true displacements in the x-and y-direction respectively and Q is total number of subsets.Bornert et al. 
[57] introduced a RMSE which summarises the full-field displacement errors as a single number. Strain bias, variance and RMSE are calculated in the same way. SR is defined as the highest frequency of a sinusoidal displacement field at which the code is capable of capturing the peak displacements and strains within 95% and 90% of the true values, respectively [39]. SR is reported as the period such that lower values indicate better performance across all error metrics.

Samples 1-3

Samples 1-3 were correlated using ADIC2D, Justin Blaber's Ncorr (version 1.2) and LaVision's DaVis (version 8.4). Ncorr was used as it is well-established [59,60] and its correlation process is similar in theory to ADIC2D with the exception that it uses bi-quintic b-spline interpolation and the reliability-guided displacement tracking (RGDT) strategy proposed by Pan [61]. DaVis uses bi-sextic b-spline interpolation and was included to compare ADIC2D to a commercial software package. The following procedure is used to determine the error metrics for each sample on a per algorithm and per subset basis: (i) the displacement errors in the x- and y-direction were computed for each
measurement points was used to compute strain data. Table 14 shows the displacement and strain results in the x-direction, for the last image of the set, that were analysed using the MATLAB code provided by the DIC Challenge [39]. Codes A and G published in [39], which exhibit the best noise suppression (variance) and SR, respectively, are included for comparison. Subsets of size 25, 31, 51 and 71 pixels had 43,700, 43,700, 42,600 and 40,600 subsets per image. ADIC2D is capable of dealing with high frequency displacement fields. For a subset size of 71 pixels ADIC2D performs similarly to code A (within 0.1% difference) with the exception of an improved SR (51%) and higher maximum bias (5%). As the subset size decreases so does the RMSE, bias and SR while variance increases. Figure 6 illustrates this increase in noise suppression with increase in subset size. For SubSize = 25 pixels, the error metrics increase (except strain SR, as illustrated in Figure 7b), indicating a limitation of ADIC2D with regards to noise suppression and SR for smaller subset sizes (as shown in Figure 7a). The strain SR does not increase because strain experiences more spatial filtering than displacement for the reasons outlined in the DIC Challenge paper [39]. Although ADIC2D cannot achieve results similar to code G, the results in Table 14 indicate that the noise suppression and SR are within the range of established DIC codes evaluated in the DIC Challenge paper [39].

Discussion

The code was designed with modularity in mind. Firstly, it is modular in that each main task is performed by a separate subroutine such that the reader can progressively build up their understanding of the overall code by considering individual subroutines. This is particularly evident for the correlation subroutines which separate correlation such that the logistics of preparing data for correlation (ImgCorr), the core correlation operations (SubCorr), the effect of different SF orders on correlation (SFExpressions), how data sets are prepared for different subset shapes (SubShapeExtract) and determining initial estimates of the SFPs (PCM) can be considered separately.

Secondly, the code allows for changing of the SF order, subset shape, interpolation method and Gaussian filtering parameters. Although the effect of these on the displacement and strain results is well documented [31,49,62], this code allows the reader to easily investigate the effect of these in a practical manner.

The effect of the subset shape is subtle. The displacement determined at the centre of a subset is essentially the average of the displacement experienced by the light intensity pattern contained within the subset. However, the farther a pixel is from the subset centre, the less representative its displacement is of the displacement occurring at the subset centre. As such, circular subsets have become favoured since their pixels are evenly distributed around the subset centre in a radially symmetric manner. However, since the trade-off is not significant and square subsets are simpler from a mathematical and programming viewpoint, many DIC algorithms still use square subsets.

Thirdly, the code is modular in that it allows the subset size, subset shape and SF order to be assigned on a per subset and per image basis. Traditionally, DIC makes use of a single subset size, subset shape and SF order for all subsets across all images. However, there has been a growing interest in the field of DIC to create algorithms which adaptively assign these parameters such that they are the most appropriate for the displacement and speckle pattern that the subset is attempting to track, resulting in more reliable displacements being computed. The modularity of ADIC2D means it is straightforward to couple it with such an adaptive strategy.
In order to keep the code simple, two aspects were neglected that would otherwise have made the correlation aspect of ADIC2D consistent with the current state of the art as identified by Pan [2]. Firstly, ADIC2D makes use of bi-cubic b-spline interpolation, as opposed to the recommended bi-quintic b-spline interpolation. As stated in the work of Bornert et al. [57], the errors in the "ultimate error regime" are reduced by increasing the degree of the interpolation method, particularly for smaller subsets. This is reflected in Table 13, which shows that although the error metrics of ADIC2D are better than those of Ncorr for larger subsets, the opposite is true for the subset size of 21 pixels.

Secondly, ADIC2D does not use the RGDT strategy. While ADIC2D uses the optimal SFPs of a subset for the previous image pair as an initial estimate of the SFPs for the current image pair, RGDT only does this for the subset with the best correlation coefficient for the previous image pair. It then uses the SFPs of this subset, for the current image pair, as initial estimates to correlate its neighbouring subsets. It then repeatedly identifies the subset with the best correlation coefficient which has neighbouring subsets that have not yet been correlated, and uses its SFPs to correlate those neighbours. This is repeated until all the subsets have been correlated for the current image pair (a simplified sketch of this seeding strategy is given at the end of this section). Thus, ADIC2D is susceptible to propagating spurious SFPs of a subset through the image set, which the RGDT strategy would have avoided. The effect of this is reflected in the results of Table 11, which show how ADIC2D struggles to perform as consistently as Ncorr in the presence of contrast changes in the image set.

Despite this, ADIC2D performs on par with established DIC algorithms. More specifically, (i) it is capable of dealing with contrast changes as shown in Table 11; (ii) it handles high levels of noise within the images sufficiently well, as reflected in the results of Table 12; (iii) although the displacement results of smaller subsets suffer due to its lower-order bi-cubic b-spline interpolation, its interpolation method is sufficient, achieving results similar to Ncorr as shown in Table 13; and (iv) it has noise suppression and spatial resolution characteristics that fall within the range of those reported for established DIC algorithms, as shown in Figure 7. Thus, ADIC2D can be considered sufficiently reliable for use in the field of experimental solid mechanics. However, ADIC2D is not limited to this field since its modularity means it can be easily adapted for various applications and specific use cases. Furthermore, the validation of ADIC2D coupled with its modularity makes it attractive not only as a learning resource, but also as a starting point for developing the capabilities of DIC.
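The RGDT strategy described above amounts to a priority-queue traversal of the subset grid. The following Python sketch is our own simplified illustration of that idea and is not part of ADIC2D; correlate is a hypothetical routine returning a correlation coefficient (assumed lower-is-better, e.g. ZNSSD) and the optimised SFPs for a subset, given an initial SFP estimate, and neighbours maps each subset to its adjacent subsets.

```python
import heapq

def rgdt_sweep(neighbours, correlate, seed_id):
    """Reliability-guided ordering: always extend the solution from the
    already-correlated subset with the best (lowest) correlation coefficient."""
    coeff, sfp = correlate(seed_id, None)
    results = {seed_id: (coeff, sfp)}
    frontier = [(coeff, seed_id)]                  # min-heap keyed on the coefficient
    while frontier:
        _, sid = heapq.heappop(frontier)
        for nb in neighbours[sid]:
            if nb in results:
                continue
            c, p = correlate(nb, results[sid][1])  # seed the neighbour with this subset's SFPs
            results[nb] = (c, p)
            heapq.heappush(frontier, (c, nb))
    return results
```

Because each subset is seeded from its most reliably correlated neighbour, a single poorly converged subset is far less likely to propagate spurious SFPs across the grid.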
Conclusions

This paper presents the theory of a 2D, subset-based DIC framework (ADIC2D) that is predominantly consistent with current state-of-the-art techniques, and illustrates its numerical implementation in 117 lines of MATLAB code. ADIC2D allows for complete flexibility in assigning correlation attributes on a per-image and per-subset basis. ADIC2D includes Gaussian image filtering parameters, square or circular subset shape selection, zero-, first- and second-order SFs, reference image strategy selection, interpolation method flexibility, image calibration using MATLAB's image calibration toolbox, and displacement transformation. Moreover, the presented code is modular. Sections of the framework can readily be changed, enabling the reader to gain a better understanding of DIC as well as to contribute to the development of new DIC algorithm capabilities. Validation of ADIC2D shows that it performs on par with established DIC algorithms.

Nomenclature (excerpt):
Translation vector corrected for the thickness of the calibration plate
u: displacement in the x-direction in the distorted sensor coordinate system
u_true: true displacement of the subset in the x-direction in the distorted sensor coordinate system
u_calc: calculated displacement of the subset in the x-direction in the distorted sensor coordinate system
u_w: undistorted metric displacement in the x-direction in the world coordinate system
x_true = [x_true, y_true]^T: true location of the calibration targets in the distorted sensor coordinate system
u_x, u_y, u_xx, u_xy, u_yy: derivatives of the x-direction displacement
V: extrinsic camera parameters
v: displacement in the y-direction in the distorted sensor coordinate system
v_true: true displacement of the subset in the y-direction in the distorted sensor coordinate system
v_calc: calculated displacement of the subset in the y-direction in the distorted sensor coordinate system
v_w: undistorted metric displacement in the y-direction in the world coordinate system
v_x, v_y, v_xx, v_xy, v_yy: derivatives of the y-direction displacement
W: shape function
dW_i/dP: Jacobian of the shape function in terms of the shape function parameters for pixel i
x_w = [x_w, y_w, z_w]^T: ideal world coordinates
x_s = [x_s, y_s]^T

2.2.3. Interpolation
Interpolation determines the value at a query point in an image by fitting an equation to the surrounding light intensity data and evaluating the equation at that point. Polynomial interpolation and b-spline interpolation, shown in Figure 4 for the one-dimensional case, are the most popular types for DIC. Polynomial interpolation fits a local polynomial equation of a given order to a window of data whose size is one more than that order, as shown in grey in Figure 4(b) for cubic polynomial interpolation. The resulting interpolation equation is a piecewise polynomial where only the central portion of each local polynomial equation is used. The interpolation equation is C0 and C1 continuous for linear and cubic polynomial interpolation, respectively. Refer to the work of Keys [46] for more information on cubic polynomial interpolation.

Figure and table captions:
Figure 1. Schematic diagram illustrating how the camera model is comprised of the pinhole camera model and radial distortion model.
Figure 2. Schematic diagram illustrating how the pixel positions of the reference and investigated subsets are related to one another within the distorted sensor CS. (f's centre position, x_o = [x_o, y_o]^T, has been displaced by u and v in the x- and y-direction, respectively, to obtain g's centre position, x_d = [x_d, y_d]^T. The ith pixel position of f, given by x_i = [x_i, y_i]^T, is based on x_o.)
Figure 3. Schematic diagram illustrating the allowable deformation of a subset for various SF orders.
Figure 4. Graphical representation of the interpolation equations for: (a) linear polynomial; (b) cubic polynomial; and (c) cubic b-spline interpolation methods.
The resulting positions of the reference subset, x_o^w = [x_o^w, y_o^w]^T, and the investigated subset, x_d^w = [x_d^w, y_d^w]^T, in the world CS are used to determine the metric displacement experienced by the subset, [u_w, v_w]^T.
Figure 6. Comparison of the x-displacement for Sample 14 for a subset size of: (a) 25 pixels; and (b) 51 pixels.
Figure 7. Comparison of the noise suppression (variance) and SR for various subset sizes in terms of: (a) displacement; and (b) strain results.
Table 14. Error analysis in the x-direction for the last image of the image set [39].
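The sub-pixel interpolation discussed in Section 2.2.3 above can be illustrated with a short Python sketch (our own example using SciPy, not ADIC2D's MATLAB implementation), which fits a bi-cubic spline to the intensity data and evaluates it at non-integer pixel positions:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic "speckle" image: intensity values on an integer pixel grid
rng = np.random.default_rng(0)
img = rng.random((64, 64))
rows, cols = np.arange(img.shape[0]), np.arange(img.shape[1])

# Bi-cubic spline fitted to the light intensity data (kx = ky = 3)
spline = RectBivariateSpline(rows, cols, img, kx=3, ky=3)

# Query intensities at sub-pixel positions, e.g. a deformed subset's pixel locations
y_query = np.array([10.25, 10.75, 11.40])
x_query = np.array([20.10, 20.60, 21.35])
print(spline.ev(y_query, x_query))
```

Replacing the spline degree with 5 would correspond to the bi-quintic interpolation recommended in the state-of-the-art discussion above, at additional computational cost.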
14,033.6
2020-09-08T00:00:00.000
[ "Computer Science", "Engineering" ]
A preliminary study of mecA gene expression and methicillin resistance in staphylococci isolated from the human oral cavity

Introduction: Staphylococci are common human commensals that acquire methicillin resistance via the mecA gene. Methicillin resistance in staphylococci from various clinical sources has been assessed using the cefoxitin disc diffusion test (CDDT) and PCR detection of the mecA gene. However, oral staphylococci have been studied less frequently than those from other clinical sources, and there are no previous studies on methicillin resistance in oral staphylococci in Sri Lanka. Objective: This study aimed to demonstrate methicillin resistance in staphylococci isolated from the human oral cavity using CDDT and PCR detection of the mecA gene. Materials and methods: Twenty-five staphylococcal isolates from the human oral cavity were tested for methicillin resistance by CDDT and for the mecA gene by PCR. Further studies with a larger sample are needed to determine the incidence of staphylococci in the oral cavity and their antimicrobial sensitivity.

Introduction

Staphylococci are important human commensals inhabiting the skin, nasal mucosa and the oral mucosa. [1][2][3][4] Staphylococci are notorious opportunistic pathogens that are responsible for the majority of hospital-acquired infections worldwide. 5,6 While S. aureus is the leading pathogenic species, coagulase-negative staphylococci (CoNS) have also emerged as pathogens, especially in immunocompromised patients and patients with prosthetic devices. 7 Several investigations support the fact that staphylococci are human oral colonizers both in health and disease. For instance, staphylococci have been abundantly isolated from the subgingival biofilm collected from patients with chronic periodontitis as well as from healthy individuals. [8][9][10] A recent analysis of subgingival biofilm collected from patients with chronic periodontitis as well as from healthy individuals identified both S. aureus and CoNS, including S. auricularis, S. epidermidis and S. saprophyticus, as oral microorganisms. 8 Staphylococci are also found to colonize removable partial dentures along with Candida and enteric bacilli. 11,12

Development of antimicrobial resistance in staphylococci is a serious challenge faced by clinicians. 5,6 In addition to the production of beta-lactamase, staphylococci acquire antimicrobial resistance through the mecA gene, which encodes penicillin-binding protein 2a (PBP-2a), responsible for methicillin resistance. 13,14 PBP-2a has a lower affinity for β-lactam antibiotics than the typical penicillin-binding protein 2 (PBP2) produced by methicillin-susceptible S. aureus (MSSA), as its active site is blocked from binding β-lactams. 15,16 Consequently, staphylococci which carry the chromosomally located mecA gene are considered highly virulent due to their resistance to all β-lactam antibiotics. Methicillin resistance in staphylococci is often detected by antimicrobial disc diffusion or broth dilution methods, whereas detection of the mecA gene by PCR is a rapid and far more reliable technique. [17][18][19][20] Although methicillin resistance in staphylococci isolated from different human sources has been studied extensively 5 , there are very few studies on methicillin resistance in oral staphylococci. 9 The purpose of this study therefore was to investigate methicillin resistance in oral staphylococci using the CDDT and PCR detection of the mecA gene.

Isolates of staphylococci

A total of 25 Staphylococcus isolates collected from the oral cavities of patients attending the Dental (Teaching) Hospital, Peradeniya, Sri Lanka were used for the study.
These isolates were collected in an earlier study during which the patients' informed consent was obtained to use such organisms for future research. The isolates included 9 samples collected by subgingival plaque sampling and 16 samples collected using the concentrated oral rinse technique. 8 None of the samples were identifiable by the personal details of the patient. Ethical approval was obtained from the ethics review committee of the Faculty of Dental Sciences, University of Peradeniya. Freeze-stored bacterial samples were recovered by culture on blood agar at 37 °C for 24-48 h. The identity of the bacteria was reconfirmed by cultural characteristics on blood agar, Gram stain, catalase and coagulase tests.

Cefoxitin disc diffusion test (CDDT)

The antibiotic sensitivity of the staphylococci was tested using the CDDT following the Clinical and Laboratory Standards Institute (CLSI) guidelines. 21 Standard suspensions of bacteria (0.5 McFarland) were prepared and inoculated onto Mueller-Hinton agar (MHA) plates. After placing cefoxitin 30 µg discs in the centre of the plates, they were incubated at 37 °C for 18-24 h and the zones of inhibition were measured. For coagulase-positive staphylococci (S. aureus), an inhibition zone diameter of ≤ 21 mm was considered methicillin resistant and ≥ 22 mm was considered methicillin sensitive, whereas for CoNS an inhibition zone diameter of ≤ 24 mm was considered methicillin resistant and ≥ 25 mm was considered methicillin sensitive (CLSI M100 21 ).

Extraction of DNA

The species characterization and demonstration of the mecA gene in the genomic DNA of staphylococci were performed according to a method described previously, with minor modifications. 22 All 25 staphylococcal isolates and the standard isolates of MSSA (ATCC 25923) and MRSA (ATCC 43300) were subjected to DNA extraction. Bacterial DNA was extracted from fresh bacterial cultures grown overnight on blood agar medium. From the fresh bacterial cultures, 3 to 4 loopfuls were harvested into 10 mM TE buffer (10 mM Tris-HCl, pH 7.5 / 25 mM EDTA) and subsequently washed twice with 10 mM TE buffer. The resultant pellet after centrifugation was suspended in 0.6 ml of 10 mM TE buffer, followed by the addition of 10-20 µl of lysozyme (50 mg/ml) to the cell suspension and incubation at room temperature for 30 min. The suspension was mixed gently after the addition of 20 µl of proteinase K (10 mg/ml) and 60 µl of SDS (10%), and the final suspension was incubated at 50 °C for 1 h. The suspension was then mixed well with 0.6 ml of phenol/chloroform and centrifuged at 13,000 rpm for 15 min. 30 µl of 5 M NaCl was added to the aqueous layer extracted from the centrifuged product. This phenol/chloroform step was repeated once more with 10 min of centrifugation, and the resulting aqueous solution was mixed with two volumes of absolute ethanol and centrifuged at 10,000 rpm for 5 min. The supernatant was discarded, and the pellet was washed with 70% ethanol. Finally, the DNA pellet was dried, dissolved in 50-100 µl of TE buffer and stored at -20 °C. The quality of the DNA was assessed by electrophoresis in a 1% agarose gel.

Species characterization and detection of the mecA gene by multiplex PCR

16S rRNA gene amplification was performed as an internal control using the primers given in Weisburg et al. 23 Accordingly, the FD1 (5'-AGAGTTTGATCCTGGCTCAG-3') and RD1 (5'-AAGGAGGTGATCCAGCC-3') primers were used to amplify the region of the 16S rRNA gene, with an amplicon size of 1500 bp.
For the amplification of the mecA gene, with an amplicon size of 532 bp, PCR was performed using the primers described previously. 17,22 The mecA locus was amplified using the forward and reverse primers 5'-AAAATCGATGGTAAAGGTTGG-3' and 5'-AGTTCTGCAGTACCGGATTTGC-3', respectively. The amplifications were performed in 15 µl reaction volumes, each with 5 µl of Taq mix (2X GoTaq Green® master mix reaction buffer [pH 8.5] with 400 µM each of dATP, dGTP, dTTP and dCTP, and 3 mM MgCl2), 0.5 µl of each primer, 6 ng of template DNA and nuclease-free water. The reactions were carried out in a thermal cycler using the following programme: initial denaturation at 94 °C for 5 min; 35 cycles of denaturation at 94 °C for 1 min, annealing at 55 °C for 1 min and extension at 72 °C for 30 s; and a final extension at 72 °C for 10 min. The amplified PCR products were subsequently run on a 1.5% agarose gel stained with ethidium bromide (1 µg/ml) for confirmation of amplification. Finally, the PCR products were visualized under UV light and photographed.

Confirmation of methicillin resistance by CDDT and PCR

CDDT identified 2 isolates which were methicillin resistant, with zones of inhibition (ZOI) of 19 mm and 17 mm respectively. Both these isolates were recognized as CoNS by the coagulase test. Accordingly, 11% (2/18) of CoNS showed methicillin resistance. The remaining 16 CoNS isolates had ZOI well above 25 mm, and all coagulase-positive staphylococci (presumed S. aureus) isolates had inhibition zones well above 22 mm and were considered methicillin sensitive.

Discussion

Although staphylococci are known to be frequent colonizers of the oral cavity, the incidence of methicillin resistance in oral staphylococci is poorly studied. 1,2,24 Hence, the current study investigated methicillin resistance and the responsible mecA gene in oral staphylococci isolated from a group of Sri Lankan patients. Although very limited, the present study showed that the majority (72%) of oral staphylococci were CoNS, supporting the previous findings of Loberto et al. 8 Only two of the coagulase-negative staphylococci, and none of the coagulase-positive staphylococci, in the present study were methicillin resistant. The present study shows that methicillin resistant staphylococci are found in the oral cavity of patients presenting to the Dental (Teaching) Hospital, Peradeniya, Sri Lanka. A retrospective analysis of data relevant to diagnostic oral microbiology in the UK showed that a small proportion (5%) of S. aureus isolated from oral specimens were MRSA. 2 However, these investigators did not report on the methicillin resistance of CoNS isolates. In contrast, another study that compared oral colonization by opportunistic pathogens, including staphylococci, in elderly Japanese patients with oral cancer and a healthy group showed that a large proportion, 9 of 13 oral S. aureus isolates (69.2%), were MRSA. These investigators also demonstrated that 1 of 9 oral CoNS isolates (11.1%) was methicillin resistant. 25 Data obtained in the current study should be interpreted carefully due to the limited number of samples used in the analysis. Further studies using a larger sample would be beneficial to confirm the incidence of oral staphylococci and their antimicrobial resistance. Both methicillin resistant isolates in the current study were collected from subgingival plaque samples of patients with chronic periodontitis lesions.
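As a compact summary of the cefoxitin breakpoints applied in the results above (CLSI M100, as quoted in the Methods), the following Python sketch, which is our own illustration and not part of the study, classifies an isolate from its zone of inhibition and coagulase status:

```python
def classify_cefoxitin(zoi_mm, coagulase_positive):
    """Classify a staphylococcal isolate from its cefoxitin (30 µg) disc
    diffusion zone of inhibition, using the breakpoints quoted above:
    S. aureus: <= 21 mm resistant, >= 22 mm sensitive;
    CoNS:      <= 24 mm resistant, >= 25 mm sensitive."""
    cutoff = 21 if coagulase_positive else 24
    return "methicillin resistant" if zoi_mm <= cutoff else "methicillin sensitive"

# The two resistant CoNS isolates reported above (ZOI of 19 mm and 17 mm)
for zoi in (19, 17):
    print(zoi, "mm:", classify_cefoxitin(zoi, coagulase_positive=False))
```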
Although some investigators [8][9][10] have isolated staphylococci from the subgingival biofilm collected from patients with chronic periodontitis as well as from healthy individuals, the antimicrobial resistance of those staphylococci has not been adequately studied. Therefore, the detection of methicillin resistant isolates in subgingival plaque samples of patients with chronic periodontitis lesions warrants further investigation of antimicrobial resistance in staphylococci associated with periodontitis lesions. As a phenotypic method for the detection of methicillin resistance in staphylococci, the disc diffusion test was carried out using cefoxitin, which is currently considered the most reliable antibiotic for this purpose. A multiplex PCR assay was used to demonstrate the mecA gene in staphylococci. It has already been suggested that the detection of the mecA gene by PCR offers rapid, simple and accurate identification of methicillin resistance in staphylococci. 19 The agreement between PCR and CDDT in this very limited study is consistent with previous reports that CDDT is in concordance with PCR for the demonstration of the mecA gene. 20 In conclusion, S. aureus and CoNS with or without methicillin resistance may colonize the human oral cavity, as discussed above. Therefore, further studies with an increased sample size are warranted to confirm the exact prevalence of methicillin resistance in oral staphylococci.
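To illustrate why the small sample size limits the conclusions above, the following Python sketch (our own illustration, not part of the study) computes an approximate 95% Wilson score confidence interval for the observed resistance proportion of 2/18 among CoNS:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

low, high = wilson_ci(2, 18)
print(f"2/18 resistant: 95% CI roughly {low:.1%} to {high:.1%}")
```

The wide interval underlines that a much larger sample is needed before the prevalence of methicillin resistance in oral staphylococci can be stated with any precision.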
2,436
2019-04-26T00:00:00.000
[ "Medicine", "Biology" ]
Order-sensitivity and equivariance of scoring functions: The relative performance of competing point forecasts is usually measured in terms of loss or scoring functions. It is widely accepted that these scoring functions should be strictly consistent in the sense that the expected score is minimized by the correctly specified forecast for a certain statistical functional such as the mean, median, or a certain risk measure. Thus, strict consistency opens the way to meaningful forecast comparison, but is also important in regression and M-estimation. Usually, strictly consistent scoring functions for an elicitable functional are not unique. To give guidance on the choice of a scoring function, this paper introduces two additional quality criteria. Order-sensitivity opens the possibility to compare two deliberately misspecified forecasts given that the forecasts are ordered in a certain sense. On the other hand, equivariant scoring functions obey equivariance properties similar to those of the functional at hand, such as translation invariance or positive homogeneity. In our study, we consider scoring functions for popular functionals, putting special emphasis on vector-valued functionals, e.g. the pair (mean, variance) or (Value at Risk, Expected Shortfall).

Introduction

From the cradle to the grave, human life is full of decisions. Due to the inherent nature of time, decisions have to be made today, but at the same time, they are supposed to account for unknown and uncertain future events. However, since these future events cannot be known today, the best thing to do is to base the decisions on predictions for these unknown and uncertain events. The call for and the usage of predictions for future events is ubiquitous and even dates back to ancient times. In those days, dreams, divination, and revelation were considered respected sources for forecasts, with the most prominent example being the Delphic Oracle, which was consulted not only for decisions of private life, but also for strategic political decisions concerning peace and war. With the development of natural sciences, mathematics, and in particular statistics and probability theory, the ancient metaphysical art of making qualitative forecasts turned into a sophisticated discipline of science adopting a quantitative perspective. Subfields such as meteorology, mathematical finance, or even futurology evolved. Acknowledging that forecasts are inherently uncertain, two main questions arise: (i) How good is a forecast in absolute terms? (ii) How good is a forecast in relative terms? While question (i) deals with forecast validation, this paper focuses on some aspects of question (ii), which is concerned with forecast selection, forecast comparison, or forecast ranking. Specifically, we present results on order-sensitivity and equivariance of consistent scoring functions for elicitable functionals. These results may provide guidance for choosing a specific scoring function for forecast comparison within the large class of all consistent scoring functions for an elicitable functional of interest. We adopt the general decision-theoretic framework following Gneiting (2011); cf. Savage (1971); Osband (1985); Lambert, Pennock and Shoham (2008). For some number n ≥ 1, one has observations y_t, t = 1, . . . , n, taking values in an observation domain O, and corresponding point forecasts x_t taking values in an action domain A. The forecasts are assessed by a scoring function S : A × O → R, which is assumed to be negatively oriented, that is, if a forecaster reports the quantity x ∈ A and y ∈ O materializes, she is assigned the penalty S(x, y) ∈ R.
The observations y_t can be real-valued (GDP growth for one year, maximal temperature of one day), vector-valued (wind speed, weight and height of persons), functional-valued (path of the exchange rate Euro-Swiss franc over one day), or also set-valued (area of rain on a given day, area affected by a flood). In this article, we focus on point forecasts that may be vector-valued, which is why we assume A ⊆ R^k for some k ≥ 1 and equip the Borel set A with the Borel σ-algebra. One is typically interested in a certain statistical property of the underlying (conditional) distribution F_t of Y_t. We assume that this property can be expressed in terms of a functional T : F → A such as the mean, a certain quantile, or a risk measure. Examples of vector-valued functionals are the covariance matrix of a multivariate observation or a vector of quantiles at different levels. Common examples of scoring functions are the absolute loss S(x, y) = |x − y|, the squared loss S(x, y) = (x − y)^2 (for A = O = R), or the absolute percentage loss S(x, y) = |(x − y)/y| (for A = O = (0, ∞)). Forecast comparison is done in terms of realized scores, (1/n) Σ_{t=1}^{n} S(x_t, y_t) (1.1). That is, a forecaster is deemed the better the lower her realized score is. However, there is the following caveat: the forecast ranking in terms of realized scores not only depends on the forecasts and the realizations (as it should definitely be the case), but also on the choice of the scoring function. In order to rule out the possibility of manipulating the forecast ranking ex post with the data at hand, it is necessary to specify a certain scoring function before the inspection of the data. A fortiori, for the sake of transparency and in order to encourage truthful forecasts, one ought to disclose the choice of the scoring function to the competing forecasters ex ante. But still, the optimal choice of the scoring function remains an open problem. One can think of two situations: (i) A decision-maker might be aware of their actual economic costs of utilizing misspecified forecasts. In this case, the scoring function should reflect these economic costs. (ii) The actual economic costs might be unclear and the scoring function might be just a tool for forecast ranking. However, the directive is given in terms of the functional T : F → A one is interested in. For situation (i) described above, one should use the readily economically interpretable cost or scoring function. Therefore, the only concern is situation (ii). In this paper, we consider predictions in a one-period setting, thus dropping the index t. This is justified by our objective to understand the properties of scoring functions S which do not change over time, and is common in the literature (Murphy and Daan, 1985; Diebold and Mariano, 1995; Lambert, Pennock and Shoham, 2008; Gneiting, 2011). Assuming the forecasters are homines oeconomici and adopting the rationale of expected utility maximization, given a concrete scoring function S, the most sensible action consists in minimizing the expected score E_F S(x, Y) with respect to the forecast x, where Y follows the distribution F, thus issuing the Bayes act arg min_{x∈A} E_F S(x, Y). Hence, a scoring function should be incentive compatible in that it encourages truthful and honest forecasts. In line with Murphy and Daan (1985) and Gneiting (2011), we make the following definition: a scoring function S : A × O → R is F-consistent for the functional T : F → A if E_F S(T(F), Y) ≤ E_F S(x, Y) for all F ∈ F and all x ∈ A, and strictly F-consistent if, in addition, equality implies x = T(F); a functional is called elicitable if it possesses a strictly F-consistent scoring function.
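As a quick numerical illustration of this definition (our own sketch, not part of the paper), the following Python snippet approximates the expected score over a grid of reports for a skewed sample and confirms that the squared loss is minimised near the mean, whereas the absolute loss is minimised near the median, i.e. the two losses elicit different functionals:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=20_000)   # sample from a skewed distribution F
grid = np.linspace(0.1, 5.0, 500)                     # candidate reports x

# Empirical expected scores for the squared and the absolute loss
sq = np.array([np.mean((x - y) ** 2) for x in grid])
ab = np.array([np.mean(np.abs(x - y)) for x in grid])

print("squared loss minimised at x =", round(grid[sq.argmin()], 3),
      "| sample mean =", round(float(y.mean()), 3))
print("absolute loss minimised at x =", round(grid[ab.argmin()], 3),
      "| sample median =", round(float(np.median(y)), 3))
```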
Clearly, elicitability and consistent scoring functions are naturally linked also to estimation problems, in particular, M-estimation (Huber, 1964;Huber and Ronchetti, 2009) and regression with prominent examples being ordinary least squares, quantile, or expectile regression (Koenker, 2005;Newey and Powell, 1987). The necessity of utilizing strictly consistent scoring functions for meaningful forecast comparison is impressively demonstrated in terms of a simulation study in Gneiting (2011). However, for a given functional T : F → A, there is typically a whole class of strictly consistent scoring functions for it, such as all Bregman functions in case of the mean (Savage, 1971); further examples are given below. Patton (2017) shows that the forecast ranking based on (1.1) may depend on the choice of the strictly consistent scoring function for T in finite samples, and even at the population level if we compare two imperfect forecasts with each other. Therefore, we naturally have a threefold elicitation problem: (i) Is T elicitable? (ii) What is the class of strictly F-consistent scoring functions for T ? (iii) What are distinguished strictly F-consistent scoring functions for T ? Even though the denomination and the synopsis of the described problems under the term 'elicitation problem' are novel, there is a rich strand of literature in mathematical statistics and economics concerned with the threefold elicitation problem. Foremost, one should mention the pioneering work of Osband (1985), establishing a necessary condition for elicitability in terms of convex level sets of the functional, and a necessary representation of strictly consistent scoring functions, known as Osband's principle (Gneiting, 2011). Whereas the necessity of convex level sets holds in broad generality, Lambert (2013) could specify sufficient conditions for elicitability for functionals taking values in a finite set, and Steinwart et al. (2014) showed sufficiency of convex level sets for real-valued functionals satisfying certain regularity conditions. Moments, ratios of moments, quantiles, and expectiles are in general elicitable, whereas other important functionals such as variance, Expected Shortfall or the mode functional are not (Savage, 1971;Osband, 1985;Weber, 2006;Gneiting, 2011;Heinrich, 2014). Concerning subproblem (ii) of the elicitation problem, Savage (1971), Reichelstein and Osband (1984), Saerens (2000), and Banerjee, Guo and Wang (2005) gave characterizations for strictly consistent scoring functions for the mean functional of a one-dimensional random variable in terms of Bregman functions. Strictly consistent scoring functions for quantiles have been characterized by Thomson (1979) and Saerens (2000). Gneiting (2011) provides a characterization of the class of strictly consistent scoring functions for expectiles. The case of vector-valued functionals apart from means of random vectors has been treated substantially less than the one-dimensional case (Osband, 1985;Banerjee, Guo and Wang, 2005;Lambert, Pennock and Shoham, 2008;Frongillo and Kash, 2015a,b;Fissler and Ziegel, 2016). The strict consistency of S only justifies a comparison of two competing forecasts if one of them reports the true functional value. If both of them are misspecified, it is per se not possible to draw a conclusion which forecast is 'closer' to the true functional value by comparing the realized scores. To this end, some notions of order-sensitivity are desirable. 
According to Lambert (2013), we say that a scoring function S is F-order-sensitive for a one-dimensional functional T : F → A ⊆ R if for any F ∈ F with t = T(F) and any x, z ∈ A such that either t ≤ x ≤ z or z ≤ x ≤ t, it holds that E_F S(x, Y) ≤ E_F S(z, Y). This means, if a forecast lies between the true functional value and some other forecast, then issuing the forecast in-between should yield a smaller expected score than issuing the forecast further away. In particular, order-sensitivity implies consistency. Vice versa, under weak regularity conditions on the functional, strict consistency also implies order-sensitivity if the functional is real-valued; see Nau (1985, Proposition 3), Lambert (2013, Proposition 2), Bellini and Bignozzi (2015, Proposition 3.4). This article is dedicated to a thorough investigation of order-sensitive scoring functions for vector-valued functionals, thus contributing to a discussion of subproblem (iii) of the elicitation problem. Furthermore, we investigate to which extent invariance or equivariance properties of elicitable functionals are reflected in their respective consistent scoring functions. Lambert, Pennock and Shoham (2008) introduced a notion of componentwise order-sensitivity for the case of A ⊆ R^k. Friedman (1983) and Nau (1985) considered similar questions in the setting of probabilistic forecasts, coining the term of effectiveness of scoring rules, which can be described as order-sensitivity in terms of a metric. In Section 3, we consider three notions of order-sensitivity in the higher-dimensional setting: metrical order-sensitivity, componentwise order-sensitivity, and order-sensitivity on line segments. We discuss their connections (Lemma 3.5) and give conditions for when such scoring functions exist (Lemma B.2, Propositions 3.7, 3.8, Corollary 3.16) and of what form they are for the most relevant functionals, such as vectors of quantiles (Propositions 3.11, 3.12, Example 3.14), expectiles (Proposition 3.15), ratios of expectations (Propositions 3.6, 3.9, 3.10, 3.17), the pair of mean and variance (Proposition 3.18, Example 3.19), and the pair consisting of Value at Risk and Expected Shortfall (Proposition 3.20, Example 3.21), two important risk measures in banking and insurance. Complementing our results on order-sensitivity, in Section 2, we consider the analytic properties of the expected score x ↦ E_F S(x, Y), x ∈ A ⊆ R^k, for some scoring function S and some distribution F ∈ F. The (strict) consistency of S for some functional T is equivalent to the expected score having a (unique) global minimum at x = T(F). Order-sensitivity ensures monotonicity properties of the expected score. As a technical result, we show that under weak regularity assumptions on T, the expected score of a strictly consistent scoring function has a unique local minimum, which, of course, coincides with the global minimum at x = T(F) (Proposition 2.6). Accompanied by a result on self-calibration (Proposition 2.8), a continuity property of the inverse of the expected score which ensures that the minimum of the expected score is well-separated in the sense of van der Vaart (1998), these two findings may be of interest in their own right in the context of M-estimation (Theorem 2.9). In Section 4, we consider functionals having an invariance or equivariance property such as translation invariance or homogeneity.
It is a natural question whether a functional T that is, for example, translation equivariant has a consistent scoring function that respects this property in the sense that if we evaluate forecast performance of translated predictions and observations, the ranking of predictive performance remains the same as that of the original data. In parametric estimation problems, such a scoring function may allow to translate the data without affecting the estimated parameter values. For one-dimensional functionals, invariance of the scoring function often determines it uniquely up to equivalence while this is not necessarily the case for higher-dimensional func-tionals (Proposition 4.7 and Corollary 4.12). In Appendix A, we gather a list of common assumptions, which were originally introduced in Fissler and Ziegel (2016). Appendix B consists of technical results, while all proofs are of the main part of this paper are deferred to Appendix C. Monotonicity It is appealing that one does not have to specify a topology on F to define mixture-continuity because it suffices to work with the induced Euclidean topology on [0, 1] and on A ⊆ R k . It turns out that mixture-continuity of a functional is strong enough to imply order-sensitivity in the case of one-dimensional functionals (see Nau (1985, Proposition 3), Lambert (2013, Proposition 2), Bellini and Bignozzi (2015, Proposition 3.4)), and desirable monotonicity properties of the expected scores also in higher dimensions (Propositions 2.4 and 2.6). At the same time, numerous functionals of applied relevance are mixture-continuous, and we start by giving examples and a sufficient condition (Proposition 2.2). It is straight forward to see that the ratio of expectations is mixture-continuous. Moreover, by the implicit function theorem, one can verify the mixturecontinuity of quantiles and expectiles directly under appropriate regularity conditions (e.g., in the case of quantiles, all distributions in F should be C 1 with non-vanishing derivatives). Generalizing Bellini and Bignozzi (2015, Proposition 3.4c), we give a sufficient criterion for mixture-continuity in the next proposition. Our version is not restricted to distributions with compact support (however, the image of the functional must be bounded), and we formulate the result for k-dimensional functionals. Similarly to the original proof of Bellini and Bignozzi (2015), a sufficient criterion for the continuity ofS(·, F ) for any F ∈ F is that for all y ∈ O, the score S(x, y) is quasi-convex and continuous in x. 2 Recall that, under appropriate regularity conditions on F, the asymmetric piecewise linear loss S α (x, y) = (1{y ≤ x} − α)(x − y) and the asymmetric piecewise quadratic loss S τ (x, y) = |1{y ≤ x} − τ |(x − y) 2 are strictly consistent scoring functions for the α-quantile and the τ -expectile, respectively, and both, S α as well as S τ , are continuous in their first argument and convex. Hence, Proposition 2.2 yields that both quantiles and expectiles are mixture-continuous. Steinwart et al. (2014) used Osband's principle (Osband, 1985) and the assumption of continuity of T with respect to the total variation distance to show order-sensitivity. Bellini and Bignozzi (2015) showed that the weak continuity of a functional T implies its mixture-continuity. Consequently, one can also derive the order-sensitivity in the framework of Steinwart et al. (2014) directly using only mixture-continuity. 
Lambert (2013) showed that it is a harder requirement to have order-sensitivity if T (F) is discrete. Then both approaches, invoking Osband's principle or using mixture-continuity, do not work because the interior of the image of T is empty. Moreover, mixture-continuity implies that the functional is constant (such that only trivial cases can be considered). Furthermore, it is proven in Lambert (2013) that for a functional T with a discrete image, all strictly consistent scoring functions are order-sensitive if and only if there is one order-sensitive scoring function for T .In particular, there are functionals admitting strictly consistent scoring functions that are not order-sensitive, one such example being the mode functional. 3 Let us turn attention to vector-valued functionals now. To understand the monotonicity properties of the expected score of a mixture-continuous elicitable functional T : F → A ⊆ R k , it is useful to consider paths γ : [0, 1] → A ⊆ R k , γ(λ) = T (λF + (1 − λ)G) for F, G ∈ F. If T is elicitable, a classical result asserts that T necessarily has convex level sets (Gneiting, 2011, Theorem 6). This implies that the level sets of γ can only be closed intervals including the case of singletons and the empty set. This rules out loops and some other possible pathologies of γ. Furthermore, under the assumption that T is identifiable as defined below, one can even show that the path γ is either injective or constant; see Lemma B.1. Definition 2.3 (Identifiability). Let In line with Gneiting (2011, Section 2.4), one can often obtain an identification function as the gradient of a sufficiently smooth scoring function. However, the converse intuition is not so clear -at least in the higher dimensional setting k > 1: Not all strict identification functions can be integrated to a strictly consistent scoring function. They have to satisfy the usual integrability conditions (Königsberger, 2004, p. 185); see also Fissler and Ziegel (2016, Corollary 3.3) and the discussion thereafter. Proposition 2.4. Let F be convex and T : F → A ⊆ R k be mixture-continuous and surjective. Let S : Remark 2.5. (i) Proposition 2.4 remains valid if S is only F-consistent. Then, we merely have that the function [0, 1] λ →S(γ(λ), F ) is decreasing, so the last inequality in Proposition 2.4 is not necessarily strict. (ii) If one assumes in Proposition 2.4 that T is also identifiable, one can use the injectivity of γ implied by Lemma B.1 to see that the function [0, 1] λ →S(γ(λ), F ) is strictly decreasing. Under certain (weak) regularity conditions, the expected scores of a strictly consistent scoring function has no other local minimum apart from the global one at x = T (F ). Proposition 2.6. Let F be convex and T : F → A ⊆ R k be mixture-continuous and surjective. If S : A × O → R is strictly F-consistent for T , then for all F ∈ F the expected scoreS(·, F ): A → R has only one local minimum which is at x = T (F ). Self-calibration With Proposition 2.4 it is possible to prove that, under mild regularity conditions, strictly consistent scoring functions are self-calibrated which turns out to be useful in the context of M-estimation. Definition 2.7 (Self-calibration). A scoring function The notion of self-calibration was introduced by Steinwart (2007) in the context of machine learning. In a preprint version of Steinwart et al. 
(2014), 5 the authors translate this concept to the setting of scoring functions as follows (using our notation): "For self-calibrated S, every δ-approximate minimizer ofS(·, F ), approximates the desired property T (F ) with precision not worse than ε. [. . . ] In some sense order sensitivity is a global and qualitative notion while self-calibration is a local and quantitative notion." In line with this quotation, self-calibration can be considered as the continuity of the inverse of the expected scoreS(·, F ) at the global minimum x = T (F ) -and as such, it is a local property of the inverse. This property ensures that convergence of the expected score to its global minimum implies convergence of the forecast to the true functional value. On the other hand, self-calibration of a scoring function S is equivalent to the fact that the argmin T (F ) of the expected scoreS(·, F ) is a well-separated point of minimum in the sense of van der Vaart (1998, p. 45) -as such being a global property of the expected score itself. That means that for any ε > 0 It is relatively straight forward to see that self-calibration implies strict consistency: In the preprint version of Steinwart et al. (2014) it is shown for k = 1 that order-sensitivity implies self-calibration. The next Proposition shows that the kind of order-sensitivity given by Proposition 2.4 also implies self-calibration for k ≥ 1. Proposition 2.8. Let F be convex, A ⊆ R k be closed, and T : F → A be a surjective and mixture-continuous functional. We end this subsection about self-calibration by demonstrating its applicability in the context of M-estimation. Theorem 2.9. Let S : A × O → R be an F-self-calibrated scoring function for a functional T : F → A ⊆ R k . Then, the following assertion holds for all F ∈ F. The proof of Theorem 2.9 is a direct consequence of van der Vaart (1998, Theorem 5.7). Recall that under some additional regularity conditions, it is also possible to derive a Central Limit Theorem associated to the consistency result established in Theorem 2.9. The rate is driven by the dependence structure of the observations Y 1 , Y 2 , . . .. If they are independent the rate is typically n −1/2 . The form of the scoring function only enters via the asymptotic covariance. For details, we refer the reader to Chapter 5.3 in van der Vaart (1998). A detailed discussion of the asymptotic covariance and related efficiency considerations of the estimator are beyond the scope of this paper. Different notions of order-sensitivity The idea of order-sensitivity is that a forecast lying between the true functional value and some other forecast is also assigned an expected score lying between the two other expected scores. If the action domain is one dimensional, there are only two cases to consider: both forecasts are on the left-hand side of the functional value or on the right-hand side. However, if A ⊆ R k for k ≥ 2, the notion of 'lying between' is ambiguous. Two obvious interpretations for the multidimensional case are the componentwise interpretation and the interpretation that one forecast is the convex combination of the true functional value and the other forecast. Definition 3.1 (Componentwise order-sensitivity). 
A scoring function S : A × O → R is called componentwise F-order-sensitive for a functional T : F → A ⊆ R k , if for all F ∈ F, t = T (F ) and for all x, z ∈ A we have that: Moreover, S is called strictly componentwise F-order-sensitive for T if S is componentwise F-order-sensitive and if x = z in (3.1) implies thatS(x, F ) < S(z, F ). Remark 3.2. In economic terms, a strictly componentwise order-sensitive scoring function rewards Pareto improvements 6 in the sense that improving the prediction performance in one component without deteriorating the prediction ability in the other components results in a lower expected score. 6 The definition of the Pareto principle according to Scott and Marshall (2009): "A principle of welfare economics derived from the writings of Vilfredo Pareto, which states that a legitimate welfare improvement occurs when a particular change makes at least one person better off, without making any other person worse off. A market exchange which affects nobody adversely is considered to be a 'Pareto-improvement' since it leaves one or more persons better off. 'Pareto optimality' is said to exist when the distribution of economic welfare cannot be improved for one individual without reducing that of another." Definition 3.3 (Order-sensitivity on line segments). Let · be the Euclidean norm on R k . A scoring function S : is increasing. If the map ψ is strictly increasing, we call S strictly F-ordersensitive on line segments for T . These two notions of order-sensitivity do not allow for a comparison of any two misspecified forecasts, no matter where they are relative to the true functional value. An intuitive requirement could be 'the closer to the true functional value the smaller the expected score', thus calling for the notion of a metric. Since, for a fixed functional T and some fixed distribution F , we always have a fixed reference point T (F ) and we have the induced vector-space structure of If the assertion does not depend on the choice of p, we shall usually omit the p in the notation. For other choices of A, it would be also interesting to replace the norm by a metric in the following definition. Definition 3.4 (Metrical order-sensitivity). Let If additionally the inequalities in (3.2) are strict, we say that S is strictly metrically F-order-sensitive for T relative to · p . Similarly to (strict) consistency, all three notions of (strict) order-sensitivity are preserved when considering two scoring functions that are equivalent. 7 The notion of componentwise order-sensitivity corresponds almost literally to the notion of accuracy-rewarding scoring functions introduced by Lambert, Pennock and Shoham (2008). Metrically order-sensitivity scoring functions have their counterparts in the field of probabilistic forecasting in effective scoring rules introduced by Friedman (1983) and further investigated by Nau (1985). Actually, the latter paper has also given the inspiration for the notion of ordersensitivity on line segments. It is obvious that any of the three notions of (strict) order-sensitivity implies (strict) consistency. The next lemma formally states this result and gives some logical implications concerning the different notions of order-sensitivity. The proof is standard and therefore omitted. Componentwise order-sensitivity Under restrictive regularity assumptions, Lambert, Pennock and Shoham (2008, Theorem 5) claim that whenever a functional has a componentwise order-sensitive scoring function, the components of the functional must be elicitable. 
Moreover, assuming that the measures in F have finite support, they assert that any componentwise order-sensitive scoring function is the sum of strictly consistent scoring functions for the components. Lemma B.2 shows the first claim under less restrictive smoothness assumptions on the scoring function. For many common examples of functionals, the second claim can be shown relaxing the restrictive condition on F. . . k}, are mixture-continuous and elicitable with strictly F-consistent scoring functions S m : A m × O → R, then they are order-sensitive according to Lambert (2013, Proposition 2) and Bellini and Bignozzi (2015, Proposition 3.4). Therefore, the sum k m=1 S m (x m , y) is strictly componentwise F-order-sensitive for (T 1 , . . . , T k ). More interestingly, one can establish the reverse of the last assertion. Any strictly componentwise ordersensitive scoring function must necessarily be additively separable. In Fissler and Ziegel (2016, Section 4), we established a dichotomy for functionals with elicitable components: In most relevant cases, the functional (the corresponding strict identification function, respectively) satisfies Assumption (V4) therein (e.g., when the functional is a vector of different quantiles and / or different expectiles with the exception of the 1/2-expectile), or it is a vector of ratios of expectations with the same denominator, or it is a combination of both situations. Under some regularity conditions, Fissler and Ziegel (2016, Propositions 4.2 and 4.4) characterize the form of strictly consistent scoring functions for the first two situations, whereas Fissler and Ziegel (2016, Remark 4.5) is concerned with the third situation. For this latter situation, any strictly consistent scoring function must be necessarily additive for the respective blocks of the functional. And for the first situation, Fissler and Ziegel (2016, Proposition 4.2) yields the additive form of S automatically. It remains to consider the case of Fissler and Ziegel (2016, Proposition 4.4), that is, a vector of ratios of expectations with the same denominator. The notion of componentwise order-sensitivity has an appealing interpretation in the sense that it rewards Pareto improvements of the predictions; see Remark 3.2. The results of Lemma B.2 and Proposition 3.6 give a clear understanding of the concept including its limitations to the case of functionals only consisting of elicitable components. Ehm et al. (2016) introduced Murphy diagrams for forecast comparison of quantiles and expectiles. Murphy diagrams have the advantage that forecasts are compared simultaneously with respect to all consistent scoring functions for the respective functional. For many multivariate functionals such as ratios of expectations, the methodology cannot be readily extended because there are no mixture representations available for the class of all consistent scoring functions. Proposition 3.6 shows that when considering only componentwise ordersensitive consistent scoring functions, the situations is different and mixture representations (and hence Murphy diagrams) are readily available for forecast comparison. Metrical order-sensitivity For a real-valued functional T there can be at most one strictly metrically ordersensitive scoring function, up to equivalence. To show this, we use Osband's principle and impose the corresponding regularity conditions. Proposition 3.7. 
Let T : F → A ⊆ R be a surjective, elicitable and identifiable functional with an oriented strict F-identification function and (VS1) (with respect to both scoring functions) hold, then S and S * are equivalent almost everywhere. For the higher-dimensional setting we can show a slightly more limited version of Proposition 3.7. Two scoring functions that are additively separable as in (3.3) and that are strictly metrically order-sensitive for the same functional must necessarily be equivalent. For most practically relevant cases -namely when we consider an p -norm with p ∈ [1, ∞) and when the functional possesses an identification function satisfying Assumption (V4) or that are ratios of expectations with the same denominator -Lemma 3.5, Proposition 3.6 and Fissler and Ziegel (2016, Proposition 4.2) yield that any metrically order-sensitive scoring function -presuming there is one -is additively separable. Hence, for these situations, metrically order-sensitive scoring functions are unique, up to equivalence. Then S * is strictly metrically F-order-sensitive (with respect to the same p -norm as S) if and only if λ 1 = · · · = λ k . Next, we use the derived theoretical results to examine when some popular functionals admit strictly metrically order-sensitive scoring functions, and if so, of what form they are. Ratios of expectations with the same denominator We start with the one-dimensional characterization. Proposition 3.9. Let F be convex and p, q and assume that T is surjective and int(A) = ∅ is convex. Then the following two assertions are true: (i) Any scoring function which is equivalent to , then any scoring function S * : A × O → R, which is strictly metrically F-order-sensitive and satisfies Assumptions (S1) and (VS1), is equivalent to S defined at (3.4) almost everywhere. Now, we turn to the multivariate characterization. and assume that T is surjective and int(A) = ∅. Then, the following assertions are true: (i) Any scoring function which is equivalent to is strictly metrically F-order-sensitive for T with respect to the 2 -norm. , then any scoring function S * : A × O → R, which is strictly metrically F-order-sensitive with respect to the 2 -norm and satisfies Assumptions (S1) and (VS1), is equivalent to S defined at (3.5) almost everywhere. , then there is no scoring function S * : A × O → R which satisfies Assumptions (S1) and (VS1) and which is strictly metrically Forder-sensitive with respect to an p -norm with p ∈ [1, ∞) \ {2}. Savage (1971, Section 5) has already shown that in case of the mean, the squared loss is essentially the only symmetric loss in the sense that it is the only metrically order-sensitive loss for the mean. See also Patton (2017, Section 2.1) for a discussion that symmetry -or metrical order-sensitivity -is not necessary for strict consistency of scoring functions with respect to the mean. Quantiles Since we treat only point-valued functionals in this article, we shall assume that the α-quantile of F is a singleton and identify the set with its unique element (henceforth, we shall refer to this assumption as F having a unique α-quantile). 9 Furthermore, note that assuming the identifiability of the α-quantile with the canonical identification function V α (x, y) = 1{y ≤ x} − α on a class F amounts to assuming that F (q α (F )) = α for all F ∈ F. 10 Proposition 3.11. Let α ∈ (0, 1) and F be a family of distribution functions there is no strictly metrically F-order-sensitive scoring function for T α satisfying Assumption (S1). 
The reasons for the non-existence of a strictly metrically order-sensitive scoring function for the α-quantile are of different nature in the two cases that α = 1/2 and that α = 1/2 in the proof of Proposition 3.11. In both cases, we used Osband's principle to derive a representations of the derivative of the expected score. Assuming that the derivative has the form as stated in Osband's principle, one can directly derive a contradiction for α = 1/2. However, for α = 1/2, this form merely implies that the distributions in F must be symmetric around their medians. This is not contradictory to the form of the gradient derived via Osband's principle, but only to the assumption that F is convex. Dropping this assumption, we can derive the following Lemma. The proof is straight forward from Lemma B.3. Proposition 3.12. Let F be a family of distribution functions on R with unique medians T 1/2 : F → R and finite first moments. If all distributions in F are symmetric around their medians in the sense that for all F ∈ F, x ∈ R, then any scoring function that is equivalent to the absolute loss S : R × R → R, S(x, y) = |x − y|, is strictly metrically F-order-sensitive with respect to the median. 9 Recall that the α-quantile of a distribution F consists of all points 10 Actually, assuming F is convex and rich enough, this holds for any identification function for the α-quantile. Indeed, consider some distribution function F 0 ∈ F and some level α ∈ (0, 1). Fix some As mentioned above, under the conditions of Proposition 3.12, the necessary characterization of strictly consistent scoring functions via Osband's principle is not available. In particular, this means that we cannot use Proposition 3.7. Indeed, if the distributions in F are symmetric around their medians in the sense of (3.6) and under the integrability condition that all elements in F have a finite first moment, the median and the mean coincide. Hence, any convex combination of a strictly consistent scoring function for the mean and the median provides a strictly consistent scoring function. A fortiori, any scoring function which is equivalent to S(x, y) = (1−λ)|x−y|+λ|x−y| 2 , λ ∈ [0, 1] is strictly metrically Forder-sensitive. However, the class of strictly metrically F-order-sensitive scoring functions is even bigger - Lehmann and Casella (1998, Corollary 7.19, p. 50) show that (subject to integrability conditions) for an even and strictly convex function Φ : R → R, the score S(x, y) = Φ(x − y) is strictly metrically F-ordersensitive for the median. Note that if the distributions in F are symmetric, their center of symmetry, which is the functional solving (3.6), is unique (Fissler, 2017, Lemma 4.1.34), even if the median is not unique. The result of Lehmann and Casella (1998, Corollary 7.19, p. 50) holds for this center of symmetry. Acknowledging that some popular choices for Φ are not strictly convex (see Example 3.14), the following proposition gives a refinement of their result. y)), and for x, z ∈ R the set M x,z = {y ∈ R : Ψ x (y) − Ψ z (y) > 0}. If for all F ∈ F and for all x, z ∈ R with |x| > |z| one has that P(Y − C(F ) ∈ M x,z ) > 0, Y ∼ F , then S is strictly metrically F-order-sensitive for C. In particular, if for all F ∈ F and for all be a convex and even function, and S If Φ is strictly convex then M x,z = R for all |x| > |z|. Example 3.14. Let F be a class of symmetric distributions and S(x, y) = Φ(x − y). (i) If Φ(t) = |t| 2 , the squared loss arises. 
Since Φ is strictly convex, the squared loss is strictly metrically F-order-sensitive. (ii) For Φ(t) = |t|, S takes the form of the absolute loss. Then S is strictly metrically F-order-sensitive (and strictly F-consistent) if and only if C(F ) ∈ supp(F ) for all F ∈ F. 11 (iii) Another prominent example of a metrically order-sensitive scoring function for the center of a symmetric distribution besides the absolute or the squared loss is the so-called Huber loss which was presented in Huber (1964) and arises upon taking S(x, y) = Φ(x − y) with We emphasize that there are not only metrically-order sensitive strictly consistent scoring functions for the center of symmetric distributions. One can also use asymmetric scoring functions, for example those for the median or the mean, to elicit the center of symmetry. Due to the negative result of Proposition 3.11 we dispense with an investigation of scoring functions that are metrically order-sensitive for vectors of different quantiles. Expectiles The special situation of the 1/2-expectile, which coincides with the mean functional, was already considered in Subsection 3.3.1, so let τ = 1/2. It is obvious that the canonical scoring function for the τ -expectile, that is, the asymmetric squared loss is not metrically order-sensitive since x → S τ (x + y, y) is not an even function. A fortiori, it turns out that (under some assumptions) there is no strictly metrically F-order-sensitive scoring function for the τ -expectile for τ = 1/2. Proposition 3.15. Let τ ∈ (0, 1), τ = 1/2, and T τ = μ τ : F → A ⊆ R, int(A) = ∅ convex, be the τ -expectile. Assume that T τ is surjective, and that Assumption (V1) holds with respect to the strict F-identification function V τ (x, y) = 2|1{y ≤ x} − τ | (x − y). Suppose thatV (·, F ) is twice differentiable for all F ∈ F and that there is a strictly F-consistent scoring function S : Interestingly, the arguments provided in the proof of Proposition 3.15 leads to an alternative proof that the squared loss is the only strictly metrically ordersensitive scoring function for the mean, up to equivalence; see Remark C.1 for details. Order-sensitivity on line segments Recalling Lemma 3.5, every componentwise order-sensitive scoring function is also order-sensitive on line segments. However, for the particular class of linear functionals, the following corollary shows that any strictly consistent scoring function is already strictly componentwise order-sensitive on line segments. 12 Corollary 3.16. If F is convex and T : F → A ⊆ R k is linear and surjective, then any strictly F-consistent scoring function for T is strictly F-order-sensitive on line segments. Corollary 3.16 immediately leads the way to the result that the class of strictly order-sensitive scoring functions on line segments is strictly bigger than the class of strict componentwise order-sensitive scoring functions (for some functionals with dimension k ≥ 2.) E.g. consider a vector of expectations satisfying the conditions of Proposition 3.6 which are the same as the ones in Fissler and Ziegel (2016, Proposition 4.4). Due to the latter result, there are strictly consistent scoring functions -and hence, with Corollary 3.16, strictly order-sensitive on line segments -which are not additively separable. By Proposition 3.6 they cannot be strictly componentwise order-sensitive. We can extend the result of Corollary 3.16 to the case of ratios of expectations with the same denominator. 
is strictly F-order sensitive on line segments, where φ is strictly convex differentiable function on A. Fissler and Ziegel (2016, Proposition 4.4) shows that essentially all strictly consistent scoring functions for T in the above Proposition 3.17 are of the form at (3.7); see also Frongillo and Kash (2015a, Theorem 13). Order-sensitivity on line segments is stable under applying an isomorphism via the revelation principle (Gneiting, 2011, Theorem 4). However, dropping the linearity assumption on the bijection in the revelation principle, order-sensitivity on line segments is generally not preserved; see Subsection 3.4.1. The pair (mean, variance) The pair (mean, variance) is of importance not only from an applied point of view but it is also an interesting example in the theory about elicitability. Due to the lack of convex level sets, variance is not elicitable (Gneiting, 2011, Theorem 6). However, the pair (mean, variance) is a bijection of the (elicitable) pair (mean, second moment), and, invoking the revelation principle (Gneiting, 2011, Theorem 4), variance is jointly elicitable with the mean. The revelation principle provides an explicit link between the class of strictly consistent scoring functions for the first two moments which are of Bregman-type (Fissler and Ziegel, 2016, Proposition 4.4) and the respective class for mean and variance. As the pair (mean, variance) has of a non-elicitable component, if fails to be componentwise order-sensitive (Lemma B.2) and therefore, it is also not metrically order-sensitive. A priori, order-sensitivity on line segments is not ruled out. Corollary 3.16 implies that any strictly consistent scoring function for the pair of the first and second moment is order-sensitive on line segments. Even though the bijection connecting (mean, variance) with the pair of the first two moments is not linear, the following proposition gives necessary and sufficient conditions for scoring functions to be order-sensitive on line segments for (mean, variance). Example 3.19 shows the existence of order-sensitive scoring functions on line segments for (mean, variance). Proposition 3.18. Let F be a class of distributions on R with finite second moments such that the functional T = (mean, variance) : scoring function that is (jointly) continuous and for any y ∈ R, the function A x → S(x, y) be twice continuously differentiable. Then S is F-order-sensitive on line segments for T if and only if S is of the form , is a convex, three times continuously differentiable function such that the second order partial derivatives Example 3.19. An example for a class of strictly convex C 3 -function φ : A → R satisfying (3.9) and (3.10) with equality is given by For the case b 1 = b 2 = b 3 = 0, the resulting scoring function of the form at (3.8) is (3.11) Interestingly, this results not only in an order-sensitive scoring function on line segments for the pair (mean, variance), but it is also a mixed positively homogeneous scoring function of degree −2; see Section 4.2. The pair (Value at Risk, Expected Shortfall) Value at Risk (VaR) and Expected Shortfall (ES) are popular risk measures in banking and insurance. For a financial position Y with distribution F and a level α ∈ (0, 1), they are defined as (2014) where numerous further references are given. VaR α , as a quantile, is elicitable under mild regularity conditions, whereas ES α fails to be elicitable (Gneiting, 2011). 
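For concreteness, one common convention for these definitions, and the one consistent with the inequality $\mathrm{ES}_\alpha \le \mathrm{VaR}_\alpha$ and the maximal action domain $\{(x_1,x_2) : x_1 \ge x_2\}$ used below, is
$$
\mathrm{VaR}_\alpha(F) = q_\alpha(F) = \inf\{x \in \mathbb{R} : F(x) \ge \alpha\}, \qquad \mathrm{ES}_\alpha(F) = \frac{1}{\alpha}\int_0^\alpha \mathrm{VaR}_u(F)\,\mathrm{d}u .
$$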
However, recently it was shown in Fissler and Ziegel (2016, Theorem 5.2 and Corollary 5.5) that the pair (VaR α , ES α ) is elicitable and the class of strictly convex scoring functions was characterized to be of the form (3.12) (under the conditions of Osband's principle, Fissler and Ziegel (2016, Theorem 3.2, Corollary 3.3)). Note that the proof of Fissler and Ziegel (2016, Theorem 5.2(ii) and Corollary 5.5) is imprecise for the case that a distribution F ∈ F is not continuous at its α-quantile. Moreover, one needs to impose additional assumptions on the action domain A which are satisfied, for example, if A coincides with the maximal action domain {(x 1 , x 2 ) ∈ R 2 : x 1 ≥ x 2 }; see Fissler and Ziegel (2019) for details. Proposition 3.20. Let α ∈ (0, 1), F be a class of continuously differentiable distribution functions on R with finite first moments and unique α-quantiles. Let A ⊆ {(x 1 , x 2 ) ∈ R 2 : x 1 ≥ x 2 } be convex. Define A 2 as the projection of A onto the second coordinate axis and let S : A × R → R be a scoring function of the form with g : R → R differentiable and increasing and φ : A 2 → R twice differentiable, and φ > 0, φ > 0. If 13) then S is strictly F-order-sensitive on line segments for (VaR α , ES α ). One might wonder if Proposition 3.20 establishes an alternative set of conditions for strict consistency of scoring functions for (VaR α , ES α ) different from the ones introduced in Fissler and Ziegel (2019, Proposition 2). Indeed, this is the case since strict order-sensitivity on line segments implies strict consistency. However, it is not the condition at (3.13) which is essential for the strict consistency, but rather the condition that g be increasing and φ > 0, and φ > 0. Example 3.21. Consider the action domain for all F ∈ F, x ∈ A (Lambert, Pennock and Shoham, 2008;Steinwart et al., 2014). One possible generalization of orientation for higher-dimensional functionals is the following. Let T : for all v ∈ S k−1 := {x ∈ R k : x = 1}, for all F ∈ F and for all s ∈ R such that T (F ) + sv ∈ A. Our notion of orientation differs from the one proposed by Frongillo and Kash (2015a). In contrast to their definition, our definition is per se independent of a (possibly non-existing) strictly consistent scoring function for T . Moreover, whereas their definition has connections to the convexity of the expected score, our definition shows strong ties to order-sensitivity on line segments. If the gradient of an expected score induces an oriented identification function, then the scoring function is strictly order-sensitive on line segments, and vice versa. However, the existence of an oriented identification function is not sufficient for the existence of a strictly order-sensitive scoring function on line segments. The reason is that -due to integrability conditions -the identification function is not necessarily the gradient of some (scoring) function. Equivariant functionals and order-preserving scoring functions Many statistical functionals have an invariance or equivariance property. For example, the mean is a linear functional, and hence, it is equivariant under linear transformations. So E[ϕ(X)] = ϕ(E[X]) for any random variable X and any linear map ϕ : R → R (of course, the same is true for the higher-dimensional setting). On the other hand, the variance is invariant under translations, that is Var(X − c) = Var(X) for any c ∈ R, but scales quadratically, so Var(λX) = λ 2 Var(X) for any λ ∈ R. The next definition strives to formalize such notions. ). 
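For concreteness, a plausible reading of this definition (the precise formulation may differ in detail) is the following: given a family $\Phi$ of transformations $\varphi : \mathrm{O} \to \mathrm{O}$ and a map $\pi : \Phi \to \Phi^*$, where $\Phi^*$ is a family of transformations of the action domain $A$, the functional $T$ is $\pi$-equivariant if
$$
T\big(\mathcal{L}(\varphi(Y))\big) = \pi(\varphi)\big(T(\mathcal{L}(Y))\big) \quad \text{for all } \varphi \in \Phi \text{ and all } Y \text{ with } \mathcal{L}(Y) \in \mathcal{F}.
$$
This covers both examples just given — the mean with $\Phi$ the linear maps and $\pi$ the identity, and the variance with $\Phi$ the scalings $y \mapsto \lambda y$ and $\pi(\varphi) : x \mapsto \lambda^2 x$ — and it is also the shape of the homogeneity relation (4.9) below.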
If a functional T is elicitable, π-equivariance can also be expressed in terms of strictly consistent scoring functions; see also Gneiting (2011, p. 750). The proof of Lemma 4.3 is direct. It implies that the scoring function is also strictly F-consistent for T . Similarly to the motivation of order-sensitivity of scoring functions, for fixed π : Φ → Φ * , it is a natural requirement on a scoring function S that for all ϕ ∈ Φ the ranking of any two forecasts is the same in terms of S and in terms of S π,ϕ . Definition 4.4 (π-order-preserving). Let π : Φ → Φ * . A scoring function S : for all F ∈ F and for all x, x ∈ A, where S π,ϕ is defined at (4.1). S is linearly π-order-preserving if for all ϕ ∈ Φ and for all x, x ∈ A there is a λ > 0 such that for all y ∈ O. If S is linearly π-order-preserving with a λ > 0 independent of x, x ∈ A, then we call S uniformly linearly π-order-preserving. The following lemma is immediate. Lemma 4.5. Let π : Φ → Φ * . If a scoring function S : A × O → R is linearly π-order-preserving, it is π-order-preserving with respect to any class F of probability distributions on O. The two practically most relevant examples of uniform linear π-order preservingness are translation invariance and positive homogeneity of scoring functions, or, to be more precise, of score differences. They are described in the two subsequent subsections. Translation invariance Consider a translation equivariant functional such as the mean treated in Example 4.2 (ii). Then, a scoring function S : R k × R k → R is said to have translation invariant score differences if it is uniformly linearly π-equivariant with λ = 1 for all ϕ ∈ Φ. In formulae, we require S to satisfy We say that a functional T : Adopting this notion, we say that a scoring function S : Then, the following assertions hold. for all x ∈ R k and for all F ∈ F. Using Fissler and Ziegel (2016, Proposition 4.4) one can establish the converse of Proposition 4.7: If V is a linearly (id R k , id R k )-invariant strict F-identification function, then (4.4) implies that S has linearly (id R k , id R k )-invariant score differences. The following lemma shows how to normalize scores with translation invariant score differences to obtain a translation invariant score. In case of the mean functional on R, Proposition 4.7 has already been shown by Savage (1971) who showed that the squared loss is the only strictly consistent scoring function for the mean that is of prediction error form, up to equivalence. 13 Furthermore it implies that general τ -expectiles and α-quantiles have essentially one linearly (id R , id R )-invariant strictly consistent scoring function only, namely the canonical choices The uniqueness -up to equivalence -disappears for k > 1. For example, for the the 2-dimensional mean functional, the previous results yield that any scoring function S : is strictly consistent for the 2-dimensional mean functional and it is linearly (id R 2 , id R 2 )-invariant, for any h 11 > 0 and h 11 h 22 − h 2 12 > 0. Due to the additive separability of strictly consistent scoring functions for vectors consisting of different quantiles and expectiles (Fissler and Ziegel, 2016, Proposition 4.2), strictly consistent scoring functions that are linearly (id R , id R k )invariant for these vectors are not unique. However, the only flexibility in that class consists in choosing different weights for the respective summands of the scores. 
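A short check of the last claim, assuming that the elided scoring function is the quadratic form built from the constants $h_{11}, h_{12}, h_{22}$ (a natural reading of the statement): for $S(x,y) = (x-y)^\top H (x-y)$ with $H = \begin{pmatrix} h_{11} & h_{12} \\ h_{12} & h_{22} \end{pmatrix}$ positive definite, the score depends on $(x,y)$ only through $x - y$, so
$$
S(x+z, y+z) - S(x'+z, y+z) = S(x,y) - S(x',y) \quad \text{for all } x, x', y, z \in \mathbb{R}^2,
$$
i.e. the score differences are translation invariant; moreover, for $Y \sim F$ with finite second moments, $\bar S(x, F) = (x - \mathbb{E}_F[Y])^\top H\, (x - \mathbb{E}_F[Y]) + \mathrm{tr}\big(H\,\mathrm{Cov}_F(Y)\big)$, which is uniquely minimised at $x = \mathbb{E}_F[Y]$, so $S$ is strictly consistent for the 2-dimensional mean.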
The pair ( consistent scoring function for 13 That means that the scoring function is a function in x − y only. T that is (jointly) continuous, and for any y ∈ R, the function A x → S(x, y) is twice continuously differentiable. If S has linearly (M O , M A )-invariant score differences, then there is a λ ≥ 0 and an F-integrable functional a : R → R such that S(x 1 , x 2 , y) = λ(x 1 − y) 2 + a(y). In particular, S cannot be strictly F-consistent for T . Then, the following assertions hold: is strictly F-consistent for T and has linearly (M O , M A )-invariant score differences with M O = id R , M A = (1, 1) . (ii) Under the conditions of Fissler and Ziegel (2016, Theorem 5.2(iii)), there are strictly F-consistent scoring functions for T with linearly (M O , M A )invariant score differences if and only if there is some c > 0 such that (4.6) holds. Then, any such scoring function is necessarily equivalent to S d defined at (4.7) almost everywhere, with d ≥ c. The scoring function S c has a close relationship to the class of scoring functions S W proposed in Acerbi and Szekely (2014); see Fissler and Ziegel (2016, Equation (5.6)). Indeed, S c (x 1 , x 2 , y) = c 1{y ≤ x 1 } − α (x − y) + S W (x 1 , x 2 , y) with W = 1. That means it is the sum of the standard α-pinball loss for VaR α -which is translation invariant -and S 1 . In the same flavor, the condition at (4.6) is similar to the one at Fissler and Ziegel (2016, Equation (5.7)). Since ES α ≤ VaR α , the maximal action domain where S c is strictly consistent is the stripe A c = {(x 1 , x 2 ) ∈ R 2 : x 2 ≤ x 1 < x 2 + c}. Of course, by letting c → ∞, one obtains the maximal sensible action domain {(x 1 , x 2 ) ∈ R 2 : x 1 ≥ x 2 } for the pair (VaR α , ES α ). However, considering the properly normalized version S c /c, this converges to a strictly consistent scoring function for VaR α as c → ∞, but which is independent of the forecast for ES α . Hence, there is a caveat concerning the tradeoff between the size of the action domain and the sensitivity in the ES-forecast. This might cast doubt on the usage of scoring functions with translation invariant score differences for (VaR α , ES α ) in general. Interestingly, the scoring function S c at (4.7) has positively homogeneous score differences if and only if c = 0. However, A 0 = ∅, which means that the requirement of translation invariance and homogeneity for score differences are mutually exclusive in case of strictly consistent scoring functions for (VaR α , ES α ). Homogeneity If one is interested in a positively homogeneous functional of degree one such as the mean, expectiles, quantiles, or ES, a scoring function S : R × R → R is said to have positively homogeneous score differences of degree b ∈ R for this functional if the scoring function is uniformly linearly π-equivariant with Φ = {R x → cx ∈ R, c > 0} the multiplicative group, π the identity on Φ and λ = c b in (4.2). This means that S needs to satisfy for all x, z, y ∈ R and c > 0. Since positive homogeneity of score differences is equivalent to invariance of forecast rankings under a change of unit, it has been argued that it is important in financial applications (Acerbi and Szekely, 2014). Nolde and Ziegel (2017) give a characterization of scoring functions with positively homogeneous score differences for many risk measures of applied interest, such as VaR / quantiles, expectiles, and the pair (VaR, ES); cf. Patton (2011) for results concerning the mean functional. 
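Two standard examples of the degree $b$ in this definition: the squared loss has positively homogeneous score differences of degree $b = 2$, since
$$
(cx - cy)^2 - (cz - cy)^2 = c^2\big[(x-y)^2 - (z-y)^2\big] \quad \text{for all } c > 0,
$$
while the pinball loss $S_\alpha(x,y) = (\mathbb{1}\{y \le x\} - \alpha)(x-y)$ has degree $b = 1$, because $\mathbb{1}\{cy \le cx\} = \mathbb{1}\{y \le x\}$ for $c > 0$ and the remaining factor scales linearly in $c$.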
If the functional T is vector-valued, the degree of homogeneity can be different in the respective components, e.g. in case of the pair (mean, variance) or the vector consisting of the first k moments; cf. Example 4.2(vi). One can denote this property by mixed positive homogeneity, which means in case of the vector of the first k moments that T (L(cY )) = Λ(c)T (L(Y )) (4.9) for all c > 0, where Λ(c) is the k × k-diagonal matrix with diagonal elements c, c 2 , . . . , c k . 14 In this situation, an interesting instant for uniformly linearly πorder-preserving scoring functions S : A × R → R are those with mixed positively homogeneous score differences of degree b ∈ R. That is, for all x, z ∈ A, y ∈ R, and for all c > 0. With k = 2, corresponding assertions hold for the pair (mean, variance) and the respecitve scoring functions. Proposition 4.11. Let Let S : A × R → R be a consistent scoring function for the vector of the first k moments of the form − (y, y 2 , . . . , y k ) + a(y), (4.11) where φ : A → R is convex and differentiable with gradient ∇φ (considered as a row vector). Then S has mixed positively homogeneous score differences of degree b ∈ R if and only if for all c > 0 the map is constant. Recall that the scoring functions of the form at (4.11) are essentially all consistent scoring functions for the vector of different moments (Fissler and Ziegel, 2016, Proposition 4.4). Using Proposition 4.11 it is straight forward to derive consistent scoring functions for (mean, variance) with mixed positively homogeneous score differences. . Let Assumptions (F1) and (V1) be satisfied with the strict F-identification function be a strictly F-consistent scoring function for T that is (jointly) continuous and for any y ∈ R, the function A x → S(x, y) be twice continuously differentiable. Then S has mixed positively homogeneous score differences of degree b ∈ R if and only if S(x 1 , x 2 , y) = −φ(x 1 , x 2 + x 2 1 ) + ∇φ(x 1 , x 2 + x 2 1 ) x 1 − y x 2 + x 2 1 − y 2 + a(y), (4.13) where φ : A → R is strictly convex, twice continuously differentiable, and moreover for all c > 0 the map is constant. It appears that the class of (strictly) convex functions φ satisfying (4.12) is rather flexible. One subclass is the class of additively separable functions φ. That is, where each φ m needs to be convex and Reviewing Nolde and Ziegel (2017, Theorem 5) and restricting attention to the case A ⊆ (0, ∞) k , φ m can be an element of the class Ψ b/m , where Ψ b consists of functions ψ b : (0, ∞) → R of the form On the other hand, there are choices of φ not satisfying such an additive decomposition as in (4.15). One such example can be found in Example 3.19 for b = −2, and is of the form φ(x 1 , x 2 ) = (x 2 − x 2 1 ) −1 for x 2 > x 2 1 . Appendix A: Assumptions We present a list of assumptions used in this paper. For more details about their interpretations and implications, please see Fissler and Ziegel (2016) were they were originally introduced. Assumption (V1). Let F be a convex class of distribution functions on R and assume that for every Note that if V : A×R → R k is a strict F-identification function for T : F → A which satisfies Assumption (V1), then for each x ∈ int(A) there is an F ∈ F such that T (F ) = x. Assumption (V2). For every F ∈ F, the functionV (·, F ) is continuous. Assumption (F1). For every y ∈ R there exists a sequence (F n ) n∈N of distributions F n ∈ F that converges weakly to the Dirac-measure δ y such that the support of F n is contained in a compact set K for all n. 
Assumption (VS1). Suppose that the complement of the set ·) and S(x, ·) are continuous at the point y} has (k + d)-dimensional Lebesgue measure zero. Assumption (S2). For every F ∈ F, the functionS(·, F ) is continuously differentiable and the gradient is locally Lipschitz continuous. Furthermore,S(·, F ) is twice continuously differentiable at t = T (F ) ∈ int(A). Lemma B.2. Let Proof. Let S be metrically F-order sensitive for T relative to d. Proof of Proposition 2.6. Let F ∈ F with t = T (F ). Due to the strict Fconsistency of S, the expected scoreS(·, F ) has a local minimum at t. Assume there is another local minimum at some x = t. Then there is a distribution G ∈ F with x = T (G). Consider the path γ : [0, 1] → A, λ → T (λF +(1−λ)G). Due to Proposition 2.4 the function λ →S(γ(λ), F ) is decreasing and strictly decreasing when we move on the image of the path from x to t. HenceS(·, F ) cannot have a local minimum at x = γ(0). Proof of Proposition 2.8. Let F ∈ F, t = T (F ) and ε > 0. Define Due to the continuity ofS(·, F ), the minimum is well-defined and, as a consequence of the strict F-consistency of S for T , δ is positive. Let x ∈ A. If C.2. Proofs for Section 3 Proof of Proposition 3.6. Due to the fact that for fixed y ∈ O, V (x, y) is a polynomial in x, Assumption (V3) is automatically satisfied. Let h : int(A) → R k×k be the matrix-valued function given in Osband's principle; see Fissler and Ziegel (2016, Theorem 3.2). By Fissler and Ziegel (2016, Proposition 4.4(i)) we have that for all r, l, m ∈ {1, . . . , k}, l = r, where the first identity holds for almost all x ∈ int(A) and the second identity for all x ∈ int(A). Moreover, the matrix h rl (x) l,r=1,...,k is positive definite for all x ∈ int(A). If we can show that h lr = 0 for l = r, we can use the first part of (C.1) and deduce that for all m ∈ {1, . . . , k} there are positive functions g m : for all (x 1 , . . . , x k ) ∈ int(A). Then, we can conclude like in the proof of Fissler and Ziegel (2016, Proposition 4.2(ii)). 15 Fix l, r ∈ {1, . . . , k} with l = r and F ∈ F such that T (F ) ∈ int(A). Due to the strict F-consistency of S l,z defined at (B.1) we have that and by assumptionq(F ) > 0. Using the surjectivity of T we obtain that h lr (t) = 0 for all t ∈ int(A), which ends the proof. Proof of Proposition 3.7. We apply Osband's principle, that is, Fissler and Ziegel (2016, Theorem 3.2 for all F ∈ F, x ∈ int(A). Due to the strict F-consistency of S and the orientation of V , it holds that h ≥ 0. We show that actually h > 0. Applying Lemma B.3, one has thatS . Hence, also the derivative with respect to x of the left-hand side of (C.3) must coincide with the derivative on the right-hand side. This yields, using (C.2), y) is an oriented strict F-identification function for T . Applying Osband's principle to S * , one obtains a function h * : Due to the analogue of (C.3) for S * and (C.4), one obtains . By a similar reasoning as above, one can deduce that h * must be constant and positive. Now, the claim follows by Fissler and Ziegel (2016, Proposition 3.4); see Fissler and Ziegel (2019) for a correction. Again with Lemma B.3 one obtains the assertion. (ii) The only interesting direction is to assume that S * is strictly metrically F-order-sensitive (with respect to the same p -norm as S). We will show that Setting ε :=S 1 (x 1 , F ) −S 1 (z 1 , F ) > 0, one obtains with the same calculation Proof of Proposition 3.9. (i) We can apply Lemma B.3. Let F ∈ F. Then is an even function in x. 
Moreover, equivalence of scoring functions preserves (strict) metrical order-sensitivity. (ii) The convexity of A is implied by the mixture-continuity of T and the convexity of F. Then, the claim follows with Proposition 3.7. We prove (ii) and (iii) together. Assume there is a scoring function S * satisfying the conditions above, so in particular, it is strictly metrically F-ordersensitive with respect to the p -norm for p ∈ [1, ∞). Invoking Lemma 3.5(i), S * is strictly componentwise F-order-sensitive for T . Thanks to Proposition 3.6, S * is additively separable. By Proposition 3.9(i), it is of the form a m (y). If p = 2, part (i) and Proposition 3.8(ii) yield that λ 1 = · · · = λ k , and hence, S and S * are equivalent. For p = 2, we obtainS(T (F ) + x, Proof of Proposition 3.11. Assume that there exists a strictly metrically Forder-sensitive scoring function S α : R × R → R satisfying Assumption (S1). Due to Lemma B.3, for any F ∈ F and any x ∈ R Using Osband's principle (Fissler and Ziegel, 2016, Theorem 3.2) and taking the derivative with respect to x on both sides, this yields for some positive function h : R → R (the fact that h ≥ 0 follows from the strict consistency of S α and the surjectivity of T α , and h > 0 follows like in the proof of Proposition 3.7). Assume that T α (F 0 ) = 0. For λ ∈ R, we have T α (F 0 (· − λ)) = λ. Therefore, (C.5) implies Setting λ = ±x, one can see that h(±∞) := lim x→±∞ h(x) exists and that On the other hand, for fixed λ ∈ R, we obtain As a consequence, the only remaining possibility is α = 1/2. For fixed x ∈ R, we have implying that h must be constant using (C.5), and that F 0 must be symmetric around its median, i.e. F 0 (x) = 1 − F 0 (−x) for all x ∈ R. 16 Moreover, since h is constant, (C.5) implies that also any other distribution F ∈ F must be symmetric around its median, i.e. F (T 1/2 (F ) + x) = 1 − F (T 1/2 (F ) − x) for all x ∈ R. However, if F 0 is symmetric around its median, then any translation F λ of F 0 is symmetric around its median. But then, there is a convex combination of F 0 and F λ with mixture-parameter β ∈ (0, 1), β = 1/2, such that βF 0 +(1−β)F λ is not symmetric around its median if λ = 0. Consequently, the conditions of the proposition are violated such that a strictly metrically F-order-sensitive function for the median does not exist in this setting. Proof of Proposition 3.13. Let |x| > |z|. Note that due to the convexity of Φ, it holds that Ψ x ≥ Ψ z . Let F ∈ F with center of symmetry c = C(F ) and let Y ∼ F . Then, using the fact that Φ is even and that This shows the strict metrical F-order-sensitivity. The strict F-consistency follows upon taking z = 0. Proof of Proposition 3.15. Under the assumptions, Osband's principle yields the existence of a function h : int(A) → R, h > 0 (by an argument like in the proof of Proposition 3.7) such that for all Using the same argument as in the proof of Osband's principle (Fissler and Ziegel, 2016, Theorem 3.2), h is twice differentiable. Assume that S is metrically F-order sensitive. Then, due to Lemma B.3, for any F ∈ F the function g F : A x → g F (x) =S(T τ (F ) + x, F ) is an even function. Hence, invoking the smoothness assumptions, the third derivative of g F must be odd. So necessarily g F (0) = 0. Denoting t F = T τ (F ), some tedious calculations lead to Recalling that h > 0 and τ = 1/2 implies g F1 (0) = g F2 (0). So S cannot be metrically F-order-sensitive. Remark C.1. 
Inspecting the proof of Proposition 3.15, equation (C.7) yields for τ = 1/2 for any F ∈ F, t F = T τ (F ). With the surjectivity of T τ this proves that h = 0, such that h is necessarily constant. Hence, we get an alternative proof that the squared loss is the only strictly metrically order-sensitive scoring function for the mean, up to equivalence. Proof of Corollary 3.16. The linearity of T implies that T is mixture-continuous. Then the assertion follows directly by Proposition 2.4 and the special form of the image of the path γ in the proof therein, which is a line segment. Proof of Proposition 3.17. Let F ∈ F, t = T (F ), v ∈ S k−1 and 0 ≤ s < s such that t + sv, t + s v ∈ A. ThenS(t + sv, F ) =q(F )(−φ(t + sv) + s∇φ(t + sv)v). The subgradient inequality yields Proof of Proposition 3.18. Let S be F-order-sensitive on line segments. This implies that S is F-consistent. Using the revelation principle, S : A × R → R, is an F-consistent scoring function for T = (T 1 , T 2 + T 2 1 ): F → A , the pair of the first and second moment. Moreover, S fulfils the same regularity conditions as S. Fissler and Ziegel (2016, Proposition 4.4) holds mutatis mutandis also for consistent scoring functions with φ convex. It is straight forward to check that the conditions for Fissler and Ziegel (2016, Proposition 4.4) are fulfilled for S and T with the canonical identification function V : where a : R → R is some F-integrable function and φ : A → R is a convex C 3 -function with gradient ∇φ (considered as a row vector) and Hessian ∇ 2 φ = (φ ij ) i,j=1,2 . In summary, (C.8) yields the form at (3.8). Now, we verify conditions (3.9) and (3.10). Let F ∈ F, with (t 1 , t 2 ) = T (F ). C.3. Proofs for Section 4 Proof of Proposition 4.7. If a random variable Y has distribution F with F ∈ F, we write F − z for the distribution of Y − z where z ∈ R k . To show the first part, consider any F ∈ F and z ∈ R k . Then Since V is a strict F-identification function for T , T (F − z) = T (F ) − z. For the second part, Fissler and Ziegel (2016, Theorem 3.2) implies that there exists a matrix-valued function h : R k → R k×k such that for all x ∈ R k and for all F ∈ F. We will show that h is constant. Sincē S(x, F ) −S(x , F ) =S(x − z, F − z) −S(x − z, F − z) for all x, x , z ∈ R k and F ∈ F, we obtain by taking the gradient with respect to x where the second identity is due to the linear (id R k , id R k )-invariance of V . So (C.15) is equivalent toV Now, one can use Assumption (V1) and Fissler and Ziegel (2016, Remark 3.1), which implies that Since x, z ∈ R k were arbitrary, the function h is constant. Proof of Lemma 4.8. If S has linearly (id R k , id R k )-invariant score differences, S satisfies (4.3) for all x, x , y, z ∈ R k . Due to Lemma 4.3, T must be π id R k ,id R kequivariant, hence, T (δ y ) − z = T (δ y−z ). This yields that S 0 defined at (4.5) is linearly (id R k , id R k )-invariant. Since S and S 0 are of equivalent form, also S 0 is strictly F-consistent for T . The non-negativity follows directly from the fact that F contains all point measures and from the strict consistency. Proof of Proposition 4.10. The scoring function S c is of equivalent form as given at (3.12) with g(x 1 ) = −x 2 1 /2 + cx 1 and φ(x) = (α/2)x 2 2 . This means that φ is strictly convex and the function x 1 → x 1 φ (x 2 )/α + g(x 1 ) is strictly increasing in x 1 if and only if x 2 + c > x 1 , that is, if and only if (x 1 , x 2 ) ∈ A c . 
Moreover, one can verify that the action domain A_c satisfies the conditions referred to above.
Proof of Proposition 4.11. Suppose φ satisfies (4.12). This implies that for any c > 0 the map z → φ(Λ(c)z) − c^b φ(z) is an affine function. Moreover, a Taylor expansion yields that for all [...] Then, a direct calculation yields the result. Now, suppose (4.10) is satisfied. Its left-hand side equals [...] whereas the right-hand side is [...] Both terms are polynomials in y of degree k, which leads to the identity [...]
Proof of Corollary 4.12. The form at (4.13) follows as in the proof of Proposition 3.18. The rest follows by Proposition 4.11.
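To close this appendix, the following is a small Monte-Carlo sketch of the evenness criterion that drives several of the arguments above (for a metrically order-sensitive score, $x \mapsto \bar S(T(F)+x, F)$ must be even); the distribution, quantile level and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.exponential(scale=1.0, size=200_000)  # a skewed sample standing in for F

def expected_score(score, t, xs):
    """Monte-Carlo estimate of x -> E_F[score(t + x, Y)] on a grid xs."""
    return np.array([np.mean(score(t + x, y)) for x in xs])

sq = lambda x, y: (x - y) ** 2                                   # squared loss
alpha = 0.8
pin = lambda x, y: ((y <= x).astype(float) - alpha) * (x - y)    # pinball loss

xs = np.linspace(0.01, 1.0, 5)
mean, q80 = y.mean(), np.quantile(y, alpha)

# Squared loss around the mean: E[S(mean + x, Y)] - E[S(mean - x, Y)] is ~ 0,
# consistent with x -> E[S(mean + x, Y)] being an even function.
print(expected_score(sq, mean, xs) - expected_score(sq, mean, -xs))

# Pinball loss around the 0.8-quantile: the same difference is clearly nonzero,
# so the expected score is not even in x and the pinball loss is not metrically
# order-sensitive (although it is strictly consistent for the 0.8-quantile).
print(expected_score(pin, q80, xs) - expected_score(pin, q80, -xs))
```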
Stable fermion mass matrices and the charged lepton contribution to neutrino mixing We study the general properties of hierarchical fermion mass matrices in which the small eigenvalues are stable with respect to perturbations of the matrix entries and we consider specific applications to the charged lepton contribution to neutrino mixing. In particular, we show that the latter can account for the whole lepton mixing. In this case a value of $\sin \theta_{13} \gtrsim m_e/m_\mu \sin\theta_{23} \approx 0.03$, as observed, can be obtained without the need of any fine-tuning, and present data allow to determine the last row of the charged lepton mass matrix with good accuracy. We also consider the case in which the neutrino sector only provides a maximal 12 rotation and show that i) present data provide a $2\sigma$ evidence for a non-vanishing $31$ entry of the charged lepton mass matrix and ii) a plausible texture for the latter can account at the same time for the atmospheric mixing angle, the $\theta_{13}$ angle, and the deviation of the $\theta_{12}$ angle from $\pi/2$ without fine-tuning or tension with data. Finally, we show that the so-called"inverted order"of the 12 and 23 rotations in the charged lepton sector can be obtained without fine-tuning, up to corrections of order $m_e/m_\mu$. Introduction The experimental determination of lepton mass and mixing parameters has made remarkable progress in the last 15 years, gradually unveiling an unexpected pattern, which has often challenged the theoretical prejudice. Such an experimental information is essential to the ambitious program of understanding the origin of flavour breaking. This program has been most often carried out in a top-down approach based on flavour symmetries or other organizing principles. In this paper we would like to revisit the problem from a different point of view, in a bottom-up approach based on a general "stability" assumption, according to which the smallness of some fermion masses does not arise from special correlations among the entries of the mass matrix, and as a consequence it is stable with respect to small variations of the matrix entries. Our analysis will lead to constraints on the structure of fermion mass matrices. The latter contain of course additional parameters that are not physical in the Standard Model (SM) -their form depends in particular on the basis in flavour space in which they are written. The idea underlying our approach is that in a certain basis in flavour space, associated to the unknown physics from which they originate, the entries of the fermion mass matrix can be considered as independent fundamental parameters, i.e. parameters that are not correlated, neither as a consequence of a non-abelian symmetry, nor accidentally. We consider such an assumption motivated and timely, as an experimental evidence of such correlations, which would have been welcome as a smoking gun of underlying symmetries, failed so far to show up in the measurement of θ 13 and θ 23 [1,2]. For example, neutrino mass models leading to the so-called "tri-bimaximal" (TBM) mixing structure [3] for the neutrino mass matrix m ν require 3 independent correlations among the entries of m ν (m ν 12 = m ν 13 , m ν 22 = m ν 33 , m ν 11 + m ν 12 = m ν 22 + m ν 23 ), see e.g. ref. [4], that can be accounted for by discrete symmetries (with a highly non-trivial construction needed to achieve a consistent and complete picture, including quarks and the charged fermion hierarchies). 
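For reference, in one common sign and phase convention the tri-bimaximal mixing matrix mentioned above reads
$$
U_{\rm TBM} = \begin{pmatrix} \sqrt{2/3} & 1/\sqrt{3} & 0 \\ -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\ -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \end{pmatrix},
$$
corresponding to $\sin^2\theta_{12} = 1/3$, $\sin^2\theta_{23} = 1/2$ and $\theta_{13} = 0$; the three correlations among the entries of $m_\nu$ quoted above are the conditions for $m_\nu$ to be diagonalised by a matrix of this form.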
In the light of recent data, such models require sizeable corrections from the charged lepton sector [5][6][7][8][9][10][11][12][13][14], making the TBM scheme as predictive as simple models without correlations (see however refs. [11][12][13][14] for a a possible prediction for the CP phase). In the following, we will concentrate in particular on the charged fermion (lepton) mass matrices, which are particularly suited for our approach due to the significant hierarchy among their eigenvalues. 1 This makes unlikely that the small eigenvalues arise as a consequence of accidental correlations among much larger quantities, an important element in our analysis, and is a sign of a non-anarchical origin of its matrix entries. We will see that our approach allows to draw interesting conclusions on their contribution to lepton mixing. The precise formulation of our assumption will be given in Section 2. Let us see here in a qualitative and intuitive way how assuming the absence of certain special correlations among matrix elements can translate into relevant information on the structure of the fermion mass matrices, using a simple and well known 2 family example: the charged lepton mass matrix M E , restricted to the second and third families, (throughout this paper we will use a "RL" convention for the charged fermion mass matrices). Suppose that M E = U T e c M diag E U e , where M diag E = Diag(m µ , m τ ) and U e , U e c are rotations by angles θ, θ c , respectively, that are both large, tan θ ∼ tan θ c ∼ 1. As a consequence, all the four entries of M E are of the same order of magnitude as the tau mass m τ , and the observed relative smallness of m µ is a consequence of a precise correlation among those four entries, Requiring, according to our assumption, that the smallness of m µ does not result from a fine-tuned cancellation among two correlated terms M 33 M 22 and M 23 M 32 (as in eq. (1)), we conclude that which provides relevant information on the structure of the m E matrix. Interestingly, the above conditions can equivalently be obtained by requiring that the lightest eigenvalue m µ , or equivalently the product m µ m τ = |det M |, is stable with respect to small variations of the matrix entries. The stability of an anomalously small quantity X(a) with respect to a small variation ∆a a of the variable a is measured by the quantity In the ∆a → 0 limit, the definition above coincides with the "fine-tuning" or "sensitivity" parameter often used to measure the naturalness of the Higgs mass (for reasons that will become clear later, we prefer to keep a finite form here). The larger is ∆ a , the more unstable is the smallness of X. When a is assumed to be an independent fundamental parameter of the theory, it is desirable to have ∆ a 1, in such a way that the smallness of X(a) can be considered "natural", i.e. not accidental. In the case of our 2×2 mass matrix M , we can require that the small quantity m 2 m 3 = |det M |, or det M itself, is stable with respect to variations of the matrix elements M ij and calculate the corresponding sensitivity parameters: Therefore, the assumption in eq. (3) is equivalent to imposing for (one or) all the entries M ij , ij = 1, 2, and is therefore nothing but a stability assumption, at least if the parameters M ij can be considered independent. The arguments above on the structure of our toy 2×2 lepton mass matrix are well known and underlie textures that have been widely considered in the literature. 
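Before turning to those textures, a minimal numerical sketch of the finite-difference sensitivity just defined may be useful (the entries below are illustrative choices, not fits): a lopsided texture with $|M_{23}M_{32}| \lesssim m_\mu m_\tau$ has sensitivities of order one, while a texture in which $m_\mu$ arises from a cancellation among entries of order $m_\tau$ does not.

```python
import numpy as np

def sensitivity(M, i, j, rel_delta=0.1):
    """Finite-difference sensitivity |Delta log|det M|| / |Delta log M_ij|
    for a relative change rel_delta of the entry M[i, j]."""
    Mp = M.copy()
    Mp[i, j] *= (1.0 + rel_delta)
    d0, d1 = abs(np.linalg.det(M)), abs(np.linalg.det(Mp))
    return abs(np.log(d1 / d0)) / np.log(1.0 + rel_delta)

m_mu, m_tau = 0.106, 1.777  # GeV, approximate charged-lepton masses

# Stable ("lopsided") texture: |M_23 M_32| << m_mu m_tau, no cancellation in det M.
M_stable = np.array([[m_mu,        0.3 * m_mu],
                     [0.4 * m_tau, m_tau]])

# Fine-tuned texture: all entries of order m_tau, small det M from a cancellation.
M_tuned = np.array([[m_tau, m_tau],
                    [m_tau, m_tau * (1.0 + m_mu / m_tau)]])

for name, M in [("stable", M_stable), ("fine-tuned", M_tuned)]:
    worst = max(sensitivity(M, i, j) for i in range(2) for j in range(2))
    print(f"{name}: |det M|/(m_mu*m_tau) = {abs(np.linalg.det(M))/(m_mu*m_tau):.2f},"
          f" max sensitivity = {worst:.1f}")
```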
For example, textures with M 32 , M 33 ∼ m τ , M 22 , M 23 ∼ m µ have been considered since ref. [15] as possible explanations for the origin of the large atmospheric angle. The purpose of this paper is to analyse in a rigorous and complete way the consequence on the structure of a full 3 × 3 hierarchical mass matrix of the systematic application of the above ideas. In Sections 2 and 3 we precisely define the assumption we make, which generalises eq. (3), and we study its connection with the absence of correlations in the full determinant and 2 × 2 sub-determinants of 3 × 3 fermion matrices. We also give different characterisations of stable mass matrices valid for any n × n matrix. This Section will make use of a number of useful results on mass matrices collected in the Appendices. In Section 4 we will consider examples of applications of our results to the charged lepton contributions to neutrino mixing. In particular, we will revisit the issue of whether the charged lepton contribution can account for all neutrino mixing and show that this is indeed possible without fine-tuning. We will also consider the case in which the charged lepton mass matrix combines with a maximal 12 rotation originating in the neutrino sector and we will see that this also leads to a plausible texture for the lepton mass matrix. In Section 5 we summarise our results. The stability assumption In this Section we define the assumption we make in this paper, in the general case of a n × n matrix M , and we study its consequences, including an explicit equivalent formulation in terms of constraints on products of matrix elements, which is the basis of the analysis carried out in the next Sections. The proofs of the statements in this Section are given in Appendix B. Let M be a generic complex n × n matrix with hierarchical eigenvalues representing for example a Dirac fermion mass matrix. Throughout this paper we will assume that its eigenvalues are stable in size with respect to small variations of the matrix elements M ij . In order to give a precise definition of this assumption, it is useful to define the quantities where p = 1 . . . n. For hierarchical eigenvalues, Π p is essentially the product of the p largest eigenvalues, as shown in eq. (8). The quantities Π p are useful because, on the one hand, the requirement of the stability of the eigenvalues m 1 , . . . , m n can be equivalently formulated in terms of the stability of the products m n . . . m n−p+1 ≈ Π p ; 2 on the other hand, the quantities Π 2 p have a polynomial expression in terms of the matrix elements M ij and their conjugated, see eq. (51), which allows to translate the stability requirement into constraints on the matrix elements. Definition (stability assumption). We say that the mass matrix M is stable with respect to small variations of its matrix elements iff As explained, the definition above expresses the stability of the determination of the eigenvalues of M (more precisely the products in eq. (8)) with respect to small variation of any matrix entry. Proposition 1 (relation with fine-tuning). The stability assumption implies but the viceversa is true only for n = 1, 2. 2 Strictly speaking the two requirements are equivalent if n is not too large, say n ≤ 3. If n 1, the stability of all Πp implies the stability of all m k , but not viceversa. This can be seen by observing that ∆(log Πp)/∆(log Mij) ≈ ∆(log mn)/∆(log Mij) + . . . + ∆(log mn−p+1)/∆(log Mij). 
Therefore, even if the individual eigenvalues have sensitivities of order one, the sensitivity of Πp can be large, for large p and n, because of the large number of O (1) contributions. On the contrary, a small sensitivity for all Πp guarantees a small sensitivity for all the eigenvalues. Inverting the previous relations one finds in fact: ∆(log mn)/∆(log Mij) ≈ ∆(log Π1)/∆(log Mij) and ∆(log m k )/∆(log Mij) ≈ ∆(log Π n−k )/∆(log Mij) − ∆(log Π n−k+1 )/∆(log Mij) for k < n. An example of 3 × 3 matrix M that satisfies eq. (10) but not eq. (9) is given in the Example 2 in Appendix B. The reason why eq. (10) in that case misses the instability is that the latter does not show up when |∆M ij | is much smaller than the second eigenvalue (which is always the case in eq. (10), where the limit ∆M ij → 0 is taken). This is the reason why we chose to use a definition of stability using finite differences. We now show that for n ≤ 3 the stability assumption translates in practice into simple constraints on products of matrix entries, which correspond to the absence of cancellations in the expressions entering the determinants and sub-determinants of M . The constraints in eqs. (11) and (12) are all we need for the analysis carried out in the next Sections. Note that the connection outlined above between the stability of M and the absence of cancellations in the determinant and sub-determinants, although intuitive, is not trivial. For example, it does not hold for n ≥ 4, as shown by the Example 1 in Appendix B. For completeness, we also give two additional characterisations of stable hierarchical matrices that emerge in the proof of the previous proposition. Let us first fix a matrix element M ij and defineM (ij) to be the matrix obtained from M by setting to zero all the elements in the row i and column j except M ij andM (ij) the matrix with the element ij set to zero, as in eq. (52). Let us also fix 1 ≤ p ≤ n and denote byΠ (ij)p andΠ (ij)p the quantities in eq. (8) associated toM (ij) andM (ij) respectively. Proposition 3 (general characterisation of stable matrices). The following three statements are equivalent: 1. Eq. (9) holds for given p, i, j; Therefore the stability of the mass matrix is equivalent to requiring 2. or 3. for all i, j, p. The intuitive meaning of the points 2. and 3. above has again to do with stability, as they state that setting to zero one of the matrix entries (or alternatively all the entries on the same row and column except that one) does not give rise to a drastic change of the structure of the eigenvalues. Appendices A and B contain a number of additional results, as well as the proofs of the statements in this Section. General structure of stable charged fermion (lepton) mass matrices In this Section, we will describe the general structure of a 3 × 3 hierarchical fermion mass matrix satisfying the stability assumption, i.e. such that the hierarchy of its eigenvalues does not require accidental or dynamical correlations among its entries. Let us start with a remark on the ordering of rows and columns of M : it is always possible to order the rows and columns of M in such a way that the structure of the matrix follows the hierarchy of the eigenvalues, i.e. in such a way that the third row and column are associated to the third and largest eigenvalue, and so on. More precisely, it is possible to order the rows and columns of M in such a way that where M [23] is the 2 × 2 sub-matrix of M corresponding to the second and third rows and columns (as in eqs. (48) and (49)). 
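(For concreteness, the ordering condition just stated can be written as $|M_{33}| = O(m_3)$ and $|\det M_{[23]}| = O(m_2 m_3)$, which is how the next paragraph refers to it.)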
We will assume that this it the case in the following. En passant, one can wonder how far from m 3 and m 2 m 3 can |M 33 | and |det M [23] | get in the equations above, or what exactly O (m 3 ) and O (m 2 m 3 ) mean. In Appendix C we show that we can always make |M 33 | m 3 / √ 3 ≈ 0.6 m 3 and |det M [23] | m 2 m 3 / √ 6 ≈ 0.4 m 2 m 3 . If M did not satisfy the stability assumption (but is hierarchical), the bounds above would be qualitatively different, |M 33 | m 3 /3 and |det M [23] | m 2 m 3 /6. Once the rows and columns of M have been ordered as above, a stable M is subject to the following constraints: Viceversa, an hierarchical M satisfying the constraints above (and having m 1 , m 2 , m 3 as eigenvalues) is automatically stable. While in the 2 × 2 case M satisfies the stability assumption iff |M ij M ji | m i m j for all i, j = 1, 2, in the 3 × 3 case the corresponding constraint turns out to be true for all i, j = 1, 2, 3 except for ij = 13, 31. We can then consider, in turn, two ranges for |M 13 M 31 |: In this second case, which we consider first, the structure of M turns out to be particularly constrained. In this case, the constraints above force The general structure of M can then be described in terms of the size of the product |M 13 M 31 |, and in terms of the asymmetry, or degree of "lopsidedness", between |M 32 | and |M 23 | (R 23 ) and between |M 31 | and |M 13 | (R 13 , or R 12 = R 13 /R 23 ), The matrix |M | of absolute values of the entries of M has then the following structure The largest stable values of k, k ∼ m 2 /m 1 , require where the lopsideness factor R 23 is bounded as in eq. (17). |M In this case, |M ij M ji | m i m j holds for all i, j = 1, 2, 3. A general parameterisation similar to equation eq. (16) is still possible, although it turns out to be more complicated. The lopsidedness parameters R ij can be defined only if the corresponding |M ij M ji | is non-zero. If that is the case, we can define In terms of the above parameters we can then write where The formulas above also apply to the previous case, and thus become general, provided that the constraint k 13 1 is generalised to k 13 m 2 /m 1 and provided that k 13 √ k 22 1. Examples 4.1 Can neutrino mixing arise from the charged lepton sector? As an example of applications of the above results, in this subsection we revisit the issue of whether the PMNS matrix can be dominated by the charged lepton contribution. The PMNS matrix U is given by U = U e U † ν , where U e and U ν enter the diagonalisation of the charged lepton and neutrino mass matrices, Let us consider the possibility that U ν is diagonal and all the mixing comes from the charged lepton sector, U = U e (up to phases that can be set to zero without loss of generality). We first observe that in such a case the last row of the charged lepton mass matrix M E is approximately determined by the PMNS matrix, as where, experimentally, |U 3i | = O (1). 3 By using eq. (22) and the results for normal hierarchy from the global fit in ref. [1] we then get the 1σ ranges 3 In order to prove the previous equation, we first observe that |U e We now want to determine the constraints on the first and second lines that follow from the stability assumption. Using the characterisation of stable mass matrices in Section 3, we find that we find that a lepton mass matrix M E in the form eq. 
(23) satisfies the stability assumption iff it is possible to find a k such that The above matrix can be diagonalised perturbatively with a series of 2 × 2 unitary transformations, giving where R ij (θ, φ) denotes the 3 × 3 unitary transformation consisting in the embedding of in the ij block of the 3 × 3 matrix; R 23 (θ e 23 , φ 3 − φ 2 ) and R 12 (θ e 12 , φ 2 − φ 1 ) are the rotations necessary to bring the third row of M E in diagonal form and are determined by that row, (27) R 12 (θ 12 , φ ) diagonalises the 12 block after the previous two rotations have been applied; Φ is a diagonal matrix of phases. The results above hold up to corrections of relative order m 2 µ /m 2 τ . Eqs. (23) and (24) give tan θ e 23 ∼ tan θ e 12 ∼ 1 and tan θ 12 ∼ 1/k. The PMNS matrix in eq. (25) is in a form that has been already considered in the literature [16,6,17,11,12]. The precise relation between the parameters in eq. (25) and the parameters of the standard parameterisation can be found in refs. [11,12]. In our notations, where φ = φ + φ 1 − φ 2 . A fit for the parameters θ e 23 , θ e 12 and θ 12 , φ is shown in Fig. 1, using the results of the global fit of neutrino oscillation data from ref. We construct the likelihood function using the results of the recent global fit of neutrino oscillation data from ref. [1] for normal ordering (upper row) and inverted ordering (lower row) of neutrino masses. In plots (a,c) we use only the constraints on sin θ 13 and sin 2 θ 23 and the first two equations in eq. (29). In plots (b,d) we include also the constraints on sin 2 θ 12 and δ and use the third line of eq. (29) as well as the relation between φ and δ obtained by comparing the expressions for J CP in the two parametrizations (see ref. [12] for the details), and we marginalize over sin θ 12 and sin θ e 23 . The same analysis can be applied also to the case discussed in Section 4.2, see eq. (45), by substituting θ e 12 withθ 12 . small, unless a correlation among the entries of M E [23] [12] makes its determinant correspondingly small [16]. If this is not the case, we estimate From Fig. 1(a,c) we also note also that, as a consequence of the first equation in (29), the rotation angle θ 12 that diagonalises the 12 sector of M E has the same size, within errors, as the Cabibbo angle. Such a connection with the quark sector can be realised in the context of grand-unification [18,11,19,20]. In the light of what above, we observe that: • A small θ 13 in the range including the measured range, can be obtained without the need of cancellations even if all neutrino mixing comes from the charged lepton sector. 4 • Independent of whether all neutrino mixing is accounted for by the charged lepton contribution or not, the latter contribution is usually written as a product of two rotations in the "standard order" U e = R 12 R 23 . We see that the "inverted order", U e = R 23 R 12 , considered e.g. in refs. [5,12], can also be obtained (up to corrections of order m e /m µ ), without the need of correlations, when 1/k is at the lower end of its range, 1/k ∼ m e /m µ . • The value of k in eq. (30) is compatible with k ∼ m µ /m e . Lepton mixing can therefore be accounted for in this set up by Finally, let us briefly discuss whether an abelian flavour model, for example, can account for the texture in eq. (32). Often abelian models lead to textures in the form M E ij ∼ c ij λ c i λ j m 0 , with 0 < λ i , λ c j < 1 and |c ij | ∼ 1 [21,22]. 
Such textures can also be obtained in partial compositeness models (for a recent review see e.g. ref. [23]). Clearly such textures can account for all the entries of the above texture except for M E 33 , which parametrically would be expected to be O m τ m µ /m e rather than O (m τ ), i.e. an order of magnitude larger. Still, a texture in the form M E ij ∼ c ij λ c i λ j m 0 with |M E 33 | = O m τ m µ /m e is not obviously ruled out. In fact, the parametric difference between the ratio |M E 32 /M E 33 | ∼ 0.07 predicted by that texture and the ratio |M E 32 /M E 33 | ∼ 1 in eq. (32) can be accounted by i) the fact that the precise observed value |M E 32 /M E 33 | ≈ 0.7 is slightly smaller than 1, ii) the fact that in a two Higgs doublet model with large tan β the running of |M E 32 /M E 33 | from a high scale to the electroweak scale can reduce its value by a factor 2 [24], and iii) a slightly stretched O (1) factor. Another possibility is to consider an abelian flavour model with more than one flavon, which does not necessarily lead to a texture in the form M E ij ∼ c ij λ c i λ i m 0 . A complete example, also forcing the neutrino mass matrix to be diagonal, is provided in Appendix D. Correction to θ 12 = π/4 from the charged lepton sector As a second example, let us consider the case in which the neutrino mass matrix contributes to lepton mixing with a maximal "12" rotation (up to phases), where Φ ν and Ψ ν are diagonal matrices of phases. The charged lepton mass matrix must account in this case for the measured deviation of θ 12 from π/4, besides for θ 23 and θ 13 . As before we have M 3i ≈ m τ U e 3i , where now where we have denoted byŪ the PMNS matrix in the standard parameterization (the matrix U in eq. (34) is not necessarily in that parameterization). Eqs. (35) show that the value of θ e 23 is still determined by the PMNS matrix to be in the 1σ range 0.72 < cos θ e 23 < 0.76, while the value of θ e 12 also depends on the unknown phase α 1 − α 2 . A non zero value of θ e 12 is required in order to make |U 31 | = |U 32 |, as preferred by data at 2σ (see below). For the present central values of the PMNS parameters in ref. [1] (normal hierarchy), one gets the lower bound tan θ e 12 > 0.13. While θ e 12 may be expected not to be far from this lower limit, large values are also allowed, provided that the relative phase α 1 − α 2 in eq. (35) is properly adjusted. In where = tan θ e 12 and indicatively we can consider the range 0.13 1, with smaller values also allowed if PMNS parameters away from the best fit are considered (we will anyway assume that m e /m µ ≈ 0.005, as indicated by present data). As the case = O (1) has been considered in the previous subsection, we are interested to the case in which is significantly smaller than one, but the discussion below holds in both cases. Let us now determine the constraints on the structure of the charged lepton mass matrix that follow from eq. (36) and the stability assumption. Using the characterisation of stable mass matrices in Section 3, we find that a lepton mass matrix M E in the form (36) satisfies the stability assumption iff it is possible to find a k such that We can now diagonalise the matrix in eq. (37) to obtain the charged lepton contribution to the PMNS matrix. A perturbative block by block diagonalisation gives as before where Φ is a diagonal matrix of phases, R 23 (θ e 23 , φ 3 − φ 2 ) and R 12 (θ e 12 , φ 2 − φ 1 ) are the rotations necessary to bring the third row of M E (parameterised as in eq. 
(27)) in diagonal form, R 12 (θ 12 , φ ) diagonalises the 12 block after the previous two rotations have been applied, and the result holds up to corrections of relative order m 2 µ /m 2 τ . Eq. (37) gives tan θ 12 ∼ 1/k, tan θ e 23 ∼ 1, tan θ e 12 = . By combining U e in eq. (38) with U ν in eq. (33) we find a PMNS matrix in the form where Ψ is a diagonal matrix of phases. The PMNS matrix is thus again in the form found in the previous subsection (12 × 23 × 12 rotations), but now the last 12 rotation R 12 (θ e 12 , φ 2 − φ 1 ) is replaced by the combination of that rotation with the maximal 12 rotation provided by the neutrino sector where φ ν 12 is a combination of the phases in Φ ν , Ψ ν . In the absence of phases,θ 12 = π/4 ± θ e 12 . In general, . (43) The PMNS matrix is again parameterised in the way considered e.g. in ref. [12] in terms of the angles θ 12 , θ e 23 andθ 12 in eq. (40) and of the phase φ = φ =φ 12 . The angles θ 12 , θ e 23 ,θ 12 are related to the parameters of the charged lepton mass matrix in eq. (37) by The determination of the PMNS parameters in Figs. 1 therefore still applies. In particular, the determination of θ e 23 and θ 12 is still given by Fig. 1(a,c), whileθ 12 and φ are determined by Fig. 1(b,d). From Fig. 1(b,d) we see that θ e 12 = 0, corresponding toθ 12 = π/4, is 2σ away from the best fit. Note also that the rotation θ 12 in the 12 sector of M E has again the same size as the Cabibbo angle. Note that two factors, both associated to the charged lepton sector, contribute to make θ 12 different from the maximal value provided by the neutrino sector. One is the θ e 12 rotation induced by M E 31 , which makesθ 12 = π/4, and the other is the θ 12 rotation used to diagonalise the 12 block of M E 12 after the other two blocks have been diagonalised. It has been observed [12] that in the absence of the θ e 12 contribution, i.e. whenθ 12 = π/4, the θ 12 rotation alone can account for the deviation of θ 12 from π/4 only at the price of a 2σ tension (as θ 12 is constrained by θ 13 , see eqs. (45)). Here we see that this tension disappears if the independent contribution θ e 12 , induced by M E 31 , is taken into account. In such a scheme, θ 12 determines θ 13 and θ e 12 further contributes to the deviation of θ 12 from the neutrino contribution. Summarizing: • The previous rotation alone can account for the deviation of θ 12 from π/4 only at the price of a 2σ tension, with present data. On the other hand, this tension disappears if the independent contribution to θ 12 induced by a non-zero ratio = |M E 31 /M E 32 | is taken into account. Therefore, a plausible and stable texture for the charged lepton mass matrix can account at the same time for the atmospheric mixing angle, the θ 13 angle, and the deviation of the θ 12 angle from π/4. Finally, we comment on the possible origin of the texture in eq. (37). We observe that the latter is compatible with a form M E ij ∼ c ij λ c i λ j m 0 , with 0 < λ i , λ c j < 1 and |c ij | ∼ 1, provided that 1/k ∼ 0.16. Together with the experimental 2σ bound 0.13, this implies ∼ 1/k. The structure M E ij ∼ c ij λ c i λ j m 0 and the constraint det M E = m e m µ m τ then allow to rewrite eq. (37) as The previous texture is indeed in the form . It can also be written in the form M E ij ∼ c ij q c i +q j m 0 , with appropriate choice of and of the charges q i , q c i . Explicit and complete flavour models will be considered elsewhere. 
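Before summarising, a small numerical sketch of the $12 \times 23 \times 12$ structure discussed in this section may be helpful. The input angles below are hypothetical illustration values (not the result of the fit), all phases are set to zero, and the signs are pure convention choices; the point is only to exhibit the exact relation $\sin\theta_{13} = |\sin\theta_{12}\sin\theta^e_{23}|$ in this parameterisation and the two distinct contributions shifting $\theta_{12}$ away from $\pi/4$.

```python
import numpy as np

def R12(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def R23(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

# Hypothetical inputs: a Cabibbo-like 12 rotation from the 12 block of M_E,
# a large 23 rotation from its third row, and the combined last 12 rotation
# tilde(theta)_12 = pi/4 - theta^e_12 (maximal neutrino rotation shifted by
# the charged-lepton contribution induced by M^E_31). All phases set to zero.
theta_12, theta_e23, theta_e12 = -0.22, 0.74, 0.06
U = R12(theta_12) @ R23(theta_e23) @ R12(np.pi / 4 - theta_e12)

s13 = abs(U[0, 2])
s23 = abs(U[1, 2]) / np.sqrt(1.0 - s13**2)
s12 = abs(U[0, 1]) / np.sqrt(1.0 - s13**2)

print("sin(theta13) =", round(s13, 3),
      " vs |sin(theta_12) sin(theta^e_23)| =",
      round(abs(np.sin(theta_12) * np.sin(theta_e23)), 3))
print("sin^2(theta23) =", round(s23**2, 3), "  sin^2(theta12) =", round(s12**2, 3))
```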
Summary We have studied general properties and specific examples of hierarchical fermion mass matrices satisfying a "stability" assumption. The latter amounts to assuming the stability of the smaller eigenvalues with respect to small perturbations of the matrix entries. Such an assumption is equivalent to the absence of certain precise correlations, be them accidental or forced by a dynamical/symmetry principle, among the matrix entries and is therefore also motivated by the fact that no evidence of special correlations has so far emerged from data. We have found a simple and general characterisation of a stable 3 × 3 mass matrix M with eigenvalues m i , i = 1, 2, 3, in terms of products of matrix entries that proves useful for practical applications, A number of exact relations involving the minors of M obtained in the appendices show that the latter corresponds to the absence of cancellations in the expressions entering the determinants and sub-determinants of M . As an example of application of the general results, we have revisited the issue of the the charged lepton contribution to neutrino mixing and determined the structure of the charged lepton mass matrix under two assumption for the neutrino contribution: i) no contribution at all (all mixing from the charged lepton sector) and ii) it only provides a maximal θ 12 angle. In the first case, we have seen that lepton mixing can indeed all come from the charged lepton sector and that this does not need to fine-tune the value of θ 13 , as long as θ 13 m e /m µ sin θ 23 ≈ 0.03, as it turned out to be. We have also translated the present determination of the standard PMNS parameters into a determination of alternative, equivalent parameters, directly related to the charged lepton matrix entries. The latter determination also allows to determine with good accuracy the whole third row of the charged lepton mass matrix. We have also briefly discussed the possible origin of the textures we have considered. In the case in which the neutrino sector only provides a maximal 12 rotation, we have shown that present data provide a 2σ evidence for a non-vanishing 31 entry of the charged lepton mass matrix. The PMNS matrix turns out in fact to be given by a product of 12 and 23 rotations, U = 12 1 × 23 × 12 2 × 12 π/4 , where the neutrino sector only provides for the last one. Both the first and the second 12 rotations contribute to shift θ 12 from π/4. The first one is the rotation used to diagonalise the 12 block of M E after the other two blocks have been diagonalised and is directly related to θ 13 . The second one is induced by a non zero value of M E 31 /M E 32 . Sometimes only the first one is considered, with the second set to zero. In such a case, a 2σ tension arises between the value of the 12 rotation needed to account for θ 13 and the value needed to account for the deviation from θ 12 = π/4 (also due to the constraints on the phase δ). On the other hand, the tension disappears if the second 12 rotation is taken into account. In such a case, the first 12 rotation determines θ 13 and the independent second rotation further contributes to the deviation of θ 12 from π/4. This way, a plausible texture for the charged lepton mass matrix can account at the same time for the atmospheric mixing angle, the θ 13 angle, and the deviation of the θ 12 angle from π/2. 
In both cases, the left-handed rotation that diagonalises the 12 sector of M E has the same size, within errors, as the Cabibbo angle, which may be considered as a hint in support of grand-unification. Finally, independent of whether all neutrino mixing is accounted for by the charged lepton contribution or not, we have shown that the so-called "inverted order" of the 12 and 23 rotations in the charged lepton sector, U e = R 23 R 12 can also be obtained without fine-tuning (up to corrections of order m e /m µ ). A Useful results In this Appendix, we collect some results that have been used in the main text and will be used in Appendix B. Let us first define some notations. Below, M will denote a n × n generic complex matrix, possibly representing a fermion mass matrix. The matrix M can be diagonalized by using two independent unitary matrices, (p, q = 1 . . . n, a = 1 . . . p, b = 1 . . . q). If the rows and columns coincide, we also use the notation [25]. 5 A related but independent result allows to obtain combinations of p singular values through the determinant of p × p submatrices: The relation above generalizes the p = 1 result n i=1 m 2 i = n i,j=1 |M ij | 2 obtained in ref. [26]. B Proofs of the results in Section 2 We now prove the results stated in Section 2, starting from Proposition 3, whose discussion is preparatory to the proof of the other two. In the following, and in the main text, x y (x y) indicates that x < y (x > y) or x is of the same order of y, i.e. they differ by a factor of order one. Therefore, x y (x y) is equivalent to the negation of x y (x y). Moreover, a b (a b) indicates that a < b + (a b − ), with 0 < |b|. . (52) As the quantities Π p can be profitably calculated in terms of the determinant of sub-matrices (eq. (51)), let us first determine the relation among the sub-determinants of M ,M (ij) ,M (ij) . The relation depends on whether the sub-matrix includes the row i and the column j. Accordingly, we have (for convenience, we fix i, j and drop the suffix (ij) inM ,M ,Π p ,Π p ) In the above equations, all i 1 . . . i p are different from i and all j 1 . . . j p different from j. Let us begin proving that 2 ⇒ 1. Using eq. (51) one finds where (55) For p = 1, eq. (54) should be interpreted as where k is a positive number of order one and φ is a phase chosen in such a way that 2 Re[e iθ v * α w α ] = 0 in eq. (54). Then |∆M ij | |M ij |, but eq. (54) gives which would contradict the assumption. Let us finally prove that 3 ⇒ 2. This can be done by observing thatΠ p Π p implieŝ where we have used eqs. (53) to obtain the first equality. This proves point 2. B.2 Proof of Proposition 2 For convenience, we remind that this proposition characterises as follows the stability of matrices M with dimension n ≤ 3: Let us start observing that for p = 1 (any n) eq. (56) gives This proves in particular that M is always stable for n = 1. Given what above, for n = 2 we just need to consider the case p = 2. In general, for p = n, eq. (54) gives B.3 Proposition 2 cannot be extended to n = 4 As mentioned in the text, the characterisation in Proposition 2 cannot be extended to the case n = 4. For example, not all n = 4 hierarchical matrices satisfying the stability assumption satisfy |M 1i M 2j M 3k M 4l | m 1 m 2 m 3 m 4 for all ijkl permutations of 1234. This is the case for example of the matrix in eq. (73). where M hk is the matrix element opposite to M ij in M . Therefore eq. (76) implies eq. (64), which implies that M is stable. This proves the viceversa for n = 2. 
Finally, we need to prove that the viceversa is not true for n = 3. This is illustrated by the following Example. one can see that eq. (76) is verified. On the other hand, M does not satisfy the stability assumption because M 12 M 33 m 2 m 3 , which contradicts eq. (65a). C Ordering rows and columns In this Appendix we discuss the results on the ordering of rows and columns of a 3 × 3 hierarchical mass matrix M mentioned in Section 3. Let us first consider a hierarchical matrix M that does not necessarily satisfy the stability assumption. The following lemma proves useful to discuss this case. Lemma (ordering for unitary matrices). Given a 3 × 3 unitary matrix U , it is possible to permute its columns (rows) in such a way that Moreover, it is not possible to set more stringent general bounds: for any > 0 there exists a unitary matrix U for which it is not possible to find an ordering such that |U 33 | ≥ 1/ √ 3 + and |det U [23] | ≥ 1/ √ 6 + . Using the previous Lemma, we can show the following proposition. • The matrix entries must satisfy |M ij | ≤ m 3 , |M ih M jk | m 2 m 3 , |M ih M jk M lm | m 1 m 2 m 3 when rows and columns are all different. • The possible structures can be classified by the position of the entries complementary (i.e. with no common row or column) to the 2 × 2 unsuppressed sub-determinants, which by eq. (70) are not much larger than m 1 . All remaining 2 × 2 sub-determinants must be suppressed with respect to m 2 m 3 . • Suppose only the 2 × 2 sub-matrices in the last two rows have unsuppressed determinants and let us consider the two sub-matrices that include the third column elements M 23 and M 33 , M [23][i3] , i = 1, 2. At least one of the two must have |det M [23][i3] | 1/ √ 6. The latter statement can be shown by observing that if |det M [23] In this appendix we briefly present, as a proof of existence, an abelian flavor model which realises the case in which the neutrino mass matrix is diagonal and the lepton mixing arises from the charged lepton sector, closely related to the one presented in Appendix A of ref. [16], albeit with no need of introducing extra messenger fields. We do so in the context of a supersymmetric SU(5) grand unified theory. We introduce a flavor symmetry The relevant field content, as well as charge assignment is given by The effective superpotential at low energy can be written as W = y ij 10 i 10 j 5 H + η ij 10 i5j5H + c ij Λ (5 i 5 H )(5 j 5 H ), where Λ is a high mass scale related to the flavor dynamics and the other couplings are adimensional and include suitable powers of θ i /Λ ∼ λ 1 (for simplicity, all vev are assumed to be of the same order) in order to make each term invariant under the symmetry F . This fixes the up-type quark mass matrix to be M u ∼ŷ 5 H λ 2   λ 7 λ 6 λ 4 λ 6 λ 5 λ 3 the charged lepton one to be and finally the neutrino masses are diagonal and with inverted ordering, proportional to Above we defined the O(1) parametersŷ ij ,η ij andĉ i . Notice that in eq. (87) we reproduced the mass matrix of eq. (32). Finally, let us point out that the only symmetries necessary in order to reproduce the texture of eq. (87) in the charged lepton sector (albeit with a different overall scaling with λ) are the first two U(1) factors, U(1) F 0 × U(1) F 1 , and the only flavons necessary are θ 0 , θ 1 and θ 2 , with the same charges as specified above.
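As a numerical cross-check of the relation quoted in Appendix A between p × p sub-determinants and products of p singular values, the sketch below verifies one standard identity of this type, which may correspond to the equation elided in the extracted text: the sum of the squared absolute values of all p × p minors equals the elementary symmetric polynomial of the squared singular values, reducing for p = 1 to the quoted result Σ m i² = Σ |M ij|². This is an illustrative verification, not part of the original derivation.

```python
# Illustrative check (not from the paper): the sum of squared p x p minors of a
# matrix equals the elementary symmetric polynomial of its squared singular
# values; for p = 1 this is the familiar sum_i m_i^2 = sum_ij |M_ij|^2.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
m2 = np.linalg.svd(M, compute_uv=False) ** 2        # squared singular values

for p in range(1, n + 1):
    minors = sum(abs(np.linalg.det(M[np.ix_(rows, cols)])) ** 2
                 for rows in itertools.combinations(range(n), p)
                 for cols in itertools.combinations(range(n), p))
    sym = sum(np.prod([m2[i] for i in idx])
              for idx in itertools.combinations(range(n), p))
    print(f"p={p}: sum of squared minors = {minors:.6f}, "
          f"elementary symmetric poly of m_i^2 = {sym:.6f}")
```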
10,455.2
2014-09-12T00:00:00.000
[ "Physics" ]
AN ALTERNATIVE REPARAMETRIZATION FOR THE WEIGHTED LINDLEY DISTRIBUTION Recently, [12] introduced a generalization of a one parameter Lindley distribution and named it as a weighted Lindley distribution. Considering this new introduced weighted Lindley distribution, we propose a reparametrization on the shape parameter leading it to be orthogonal to the other shape parameter. In this alternative parametrization, we get a direct interpretation for this transformed parameter which is the mean survival time. For illustrative purposes, the weighted Lindley distribution on the new parametrization is applied on two real data sets. The one parameter Lindley distribution and its generalized form are fitted for the considered data sets. INTRODUCTION A non negative random variable T follows the two-parameter weighted Lindley distribution, [12], with shape parameters μ > 0 and β > 0 if its probability density function is given by: where t > 0 and (β) = ∞ 0 t β−1 e −t dt is the gamma function.From (1), the corresponding survival and hazard functions, are given, respectively, by: and where (a, b), a > 0 and b ≥ 0, is the upper incomplete gamma function (see, [29]), defined as ∞ b t a−1 e −t dt.In (1), taking the shape parameter β = 1 we have the one parameter Lindley distribution as a special case.The one parameter Lindley distribution was introduced by Lindley (see, Lindley 1958 and1965) as a new distribution useful to analyze lifetime data, especially in applications modeling stress-strength reliability.[13] studied the properties of the one parameter Lindley distribution under a careful mathematical approach.These authors also showed, in a numerical example, that the Lindley distribution usually gives better fit for the data when compared to the standard Exponential distribution.A generalized Lindley distribution, which includes as special cases the Exponential and Gamma distributions was introduced by [36].Ghitany and Al-Mutari (2008) considered a size-biased Poisson-Lindley distribution and [31] introduced the Poisson-Lindley distribution to model count data.Some properties of the Poisson-Lindley distribution, its derived distributions and some mixtures of this distribution were studied by [5,6,24].A zero-truncated Poisson-Lindley was considered in [10].A study on the inflated Poisson-Lindley distribution was presented in [7] and the Negative Binomial-Lindley distribution was introduced in [37].The one parameter Lindley distribution in the competing risks scenario was considered in [26]. Since the standard one parameter Lindley distribution does not provide enough flexibility to analyze different types of lifetime data, the two-parameter weighted Lindley distribution could be a good alternative in the analysis of lifetime data.A nice feature of the two-parameter weighted Lindley distribution is that its hazard function has a bathtub form for 0 < β < 1 and it is increasing for β ≥ 1, for all μ > 0. 
It is important to point out, that in the last years, several distributions have been introduced in the literature to model bathtub hazard functions but in general these distributions have three or more parameters usually depending on numerical methods to find the maximum likelihood estimates which could be, in general, not very accurate.In this case good reparametrizations with less parameters could be very useful in applications.A comprehensive review of the existing know distributions that exhibit bathtub shape is provided in [30,17,3,28].In addition to the weighted Lindley distribution, that can be used to model bathtub-shaped failure rate, we also could consider as alternatives, four other two-parameter distributions introduced in the literature [18,8,15,35] with this behavior. The main goal of this paper is to propose an alternative parametrization for the one shape parameter of the weighted Lindley distribution.In the proposed parametrization, we get the new parameter orthogonal to the other shape parameter where this new reparametrized form of the parameter gives the mean survival time.The obtained orthogonality of the reparametrized form of the parameter is related to the observed Fisher information [2].Orthogonal parameters have many advantages in the inference results as, for example, for large sample sizes we have independence among the maximum likelihood of the orthogonal parameters, since the Fisher information matrix is diagonal.Other advantage of orthogonal parameters is related to the conditional likelihood approach ( The paper is organized as follows.In Section 2 the likelihood function for the two-parameter weighted Lindley distribution is formulated where we also present the proposed orthogonal reparametrization.Two examples considering real data sets are provided in Section 3 where its observed that the weighted Lindley distribution gives better fit for the data when compared to the one-parameter Lindley distribution and the generalized Lindley distribution.Some conclusions are presented in Section 4. THE LIKELIHOOD FUNCTION Let t = (t 1 , . . ., t n ) be a realization of the random sample T = (T 1 , . . ., T n ), where T 1 , . . ., T n are i.i.d.(identically independent distribution) random variables according to a two-parameter Lindley distribution, with shape parameters μ > 0 and β > 0. From (1) the likelihood function can be written as: where dy is the gamma function.From (4), the loglikelihood function for μ and β, l (μ, β | t), is given by: where T 1 = n i=1 log (t i ) and T 2 = n i=1 log (1 + t i ).Differentiating (5) with respect to μ and β and setting the results equal to zero we have: where is the digamma function.The maximum likelihood estimates, μ and β, for μ and β, respectively, are obtained by solving equations ( 6) and ( 7) in μ and β, respectively. From (6), the maximum likelihood estimate for μ is obtained as a function of β, μ (β), given by: In this way, replace μ in (7) by μ (β) given by (8), which leads to an equation with only one variable β.After choose an initial value for β, use standard Newton-Raphson algorithm to find the maximum likelihood estimator for β.With the obtained maximum likelihood estimator for β get the maximum likelihood estimator for μ using equation (8). 
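The estimation procedure just described can be prototyped in a few lines. The sketch below is a generic numerical version rather than a reproduction of eqs. (6)-(8), which are not fully legible in the extracted text: it maximises the log-likelihood of eq. (5) directly over (μ, β) with a quasi-Newton optimiser, which reaches the same stationary point as the profile/Newton-Raphson scheme described above when both converge.

```python
# Hedged sketch: direct numerical maximum-likelihood fit of the weighted
# Lindley distribution, as an alternative to the profile/Newton-Raphson
# scheme described in the text. Assumes the density of ref. [12].
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def negloglik(params, t):
    log_mu, log_beta = params                 # optimise on the log scale (positivity)
    mu, beta = np.exp(log_mu), np.exp(log_beta)
    n = t.size
    ll = (n * ((beta + 1) * np.log(mu) - np.log(mu + beta) - gammaln(beta))
          + (beta - 1) * np.sum(np.log(t))
          + np.sum(np.log1p(t))
          - mu * np.sum(t))
    return -ll

def fit_weighted_lindley(t):
    start = np.log([1.0 / np.mean(t), 1.0])   # crude starting values
    res = minimize(negloglik, start, args=(t,), method="BFGS")
    mu_hat, beta_hat = np.exp(res.x)
    return mu_hat, beta_hat, -res.fun

# usage with made-up data (replace with the guinea-pig or conductor data)
t = np.random.default_rng(3).gamma(shape=2.0, scale=3.0, size=72)
mu_hat, beta_hat, loglik = fit_weighted_lindley(t)
print(f"mu_hat = {mu_hat:.4f}, beta_hat = {beta_hat:.4f}, logL = {loglik:.2f}")
```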
Based on a single observation, the observed information matrix, I (μ, β), is given by: where ψ (β) = d 2 dβ 2 log (β), and the terms in the (2 × 2) observed Fisher information matrix (9) are obtained from the second derivatives given by, and The maximum likelihood estimates for μ and β have asymptotic bivariate normal distribution with mean (μ, β) and variance-covariance matrix given by the inverse of the Fisher Information matrix (9) locally at the maximum likelihood estimates μ and β.Since the data is independent, the information matrix ( 9) is equal to the expected information matrix. In this paper we propose to reparametrize the two-parameter weighted Lindley distribution such that (μ, β) is transformed to (θ, β), where: where θ > 0 is the mean of the weighted Lindley distribution with parameters μ and β and 2θ . Using the construction method of orthogonality parameters, proposed in [9], and from (9) we observe that μ is obtained as solution of the following orthogonality differential equation: In this new parametrization we have that the maximum likelihood estimate for θ is given by θ = n −1 n i=1 t i and Cov θ, β = 0.The orthogonality between θ and β implies that the information matrix is asymptotically diagonal which implies that the the maximum likelihood estimates θ and β are asymptotically independent.The orthogonality simplify the parameters estimation process and its interpretation.For the weighted Lindley distribution the parameter interpretation in the orthogonal parametrization is obvious since θ is the mean time to failure.Further orthogonality consequences are pointed out in [9]. APPLICATIONS In this section we fit the two-parameter weighted Lindley distribution (WL) to two real data sets.For comparative purposes we also have considered two alternative models: (L): the one parameter Lindley distribution, f (t | μ) = μ 2 μ+1 (1 + t ) e −μt , and (GL): the generalized Lindley distribution, [36].The first data set was reported by [4], and employed by [14] among others, represents the survival times (in days) of 72 guinea pigs infected with virulent tubercle bacilli, regimen 4.3.The regimen number is the common log of the number of bacillary units in 0.5 ml of challenge solution.The second data set was extracted from [33], see also [25], representing hours to failure of 59 test conductors of 400-micrometer length.All specimens ran to failure at a certain high temperature and current density.The 59 specimens were all tested under the same temperature and current density. Table 1 list for the two data sets and models L, WL and GL the maximum likelihood estimates and their standard errors.For comparative purposes the estimates are also presented in the original parameterization and were obtained using SAS/NLMIXED procedure, [32], by applying the Newton-Raphson algorithm.For the WL model, in the orthogonal parameterization, we have θ = t = 176.82(data set 1) and θ = t = 6.98 (data set 2).The standard errors are given, respectively, by 11.86 and 0.21.In Table 2 are listed standard model selection measures: −2 × log-likelihood, AI C (Akaike's Information Criterion, [1]) and B I C (Schwarz's Bayesian Information Criterion, [34]).From the values of these statistics we conclude that the two parameter Lindley distribution provides a better fit for the data sets when compared to the two alternative models.For the WL model the obtained estimates for θ are respectively given by: 176.82 (data set 1) and 6.98 (data set 2).The standard errors are given by, 11.86 and 0.21, respectively. 
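Working in the orthogonal (θ, β) parametrization numerically only requires the inverse map from (θ, β) back to μ. Since θ is the mean, μ can be recovered by root-finding without relying on the closed-form expression, which is not legible in the extracted text. A minimal sketch follows (the mean is computed here by numerical integration, purely for illustration); the resulting relative log-likelihood surface can then be contoured as in the figure discussed next.

```python
# Hedged sketch: evaluate the weighted Lindley likelihood in the orthogonal
# (theta, beta) parametrization by inverting theta = E[T | mu, beta] numerically.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gammaln

def wl_pdf(t, mu, beta):
    return np.exp((beta + 1) * np.log(mu) - np.log(mu + beta) - gammaln(beta)
                  + (beta - 1) * np.log(t) + np.log1p(t) - mu * t)

def wl_mean(mu, beta):
    return quad(lambda t: t * wl_pdf(t, mu, beta), 0, np.inf)[0]

def mu_from_theta(theta, beta):
    # the mean is decreasing in mu, so the bracket below contains the root
    return brentq(lambda mu: wl_mean(mu, beta) - theta, 1e-3, 1e3)

def loglik_theta(theta, beta, t):
    mu = mu_from_theta(theta, beta)
    return np.sum(np.log(wl_pdf(t, mu, beta)))

# usage: theta_hat is just the sample mean; evaluate loglik_theta on a
# (theta, beta) grid to draw relative-likelihood contours
t = np.random.default_rng(5).gamma(2.0, 3.0, size=59)   # placeholder data
theta_hat = t.mean()
print("theta_hat =", theta_hat,
      " logL at (theta_hat, beta=1.5):", loglik_theta(theta_hat, 1.5, t))
```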
For illustrative purposes, we present in Figure 1 the 50%, 90% and 95% likelihood contour plots in the original and the proposed orthogonal parametrization. In contrast to panels (b, data set 1) and (d, data set 2), the orientation of the contours in panels (a, data set 1) and (c, data set 2) reveals a high positive correlation between β and μ. In panels (b) and (d) the axes of the elliptical contours are parallel to the coordinate axes, indicating that the correlation is essentially zero. This is expected, since θ and β are estimated independently. These contours were built using the procedure described in [16] and also presented in [27]. In Table 3 we present, for both data sets, the p-values of the Kolmogorov-Smirnov (K-S) and Anderson-Darling (A-D) goodness-of-fit statistics. From these results, it is clear that the WL distribution provides a good fit to the given data sets. We also consider the Log-Normal distribution (LN) in the data analysis, since this distribution was considered by [14] (data set 1) and by [25] (data set 2). CONCLUDING REMARKS In this paper we introduced an alternative parametrization for the shape parameter of the weighted Lindley distribution (WL) introduced by [12], which generalizes the one-parameter Lindley distribution. In the proposed parametrization, the new parameter has a direct interpretation and is orthogonal to the shape parameter. In recent years, the Lindley distribution has been considered in several applications as an alternative lifetime model, and its generalization, the weighted Lindley distribution under the orthogonal parametrization, could be another good alternative for modeling lifetime data. We fitted the WL distribution to two real data sets and compared the results with those of the L and GL distributions, which showed the great potential of the WL distribution. Figure 1 - (a, c): Contour plot for the joint relative likelihood function of β and μ. (b, d): Contour plot for the joint relative likelihood function of β and θ.
2,634.8
2016-08-01T00:00:00.000
[ "Mathematics" ]
Observation of a $J/\psi\Lambda$ resonance consistent with a strange pentaquark candidate in $B^-\to J/\psi\Lambda\bar{p}$ decays An amplitude analysis of $B^-\to J/\psi\Lambda\bar{p}$ decays is performed using about 4400 signal candidates selected on a data sample of $pp$ collisions recorded at center-of-mass energies of 7, 8 and 13 TeV with the LHCb detector, corresponding to an integrated luminosity of 9 fb$^{-1}$. A narrow resonance in the $J/\psi\Lambda$ system, consistent with a pentaquark candidate with strangeness, is observed with high significance. The mass and the width of this new state are measured to be $4338.2\pm 0.7\pm 0.4$MeV and ${7.0\pm1.2\pm1.3}$MeV, where the first uncertainty is statistical and the second systematic. The spin is determined to be $1/2$ and negative parity is preferred. Due to the small $Q$-value of the reaction, the most precise single measurement of the $B^-$ mass to date, $5279.44\pm0.05\pm0.07$MeV, is obtained. The B − → J/ψΛp decay offers the unique opportunity to simultaneously search for P N ψ − and P Λ ψs 0 pentaquark candidates in the J/ψp and J/ψΛ systems, respectively.In particular, the phase space available in the decay allows searches for pentaquark candidates located close to different baryon-meson thresholds, such as Λ + c D 0 for P N ψ + , and Λ + c D − s , Ξ + c D − for P Λ ψs 0 candidates.Neither the P Λ ψs (4459) 0 state, found in the Ξ − b → J/ψΛK − decay [6], nor the P N ψ (4337) + state, found in the B 0 s → J/ψpp decays [5], is accessible with the present analysis since they are outside of the available phase space. The small Q-value of the decay, approximately (natural units with ℏ = c = 1 are used throughout this Letter) 128 MeV, provides excellent mass resolution, allowing searches for narrow resonant structures.In addition, efficient reconstruction of low momentum tracks can improve sensitivity to resonance structures near threshold.This decay was previously studied by the CMS collaboration using a sample of 450 ± 20 signal candidates and the invariant mass distributions of the J/ψΛ, J/ψp, Λp systems were found to be inconsistent with the pure phase-space hypothesis [16].In this Letter, an amplitude analysis of the B − → J/ψΛp decay is performed using signal candidates selected on a data sample of pp collisions at centre-of-mass energies of 7 TeV and 8 TeV (Run 1), and 13 TeV (Run 2), recorded between 2011 and 2018 by the LHCb detector, corresponding to an integrated luminosity of 9 fb −1 .In the following, the first observation of a P Λ ψs 0 pentaquark candidate with strangeness in the J/ψΛ system is reported, which is different from the P Λ ψs (4459) 0 state found in the Ξ − b → J/ψΛK − decay [6].The LHCb detector is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, described in detail in Refs.[17][18][19][20].The online event selection is performed by a trigger [21], comprising a hardware stage based on information from the muon system which selects J/ψ → µ + µ − decays, followed by a software stage that applies a full event reconstruction.The software trigger relies on identifying J/ψ decays into muon pairs consistent with originating from a B-meson decay vertex detached from the primary pp collision point. Samples of simulated events are used to study the properties of the signal mode decay B − → J/ψΛ(→ pπ − )p and the control-mode decay The latter are used to calibrate the distributions of simulated B − decays with data. 
The pp collisions are generated using Pythia [22] with a specific LHCb configuration [23].Decays of hadronic particles and interactions with the detector material are described by EvtGen [24], using Photos [25], and by the Geant4 toolkit [26][27][28], respectively.The signal and the control-mode decays are generated from a uniform phasespace distribution. Signal B − candidates are formed from combinations of J/ψ, Λ and p candidates originating from a common decay vertex.The J/ψ candidates are formed from pairs of oppositely charged tracks identified as muons and originating from a decay vertex significantly displaced from the associated pp primary vertex (PV).The associated PV for a given particle is the PV with the smallest impact parameter χ 2 IP , defined as the difference in the vertex-fit χ 2 of a given PV reconstructed with and without the particle under consideration.The Λ → pπ − candidates are formed from pairs of oppositely charged tracks and selected in two different categories according to the Λ decay position: i) the "long" category for early decays that allow the proton and pion candidates to be reconstructed in the vertex detector; ii) the "downstream" category for Λ baryons that decay outside the vertex detector and are reconstructed in the tracking stations only.The long candidates have better mass, momentum and vertex resolution than downstream candidates.The p candidate is a charged track identified as an antiproton. A kinematic fit [29] to the B − candidate is performed with the dimuon and the pπ − masses constrained to the known J/ψ and Λ masses, respectively [30].Simulated events are weighted such that the distributions of transverse momentum (p T ) and number of tracks per event for B − candidates match the B − → J/ψK * (892) − control-mode distributions in data.In simulation, the particle identification (PID) variables for each charged track are resampled as a function of their p, p T and the number of tracks in the event using Λ + c → pK − π + and D * + → D 0 (→ K − π + )π + calibration samples from data [31].The final stage of the selection uses multivariate techniques trained with simulation and data.Separate boosted-decision-tree (BDT, [32]) classifiers are employed for the four combinations of two data-taking periods (Run 1 and Run 2) and two signal categories, using long and downstream reconstructed Λ candidates.Each BDT is trained on simulated signal decays and data sidebands, with the m(J/ψΛp) invariant mass in the range [5320,5360] MeV.The variables used as input to the BDT are the p T , the decay length significance, the angle between the momentum and the flight direction and the χ 2 IP variable of the B − candidate; the χ 2 probability from the kinematic fit of the candidate; the sum of the χ 2 IP of the daughter particles; the angle between the momentum and the flight direction, the χ 2 of the flight distance (only for long category candidates), the χ 2 IP variables of the Λ candidate, and the hadron PID for the p candidate from the ring-imaging Cherenkov detectors. The BDT output selection criterion is chosen as in Ref. 
[5] by maximising the figure of merit S 2 /(S + B) 3/2 to obtain both high signal purity and significance, where S and B are the signal and background yield in a region of ±5.3 MeV around the known B − mass.To avoid a possible bias due to fluctuations of the signal yield, S is determined from a fit to the J/ψΛp invariant-mass distribution in data after applying a loose BDT selection, multiplied by the efficiency of the BDT output requirement obtained from simulation.Similarly, B is extracted from a fit to sideband data. For candidates passing all selection criteria, a maximum-likelihood fit is performed to the m(J/ψΛp) distribution shown in Fig. 1, resulting in a signal yield of 4620 ± 70.For the amplitude analysis about 4400 signal candidates are retained, with a purity of 93.0% in the signal region of ±2.5σ around the mass peak, where σ ≈ 2.1 MeV is the mass resolution.The signal distribution is modelled by the sum of a Johnson function [33] and two Crystal Ball [34] functions sharing the same mean and width parameters determined from the fit.The tail parameters and fractions of each signal component are fixed to values obtained from a fit to simulated events.The background contribution is mainly due to random combinations of charged particles in the event and is described by a third-order Chebyshev polynomial. The Dalitz distribution of the reconstructed B − candidates in the signal region is shown in Fig. 2, where a horizontal band in the region around 18.8 GeV 2 in the m 2 (J/ψΛ) distribution is present.Some structure in the high m 2 (J/ψp) spectrum is also present.This Letter investigates the nature of these enhancements. An amplitude analysis of the B − candidates in the signal region is performed using a phenomenological model based on the interference of two-body resonances in the three decay chains, J/ψK * − (→ Λp), ΛP N − ψ (→ J/ψp), and pP Λ ψs 0 (→ J/ψΛ), labelled as the K * − , P N ψ − and P Λ ψs 0 chains, respectively.The angular information of the subsequent J/ψ → µ + µ − and Λ → pπ − decays are taken into account in all cases.The decay amplitudes are based on helicity formalism [35] with CP symmetry enforced, and follow the prescriptions in Ref. [36] for the spin alignment of the different decay chains.Details about the decay amplitude definition are given in the Supplemental material [37]. 
The decay amplitudes are defined as a function of the six-dimensional phase space of the B − decay, (m Λp , Ω) described by the combined invariant mass m Λp of the p and Λ pairs, and by five angular variables indicated as Ω: the cosine of the helicity angle, cos θ K * (cos θ J/ψ ) of the Λ (µ − ) in the Λp (J/ψ) rest frame, the azimuthal angle φ p (φ µ − ) of the p (µ − ) in the rest frame of the Λ (J/ψ), and the cosine of the helicity angle cos θ Λ , of the p in the rest frame of the Λ.The amplitude fit to determine the model parameters ω, i.e., the couplings, the masses, the widths, and lineshape parameters of different contributions, is performed by minimising the negative log-likelihood function, where P sig (P bkg ) is the probability density function (PDF) for the signal (background) component of the ith event, and β = 0.07 ± 0.01 is the fraction of background candidates in the signal region.The signal PDF is proportional to the squared decay amplitude |M(m Λp , Ω| ω)| 2 , and accounts for the phase-space element Φ(m Λp ) and the reconstruction efficiency ǫ(m Λp , Ω), The denominator I( ω) normalizes the probability.The background PDF P bkg is parameterized according to a six-dimensional phase-space function based on Legendre polynomials, whose coefficients are determined from the m(J/ψΛp) region [5200, 5250] ∪ [5340, 5350] MeV.Similarly, the reconstruction efficiency is parameterized using Legendre polynomials with coefficients determined using simulated phase-space signal decays. No well-established resonances are expected to decay into the J/ψΛ and J/ψp final states.However, excited K − resonances decaying outside of the phase space of the B − → J/ψΛp decay can contribute to the Λp channel [16].A fit including only NR contributions and K * 4 (2045) − , K 2 (2250) − , and K 3 (2320) − resonant amplitudes does not reproduce the data distribution.A χ 2 /n.d.f. of 123.2/46 is obtained, where the χ 2 is calculated as the largest value over the six one-dimensional fit projections and the number of degrees of freedom (n.d.f.) is extracted from pseudoexperiments by fitting the tail of the χ 2 max distribution.The simplest and most effective amplitude model used to fit the data, indicated as the nominal model in the following, comprises a narrow J/ψΛ structure with spin-parity J P = 1/2 − , whose mass and width are extracted from the amplitude fit, and two nonresonant (NR) contributions, one with J P = 1 − for the Λp system and a second one with J P = 1/2 − for the J/ψp, referred to as NR(Λp) and NR(J/ψp), respectively.The J/ψΛ resonance is modelled with a relativistic Breit-Wigner function as discussed in the Supplemental material [37]. 
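A minimal numerical sketch of such a lineshape is given below. It implements a generic relativistic Breit-Wigner with a mass-dependent width and Blatt-Weisskopf barrier factors (the textbook forms, consistent with the description in the Supplemental material); the masses are the known J/ψ and Λ masses, and the resonance parameters are the measured central values, used here purely for illustration rather than taken from the analysis code.

```python
# Hedged sketch: generic relativistic Breit-Wigner with mass-dependent width
# and Blatt-Weisskopf barrier factor, evaluated for a J/psi-Lambda resonance.
# Masses in MeV; l is the orbital angular momentum in the resonance decay.
import numpy as np

M_JPSI, M_LAMBDA = 3096.9, 1115.7

def breakup_momentum(m, m1, m2):
    """Two-body breakup momentum q for a system of mass m -> m1 m2."""
    s = m * m
    return np.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2.0 * m)

def blatt_weisskopf(q, q0, d, l):
    """B'_l(q, q0, d) for l = 0, 1, 2 (d is the hadron-size parameter, 1/MeV)."""
    z, z0 = (q * d) ** 2, (q0 * d) ** 2
    if l == 0:
        return 1.0
    if l == 1:
        return np.sqrt((1 + z0) / (1 + z))
    if l == 2:
        return np.sqrt((9 + 3 * z0 + z0 ** 2) / (9 + 3 * z + z ** 2))
    raise ValueError("only l <= 2 implemented in this sketch")

def rel_breit_wigner(m, m0, gamma0, m1, m2, l=0, d=3e-3):
    q, q0 = breakup_momentum(m, m1, m2), breakup_momentum(m0, m1, m2)
    gamma = gamma0 * (q / q0) ** (2 * l + 1) * (m0 / m) * blatt_weisskopf(q, q0, d, l) ** 2
    return 1.0 / (m0 ** 2 - m ** 2 - 1j * m0 * gamma)

# usage: intensity across the accessible m(J/psi Lambda) range for the measured
# P(4338) parameters (S-wave decay, as allowed for J^P = 1/2-)
m = np.linspace(M_JPSI + M_LAMBDA + 1.0, 4360.0, 300)
amp = rel_breit_wigner(m, m0=4338.2, gamma0=7.0, m1=M_JPSI, m2=M_LAMBDA, l=0)
print("peak position ~", m[np.argmax(np.abs(amp) ** 2)], "MeV")
```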
The couplings are defined in the LS basis, both for the B − → XR process, and for the R → Y Z process, where X, Y , and Z are the final state particles, and R = K * − , P N ψ − , and P Λ ψs 0 is the decay chain under consideration.Here, L indicates the decay orbital angular momentum, and S is the sum of the spins of the decay products.In the nominal model, L = 0 is used for the production and decay of the narrow J/ψΛ resonance, while L = 0, 1, 2 and L = 0, 2 are used in the NR(Λp) system for the production and decay, respectively, and L = 0 and L = 1 in the NR(J/ψp) system.Because of the small Q-value of the decay, higher values of the orbital momentum are suppressed.Fixing the lowest orbital momentum couplings for the NR(J/ψp) as the normalization choice reduces the number of free parameters to 16: the mass, the width, and the complex coupling of the P Λ ψs 0 resonant contribution, four complex couplings for the NR(Λp) contribution, and a complex coupling and two parameters for the second-order polynominal parameterization of the lineshape for the NR(J/ψp) contribution. A null-hypothesis model is used to test the significance of the P Λ ψs 0 state, which comprises only two NR contributions.The fit results for the nominal and the null-hypothesis model are shown in Fig. 3.The null-hypothesis model does not describe the data, with a corresponding χ 2 max /n.d.f.= 120.8/47.Using the nominal model, a good fit to data was obtained with a χ 2 max /n.d.f.= 55.3/51 and a p-value of 0.51 computed by counting the number of pseudoexperiments above the value of χ 2 max observed in data.A new narrow J/ψΛ structure is observed with high significance in the nominal fit to data.Using Wilks's theorem, a statistical significance exceeding 15σ is estimated from the value of −2∆ log L = 243 of the null-hypothesis model with respect to the nominal model.The mass and width of the new pentaquark candidate are measured to be M P Λ ψs = 4338.2± 0.7 MeV and Γ P Λ ψs = 7.0 ± 1.2 MeV, respectively, where the uncertainties are statistical only.This represents the first observation of a strange pentaquark candidate with minimal quark content ccuds. 
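The conversion between the LS couplings used above and helicity couplings is the standard Jacob-Wick relation, written out in the Supplemental material: H_{λB λC} = Σ_{L,S} B_{LS} √((2L+1)/(2J_A+1)) ⟨L 0; S δ | J_A δ⟩ ⟨s_B λB; s_C −λC | S δ⟩ with δ = λB − λC. The sketch below evaluates it with sympy's Clebsch-Gordan coefficients for a spin-1/2 resonance decaying to J/ψ (spin 1) and Λ (spin 1/2); the B_{LS} values are arbitrary placeholders, not fit results.

```python
# Hedged sketch: relate LS couplings B_{LS} to helicity couplings for a
# spin-1/2 resonance decaying to J/psi (spin 1) and Lambda (spin 1/2),
# using the standard Jacob-Wick relation with Clebsch-Gordan coefficients.
from sympy import Rational, sqrt
from sympy.physics.quantum.cg import CG

J_A = Rational(1, 2)                  # resonance spin
s_B, s_C = 1, Rational(1, 2)          # J/psi and Lambda spins

def helicity_coupling(lam_B, lam_C, B_LS):
    """Sum over (L, S) of B_LS * sqrt((2L+1)/(2J_A+1)) * <L 0 S d|J_A d> * <s_B lB s_C -lC|S d>."""
    delta = lam_B - lam_C
    total = 0
    for (L, S), B in B_LS.items():
        cg1 = CG(L, 0, S, delta, J_A, delta).doit()
        cg2 = CG(s_B, lam_B, s_C, -lam_C, S, delta).doit()
        total += B * sqrt((2 * L + 1) / (2 * J_A + 1)) * cg1 * cg2
    return total

# placeholder LS coupling for an S-wave decay (L = 0 forces S = 1/2 here)
B_LS = {(0, Rational(1, 2)): 1}
for lam_B in (1, 0, -1):
    for lam_C in (Rational(1, 2), Rational(-1, 2)):
        if abs(lam_B - lam_C) <= J_A:          # |lambda_B - lambda_C| <= J_A
            print(lam_B, lam_C, helicity_coupling(lam_B, lam_C, B_LS))
```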
Alternative models are considered for systematic studies. To assess the contribution of a P N ψ − pentaquark candidate, a relativistic Breit-Wigner function is used for the m(J/ψp) lineshape instead of a 2nd order polynomial function. The value of −2∆ log L = 80 obtained with respect to the nominal fit indicates that the NR(J/ψp) contribution is preferred over the hypothesis of a P N ψ − candidate, while consistent results for the P Λ ψs (4338) 0 state parameters are obtained. The contribution of a second narrow P Λ ψs 0 resonance is added to the nominal model to parametrize the m(J/ψΛ) distribution close to the Λ + c D − s threshold at 4255 MeV, and found to not be statistically significant. Using the CL s method [38], an upper limit on the P Λ ψs (4255) 0 fit fraction is set to 8.7% at a 95% confidence level. To determine the J P assignments, all 16 combinations of J P = 1/2 ± , 3/2 ± are studied for the P Λ ψs (4338) 0 and NR(J/ψp) spin-parity hypotheses, and those with −2∆ log L > 9 with respect to the nominal fit are discarded. For the P Λ ψs (4338) 0 state, the J P = 3/2 ± hypotheses are discarded, the J P = 1/2 − assignment is preferred, while the J P = 1/2 + is excluded at a 90% confidence level using the CL s method [38]. Systematic uncertainties are evaluated on the mass and the width of the new pentaquark candidate, and on the fit fractions of the P Λ ψs (4338) 0 , NR(J/ψp), and NR(J/ψΛ) contributions. The uncertainties are summarised in Table 1 and are summed in quadrature for the total contribution. For each systematic uncertainty, an ensemble of 1000 pseudoexperiments, generated according to the nominal model with the same statistics as in data, is fitted with an alternative configuration that is representative of the systematic effect. The uncertainty on each parameter is determined as the mean value of the difference between the fit results of the nominal and the alternative models. The main contributions are related to the model for the decay amplitude, the bias of the fitting procedure, and the uncertainty on the reconstruction efficiency ǫ(m pΛ , Ω). For the amplitude model, the nominal value of the hadron radius for the Blatt-Weisskopf coefficients [39] is assumed to be 3 GeV −1 and varied to 1 and 5 GeV −1 , taking the largest effect as a systematic uncertainty. Additional LS couplings are considered with respect to the nominal model, in particular, the L, S = 1, 1 (L, S = 2, 3/2) coupling for the production (decay) of the P Λ ψs (4338) 0 contribution, and the L, S = 1, 1 coupling for the NR(J/ψp) contribution. A relativistic Breit-Wigner function is used instead of the 2nd order polynomial for the lineshape of the NR(J/ψp) contribution. Moreover, a model with the J P = 1/2 + assignment to the P Λ ψs (4338) 0 state is also considered. Finally, the behaviour of the maximum-likelihood estimator is studied using 1000 pseudoexperiments. Biases on the fit parameters are present due to the limited sample size and are assigned as systematic uncertainties. For the reconstruction efficiency, the nominal efficiency function is compared with functions based on decays from either the long or downstream Λ category, and the largest effect is considered as a systematic uncertainty. Additional systematic uncertainties account for the limited knowledge of the Λ → pπ − decay amplitude parameters [30,40], the background parameterization, and the effect of the resolution on the m(J/ψΛ) invariant mass. The nominal background parameterization P bkg is obtained from the distributions of candidates in the m(J/ψΛp) range [5200, 5250] ∪ [5340, 5350] MeV, while the parameterization obtained from the region [5295, 5315] MeV is used to assess systematic effects. The background fraction β = 0.07 ± 0.01 is also varied within uncertainties. The effect of the invariant mass resolution, about 1 MeV on average on m(J/ψΛ), is estimated by smearing the invariant mass distributions of 1000 pseudoexperiments and fitting them using the nominal model.
The mass and width of the new pentaquark candidate are measured to be M P Λ ψs = 4338.2± 0.7 ± 0.4 MeV and Γ P Λ ψs = 7.0 ± 1.2 ± 1.3 MeV; the measured fit fractions are f P Λ ψs = 0.125 ± 0.007 ± 0.019, f NR(J/ψp) = 0.840 ± 0.022 ± 0.014, and f NR(Λp) = 0.113 ± 0.013 ± 0.017 for the resonant P Λ ψs 0 state, the nonresonant NR(J/ψp), and NR(Λp) contributions, respectively.The first uncertainty is statistical and the second systematic.The J P = 1/2 − quantum numbers for the P Λ ψs (4338) 0 state are preferred; J = 1/2 is established and positive parity can be excluded at 90% confidence level.Because of the small Q-value of the decay, the most precise single measurement to date of the B − mass 5279.44±0.05±0.07MeV is performed.This measurement is based on 1670 signal candidates with Λ baryons in the long category, which amounts to 36% of the total.Systematic uncertainties on the B − mass include uncertainties on particle interactions with the detector material (0.030 MeV), momentum scaling due to imperfections in the magnetic-field mapping (0.039 MeV) [17], and the choice of the signal and background fit model (0.050 MeV).The alternative fit model, with compatible fit quality with respect to the nominal model, comprises an exponential function for the background and a sum of a Gaussian and two Johnson functions for the signal.Systematic uncertainties from knowledge of the J/ψ, Λ, and p masses are negligible. In conclusion, an amplitude analysis of the B − → J/ψΛp decay is performed using about 4400 signal candidates selected on data collected by the LHCb experiment between 2011 and 2018 and corresponding to an integrated luminosity of 9 fb −1 .A new resonant structure in the J/ψΛ system is found with high statistical significance, representing the first observation of a pentaquark candidate with strange quark content named the P Λ ψs (4338) 0 state, with spin J = 1/2 assigned and parity P = −1 preferred.The new P Λ ψs (4338) 0 state is found at the threshold for Ξ + c D − baryon-meson production, which is relevant for the interpretation of its nature.No evidence for additional resonant states, either P Λ ψs (4255) 0 or P N ψ − pentaquark candidates or excited K − resonances, is found from the fit to data. A Amplitude model The amplitude model is constructed using helicity formalism [35] following the prescription for final particle spin matching described in Ref. [36].The amplitude O X λ 1 ,λ 2 ,λ 3 describes the decay amplitude for the B − to the J/ψΛp final state via the K * − , P where j X is the total angular momentum of the different contributions in the X = K * − , P N − ψ and P Λ ψs 0 decay chains, respectively, and {λ ′ } are the helicities of the final particles before spin rotations.The angle, ζ i Bk , is between the B − and the particle k in rest frame i.The coupling, H A→BC λ ′ , is the helicity coupling of a two-body decay A → BC, R is the line shape and d j λ A ,λ B −λ C is the small Wigner function.The angle, θ X , is the helicity angle of particle X, which is calculated using the Λ in the K * − rest frame, and either the p in the P N − ψ rest frame, or the J/ψ in the P Λ ψs rest frame.The total decay amplitude is obtained by including the J/ψ → µ + µ − and the Λ → pπ − decay amplitudes where φ p , θ Λ are the azimuthal and polar angles of µ − and p in the J/ψ and Λ rest frames, respectively.The axes in the B rest frame are defined as follows, where the symbol x refers to x/|x|.In Eq. 
4, ∆µ is the difference of the muon helicities.For the J/ψ → µ + µ − decay, the coupling can be absorbed into the other couplings of the total decay amplitude and therefore is not fit.Indeed, there is only one coupling because the process with ∆µ = 0 is highly suppressed.So, ∆µ can only take values 1 and −1, and both choices lead to the same helicity coupling due to parity conservation. Enforcing CP conservation, the helicity couplings for B − and B + decays are the same.The matrix-element formula is the same for charge-conjugate decays, but all azimuthal angles must change sign due to charge-parity transformation, i.e. φ p → −φ p and φ µ − → −φ µ + . The Λ → pπ + decay parameters are defined by which satisfy the relation, It is convenient to express β + and γ + in terms of an angle φ + defined as Enforcing CP conservation, the following relations hold, This leads to where α − , β − and γ − are obtained following Eq.6 but using the couplings of the conjugate decay. The helicity couplings for the decay A → BC can be expressed as a combination of the LS couplings (B L,S ) using the Clebsch-Gordan (CG) coefficients where L is the orbital angular momentum in the decay, and S is the total spin of the daughters, , the higher orbital angular momenta are suppressed, hence the number of couplings is reduced.CG coefficients automatically take into account parity conservation constraints on helicity couplings for a strong or electromagnetic decay. where p is the momentum of resonance R in the B − rest frame, q is the momentum of particle Y in the rest frame of resonance R, p 0 and q 0 are the momentum values calculated at the R resonance peak, L is the orbital angular momentum between resonance R and particle X in the B − → XR decay, and l is the orbital angular momentum between particle Y and particle Z in the R → Y Z decay.The (p/p 0 ) L and (q/q 0 ) l contributions are the orbital barrier factors, B ′ L (p, p 0 , d) and B ′ l (q, q 0 , d) are the Blatt-Weisskopf functions that account for the difficulty to create the orbital angular momentum, and depend on the production (decay) momentum p (q) and on the size of the decaying particle given by the hadron radius d.These coefficients up to order 4 are listed below, where d is the particle size parameter, set to 3 GeV −1 following the convention of Ref. [1]. In the nominal amplitude fit of B − → J/ψΛp decays, the constant d is set to GeV −1 for the B − and intermediate resonant R decays. The relativistic Breit-Wigner amplitude is given by with where m is the invariant mass of the Y Z system, and m 0 (Γ 0 ) is the mass (width) of the R resonance.In the case that resonance R has a mass peak outside of the accessible kinematic region, i.e. m R > m B − − m X , such as for the K 2 (2250) − and K 3 (2320) − states, the effective mass m eff 0 is introduced to calculate the two-body-decay momentum q 0 in Eq. 14, This term is a constant that can be absorbed into the couplings, since it enters only in Eq. 14, and the mass m 0 and width Γ 0 of the K * resonant contributions are fixed to the nominal values [30].In the case of a resonance R with mass peak located outside of the phase space at values m R < m Y + m Z , such as for the K 4 (2045) − state, the width is chosen as mass-independent parameter Γ 0 .In the nominal model, the non-resonant (NR) contribution is modelled by a second-order polynomial, where m 0 is the average value of the invariant mass distribution, i.e. 
of the m J/ψp invariant mass distribution.The coefficients, c i , are the polynomial coefficients, where c 0 is set to a constant value since one of the c i coefficients can be factor out of amplitude matrix element, and the other two are extracted from a fit to the data. B Event-by-event efficiency parameterisation Event-by-event acceptance corrections are applied to the data using an efficiency parameterisation based on the decay kinematics.The 6-body phase space of the topology B − → J/ψ(→ µ − µ + )Λ(→ pπ − )p is fully described by six independent kinematic variables: m Λp , cos θ K * , cos θ J/ψ , φ µ , cos θ Λ , and φ p .For the signal mode, the overall efficiency, including trigger, detector acceptance, and selection procedure, is obtained from simulation as a function of the six kinematic variables, ω ≡ {cos Here, m ′ Λp and φ ′ are transformed such that all four variables in ω lie in the range (−1, 1].The efficiency is parameterised as the product of Legendre polynomials ǫ( ω) = i,j,k,l,m,n c i,j,k,l,m,n P (cos θ K * , i)P (cos θ J/ψ , j) where P (x, l) are Legendre polynomials of order l in x ∈ (−1, 1].Employing the order of the polynomials as {2, 2, 2, 2, 4, 3} for {cos θ K * , cos θ J/ψ , φ ′ µ , m ′ Λp , cos θ Λ , φ ′ p }, respectively, was found to give a good parameterisation.The coefficients c i,j,k,l,m,n are determined from a moment analysis of B − → J/ψΛp phase-space simulated samples where ω ν is the per-event weight taking into account both the generator-level phase-space element, dΦ, and the kinematic event weights.Simulation samples are employed where B − → J/ψΛp events are generated uniformly in phase space.In order to render the simulation flat also in m(Λp), the inverted phase-space factor, 1/dΦ, is considered.The factors of (2a + 1)/2 arise from the orthogonality of the Legendre polynomials, The sum in Eq. 18 is over the reconstructed events in the simulation sample after all selection criteria.The factor C ensures appropriate normalisation and it is computed such that where N rec is the total number of reconstructed signal events.Up to statistical fluctuations, the parameterisation follows the simulated data in all the distributions. C Fit results of the nominal model In Table 2, the fit results of the nominal model are reported including the results of the LS couplings.The couplings are split into real and imaginary parts, i.e.Re prod(decay) (R) L,S , Im prod(decay) (R) L,S .The subscript prod (decay) refers to the B − → XR (R → Y Z) process, where X, Y , Z are the final state particles, and R is the decay chain under consideration.The subscript L refers to the orbital angular momentum and S to the sum of the spins of the decay products. D Angular moments The normalized angular moments P U j of the P Λ ψs 0 helicity angle are defined as, where N rec is the number of selected events, P j are Legendre polynomials and ω i are perevent weights accounting for background subtraction (with sPlot technique) and efficiency correction. The angular moments are shown in Fig. 4, up to order 5, as a function of the m(J/ψΛ) invariant mass distribution.They show a good agreement between the data and the nominal model. 
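The moment constructions described above, both for the efficiency parameterisation and for the angular moments P U j, amount to projecting weighted events onto Legendre polynomials. A one-dimensional sketch is shown below; it is illustrative only (the analysis uses a six-dimensional product of polynomials and per-event sWeights from the mass fit), with a made-up efficiency shape and uniform generation so that the moment estimate can be checked against the truth.

```python
# Hedged 1D illustration of the Legendre-moment method: events generated
# uniformly in x on (-1, 1] and accepted with probability eps_true(x);
# the coefficients c_l of eps(x) ~ sum_l c_l P_l(x) follow from the moments
# c_l = (2l+1)/N_gen * sum_{accepted} P_l(x_i).
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(7)
eps_true = lambda x: 0.6 - 0.2 * x + 0.15 * x ** 2       # made-up efficiency

n_gen = 200_000
x = rng.uniform(-1, 1, n_gen)
accepted = x[rng.uniform(0, 1, n_gen) < eps_true(x)]

max_order = 4
V = L.legvander(accepted, max_order)                     # columns P_0 ... P_4
c = (2 * np.arange(max_order + 1) + 1) / n_gen * V.sum(axis=0)

grid = np.linspace(-1, 1, 5)
print("fitted eps :", np.round(L.legval(grid, c), 3))
print("true eps   :", np.round(eps_true(grid), 3))
```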
E Efficiency corrected and background subtracted distributions The data are assigned weights to account for the efficiency and to subtract the background using the sPlot technique. The efficiency corrected data distributions of m(pΛ), m(J/ψp), m(J/ψΛ) and cos θ K * are shown in Figure 5. There is a sign difference between this cos θ K * definition and the one from CMS [16]. F Invariant mass fit for the B − mass measurement Fig. 6 shows the invariant mass fit used to extract the B − mass measurement. The signal yield with Λ baryons in the long category is 1670 ± 40, which amounts to 36% of the total candidates. Figure 1: Invariant mass distribution of the J/ψΛp candidates. The data are overlaid with the results of the fit. Figure 2: Dalitz distribution for B − candidates in the signal region. Figure 3: Distributions of invariant mass and cos θ K * . Fit results to data using the nominal model are superimposed. The null-hypothesis model fit results are also shown in grey. The Ξ + c D − baryon-meson threshold at 4.337 GeV is indicated with a vertical dashed line in the m(J/ψΛ) invariant mass distribution. Figure 4: P Λ ψs (4338) 0 helicity angular moments as a function of m(J/ψΛ) invariant mass. The black points represent the data while the blue line is the nominal model. Figure 6: Invariant mass distribution of the J/ψΛp candidates reconstructed with Λ baryons in the long category only. This dataset is used for the measurement of the B − meson mass. The data are overlaid with the results of the fit. Table 1: Systematic uncertainties on the mass (M P Λ ψs ) and width (Γ P Λ ψs ), and on the fit fractions f P Λ ψs , f NR(J/ψp) and f NR(Λp) of the pentaquark candidate and nonresonant contributions (in %). Table 2: Parameters determined from the fit to data using the nominal model where uncertainties are statistical only.
7,379.4
2022-10-19T00:00:00.000
[ "Physics" ]
Catalytic activity imperative for nanoparticle dose enhancement in photon and proton therapy Nanoparticle-based radioenhancement is a promising strategy for extending the therapeutic ratio of radiotherapy. While (pre)clinical results are encouraging, sound mechanistic understanding of nanoparticle radioenhancement, especially the effects of nanomaterial selection and irradiation conditions, has yet to be achieved. Here, we investigate the radioenhancement mechanisms of selected metal oxide nanomaterials (including SiO2, TiO2, WO3 and HfO2), TiN and Au nanoparticles for radiotherapy utilizing photons (150 kVp and 6 MV) and 100 MeV protons. While Au nanoparticles show outstanding radioenhancement properties in kV irradiation settings, where the photoelectric effect is dominant, these properties are attenuated to baseline levels for clinically more relevant irradiation with MV photons and protons. In contrast, HfO2 nanoparticles retain some of their radioenhancement properties in MV photon and proton therapies. Interestingly, TiO2 nanoparticles, which have a comparatively low effective atomic number, show significant radioenhancement efficacies in all three irradiation settings, which can be attributed to the strong radiocatalytic activity of TiO2, leading to the formation of hydroxyl radicals, and nuclear interactions with protons. Taken together, our data enable the extraction of general design criteria for nanoparticle radioenhancers for different treatment modalities, paving the way to performance-optimized nanotherapeutics for precision radiotherapy. Supplementary data : Table S1 : add the Z of the elements. Reviewer #2 (Remarks to the Author): The clarification of the radiation dose enhancement processes due to the nanoparticle presence in the tumor radiation therapy is a topic research field at moment. In this paper the authors investigated all aspects suspected of taking part in this process. It is very interesting and highlights new information. However, I have same question that I think it would be better to clarify. Pag. 4: the sentence "While photons deposit energy continuously" sounds very strange to me. Maybe the authors would say that the dose deposition of photons in depth is continuous and goes beyond the tumor. Instead the dose deposition of protons have a peak at the end of the proton range related to the energy…. Pag.7: As indicated in different research work on the same topic the results are related to the cellular line used. Please add comments on this aspect. Page 10: Please specify fvol Pag. 10. The maximum enhancement is found when the nanoparticles occupy 33.4 of the vesicle volume. How do you relate the amount of volume occupied in the vesicle with the amount of material administered to the culture? In standard experiment, how do you know with what percentage the nanoparticles are distributed in the cytoplasm and in the nucleus? Does this depend on the cell line? Does it depend on the size of the nano? There are works that say it never enters the nucleus. Please comment on this aspect. Pag. 27: without a scratch of the irradiation geometry is very hard to follow the description. Please could you add a new figure with the irradiation geometries? Pag. 27: "nanoparticle vesicles were placed in the cytoplasm only"…, I understood that the simulation were realized also with the nanoparticle presence in the nucleus. Please clarify this aspect. 
Reviewer #3 (Remarks to the Author): Radioenhancement by nano-particles is dicussed for decades as a promissing procedure to locally increase radiation damage in tumor cells while reducing the generfal radiation load on healthy cells. However up to now experimental data are often contradicting. The manuscript of Gerken et al. is addressing this situation and presenting systematic simulation and experimental studies. They systematically investigate enhancement effects of metal oxide nanoparticles and nanogold under different radiation conditions especially for MV photons and protons. The article is well written and the results are supported by additional data in the supplement. I recommend publication after minor revision. 1.) What are the noteworthy results? The results for therapeutic energies of 6MV photons and 100 MeV protons 2.) Will the work be of significance to the field and related fields? Yes, the work is highly significant and gives recommendations for nanoparticle design 3.) How does it compare to the established literature? If the work is not original, please provide relevant references. It is original work. However and this is my concern for revision: Recently new results were published* describing effects and data of nanogold dose enhancement and mechanisms behind. These publications should be considered and appropriately included in the discussion of the results. Letter of Reply We thank the reviewers for their careful and positive assessment of our manuscript. Please find a detailed reply to the respective comments and concerns below. Changes to the manuscript text are highlighted. Reviewer #1: The effects of ionising radiation on biological structures are governed by physical, chemical and biological phenomena and the exact contributions of these different effects are complex to determine. The mechanistic understanding is particularly hampered by the lack of fundamental and comparative studies, which prevents the rational design of nanoparticle-based radio-enhancers. Although a large number of studies have been published concerning nanoparticles and X-ray radiotherapy, studies comparing a large number of nanoparticles in the same cell line and with different radiation qualities are few, especially with protons, which makes these results particularly interesting. This study proposes to provide elements of mechanistic understanding by studying 6 nanoparticles of different nature on a single cell line and irradiating them with different types of radiation: X -ray 150 kVp, 6 MV Xrays and proton beams. The catalytic processes are also studied and related to the physical effects of the dose increase. Finally, a simulation study also provides elements of comparison between the elements with respect to the physical dose enhancement. This is an interesting study that brings together a very large number of experimental results. The experiments are well documented and the results well presented. Overall, the results are well discussed and the conclusions are relevant to the results presented. 1. Most of the time, nanoparticles are coated to make the nanoparticles usable in in vivo studies and to improve their tolerance. Could you discuss the potential influence of the coating on the catalytic and physical enhancement effects? We thank the reviewer for raising this point and we agree that this is a very important aspect, which we plan to investigate in our future research. 
We have added a short paragraph discussing potential implications on page 27: Potential effects of nanomaterial surface functionalization on dose-enhancement should be investigated carefully, including potential ROS quenching by antioxidant molecules (such as dopamine), as well as potentially synergistic effects leading to augmented ROS generation, e.g. by porphyrins. 2. In vivo, the overall efficacy of nanoparticles under irradiation will depend greatly on the concentrations of nanoparticles, and their extra- and/or intracellular localization. This point should be further discussed. We have added a short paragraph discussing effects of biodistribution on page 27: Eventually, the effective nanoparticle concentration reached in the cancer cells will govern dose enhancement. While experimental research in mouse models on intravenously injected nanoparticles has indicated that only a small fraction of the injected nanoparticles may accumulate inside cancer cells, intratumoral administration of HfO2 nanoparticles (NBTXR3) appears to partially overcome the delivery problem and shows convincing therapeutic effectiveness in preclinical and clinical settings. This effectiveness may, however, be even further improved by optimizing radioenhancer designs, in part based on the insights provided by this work. 3. Page 9: Define the DEF. We have added the definition on page 9: "We extracted physical dose enhancement factors (DEFs) by building the ratios of the dose scored to the cytosol, nucleus, vesicle or water shells, respectively, in the presence of the nanoparticles to that with no nanoparticles (water)." 4. Page 10: There is an error in the definition of the macroscopic DEF: The "mass energy-absorption coefficient" has to be taken into account and not the "mass energy attenuation coefficient", as stated. We have corrected the definition and replaced the term "mass energy attenuation coefficient" by "mass energy-absorption coefficient" according to the terms specified by the National Institute of Standards and Technology. The following correction can be found on page 11: "$\mathrm{DEF} = \sum_Z w_Z \,(\mu_{en}/\rho)_Z(E) \,/\, (\mu_{en}/\rho)_{\mathrm{water}}(E)$, where $w_Z$ is the atomic number ($Z$) mass fraction in the system and $(\mu_{en}/\rho)_Z$ the mass energy-absorption coefficient at a monoenergetic photon energy, $E$." (A small numerical sketch of this definition is given at the end of this set of replies.) 5. Page 16: typo error: "Ratio at 50% cell xxx" and not "Ration at 50% cell xxx". We have corrected this on page 16: "… by calculating the Dose Modifying Ratio at 50% cell survival …" 6. Page 31: specify also the x-ray tube filtration. Specify on which side of the PMMA phantom the beam was directed. We have specified this further on page 33: "Thus, photons travelled through approximately 3 cm PMMA phantom material before hitting the top of the 48-well plate. For kV X-ray irradiation, a tube source … with a 7 mm beryllium filter window was positioned…" We have also added a schematic figure illustrating the irradiation set-up in the supplementary information in Figure S15. 7. The graphs showing a comparison of the results against the different elements should be presented in the same format (the elements in the same order): Figures 4F, 6F, S5B, S9D. We have harmonized the presentation of these figures. 8. Supplementary data: Table S1: add the Z of the elements. We have added the atomic numbers (Z) of the elements in Table S1 of the supplementary information. 
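For illustration only, a minimal sketch of such a mass-fraction-weighted macroscopic dose enhancement factor is given below. The composition and the (mu_en/rho) values are placeholders chosen for the example, not data from the manuscript or from the NIST tables.

```python
def macroscopic_def(mass_fractions, mu_en_rho, mu_en_rho_water):
    """Weighted-average mass energy-absorption coefficient of the mixture,
    divided by that of water, at one monoenergetic photon energy E.

    mass_fractions  : dict component -> mass fraction (should sum to ~1)
    mu_en_rho       : dict component -> (mu_en/rho) at energy E [cm^2/g]
    mu_en_rho_water : (mu_en/rho) of water at energy E [cm^2/g]
    """
    mixture = sum(w * mu_en_rho[c] for c, w in mass_fractions.items())
    return mixture / mu_en_rho_water

# Hypothetical example: 1 wt% gold in water at a kV photon energy.
fractions = {"water": 0.99, "Au": 0.01}   # assumed composition
mu = {"water": 0.03, "Au": 1.2}           # placeholder coefficients, cm^2/g
print(round(macroscopic_def(fractions, mu, mu["water"]), 2))   # ~1.4 for these made-up values
```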
Reviewer #2: The clarification of the radiation dose enhancement processes due to the nanoparticle presence in tumor radiation therapy is a topical research field at the moment. In this paper the authors investigated all aspects suspected of taking part in this process. It is very interesting and highlights new information. However, I have some questions that I think would be better to clarify. 1. Pag. 4: the sentence "While photons deposit energy continuously" sounds very strange to me. Maybe the authors would say that the dose deposition of photons in depth is continuous and goes beyond the tumor. Instead the dose deposition of protons has a peak at the end of the proton range related to the energy. We have adapted this statement on page 4: "While the dose deposition of photons in depth is continuous and goes beyond the tumor resulting in an "exit dose", protons lose the majority of their energy in the range of the Bragg peak, after which they are stopped completely." 2. Pag. 7: As indicated in different research work on the same topic, the results are related to the cellular line used. Please add comments on this aspect. We have added a paragraph to page 26: While nanoparticle uptake, cellular toxicity, and radio-enhancement are cell-line dependent, previous work on different cancer cell lines has indicated that the relative trends in radio-enhancer effectiveness hold true, albeit with slightly different absolute values. 3. Page 10: Please specify fvol. The intra-vesicular enhancement increases with increasing nanoparticle volume fraction. By packing a vesicle randomly with spheres, a maximum packing fraction was reached at 32.4%. Such volume fractions are also observed experimentally, for example by studying nanoparticle uptake using liquid scanning transmission electron microscopy.1 4. Pag. 10: The maximum enhancement is found when the nanoparticles occupy 33.4% of the vesicle volume. How do you relate the amount of volume occupied in the vesicle with the amount of material administered to the culture? In a standard experiment, how do you know with what percentage the nanoparticles are distributed in the cytoplasm and in the nucleus? Does this depend on the cell line? Does it depend on the size of the nanoparticles? There are works that say they never enter the nucleus. Please comment on this aspect. We have added a paragraph to page 10: The dose enhancement factors within a nanoparticle-filled vesicle reached values of DEF = 30-40 for Au nanoparticles and DEF = 10-20 for HfO2 and WO3 nanoparticles at the highest reached nanoparticle content of 32.4 vol% (volume percent) in the vesicle (Figure 3A). This packing fraction is also reasonable for biological scenarios. For instance, nanoparticle volume fractions of 35 ± 16% per vesicle have been reported in cells for 30-nm sized Au nanoparticles,36 and exposure conditions similar to the ones used in our study. No particles could be detected in the nucleus and all particles were distributed in the cytosol. We have added the following statement to page 8: Few hundred nanometer up to micrometer sized nanoparticle agglomerates were distributed within the cell cytoplasm (in vesicles or endosomes). In the > 100 cells analyzed per nanoparticle type, no evidence for nanoparticle uptake into the nucleus was found, even though uptake overall, and nanoparticle accumulation in the nucleus, might be particle and cell type dependent.2,3 5. Pag. 27: without a sketch of the irradiation geometry it is very hard to follow the description. Please could you add a new figure with the irradiation geometries? We have added a schematic to the Supplementary Information as Figure S15. 6. Pag. 27: "nanoparticle vesicles were placed in the cytoplasm only"…, I understood that the simulations were realized also with nanoparticles present in the nucleus. Please clarify this aspect. As the reviewer has pointed out correctly in comment number 4, nanoparticles rarely enter the cell nucleus. Therefore, we have performed our simulations under the assumption that all nanoparticles are distributed in agglomerates within the cytoplasm only. 
This assumption is reflected in our TEM observations in this and earlier work.4,5,6 Nevertheless, we have scored the dose deposited within the cell nucleus, while particles were only present in the cytoplasm. To clarify this further, we have modified the paragraphs on page 9: "The geometries were built to match cellular uptake scenarios as closely as possible, with ~400 nm nanoparticle agglomerates distributed only within the cytosol (see also Figure 2).31 As nanoparticle uptake into the nucleus was not observed experimentally, it was considered negligible also for the simulations." and on page 29: "Different amounts of such nanoparticle-filled vesicles were then placed in the cytoplasm only, because metal oxide or gold nanoparticles enter cells predominantly by endocytotic pathways and are clustered within roughly 300 - 500 nm sized vesicles within the cytoplasm, rarely entering the cell nucleus." (A simple geometric illustration of this arrangement is sketched at the end of this letter.) Reviewer #3: Radioenhancement by nano-particles has been discussed for decades as a promising procedure to locally increase radiation damage in tumor cells while reducing the general radiation load on healthy cells. However, up to now experimental data are often contradicting. The manuscript of Gerken et al. addresses this situation and presents systematic simulation and experimental studies. They systematically investigate enhancement effects of metal oxide nanoparticles and nanogold under different radiation conditions, especially for MV photons and protons. The article is well written and the results are supported by additional data in the supplement. I recommend publication after minor revision. 1. What are the noteworthy results? The results for therapeutic energies of 6 MV photons and 100 MeV protons. 2. Will the work be of significance to the field and related fields? Yes, the work is highly significant and gives recommendations for nanoparticle design. 3. How does it compare to the established literature? If the work is not original, please provide relevant references. It is original work. However, and this is my concern for revision: recently new results were published* describing effects and data of nanogold dose enhancement and the mechanisms behind it. These publications should be considered and appropriately included in the discussion of the results. We have added a paragraph discussing these studies on page 23: Additionally, •OH radical formation in the cytosol might then be a combination of physical and surface catalytic processes for these high-Z materials at kV X-ray energies. Interestingly, cytoplasmic processes leading to the disruption of organelles, such as mitochondria or lysosomes, may play a major role in nanoparticle mediated radioenhancement.7,8 Most recently, it was also shown that even very low concentrations of 10 nm gold nanoparticles can have an effect on cell cycle phase, the proportion of radiosensitive G2 cells as well as the DSB repair kinetics.9 Thus, the nanoparticle mediated radiation response is complex, with sensitization of cancer cells as well as dose enhancement both contributing to the overall response. 4. Does the work support the conclusions and claims, or is additional evidence needed? Yes it does. 5. Are there any flaws in the data analysis, interpretation and conclusions? Do these prohibit publication or require revision? No, well performed analysis. 6. Is the methodology sound? Does the work meet the expected standards in your field? Up-to-date standards. 7. Is there enough detail provided in the methods for the work to be reproduced? Yes
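As a rough illustration of the geometry described in this reply (nanoparticle-filled vesicles confined to the cytosol, none in the nucleus), the sketch below computes the volume fraction of monodisperse spheres in a vesicle and scatters such vesicles through a spherical cytosol shell. All radii, particle counts and the rejection-sampling scheme are assumptions chosen for the example; this is not the simulation setup used in the manuscript, and vesicle-vesicle overlaps are not checked.

```python
import numpy as np

rng = np.random.default_rng(0)

def vesicle_volume_fraction(n_particles, d_particle_nm, d_vesicle_nm):
    """Volume fraction of n monodisperse spheres inside a spherical vesicle."""
    return n_particles * d_particle_nm ** 3 / d_vesicle_nm ** 3

def place_vesicles(n, r_cell_um=7.0, r_nucleus_um=3.0, r_vesicle_um=0.2):
    """Sample vesicle centres uniformly in the cytosol shell (outside the nucleus,
    fully inside the cell) by rejection sampling."""
    centres = []
    while len(centres) < n:
        p = rng.uniform(-r_cell_um, r_cell_um, size=3)
        r = float(np.linalg.norm(p))
        if r_nucleus_um + r_vesicle_um < r < r_cell_um - r_vesicle_um:
            centres.append(p)
    return np.array(centres)

# Hypothetical numbers: 750 particles of 30 nm in a 400 nm vesicle (~32 vol%,
# close to the random-packing limit quoted above), and 50 such vesicles per cell.
print(f"{vesicle_volume_fraction(750, 30, 400):.1%}")
print(place_vesicles(50).shape)
```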
3,698.8
2022-06-06T00:00:00.000
[ "Materials Science", "Medicine" ]
Repeated Type III Burst Groups Associated with a B-Class Flare and a Narrow-Width CME We have analysed a solar event from 27 September 2021, which included a small GOES B-class flare, a compact and narrow-width CME, and radio type III bursts that appeared in groups. The long-duration, repeated metric type III burst emission indicates continuous electron acceleration at high altitudes. The flaring active region was surrounded by strong magnetic fields and large-scale loops, which guided the outflow of the CME plasmoid and hence the narrow, bullet-like appearance of the CME. Radio imaging and EUV observations confirmed the direction of particle propagation and the depletion of matter from the solar source region. We observed V-shaped type III burst emission lanes, which also explain the field configuration and suggest a possible location for repeated reconnection that occurred at a constant altitude. Introduction Solar flares are powerful bursts of energy release, often followed by coronal mass ejections (CMEs). These eruptions accelerate particles and eject plasma from the solar atmosphere (Benz, 2008; Gopalswamy, 2016). However, the relationship between flares and CMEs is not always clear (Kawabata et al., 2018). The most energetic flares are known to be associated with fast and wide CMEs, but the relation between CMEs and small and less energetic flares, or with flares missing altogether, is more complicated. It has been suggested that the magnetic reconnection process that happens beneath the CME affects the CME dynamics most (Vrsnak, Sudar, and Ruzdjak, 2005). As flares and CMEs accelerate particles, some of their processes can be observed in radio emission; see reviews by Nindos et al. (2008) and Pick and Vilmer (2008). Particle streams, i.e. accelerated electrons, cause radio emission at the local plasma frequency as they propagate through the solar atmosphere. The radio emission is then also subject to scattering by density inhomogeneities and other propagation effects (Kontar et al., 2019). The most often observed solar radio emission types are type III bursts (caused by accelerated electron beams), type II bursts (caused by shock-accelerated electrons, typically ahead of propagating CMEs), and type I noise storms (electrons trapped in the magnetic field). Type III bursts are most often associated with flares, and they can appear as isolated bursts or in groups. There is now direct evidence that semi-relativistic electrons energized in magnetic reconnection regions produce type III bursts (Cairns et al., 2018). At starting frequencies higher than 100 MHz, type III bursts have been found to be associated mostly with GOES B- and C-class flares, but the X-ray emission at 6 keV usually lasts much longer than the groups of type III bursts (Reid and Vilmer, 2017). Type III bursts can be observed as fast-drifting emission features in the radio dynamic spectra, some only at decimetric-metric wavelengths (coronal heights) and some continuing to kilometer waves (near-Earth distances). Shocks can form in the solar corona when an ejected projectile propagates at a speed higher than the local magnetosonic speed. Also, flare blasts can create propagating shocks that accelerate electrons (Warmuth, 2007). The frequency drift in a radio type II burst is much slower than in type III bursts, as the drift compares to the propagation speed of the shock. Observationally, type II bursts can often be identified from their harmonic emission (Roberts, 1959). 
Type I noise storms are associated with active regions and sometimes with flares and CMEs, but the typical duration of a noise storm continuum is significantly longer than that of the associated flare (Iwai et al., 2012). The broad-band continuum also contains short-duration, narrow-band bursts. The generally accepted scenario is that the emission is due to non-thermal electrons trapped in closed magnetic fields. The origin of these fast electrons is not very clear, but small-scale reconnection and weak shocks associated with newly emerging flux have been suggested; see, for example, Mercier et al. (2015) and references therein. Mondal and Oberoi (2021) recently suggested that small-scale reconnection may produce electron beams, which quickly get collisionally damped, and therefore the plasma emission occurs only within a narrow bandwidth. In this study, we analyse a solar event that consisted of a small, GOES B-class flare and a compact CME that had a very narrow width and relatively high initial speed and was associated with groups of metric type III bursts and type I noise storm bursts. Our aim is to find out the reason for the continuous particle acceleration and the generation of repeated type III bursts. We also investigate the CME formation and its propagation as a compact, bullet-like form. Data and Analysis We have used in the analysis extreme ultraviolet (EUV) solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard the Solar Dynamics Observatory (SDO: Lemen et al., 2012) and from the Extreme Ultraviolet Imager (EUVI) onboard the Solar Terrestrial Relations Observatory (STEREO) spacecraft (Wuelser et al., 2004). Coronagraph images and associated data products were obtained from the CDAW LASCO CME Catalog at cdaw.gsfc.nasa.gov. The Gamma-ray Burst Monitor (GBM) onboard the Fermi Gamma-ray Space Telescope (Meegan et al., 2009) provided X-ray flux observations for the event. For decimetric-metric radio emission, we used radio spectral data from the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA: McKay-Bukowski et al., 2015) located in Finland, from the various CALLISTO instruments in the e-Callisto Network (Benz, Monstein, and Meyer, 2005; Monstein, Csillaghy, and Benz, 2023), and from the Nançay Decameter Array (NDA: Lecacheux, 2000) and the ORFEES radio spectrograph (Hamini et al., 2021) located in France and provided by the Radio Monitoring website at secchirh.obspm.fr. For longer radio wavelengths, we used data from the WAVES instrument on the Wind spacecraft (Bougeret et al., 1995). Radio imaging at selected frequencies in the 150 - 445 MHz range was provided by the Nançay Radioheliograph (NRH: Kerdraon and Delouis, 1997). 
Active Region and Flare A small, GOES B4.5-class flare was observed on 27 September 2021 (SOL2021-09-27T11:46) in NOAA active region (AR) 2871, located at S29W35 (Figure 1). The GOES flare was listed as starting at 11:40 UT, peaking at 11:46 UT, and ending at 11:50 UT. However, a post-burst increase was observed to last until 12:50 UT in the 1.5 - 12.5 keV energy range (1 - 8 Å). The potential magnetic field is presented in Figure 2, created using both SDO/AIA (Earth view) and STEREO-A/EUVI (separation angle between them 39.8 degrees) observations and the potential-field source-surface (PFSS) model. Fermi/GBM also observed the flare with its NaI detectors that cover the energy range from a few keV to about 1 MeV. In the 4 - 15 keV energy range the maximum count rates were observed during 11:43:50 - 11:44:40 UT, but count-rate enhancements were also visible at 11:43 UT, 11:44 UT, 11:46 UT, and 11:50 UT at energies 4 - 50 keV (Figure 5). Flare emission was observed until 11:51 UT, after which the spacecraft turned away from the Sun (Fermi is in low-Earth orbit). The comparison between X-rays and radio emission is discussed more in Section 2.2. A plasmoid ejection was observed to start from the flare site at 11:42 UT. A narrow filamentary structure was ejected from the AR, and it was observed to change direction and turn to follow the fan-like magnetic field. The bending of this long filamentary structure along the magnetic field lines is shown in the EUV images from SDO and STEREO-A; see Figure 3. The bending was best observed in the STEREO-A view, over the limb and against the sky, which enabled us to better estimate and compare the source heights. Radio Emission The main radio features associated with the flare-CME event were type III bursts, i.e. fast electron beams propagating at speeds ≈ 0.3 c. Figure 4 shows how the type III bursts appeared in groups in the dynamic spectra. The type III burst groups appear to have some temporal association with the X-ray flux measured by Fermi/GBM (Figure 5). Intense type III burst emission is observed at the time of the first X-ray peak near 11:43 UT, and the end time of the first type III burst group is also the time when the X-ray burst intensity decreases. There is a small increase in X-rays during type III burst group number 3. However, we find no clear one-to-one correspondence between the X-ray count rate peaks and individual type III bursts. Bursts within the first two type III burst groups reached interplanetary space and propagated to Earth distances (emission continues down to kHz frequencies), but burst emission in the later three groups fades away at 5 - 2 MHz; see the Wind/WAVES dynamic spectrum in Figure 4. Some isolated type III bursts were observed at 500 - 200 MHz near the flare start time, but most of the type III burst groups had a start frequency near 200 MHz (Figure 4). The start frequency, 200 MHz, corresponds to an atmospheric height of 0.15 R⊙ when calculated with the 2-fold Newkirk atmospheric density model (Newkirk, 1961), which describes well the lower altitudes in the solar corona. Another often used atmospheric model, the so-called hybrid model by Vrsnak, Magdalenic, and Zlobec (2004), gives heights close to the 2-fold Newkirk model in the low corona but works well also at larger distances. The differences between atmospheric density models and the calculation of radio source heights are explained in, e.g., Pohjolainen et al. (2007). 
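The 200 MHz to 0.15 R⊙ conversion quoted above can be reproduced from the plasma-frequency relation together with an N-fold Newkirk density model. The short sketch below assumes fundamental plasma emission and uses the standard Newkirk (1961) parametrisation n_e(r) = N · 4.2×10⁴ · 10^(4.32/r) cm⁻³, with r the heliocentric distance in solar radii.

```python
import math

def newkirk_height(freq_mhz, fold=2.0, harmonic=False):
    """Heliocentric distance [R_sun] at which an N-fold Newkirk corona reaches the
    electron density corresponding to the observed emission frequency."""
    f_p = freq_mhz * 1e6 / (2.0 if harmonic else 1.0)   # harmonic emission: f = 2 f_p
    n_e = (f_p / 8980.0) ** 2                           # f_p [Hz] ~ 8980 * sqrt(n_e [cm^-3])
    return 4.32 / math.log10(n_e / (fold * 4.2e4))      # invert n_e(r) = fold*4.2e4*10^(4.32/r)

r = newkirk_height(200.0)                               # 200 MHz start frequency
print(f"height above the photosphere: {r - 1.0:.2f} R_sun")   # ~0.15 R_sun
```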
Individual type III bursts could be best identified in the 80 - 20 MHz frequency range, and many of them could be imaged near their start frequency, at 150 MHz with NRH. These bursts were all located south-west of the AR, and some of them reached the solar limb; see an example of radio imaging in Figure 6, where the type III burst is marked 4a in the dynamic spectrum in Figure 4. The calculated source heights at 150 MHz are in agreement with the imaged radio source distances from the AR. As the imaged radio sources are observed on the solar disc in projection from the Earth view, their heights are only estimates and can also suffer from various plasma effects (Chen et al., 2020). Specific Radio Spectral Features Near the flare start, at 11:43 UT, a high-frequency type III burst was observed, and it could be imaged at 228 and 270 MHz. Two separate radio source locations appeared then, one south-west and one north of the AR (Figure 7). This type III burst is marked 1a in the dynamic spectrum in Figure 4. The flux measured from the northern source region showed an enhancement at 270 MHz, and it peaked later than the 228 MHz flux. This suggests that the type III burst was reversed, i.e. the electron beam was moving down in the solar atmosphere. We note that the northern source did not appear again at later times. Some of the type III bursts showed enhanced plasma emission features along the electron-beam propagation path; the two features are indicated with arrows in Figure 8. The features appeared at ≈ 60 MHz around 11:46 UT. They were observed by e-Callisto stations and KAIRA, with different temporal resolutions of 0.25 and 0.01 seconds, respectively. The enhancements had a narrow bandwidth, only a few MHz, and on a closer look they seem to be separated from individual type III emission lanes and have a curved appearance. In between the type III bursts, we could also observe narrow-band radio spikes, identified as type I noise-storm structures (Figure 8). In high-resolution dynamic spectra, type I bursts appear as spot-like features, whereas vertical stripe-like features are type III bursts; see, for example, Mugundhan et al. (2018). The bandwidth of individual spikes was only 1 - 3 MHz. The spikes appeared in groups within a wider frequency range, between 80 and 50 MHz. These groups also drifted in frequency, typically towards higher frequencies. Noise storm spikes are visible in the spectra also before our flare, appearing and growing stronger in intensity around 9:00 UT and lasting up to around 15:00 UT in the 80 - 50 MHz range. A GOES B3-class enhancement was observed at 09:00 - 09:45 UT, a possible sign of small-scale reconnection event(s). As type I noise storms are generally thought to be excited by plasma waves caused by non-thermal electrons trapped in closed magnetic-field lines, this suggests that there were a large number of non-thermal electrons present in the AR loops even before the start of our flare-CME event. 
Near the start time of the first type III burst group, at 11:43 UT, some of the type III burst lanes seemed to split into two separate lanes. Figure 9 shows two of these V-shaped bursts in detail, observed by KAIRA (top spectrum; the slower-drift and turning lanes are indicated with arrows). The separation of electron-beam paths happens near 40 MHz, and the later, slower beam lanes disappear near 30 MHz. These frequencies correspond to solar atmospheric altitudes of 0.7 - 1.0 R⊙, which are reasonable for large-scale loop heights and agree with the loops obtained with the PFSS model of the magnetic field, shown in Figure 2. Beam path curvature back towards higher frequencies, i.e. to higher densities, indicates that the electron beam follows magnetic field lines back to the Sun. Coronal Mass Ejection After the plasmoid ejection, a narrow dimming region started to appear in the EUV images, elongated towards the limb in the south-west direction. The SDO/AIA difference images are shown in Figure 10. A CME was first observed by the SOHO/LASCO-C2 coronagraph at 12:12:05 UT, when the CME leading front was located at a height of 2.57 R⊙. The CME had a narrow angular width, 25 - 20 degrees (Figure 11), and it was propagating towards the south-west. There were no streamers located near the CME. The CME speed from the first LASCO observations was ≈ 500 km s−1, but the velocity decreased after the first hour and then stabilized to about 380 km s−1, based on the linear fit to the CME front heights available from the CME Catalog. The observed solar wind speed near Earth was 300 - 330 km s−1, which explains some of the decrease in the CME speed. We converted the CME heights to frequencies with the hybrid atmospheric-density model (Vrsnak, Magdalenic, and Zlobec, 2004), which works well for the larger distances in the corona. These heights and frequencies are listed in Table 1, and they are shown in Figure 12. Summary and Conclusions The observed long-duration, metric type III burst emission that appeared as type III burst groups indicates continuous electron acceleration. If no high-frequency emission is observed at the same time, then it suggests that the acceleration region is located high in the corona (the higher the radio frequency, the lower the atmospheric height). The type III bursts that appeared in groups were formed at 200 MHz or nearby frequencies (corresponding to ≈ 0.15 R⊙ or 100 Mm above the solar surface, as the plasma frequency depends on the electron density in the atmosphere), and each group had a duration of 2.5 - 1.5 minutes. These bursts did not look to have a direct, one-to-one correlation with hard X-ray count peaks (Fermi/GBM), but enhanced X-ray emission was observed during the type III burst group periods. As X-rays are the result of electrons colliding with denser structures, this can mean that part of the X-ray emission came from heating and part from particle acceleration. The unchanged start frequency (formation height) of each radio burst group suggests that the acceleration process of the burst particles was not due to reconnection in rising structures, but that the reconnecting fields remained at the same heights. A decrease in start frequency would indicate that the reconnection region is moving up in the atmosphere, where the density is lower. 
For example, Reiner et al. (2008) have suggested that in complex type III bursts the later electron acceleration can be the result of coronal reconfiguration, caused by, for example, an erupting CME. In Figure 9, we presented a possible configuration for the creation of type III bursts that show a split into two separate lanes at metric wavelengths. The observed split happened near 40 - 50 MHz, which corresponds to heights of ≈ 0.6 - 0.8 R⊙ above the solar surface. The later, curved beam path can be explained by the large-scale loop shape and its slower rate of change in height, which becomes almost vertical later on. The "split" does not necessarily mean a physical split of the electron beam, but that part of the accelerated electrons gets access to field lines that curve to different directions, compared to those that propagate directly out along the open field lines. A similar split into two, "fast" and "slow" drifting burst parts, was reported also by Kallunki, McKay, and Tornikoski (2021). The burst they described was most probably associated with a CME that originated from the far side and had a narrow width in Earth view, and the true GOES class was unknown and listed as a B-class flare. Type III bursts with inverted-U and N-shaped emission lanes, observed at coronal heights, have been discussed by Démoulin et al. (2006). These burst shapes require large-scale loops and closed magnetic field lines, as in them the electron beams first travel upward along the field lines, then come back to lower heights, and in N-bursts the particles are mirrored back to larger heights. In our split V-shaped bursts, we did not find any evidence of mirroring, as the later branch simply faded away and ceased to be observed. This high-altitude scenario resembles those presented by Heyvaerts, Priest, and Rust (1977) and Sterling and Moore (2001). In their models, there is a rising flux rope on the side of a coronal-hole open field, which reconnects ("external reconnection"). In our case, we have a large-scale loop on the side of open active-region field lines, with a rising but very narrow filament/plasmoid in between them. Due to the changes in the outflowing material, reconnection and particle acceleration are quasi-periodic. In this sense, the scenario is similar to loop-loop interactions, where the loop movements cause reconnection leading to oscillation in the radio emission. The many different possibilities for modulated reconnection have recently been described in Cattell et al. (2021). The active region that produced the small, GOES B4.5-class flare on 27 September 2021 was surrounded by strong fields that guided the outflow of the ejected plasmoid. The CME had a very narrow width due to the small volume of plasma and the narrow tunnel-like exit path from the solar surface. This was observed in Figure 3, where the propagation direction of the long filamentary CME structure was changed, and it was bent along the fan-like field lines. The PFSS model also predicted large-scale loops that could come into contact with the ejected plasmoid. The metric type III burst groups could have been created by periodic reconnection, caused by coronal reconfiguration that affected the large-scale loops and the open field lines nearby. The type III bursts ended when the CME had lifted off and the material outflow had ceased. Figure 1 On 27 September 2021 a GOES B4.5-class flare was observed to start at 11:40 UT, with maximum flux at 11:46 UT, at location S29W35 in active region AR 12871. 
Figure 2 Top: Spacecraft locations on 27 September 2021. The STEREO-A spacecraft longitude separation to Earth was 39.8 degrees, with the SOHO, SDO, Wind, and Fermi spacecraft at Earth's longitude. The reference longitude is for the B4.5-class flare located at S29W35. (Plot prepared with Solar-MACH, The Solar MAgnetic Connection HAUS tool.) Bottom left: STEREO-A/EUVI image rotated to the SDO/AIA field of view, with PFSS magnetic field lines. Blue and red lines indicate open field with opposite polarities and white lines indicate closed field. Bottom right: SDO/HMI magnetogram of the eruption region (from SolarMonitor.org). Figure 3 SDO/AIA and STEREO-A/EUVI images of the AR and its surroundings at 11:49 UT. Arrow points to the ejected filament/plasmoid structure. Note the bending of the structure along the magnetic field lines. Figure 4 Radio Monitoring composite showing observed radio emission in the dynamic spectra for the 27 September 2021 event. Numbers 1 - 5 refer to the type III burst groups. The locations of bursts labelled 1a and 4a are shown in Figures 7 and 6, respectively. Comparison between NDA and Wind/WAVES spectra is also shown. Figure 5 Fermi/GBM background-subtracted X-ray counts from the most sunward-pointing detector at 11:42 - 11:52 UT (bottom). The energy ranges are 4 - 15 keV, 15 - 25 keV, and 25 - 50 keV. Radio dynamic spectrum from e-CALLISTO Glasgow is shown on top, at frequency range 45 - 81 MHz. Numbers and lines indicate the type III burst groups and their durations. Figure 6 Temporal evolution of burst locations at 150 MHz, observed with NRH. The sources are shown near burst start (black contours) and burst end (white contours), overplotted on an EUV image (left). The spatial evolution is also shown in the color images, from 11:54:11 to 11:54:13 UT (right). This spectral feature is marked 4a (within group 4 bursts) in Figure 4. The radio source locations are typical for all type III bursts in the type III burst groups observed at 150 MHz. Figure 7 NRH source contours at 270 MHz (black) and 228 MHz (white) at 11:43:08 UT, plotted over an EUV image (left). The spectral feature is marked 1a (within group 1 bursts) in Figure 4. The flux curves (on the right) of the northern region show an enhancement at 270 MHz, which peaks later than the emission at 228 MHz. This suggests that the electron beam was moving down in the solar atmosphere. The later type III burst groups did not show this northern region again. Figure 8 Dynamic spectrum taken during the second group of type III bursts shows narrow-band enhanced plasma emission features, which are indicated with arrows in the KAIRA observations at 75 - 25 MHz (top). Narrow-band spikes (type I noise storm features) were observed in between the type III burst groups (KAIRA observations, middle). The e-CALLISTO GLASGOW dynamic spectrum at 80 - 60 MHz shows the spikes and spike burst groups in more detail (bottom). 
Figure 9 KAIRA observations of V-shaped type III bursts at 11:43 UT (top). Arrows point to the change in emission lane direction, visible at 35 - 27 MHz. The bursts were also observed by several e-Callisto stations; the dynamic spectrum shown here is from the SWISS-Landschlacht e-CALLISTO at 80 - 20 MHz (bottom left). Cartoon shows a possible configuration for the creation of V-shaped type III bursts. Star indicates reconnection, and arrows show the directions of two particle-beam paths, one along open field lines (emission with constantly decreasing frequencies) and the other following closed loops back to the Sun (emission lane turning back towards lower frequencies). Figure 10 SDO/AIA 193 Å base-difference images (base at 11:32 UT). The narrow-width dimming region, located from the AR towards the south-west limb, matches with the type III burst regions imaged at 150 MHz, and also with the CME propagation direction. Figure 11 SDO/AIA 193 Å and SOHO/LASCO-C2 difference image at 12:36 UT (left), and direct AIA and C2 images at 12:48 UT (middle) and 13:25 UT (right). The CME was very compact and had a narrow width. No streamers were visible along the CME propagation path. Figure 12 Top: CME leading front heights (boxes), the heights are from the LASCO CME Catalog, flare start time (cross), and metric type III burst group durations (filled boxes). Bottom: Wind/WAVES dynamic spectrum where white circles mark the CME leading front heights converted to frequencies with the hybrid atmospheric-density model. Table 1 Heliocentric CME heights from LASCO-C2 observations converted to frequencies with the hybrid atmospheric-density model.
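The CME kinematics and the height-to-frequency conversion behind Figure 12 and Table 1 can be sketched in a few lines. The height-time points below are invented stand-ins for the LASCO CME Catalog measurements, and the density function is written in the form commonly quoted for the Vrsnak, Magdalenic, and Zlobec (2004) hybrid model; both should be treated as assumptions and checked against the catalogue and the original paper before any quantitative use.

```python
import numpy as np

# Hypothetical height-time points (NOT the catalogue values): seconds after first
# C2 detection vs. leading-front height in solar radii.
t_s = np.array([0.0, 720.0, 1440.0, 2160.0, 2880.0])
h_rsun = np.array([2.57, 3.1, 3.6, 4.1, 4.6])

R_SUN_KM = 6.957e5
speed, h0 = np.polyfit(t_s, h_rsun, 1)                # linear fit h(t) = v*t + h0
print(f"fitted CME speed: {speed * R_SUN_KM:.0f} km/s")

def hybrid_density(r):
    """Electron density [cm^-3] at heliocentric distance r [R_sun] (hybrid-model form)."""
    return 1e8 * (15.45 / r**16 + 3.16 / r**6 + 1.0 / r**4 + 0.0033 / r**2)

def plasma_frequency_mhz(r):
    """Fundamental plasma frequency [MHz] at heliocentric distance r [R_sun]."""
    return 8980.0 * np.sqrt(hybrid_density(r)) / 1e6

for h in h_rsun:
    print(f"h = {h:.2f} R_sun  ->  f_p ~ {plasma_frequency_mhz(h):.1f} MHz")
```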
5,299
2023-10-01T00:00:00.000
[ "Physics" ]
Wage setting in Slovenia: interpretation of the Wage Dynamics Network (WDN) survey findings in an institutional and macroeconomic context Abstract: This paper examines responses to questions on wage setting features in Slovenia's Wage Dynamics Network (WDN) survey in the institutional and macroeconomic context of the Slovene economy. The question on collective wage agreement did not capture the prevailing institutional arrangement of multi-level agreements, and the responses on wage indexation were seemingly at odds with institutional features of wage setting. Labor cost adjustments during the financial crisis were primarily in variable pay components and employment but not in base wages. Minimum wage policy contributed to downward wage rigidity. JEL codes: D22, J31, J38, J50 Introduction Labor market institutions and wage setting mechanisms in euro area countries have recently received considerable attention from the European System of Central Banks (Eurosystem). In a monetary union, countries do not have the option of using exchange rate policy to respond to shocks and the importance of wage and labor market flexibility as adjustment mechanisms becomes greater. If significant institutional rigidities are present, wage developments can perpetuate or exacerbate inflationary pressures and render the task of macroeconomic stabilization yet more difficult. Furthermore, if the rigidities are different across member countries, even common shocks may lead to persistent differences in labor costs and associated changes in competitiveness. Thus, a proper understanding of the features of labor market and wage rigidities is essential for designing appropriate structural policies that will facilitate adjustment to shocks. With these considerations in mind, the Eurosystem established the Wage Dynamics Network (WDN) research group in July 2006 to conduct an in-depth study of the sources and features of wage and labor cost dynamics, and of the relationship between wages, labor costs and prices both at the firm and macroeconomic level. Key elements of this research were two questionnaire-based surveys conducted in late 2007/early 2008. One survey questionnaire, filled out by experts from national central banks in 23 European countries 1, was aimed at collecting information on wage bargaining institutions that prevailed in 2006 and a decade earlier in 1995 2. The second survey questionnaire sought information on wage and price-setting behavior at the firm level, and was filled out by firms in 17 European countries. The harmonized questionnaire included a common set of "core" questions for all countries, but some countries adapted the questionnaire to account for specific country characteristics and differences in institutional framework by including fewer or additional "non-core" questions 3. A third follow-up WDN survey was conducted during the summer of 2009 in a sample of 10 countries to examine firms' perception of the financial crisis and their actual responses to it. Ad hoc surveys typically have shortcomings. The most common limitations are a low response rate and respondent bias (European Central Bank 2009a). The latter includes recall error, lack of familiarity with the details, misunderstanding in interpreting the questions, and being influenced by the specific macroeconomic environment prevailing at the time of the survey. 
Therefore, it would be of interest to examine how well the survey responses conform to known institutional features of wage setting and macroeconomic developments in the country. This issue has not received much attention in the discussions of the WDN survey results. In this paper, we carry out such an exercise based on responses to selected questions on features of wage setting in the 2008 WDN firm-level survey for Slovenia. Slovenia did not participate in the follow-up 2009 WDN survey. This paper provides a description of wage setting institutions in Slovenia, places the labor market outcomes in a broader macroeconomic context, and presents selected results of the 2008 firm-level WDN survey. The primary objective of the paper is to check the survey results for consistency and signal potential caveats in their interpretation, including with regard to downward real wage rigidity. An important conclusion of the paper is that the WDN survey findings for Slovenia have to be interpreted with care, owing to limitations in both the design of the survey questionnaire and its implementation in practice. The paper makes suggestions regarding the conduct of a follow-up survey of Slovene firms, which would address some of the limitations of the 2008 survey and would also allow a better understanding of wage setting practices in different phases of a business cycle. The paper is organized as follows. Section 2 looks at the evolution of collective bargaining and wage setting institutions in Slovenia. Section 3 discusses the developments in GDP growth, inflation, labor productivity, and average wages during 1995-2010. Section 4 reviews selected findings of the WDN survey and econometric analysis of wage rigidity, against the backdrop of the discussion of wage-setting institutions and macroeconomic developments. Section 5 concludes. Some institutional features of wage setting Analyses of labor market institutions typically examine, inter alia, union density, the level of collective bargaining, the coverage of collective wage agreements, the degree of wage indexation, and developments in minimum wages. Researchers generally consider centralization of wage bargaining and higher values of the other indicators as evidence of less flexibility (e.g., see Deutsche Bundesbank 2009; Kézdi and Kónya 2009). In Slovenia, some of the individual aspects of institutions have become more flexible over time. However, one should be careful in drawing conclusions about the effect of this on macroeconomic performance. As Aidt & Tzannatos (2002) note, the impact of individual aspects such as union density or centralization of bargaining cannot be assessed in isolation as it is the package of institutions that matters. In addition, the impact depends on the prevailing economic, legal, and political environment and can vary over time. Trade union density Trade union density in Slovenia has decreased since independence, though different data sources disagree on the extent of the decline in recent years. According to the European Industrial Relations Observatory On-line (EIROnline) (2004, 2009a, 2009b), the percentage of employees affiliated with a trade union decreased by about one third in the initial years of Slovenia's transition to a market-oriented economy, from 63.5 percent in 1994 to 42.8 percent in 1998, and fluctuated between 40 and 44 percent thereafter 4. In contrast, the OECD database shows a much lower level of union density and a large fall in recent years. 
According to the OECD, union density in Slovenia fell from 37.5 percent in 2003 to 28.1 percent in 2008 and 25.6 percent in 2009 (http://stats.oecd.org/Index.aspx?DataSetCode=UN_DEN). It is believed that union density declined further during 2010-2011 as a result of the crisis-related rise in bankruptcies (European Industrial Relations Observatory On-line (EIROnline) 2013). The decline in union density in Slovenia can be explained by the structural changes in the economy during the transition process and run-up to euro adoption. Sectors with traditionally high union density (e.g., mining, textiles, leather, and basic metals) have undergone substantial downsizing, and employment growth has mainly taken place in the services sectors where union representation is smaller and more difficult to organize. In addition, the practice of hiring workers on fixed-term contracts or recruiting temporary workers through employment agencies has become more prevalent 5, and such workers have little incentive to join trade unions (European Industrial Relations Observatory On-line (EIROnline) 2010). Level and coverage of collective bargaining Notwithstanding a decrease in trade union density, virtually all employees in Slovenia have been covered by collective agreements over the years. In 2007, 96 percent of employees were covered by collective agreements and the remaining 4 percent of employees, comprising managerial workers, were covered by individual agreements (European Industrial Relations Observatory On-line (EIROnline) 2009b). Until 2005, the extension procedure of collective agreements was based on a "functional equivalent", in that because of compulsory membership of enterprises in the Chamber of Commerce and Industry the agreements were binding for all employers and their employees (Institute of Macroeconomic Analysis and Development (IMAD) 2004). Membership in employers' associations ceased to be compulsory from 2006, but the Collective Agreements Act 2006 stipulated that sectoral agreements concluded by associations of employers would continue to be mandatory for all employers and their employees in the sector for a transitional period of three years. Thereafter, the Act allows for the possibility of extension of a sectoral collective agreement to all employers and employees in the sector if the agreement was signed by representative trade unions and employers that employ at least 50 percent of all employees in the sector and one of the parties seeks an extension from the Minister of Labor (European Industrial Relations Observatory On-line (EIROnline) 2009c; European Foundation for the Improvement of Living and Working Conditions (Eurofound) 2011) 6. During 2009-2010, the Minister of Labor granted extensions to six sectoral collective bargaining agreements. Private-sector wage bargaining in Slovenia is highly structured, with wages negotiated at three levels 7. A general agreement at the national level determines the wage indexation mechanism that is binding for the entire private sector, while sectoral and enterprise-level agreements negotiate additional wage increases based on productivity growth, financial performance, and other considerations. Agreements at each of the lower levels normally improve on the provisions of the higher level agreements. However, the higher level agreements generally include escape clauses that allow enterprises in financial distress to defer specified wage increases under certain conditions. 
A move toward partial decentralization of the bargaining framework was initiated in 2006 but the changes did not fully go into effect immediately. The Collective Agreements Act 2006 provided for collective agreements to be negotiated on a voluntary basis, but the prevailing practice of full coverage of general and sectoral agreements was to remain in effect for a three-year transitional period. Until 2005, the general agreement at the national level was negotiated on a tripartite basis between the trade unions, employers and government. From 2006 onward, the government stopped participating in the negotiations. Collective bargaining at the sectoral level gained in importance and became dominant. However, the social partners continued to negotiate a general agreement on starting pay and minimum basic pay increases as well as an inflation safeguard clause that would apply to all workers in the private sector who were not covered by sectoral collective agreements or whose sectoral collective agreement did not determine the pay adjustment supplement. The sectoral agreements were required by law to set terms at least as favorable as those in the general intersectoral agreement 8. There has been no new general intersectoral collective agreement since the expiry of the 2008-2009 general intersectoral collective agreement in December 2010, and all pay-related issues for the private sector are being regulated by the Employment Relationship Act and sectoral collective agreements. In some sectors, especially labor-intensive ones like the textiles, clothing, and leather industries, collective bargaining has stopped altogether because of the adverse impact of the economic crisis. However, wages in these sectors have been adjusted in line with the increase in the minimum wage (European Industrial Relations Observatory On-line (EIROnline) 2013). Enterprise-level collective agreements have always been voluntary and are over and above the provisions of general and sectoral agreements. Thus, enterprises have flexibility to react to enterprise-specific economic circumstances, subject to meeting the norms set in the general and sectoral agreements. Enterprise-level agreements are more common in large firms. According to employers' associations, about 75-80 percent of large enterprises, 20-50 percent of medium-sized enterprises and less than 10 percent of small enterprises are covered by enterprise-level collective agreements 9. The estimates of trade union confederations are lower. The two largest trade union confederations (ZSSS and KS-90) estimate that 30-50 percent of large enterprises, 20-30 percent of medium-sized enterprises and 5 percent of small enterprises are covered by enterprise-level collective bargaining (see Table 12 in European Foundation for the Improvement of Living and Working Conditions (Eurofound) 2007). There are no estimates of how enterprise-level collective agreements have evolved over time. Indexation of wages to inflation Adjustment of wages to inflation has been a key element of all general wage agreements in Slovenia (Table 1). The indexation mechanism has been modified periodically to support disinflation during the transition process and run-up to euro adoption. During 1995-2000, the indexation mechanism was backward looking, with base wage increases partially indexed to past inflation. 
In 2001, social partners agreed to the implementation of a forward-looking indexation mechanism which provided for partial indexation of base wage increases to the projected inflation rate and a safeguard for an additional increase in the event actual inflation turned out to be higher than projected. The indexation mechanism was further modified in 2004, coinciding with Slovenia's membership of the European Union and entry into the Exchange Rate Mechanism II in preparation for euro adoption. The new indexation formula tied wage increases to projected inflation in Slovenia, projected inflation in selected EU member states, and the projected exchange rate of the tolar vis-à-vis the euro. The mechanism also included a safeguard for an additional wage increase in the event actual inflation exceeded a specified rate. The general agreements reached by social partners in 2006 and 2008 without government participation continued to link wage increases to inflation. Projected inflation was taken into account in the determination of the increase in starting and minimum base wages, though this was not transparent as the agreements proposed increases of a specified percentage each year. Both agreements also included a provision for additional wage increases if inflation exceeded a specified rate. The frequency of wage adjustments initially depended on the pace of inflation, but has not varied in recent years. Adjustments were made quarterly during 1995-96 when inflation was high. As inflation slowed down, adjustments were made once a year during 1997-98. However, with the resurgence of inflation pressure in 1999 wage adjustments began to be carried out twice a year, and this frequency of adjustment continued to be provided for in the wage agreements for subsequent years even after inflation came down 10. Minimum wages Minimum wage developments have contributed to downward wage rigidity in Slovenia. Until 2006, the minimum wage was set within the framework of the tripartite agreement between unions, employers and the government. Since then, the government has been setting the minimum wage alone, following consultations with employers and unions. During 2001-2004 and 2008-2010 the minimum wage went up by more than the average wage in the private sector. Thus, the ratio of the minimum wage to the average wage in the private sector has risen from 43 percent in 2000 to 48 percent in 2010, with a temporary dip during 2006-2007 (Figure 1). Following its increase in March 2010, the minimum wage rather than the basic wage negotiated in sectoral collective agreements became the binding wage-setting parameter for many workers in the private sector, and there was a leveling out of wages at the bottom half of the wage distribution. Table 1 (excerpt): 1995/1996 Social agreements - overall objective: maintain the level of real wages; quarterly adjustment of base wages by 85% of inflation in the previous quarter; additional increases in wages related to productivity. 1997, June - adjustment of base wages in January by 85% of inflation in the previous year; additional increase in mid-1999 for the price impact of the introduction of VAT; safeguard clause for an additional increase in mid-2000 in the event of higher-than-expected inflation. 2001 - gradual transition to forward-looking indexation of base wages. 
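To see how the partial, backward-looking rule summarized in Table 1 plays out, the sketch below applies quarterly adjustment of the base wage by 85% of previous-quarter inflation to a hypothetical inflation path; the figures are invented for illustration and are not Slovene data.

```python
def index_base_wage(base_wage, quarterly_inflation, coefficient=0.85):
    """Partial backward-looking indexation: each quarter the base wage is raised
    by `coefficient` times the previous quarter's inflation rate."""
    path = [base_wage]
    for pi in quarterly_inflation:
        path.append(path[-1] * (1.0 + coefficient * pi))
    return path

# Hypothetical quarterly inflation of 2% per quarter for one year.
wages = index_base_wage(100.0, [0.02, 0.02, 0.02, 0.02])
prices = [100.0 * 1.02 ** q for q in range(5)]
real = [w / p * 100.0 for w, p in zip(wages, prices)]
print([round(w, 2) for w in wages])   # nominal base wage path
print([round(r, 2) for r in real])    # real base wage erodes under partial indexation
```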
Being a small open economy, Slovenia was highly sensitive to the external economic environment, and the periods of slowdown in economic growth coincided with unfavorable external conditions. GDP growth was primarily driven by productivity growth. Wage policy is generally considered to have facilitated the disinflation process in Slovenia during the transition process and run-up to euro adoption in 2007 (Bole and Mramor 2006; Banerjee and Shi 2010). As noted earlier, adjustments of basic wages to inflation were always partial and the wage indexation formula was weakened from 2001 onward. However, for a complete picture one should focus on the total gross wage of a worker rather than the basic wage, and compare the dynamics of nominal and real total gross wages with that of productivity. The increase in the total gross wage of a worker depends on the increase in the basic wage, increases negotiated in sectoral and enterprise-level collective agreements, and other increases based on promotions, individual performance, and enterprise performance. Thus, the growth of total gross wages may exceed inflation. Whether or not this adds to inflationary pressures depends on the trend in the growth of unit labor costs (ULC). As Figure 2 shows, real wage growth generally lagged behind productivity growth in the pre-crisis years. In contrast to this pattern, real wage growth rose at a faster rate than productivity growth in 2008-09. One should not immediately interpret the emergence of a negative gap between productivity growth and wage growth as evidence of wage rigidity during an economic downturn. When faced with a decline in demand, employers have the option of cutting total labor costs through adjustments in wages as well as employment. During the economic crisis, enterprises did reduce employment, but probably by less than they would have, had it not been for the measures introduced by the Slovene government aimed at encouraging enterprises to keep workers on their payroll. These measures included subsidies for shorter-hour work schedules and for giving workers paid leave for a temporary period 13. Because of the anti-crisis measures to avoid redundancies, measured productivity declined sharply in 2009 and a large negative productivity-wage gap emerged. In practically all general collective agreements, social partners had implicitly or explicitly agreed that real wage growth should lag behind productivity growth 14. This guideline was adhered to, by a margin of close to 1 percent or more, in all years except 2001 and 2008-09. Reflecting this, real ULC was on a declining trend until 2007, but rose appreciably during 2008-09 (Figure 3). Nominal ULC was on a rising trend throughout. The pace slowed progressively during 2002-05, but picked up sharply during 2007-09 on account of both faster nominal wage growth and slower or negative productivity growth. 
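The link invoked here between wages, productivity and unit labor costs can be written out explicitly: nominal ULC growth is approximately nominal wage growth minus productivity growth, and real ULC growth additionally nets out inflation (equivalently, real wage growth minus productivity growth). The numbers in the sketch below are invented for illustration and are not Slovene data.

```python
def ulc_growth(nominal_wage_growth, productivity_growth, inflation):
    """Approximate growth rates (percentage points) of nominal and real unit labor costs."""
    nominal_ulc = nominal_wage_growth - productivity_growth
    real_ulc = nominal_ulc - inflation   # = real wage growth - productivity growth
    return nominal_ulc, real_ulc

# Hypothetical pre-crisis year: wages +7%, productivity +4%, inflation +3.5%.
print(ulc_growth(7.0, 4.0, 3.5))    # -> (3.0, -0.5): nominal ULC rises, real ULC falls

# Hypothetical crisis year: wages +3%, productivity -6%, inflation +1%.
print(ulc_growth(3.0, -6.0, 1.0))   # -> (9.0, 8.0): both nominal and real ULC jump
```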
Calmfors and Driffill (1988) have hypothesized a hump-shaped relationship between wage bargaining systems and economic performance: real wage growth is lower under highly centralized (national level) and highly decentralized (firm level) wage bargaining systems and higher under an intermediate (industry level) bargaining system. As cited earlier in Section 2, the impact of an individual aspect, such as the degree of bargaining centralization, cannot be assessed in isolation, as it is the package of institutions that matters. Thus, it is difficult to assess the impact of the changes in the collective bargaining mechanisms on real wage outcomes in Slovenia, given the multiple levels of bargaining in operation, the continued practice of extension of collective agreements to all workers, the shock of the economic crisis, the adaptability of union-employer coordination to the crisis situation, and the downward wage rigidity imposed by minimum wage policy. The short observation period does not allow us to carry out a meaningful multivariate analysis. Further, Driffill (2006) notes that the empirical analysis of Calmfors and Driffill has been criticized for having paid more attention to the level at which bargains were struck and less to the extent of coordination among participants in wage setting (which seems to be an important consideration in Slovenia). Selected findings of the Wage Dynamics Network survey The WDN survey in Slovenia was conducted in January-February 2008, and the reference period was 2006 15. A sample of 3,000 non-agricultural private-sector enterprises with 5 or more employees was selected from the Business Register of Slovenia using a stratified sampling technique. The stratification was done by sector (two-digit NACE classification) and firm size (less than 50 employees, 50-199 employees, and 200 employees or more). The selected firms were contacted by mail with instructions to fill out a web-based questionnaire. Only 681 enterprises, or 22.7 percent of those that were contacted, filled out the questionnaire. The response rate varied considerably across sectors and firm-size groups. The proportion responding to the survey was lowest among small-sized enterprises (16 percent) and among enterprises in tourism (12 percent) and construction (13.5 percent). Thus, in order to adjust for the unequal probability of enterprises ending up in the final sample and to make the results applicable to the entire population of workers, the survey responses were scaled by employment-adjusted sampling weights. The Slovenia survey conformed closely to the template provided by the WDN. It included the core and optional questions of the WDN plus some questions that were specific to Slovenia and not included in the other national questionnaires. The questions were qualitative and quantitative in nature and were aimed at obtaining an understanding of the features of wage setting and price and wage dynamics. In particular, the survey included questions on the frequency of wage changes, time-dependence and synchronization of wage changes, prevalence and features of indexation and adjustment of wages to inflation, wage setting of new hires, downward wage rigidity, response of wages to shocks, the synchronization of wage and price changes, and how wages feed into prices. Unfortunately, the Slovenia questionnaire did not include a question relating to minimum wages. More details on the WDN survey in Slovenia and the questionnaire can be found in Sila and Jesenko (2011) and Vodopivec (2010). The assessment of the survey responses in this paper is selective. In particular, we review only the responses on collective agreement coverage, adjustment of base wages to inflation, wage rigidity, and labor cost adjustment strategies. We do not discuss the survey responses related to the behavior of wages of newly hired employees and to price and wage dynamics. 
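The employment-adjusted sampling weights mentioned above are not spelled out in detail in the text; one plausible minimal implementation, sketched below with invented data, scales each responding firm by the ratio of its stratum's employment share in the population to that stratum's employment share among respondents, and then computes employment-weighted response shares.

```python
import pandas as pd

# Hypothetical responding firms: stratum (sector x size class), employment, and a 0/1 answer.
firms = pd.DataFrame({
    "stratum":    ["mfg_small", "mfg_small", "mfg_large", "services_small"],
    "employment": [20, 35, 450, 12],
    "indexation": [0, 1, 1, 0],   # e.g. "applies inflation indexation of base wages"
})

# Hypothetical population employment shares by stratum (e.g. from the Business Register).
pop_share = {"mfg_small": 0.25, "mfg_large": 0.40, "services_small": 0.35}

sample_share = firms.groupby("stratum")["employment"].sum() / firms["employment"].sum()
firms["weight"] = (firms["employment"]
                   * firms["stratum"].map(pop_share)
                   / firms["stratum"].map(sample_share))

weighted_share = (firms["weight"] * firms["indexation"]).sum() / firms["weight"].sum()
print(f"employment-weighted share applying indexation: {weighted_share:.1%}")
```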
Level of collective wage agreement
The question on the level of collective wage agreement was posed differently in the Slovenia survey from that in the WDN survey in other countries and had a major shortcoming. In other countries, the survey included two questions on this issue: if the firm applied a collective wage agreement set outside the firm and, irrespective of the answer to this question, if the firm applied a wage agreement signed at the firm level. Thus, these surveys could capture collective agreements being applied at more than one level. In contrast, the Slovenia survey only asked which level of collective bargaining agreement was applied by the firm, and did not provide the option of recording multiple answers. Thus, the survey did not capture the application of more than one level of agreement, which we know from Section 2 was a standard feature of the Slovene system. This shortcoming needs to be taken into account in interpreting the responses to the question on the level of collective bargaining that applied. According to the WDN survey, about one-fourth of non-agricultural private sector employees in Slovenia were covered by firm-level collective agreements, slightly more than one-half by sectoral collective agreements, and one-fifth by the general collective agreement (Table 2). Since we know from Section 2 that in the reference year 2006 (i) a general agreement covered a small group of workers not covered by relevant sectoral agreements for a variety of reasons, and (ii) firm-level collective agreements were voluntary and augmented the agreements at the general and sectoral levels, it seems reasonable to infer that the responses of the firms in the survey probably indicate the lowest level of collective agreement that they applied. However, because the survey did not allow for multiple responses, we cannot accurately ascertain the importance of different levels of agreement from the survey responses and cannot unequivocally compare the findings for Slovenia with those for other countries. As Table 2 shows, there were significant differences between firm-size groups and sectors in the application of different levels of collective agreement. Firm-level wage agreements were applied most frequently by firms with 200 or more workers and by firms in the utilities sector. Firms in manufacturing and financial intermediation also reported incidence of firm-level wage agreements above the national average. The application of the general-level agreement was reported most frequently by firms employing fewer than 50 workers and by firms in market services. The patterns by firm size and sector were consistent with expectations and broadly in line with those observed in some western European countries (see Deutsche Bundesbank 2009). As noted earlier in Section 2, trade unions and employers' associations also report that firm-level agreements are most common in large firms and minimal in small firms. Profit margins are generally higher for larger firms and firms in the utilities and financial intermediation sectors, which allows them greater flexibility in giving performance-based compensation above the wage increases negotiated in the general and sectoral agreements. Also, union density is likely to be lower in smaller firms and in the services sector, which reduces the scope for sectoral agreements being applied by them.
Adjustment of base wages to inflation
The responses in the WDN survey to the question on whether firms had a policy of adapting changes in base wages to inflation are seemingly inconsistent with the institutional features of wage setting in Slovenia. As noted in Section 2, the general agreement negotiated by social partners in 2006 (the reference period for the survey) for workers in the private sector not covered by sectoral collective agreements provided for an inflation safeguard clause, and the sectoral agreements were required by law to set terms at least as favorable as those in the general agreement. Yet, as Table 3 shows, as much as 40 percent of the sample indicated that they did not apply an inflation indexation policy to adjust base wages. The proportion indicating no inflation-indexation policy was high across all size groups and sectors, but was highest among firms with 50-199 workers and in construction and trade. Strangely, these two sectors also reported the lowest incidence of firm-level collective agreements. While the survey responses may reflect respondent bias, one can conjecture a few reasons for the seeming inconsistency. First, it is possible that the base wages negotiated in the sectoral agreements went up by more than the indexation amount suggested in the general agreement. Second, it is possible that the respondents did not pay attention to the fact that the question referred to base wages rather than total gross wages. Since nominal total gross wages grew more rapidly than inflation because of reasons related to productivity growth and other performance considerations, it is possible that the respondents did not perceive inflation indexation as being relevant or binding for overall wage determination. Third, the respondents may have misinterpreted the guidelines for indexation in the general collective agreement. The safeguard clause for additional wage increases in 2006 was not triggered because actual inflation in end-December was less than projected 16. Also, the general agreement for 2006-07 specified a particular rate of wage increase without any explicit reference to indexation 17, and the safeguard clause related to inflation developments in 2006 was to take effect in 2007. Thus, these two aspects could have mistakenly led some respondents to believe that indexation of wages to inflation did not apply in 2006. Fourth, it may be that the respondents were influenced by the fact that, unlike in earlier years, the indexation mechanism in 2006 was not government imposed. Another striking finding of the WDN survey is that, of those who indicated inflation indexation of base wages, a vast majority replied that wages were indexed to past inflation. The pattern was similar across all size groups and sectors. Once again, this response may have been influenced by the specific way that the indexation mechanism operated in Slovenia. As noted in Section 2, Slovenia moved away from backward indexation to forward-looking indexation in 2002, and agreements further included a safeguard clause for an additional increase if actual inflation exceeded the projected or specified rate. The 2006 agreement did not explicitly include an indexation coefficient and figure for projected inflation but had a safeguard clause. This may have influenced respondents to interpret the framework as backward-looking indexation.
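To make the forward-looking-indexation-plus-safeguard mechanism concrete, here is a hedged sketch of how such a rule operates. The coefficient and rates are hypothetical, not the actual 2006 agreement figures, and the exact trigger mechanics of the Slovene agreements may differ in detail.

```python
# Hedged sketch of a partial, forward-looking indexation rule with a
# safeguard clause. Numbers are invented for illustration.
def annual_adjustment(base_wage, projected_inflation, actual_inflation,
                      coefficient=0.85):
    """Raise the base wage by a fraction of *projected* inflation; if
    actual inflation later exceeds the projection, the safeguard clause
    entitles workers to a top-up in the following period."""
    new_wage = base_wage * (1 + coefficient * projected_inflation)
    safeguard_topup = max(0.0, actual_inflation - projected_inflation)
    return new_wage, safeguard_topup

wage, topup = annual_adjustment(1000.0, projected_inflation=0.025,
                                actual_inflation=0.021)
print(f"adjusted wage {wage:.2f}, safeguard top-up {topup:.1%}")
# actual < projected, so the safeguard clause is not triggered -- the
# 2006 situation described above.
```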
It is difficult to decipher why indexation was considered informal by a sizeable percentage of the sample, because indexation was a key feature of collective agreements and there was a formal rule. The survey responses to the question on the typical frequency of base wage changes due to inflation, shown in Table 4, are puzzling as well as seemingly inconsistent with the institutional arrangements for indexation. First, the percentage of respondents stating that they never adjusted base wages due to inflation (16.5 percent, Table 4) is much smaller than the percentage that claimed no inflation indexation policy (39.5 percent, Table 3). Second, we know from the discussion of collective agreements in Section 2 that the typical frequency of wage indexation from 1999 onward was two times a year. However, since the safeguard clause was not triggered in 2006, only one adjustment of basic wages took place in August that year. Thus, it is odd that one-third of the sample claimed that base wages were adjusted to inflation once every two years or less frequently than once every two years. Perhaps the respondents were referring to the duration for which the negotiated agreements were valid. The proportion rose with firm size (from 16.5 percent in firms with 5-19 workers to 41 percent in firms with 200 or more workers), and was highest in the utilities and financial intermediation sectors (about 48 percent in each). Evidence of response errors is further demonstrated by the cross-tabulation of responses to the questions on inflation indexation policy and frequency of wage changes due to inflation (Table 5). Of those who had indicated that they had an automatic or informal inflation indexation policy, 16 percent responded that they had never adjusted wages due to inflation. Furthermore, of those who indicated that they had no inflation indexation policy, nearly 60 percent stated that they adjusted base wages due to inflation once a year or more frequently. It may be that these firms did not see their action as wage indexation because they did not follow the automatic indexation rule specified in the collective agreement. Nevertheless, these seeming inconsistencies point to the absence of proper cross-checks of the answers by the survey data compilers and to the need for follow-up clarification of the interpretation of the questions by the respondents (a minimal version of such a cross-check is sketched below).

Downward wage rigidity
The responses to the WDN survey questions aimed at assessing wage rigidity need to be interpreted with caution 18. An extremely small proportion of firms answered that they had ever frozen or cut base wages in the five years prior to the survey (below 3 percent in both cases). About three-fourths of the firms indicated that regulation or collective bargaining was of little or no relevance in preventing base wage cuts. Rather, the two most important reasons why base wage cuts were rare were the impact on work morale and the possibility that the most productive workers would leave as a consequence. In response to the questions about possible reactions of firms to possible demand and supply shocks, no firm indicated wage cuts as the most important factor for reducing costs. Furthermore, about two-thirds of the sample indicated that they had never used any measure to reduce labor costs. As such, these responses could be seen as evidence of high downward wage rigidity in Slovenia but, when viewed against the economic climate during the five-year reference window, we cannot take the survey responses as conclusive evidence.
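Returning to the Table 5 discussion above: the kind of cross-check the survey compilers could have run reduces to a weighted cross-tabulation of the two questions. A minimal sketch, with invented responses:

```python
# Sketch of the consistency check behind Table 5: cross-tabulating the
# indexation-policy question against the frequency-of-adjustment question
# flags mutually inconsistent answers. Data and weights are invented.
import pandas as pd

survey = pd.DataFrame({
    "indexation_policy": ["automatic", "automatic", "none", "none", "informal"],
    "freq_wage_change":  ["never", "twice a year", "once a year", "never", "never"],
    "weight":            [3.1, 1.4, 2.2, 1.8, 2.5],  # employment-adjusted
})

table = pd.crosstab(survey["indexation_policy"], survey["freq_wage_change"],
                    values=survey["weight"], aggfunc="sum", normalize="index")
print(table.round(2))
# Suspicious cells: a stated indexation policy but "never" adjusting, or
# no policy combined with frequent inflation-driven adjustments.
```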
As Sila and Jesenko (2011) point out, the observed downward nominal wage rigidity can be explained by the absence of a business cycle downturn during the reference period. The WDN survey was conducted prior to the onset of the financial crisis and, as discussed in Section 3, the Slovene economy experienced robust GDP and productivity growth and a U-shaped inflation path during 2003-2007. Thus, there was little incentive for cutting or freezing base wages or, for that matter, total gross wages during this period. While base wages were partially indexed to inflation under general collective agreements, additional wage increases were negotiated in sectoral and firm-level collective agreements based on productivity and performance considerations. Indeed, as Figure 4 shows, reflecting the favorable economic trends, the proportion of private sector employees who received performance-related payments ("thirteenth month's pay" and Christmas bonus) increased from 20.6 percent in 2003 to a peak of 33 percent in 2007 19. Had employers made efforts to cut base wages or total gross wages in this environment, they would likely have lost their most productive workers. When faced with a falling demand shock during the financial crisis, Slovene firms displayed flexibility in cutting labor costs to a significant degree. The adjustments were primarily in variable pay components and employment but not in base wages, consistent with the provisions of the collective bargaining agreements and the constraint imposed by the level of the minimum wage. The proportion of private sector employees who received performance-related payments fell to about 26 percent in 2010, a decrease of nearly one-fifth from the 2007 level, and the average amounts of the payments were smaller. Because of the existence of flexible wage components, firms could manipulate these payments to lower the total nominal gross wage without cutting base wages. In addition to cutting performance-related wage payments, Slovene firms cut labor costs by reducing paid overtime work, placing workers on shorter-time work schedules, and laying off fixed-term and temporary workers 20. Enterprise-level collective agreements typically include the possibility of introducing a shorter-hour work schedule. As noted earlier in footnote 13 in Section 3, private sector employment was cut back by 7.25 percent and another 12.7 percent of employees were placed on shorter-time work schedules in 2009-2010. Supporting evidence on the prevalence of flexible wage components in the form of performance-related bonus payments is provided by the WDN survey. As Table 6 shows, on average about 17.5 percent of the total wage bill in Slovene firms in the survey reference period (i.e., 2006) was allocated to bonuses associated with individual or company performance. Bonus payments were more important in small-sized firms and in the construction and trade sectors. The average share of both individual and company performance-related bonuses in the total wage bill was highest among firms employing 5 to 19 workers and fell as firm size increased. Firms in the trade sector paid bonuses related to both individual and company performance, whereas the construction sector tended to rely mainly on bonus payments related to individual performance. Following Babecký et al. (2009, 2012), Sila and Jesenko (2011) treat indexation of base wages to inflation as a proxy for real wage rigidity and carry out an econometric exercise relating this variable to selected firm characteristics 21.
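The structure of that exercise is a binary-response model of whether a firm reports indexation, regressed on firm characteristics. The sketch below simulates such a model rather than reproducing the cited papers' exact specification; all variable names, coefficients, and data are hypothetical.

```python
# Sketch of a Babecky-et-al.-style exercise: probit model of reporting
# base-wage indexation on firm characteristics. Everything is simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
share_skilled = rng.uniform(0, 1, n)      # share of high-skilled blue-collar workers
large_firm    = rng.integers(0, 2, n)     # dummy: 200+ employees
firm_level_ca = rng.integers(0, 2, n)     # dummy: firm-level collective agreement

# Latent propensity loosely mimicking the signs reported in the text.
latent = -0.3 + 0.8 * share_skilled + 0.5 * large_firm + 0.4 * firm_level_ca
reports_indexation = (latent + rng.standard_normal(n) > 0).astype(int)

X = sm.add_constant(np.column_stack([share_skilled, large_firm, firm_level_ca]))
model = sm.Probit(reports_indexation, X).fit(disp=False)
print(model.summary(xname=["const", "share_skilled", "large_firm", "firm_ca"]))
```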
In the specifications for the sample of euro zone countries, Sila and Jesenko find that after controlling for other firm characteristics, GDP growth, and inflation, Slovenia had the highest probability of firms reporting indexation of base wages to inflation. In the specification restricted to Slovene firms, the probability of reporting base wage indexation is higher for firms with a higher share of high-skilled blue-collar workers and for large firms. Also, contrary to expectations, the probability of reporting wage indexation is lower for firms with a non-firm-level collective agreement compared to firms with a firm-level agreement. The results of the econometric exercise are best seen as multivariate determinants of the response to the question on whether the enterprise had a policy to adapt changes in base wages to inflation 22, rather than as determinants of real wage rigidity. As noted in the previous section, the answers to this question were subject to serious response error, taking into account the institutional arrangements for wage indexation in Slovenia as well as the WDN survey responses regarding the frequency of wage adjustments to inflation. General collective agreements covered all private sector employees in Slovenia for indexation of base wages to inflation. If all the survey responses were consistent with this institutional feature, the econometric exercise would be redundant because of the lack of variability in the dependent variable. Still, wage indexation per se should not be taken as evidence of downward real wage rigidity. In Slovenia, the indexation of base wages to inflation was partial, thus allowing for a decrease in base wages in real terms in all years. The growth of total gross wages exceeded inflation because of additional wage increases linked to productivity gains and performance, and there was variation in annual developments in real gross wages.

Conclusions
An important conclusion of this paper is that the WDN survey findings on the wage-setting behavior of firms have to be interpreted with caution and should not be used to draw definite conclusions about wage flexibility in Slovenia. The survey had critical shortcomings: the response rate was very low, the design of some key questions did not capture the prevailing institutional arrangements, questions may have been interpreted differently by different respondents, and answers to some questions were mutually inconsistent, which suggested response errors and a lack of proper cross-checking by the survey administrators. In addition, it is likely that the survey responses were influenced by the prevailing economic environment. The survey was conducted at a time when the economy was booming and Slovenia had not experienced a downturn in the business cycle during the preceding decade. Evidence indicates that a sizeable part of the enterprise wage bill comprises performance-related bonus payments that tend to be pro-cyclical in nature. In the years preceding the onset of the financial crisis, the proportion of private sector employees receiving performance-related payments was on an upward trend. However, faced with a falling demand shock with the onset of the financial crisis in late 2008, Slovene firms displayed flexibility in cutting labor costs to a significant degree. Enterprises cut back on performance-related payments in terms of amounts as well as the number of employees receiving them, lowered paid overtime work, put employees on shorter-hours work schedules, and laid off workers.
The labor cost adjustment methods implemented by the enterprises were within the bounds of the collective wage bargaining arrangements and did not involve cuts in base wages. A cut in base wages was not an option following the large increase in the minimum wage in 2010, as the increase pushed the minimum wage well above the basic wages negotiated in some sectoral agreements, even for more demanding tasks. The government's minimum wage policy contributed to downward wage rigidity and worked against the other efforts aimed at fostering wage flexibility. A number of steps have been taken in recent years to increase wage and labor market flexibility in Slovenia, though their impact is yet to be fully felt and many challenges remain. The Collective Agreement Act 2006 provided for collective agreements to be negotiated on a voluntary basis, and the changes were to go into effect after a three-year transition period. Thus, a decrease in the proportion of employees covered by collective bargaining can be expected over time. The Employment Relationship Act was amended in 2007 with the aim of expanding the possibility of using flexible forms of employment (fixed-term employment), allowing for longer overtime work, and making termination of employment contracts easier (Institute of Macroeconomic Analysis and Development (IMAD) 2008). Looking ahead, there is a need to examine how enterprises have adapted to the changes in the institutional aspects of wage setting and employment relationships, and to conduct in-depth micro-data-based analysis of enterprise behavior prior to the crisis, during the crisis, and after the crisis. Unlike many other European countries, Slovenia did not participate in the follow-up WDN survey that was conducted in 2009 soon after the start of the financial crisis. Thus, a follow-up WDN-style survey that takes into account the lessons from the 2008 survey and other follow-up surveys conducted elsewhere is desirable. It would be particularly important in the follow-up survey to include questions on the role of the minimum wage in the setting of basic wages in sectoral collective agreements and its impact on the labor cost adjustment efforts of enterprises. For a comprehensive assessment of enterprise behavior and labor market outcomes, it is not sufficient to focus on existing firms. It would be worthwhile supplementing the survey of existing firms with a small survey of firms that have ceased operations in recent years.
9,349.2
2013-08-15T00:00:00.000
[ "Economics" ]
Second Quantization and the Spectral Action We show that by incorporating chemical potentials one can extend the formalism of the spectral action principle to Bosonic second quantization. In fact we show that the von Neumann entropy, the average energy, and the negative free energy of the state defined by the Bosonic, or Fermionic, grand partition function can be expressed as spectral actions, and all spectral action coefficients can be given in terms of the modified Bessel functions. In the Fermionic case, we show that the spectral coefficients for the von Neumann entropy, in the limit when the chemical potential $\mu$ approaches $0$, can be expressed in terms of the Riemann zeta function. This recovers a recent result of Chamseddine-Connes-van Suijlekom. Introduction The spectral action principle of Connes and Chamseddine was originally developed mainly to give a conceptual and geometric formulation of the standard model of particle physics [2]. The spectral action can be defined for spectral triples (A, H, D), even when the algebra A is not commutative. An interesting feature here is the additivity of the spectral action with respect to the direct sum of spectral triples. Conversely, one can wonder whether a given additive functional on spectral triples is obtained via a spectral action. In a recent paper [3], Chamseddine, Connes, and van Suijlekom have shown that the von Neumann entropy of the Gibbs state naturally defined by a Fermionic second quantization of a spectral triple is in fact spectral, and they find a universal function that defines the spectral action. In this paper we show that by incorporating chemical potentials one can extend the formalism of the spectral action principle to both Bosonic and Fermionic second quantization. In fact we show that the von Neumann entropy, the average energy, and the negative free energy of the thermal equilibrium state defined by the Bosonic, or Fermionic, grand partition function, with a given chemical potential, can be expressed as spectral actions. We show that all spectral action coefficients can be expressed in terms of the modified Bessel functions of the second kind. In the Fermionic case, we show that the spectral action coefficients for the von Neumann entropy, in the limit when the chemical potential µ approaches 0, can be expressed in terms of the Riemann zeta function. This recovers the recent result of Chamseddine-Connes-van Suijlekom in [3]. It should be noted that without the use of chemical potentials, the natural spectral function for the von Neumann entropy in the Bosonic case is singular at t = 0, and in fact the corresponding functional is not spectral. In searching for a suitable expression of the spectral action coefficients in all six cases studied in this paper, we were naturally led to the class of modified Bessel functions of the second kind. In Section 3 some basic properties of these functions are derived. In Section 2 we recall some of the main concepts and results from the theory of second quantization. Our main results are presented in Sections 4 and 5. In this section, mainly to fix our notation and terminology, we shall recall some basic definitions and facts from the theory of second quantization in quantum statistical mechanics. We shall largely follow [1]. Fock space and second quantization In this section we shall first recall the definition of the Fock space F(H) of a Hilbert space H, and the corresponding Fermionic Fock space F−(H) and the Bosonic Fock space F+(H) [1].
Here we will regard F±(H) as subspaces of F(H), although one can also treat them as the quotient spaces of F(H) instead. After that we shall recall the procedure of second quantization. Let H be a complex Hilbert space. We denote by H^n = H ⊗ H ⊗ · · · ⊗ H the n-fold tensor product of H with itself when n > 0, and let H^0 = C. The Fock space F(H) is the completion of the pre-Hilbert space ⊕_{n≥0} H^n. Define the projection operators P± on H^n by
$$P_{\pm}(f_1 \otimes \cdots \otimes f_n) = \frac{1}{n!} \sum_{\pi \in S_n} (\pm 1)^{\pi} f_{\pi(1)} \otimes \cdots \otimes f_{\pi(n)}$$
for all f_1, ..., f_n ∈ H, where the sum runs over all permutations π of {1, ..., n}, and (±1)^π is 1 for P_+ and the sign of π for P_−. Since P± are bounded operators with norm 1 on ⊕_{n≥0} H^n, they can be extended by continuity to bounded projection operators on the Fock space F(H). The Bosonic Fock space F+(H) and the Fermionic Fock space F−(H) are then defined by F±(H) = P±(F(H)). The corresponding n-particle subspaces H^n_± are defined by H^n_± = P± H^n. The structure of the Fock space allows us to amplify an operator on H to the whole Bose/Fermi Fock spaces F±(H). This procedure is commonly referred to as second quantization. Let H be a self-adjoint operator on H with domain D(H). We define H_n on H^n_± by
$$H_n = \sum_{k=1}^{n} \mathbb{1} \otimes \cdots \otimes \underset{k\text{-th}}{H} \otimes \cdots \otimes \mathbb{1}.$$
The direct sum ⊕_{n≥0} H_n is essentially self-adjoint, and the self-adjoint closure of this direct sum operator is called the second quantization of the operator H; it is denoted by dΓ(H). Namely, dΓ(H) is the closure of ⊕_{n≥0} H_n. In particular, let H = 1 be the identity operator. Then we have dΓ(1) = N, where N is the number operator on F±(H), whose domain is defined by
$$D(N) = \Big\{ \psi = \{\psi^{(n)}\}_{n \geq 0} : \sum_{n} n^2 \|\psi^{(n)}\|^2 < \infty \Big\},$$
and for any ψ ∈ D(N), Nψ = {nψ^(n)}_{n≥0}. For a unitary operator U on H, first we define U_n on H^n_± by U_n = U ⊗ · · · ⊗ U, and then extend it to the whole Fock space. We denote this extension by Γ(U), called the second quantization of the unitary operator U. It is worth noticing that here Γ(U) is also a unitary operator on F±(H). Also, if U_t = e^{itH} is a strongly continuous one-parameter unitary group acting on H, then Γ(U_t) = e^{it dΓ(H)} on the Fock spaces F±(H). If H is a self-adjoint Hamiltonian operator on the one-particle Hilbert space H, then the dynamics of the ideal Bose gas and the ideal Fermi gas are described by the Schrödinger equation, and the evolution of a bounded observable A ∈ B(F±(H)) is given by conjugation as A ↦ e^{it dΓ(H)} A e^{−it dΓ(H)}. Next we shall introduce the Gibbs grand canonical equilibrium state ω of a particle system at inverse temperature β ∈ R, and with chemical potential µ ∈ R. Let K_µ = dΓ(H − µ1) = dΓ(H) − µN be the modified Hamiltonian. Then ω is defined by
$$\omega(A) = \frac{\mathrm{Tr}\big(e^{-\beta K_\mu} A\big)}{\mathrm{Tr}\big(e^{-\beta K_\mu}\big)}.$$
Here we assume the operator e^{−βK_µ} is a trace-class operator. If we have two one-particle spaces H_1 and H_2, and self-adjoint operators on them such that the operators e^{−dΓ(H_i)} are positive trace-class operators for i = 1, 2, then
$$\mathrm{Tr}\big(e^{-d\Gamma(H_1 \oplus H_2)}\big) = \mathrm{Tr}\big(e^{-d\Gamma(H_1)}\big)\,\mathrm{Tr}\big(e^{-d\Gamma(H_2)}\big).$$
CAR and CCR algebras
Both the CAR and CCR algebras are constructed with the help of creation and annihilation operators. Because of that, we shall recall the definitions of the annihilation and creation operators first. Let H be a complex Hilbert space. For each f ∈ H, we define the annihilation operator a(f) and the creation operator a*(f) acting on the Fock space F(H) by initially setting a(f)ψ^(0) = 0, a*(f)ψ^(0) = f, for all f ∈ H, and
$$a(f)(f_1 \otimes \cdots \otimes f_n) = \sqrt{n}\,\langle f, f_1\rangle\, f_2 \otimes \cdots \otimes f_n, \qquad a^*(f)(f_1 \otimes \cdots \otimes f_n) = \sqrt{n+1}\, f \otimes f_1 \otimes \cdots \otimes f_n.$$
Here ψ^(0) = 1 ∈ C. One can see that the maps f → a(f) are anti-linear while the maps f → a*(f) are linear. Also, one can show that a(f) and a*(f) have well-defined extensions to D(N^{1/2}), the domain of the operator N^{1/2}.
Moreover, we have that a*(f) is the adjoint of a(f); namely, for any φ, ψ ∈ D(N^{1/2}), one has ⟨a*(f)φ, ψ⟩ = ⟨φ, a(f)ψ⟩. We can then define the annihilation operators a±(f) and the creation operators a*±(f) on the Fermi/Bose Fock spaces F±(H) by a±(f) = P± a(f) P± and a*±(f) = P± a*(f) P±. Moreover, since the annihilation operator a(f) keeps the subspaces F±(H) invariant, we have a±(f) = a(f)P±, a*±(f) = P± a*(f). These operators satisfy
$$a_-(f)a_-^*(g) + a_-^*(g)a_-(f) = \langle f, g\rangle \mathbb{1}, \qquad a_-(f)a_-(g) + a_-(g)a_-(f) = 0,$$
$$a_+(f)a_+^*(g) - a_+^*(g)a_+(f) = \langle f, g\rangle \mathbb{1}, \qquad a_+(f)a_+(g) - a_+(g)a_+(f) = 0.$$
The first relations are called the canonical anti-commutation relations (CAR), and the second relations are called the canonical commutation relations (CCR). Roughly speaking, the CAR algebra is the algebra generated by the annihilation operators a−(f) and creation operators a*−(f). In fact, we have the following proposition [1]: for every f ∈ H, ‖a−(f)‖ = ‖a*−(f)‖ = ‖f‖. Therefore both a−(f) and a*−(g) have bounded extensions on F−(H). Definition. We call the subalgebra of B(F−(H)) generated by a−(f), a*−(g) and 1 the CAR algebra and denote it by CAR(H). Although the CCR rules look very similar to the CAR rules, one cannot simply mimic the previous definition of the CAR algebra to deduce the definition of the CCR algebra. The reason is that the annihilation operators a+(f) and the creation operators a*+(g) are not bounded operators on F+(H). First we introduce the set of operators {Φ(f), f ∈ H} by
$$\Phi(f) = \frac{1}{\sqrt{2}}\big(a_+(f) + a_+^*(f)\big).$$
Since the map f → a+(f) is anti-linear, and f → a*+(f) is linear, the operators a+(f) and a*+(f) can be recovered from Φ(f) and Φ(if). Thus it suffices to examine the set of operators {Φ(f), f ∈ H}. Let F̂+(H) = P+(⊕_{n≥0} H^n) ⊆ F+(H), i.e. F̂+(H) contains the sequences ψ = {ψ^(n)}_{n≥0} which have only a finite number of nonvanishing components. Since for each f ∈ H, Φ(f) is essentially self-adjoint on F̂+(H), Φ(f) can be extended to a self-adjoint operator; we still use Φ(f) to denote the self-adjoint extension. We have the following proposition [1]. Let CCR(H) denote the algebra generated by {W(f), f ∈ H}, where W(f) := e^{iΦ(f)}. It follows that (1) W(−f) = W(f)*, and (2) for each pair f, g ∈ H,
$$W(f)W(g) = e^{-\frac{i}{2}\mathrm{Im}\langle f, g\rangle}\, W(f + g).$$
The operators W(f) are called Weyl operators, and the algebra CCR(H) is called the CCR algebra of H. Gibbs states. Let K_µ denote the modified Hamiltonian operator K_µ = dΓ(H) − µN. In the Fermionic case, we can define the Gibbs state ω(A) over the CAR algebra CAR(H) by
$$\omega(A) = \frac{\mathrm{Tr}\big(e^{-\beta K_\mu} A\big)}{\mathrm{Tr}\big(e^{-\beta K_\mu}\big)}.$$
Here we assume the operator e^{−βK_µ} is a trace-class operator on F−(H). In fact, we have the following proposition [1]: let H be a self-adjoint operator on the Hilbert space H and let β ∈ R. The following conditions are equivalent: (1) e^{−βH} is trace-class on the one-particle Hilbert space H; (2) e^{−β dΓ(H)} is trace-class on the Fermionic Fock space F−(H). In the Bosonic case, we can define the Gibbs state ω(A) over the CCR algebra CCR(H) by the same formula. Similarly as in the case of the Fermionic Fock space F−(H), it is implicitly assumed that the operator e^{−βK_µ} is trace-class on F+(H); in fact, we have the following proposition [1]: let H be a self-adjoint operator on the one-particle Hilbert space H, and let β, µ ∈ R. The following conditions are equivalent: • e^{−βH} is trace-class on the one-particle Hilbert space H and β(H − µ1) > 0; • e^{−β dΓ(H−µ1)} is trace-class on the Bosonic Fock space F+(H). Entropy and energy. Let (A, H, D) be a spectral triple. We can construct the Bosonic and Fermionic Fock spaces F±(H). Suppose the operator e^{−dΓD_µ} is a trace-class operator on F+(H), or on F−(H). Then we can define the density matrix
$$\rho = \frac{e^{-d\Gamma D_\mu}}{\mathrm{Tr}\big(e^{-d\Gamma D_\mu}\big)}.$$
In this section, we will show that when the operator e^{−D_µ} is trace class on H, the von Neumann entropy, the average energy, as well as the negative free energy of ρ can be expressed as spectral actions for the spectral triple (A, H, D). First let us briefly recall the von Neumann entropy and the energy.
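Before that recap, a quick numerical aside (a sketch, not part of the source): the Fermionic trace-class statement can be checked in finite dimensions, where Tr e^{−β dΓ(H−µ1)} over the Fermionic Fock space factorizes as a product over eigenmodes, since each mode is occupied 0 or 1 times.

```python
# Numeric sanity check of the Fermionic grand partition function for a
# finite-dimensional one-particle space with eigenvalues lam:
#   Tr exp(-beta * dGamma(H - mu)) = prod_i (1 + exp(-beta * (lam_i - mu)))
import itertools
import numpy as np

lam, beta, mu = np.array([0.3, 0.9, 1.7]), 1.0, -0.5

# Brute force: sum exp(-beta * E) over all occupation patterns n_i in {0, 1},
# where E = sum_i n_i * (lam_i - mu).
brute = sum(
    np.exp(-beta * np.dot(n, lam - mu))
    for n in itertools.product([0, 1], repeat=len(lam))
)
product_formula = np.prod(1.0 + np.exp(-beta * (lam - mu)))
assert np.isclose(brute, product_formula)
print(brute, product_formula)
```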
Consider a density matrix ρ on a Hilbert space H, i.e. ρ is a positive trace-class operator with Tr(ρ) = 1. Its von Neumann entropy is defined to be S(ρ) := −Tr(ρ log ρ). Consider an observable, that is a self-adjoint operator H : H → H, and let ρ = (1/Z) exp(−βH) be a thermal density matrix, at some inverse temperature β. Here Z = Tr(exp(−βH)) is the canonical partition function. Then the average energy ⟨H⟩ = Tr(ρH) is given by
$$\langle H \rangle = -\frac{\partial}{\partial \beta} \log Z,$$
and the negative free energy −F(ρ) is defined by −F(ρ) = (1/β) log Z. It is easy to see that
$$S(\rho) = \beta \langle H \rangle + \log Z = \beta\big(\langle H \rangle - F(\rho)\big).$$
In a given spectral triple (A, H, D), the operator e^{−dΓD_µ} is well-defined on both F+(H) and F−(H). According to Proposition 2.4, the operator e^{−dΓD_µ} is trace-class on F−(H) if and only if the operator e^{−D_µ} is trace-class on H. Thus suppose e^{−D_µ} is trace-class on H. Then we can define a density matrix on F−(H). The map D → S(ρ(dΓD_µ)) gives rise to a spectral action, and this spectral action is an additive functional on spectral triples. In fact, suppose D = S ⊕ T is an orthogonal decomposition. Then F−(H_S ⊕ H_T) ≅ F−(H_S) ⊗ F−(H_T) and ρ(dΓD_µ) = ρ(dΓS_µ) ⊗ ρ(dΓT_µ), and since the von Neumann entropy is additive over tensor products, we have the entropy S(ρ(dΓD_µ)) = S(ρ(dΓS_µ)) + S(ρ(dΓT_µ)); thus the map D → S(ρ(dΓD_µ)) gives rise to a well-defined spectral action. Now for a given chemical potential µ, the map D → dΓD_µ gives us a spectral action as well. According to Lemma 2.1, this action is additive. For simplicity, we take the inverse temperature β = 1 here.

Modified Bessel functions of the second kind
The modified Bessel functions {I_ν(z), K_ν(z)} are the solutions of the modified Bessel's equation
$$z^2 \frac{d^2 y}{dz^2} + z \frac{dy}{dz} - (z^2 + \nu^2) y = 0,$$
and one has
$$K_\nu(z) = \frac{\pi}{2}\,\frac{I_{-\nu}(z) - I_\nu(z)}{\sin(\nu\pi)}. \quad (3)$$
The right-hand side of (3) should be determined by taking the limit when ν is an integer. The function I_ν(z) is called the modified Bessel function of the first kind, and K_ν(z) the modified Bessel function of the second kind. We shall introduce some basic properties of the modified Bessel function of the second kind. For more detail, one can check the references [7, 6, 4].
Lemma 3.1. When z ց 0, one has K_0(z) = −log(z/2) − γ + o(1), where γ is Euler's constant.
Lemma 3.2. When z ց 0 and ν > 0, one has K_ν(z) ∼ (Γ(ν)/2)(2/z)^ν.
Lemma 3.3. When z ր ∞, one has K_ν(z) ∼ √(π/(2z)) e^{−z}.
Lemma 3.4. One has the integral representation formula of the function K_ν(z):
$$K_\nu(z) = \int_0^\infty e^{-z\cosh t}\,\cosh(\nu t)\, dt, \qquad \mathrm{Re}(z) > 0.$$
Lemma 3.5. Let K_ν(z) be the modified Bessel functions of the second kind. Then one has [4, 8.486]
$$K_{\nu-1}(z) - K_{\nu+1}(z) = -\frac{2\nu}{z} K_\nu(z), \qquad K_{\nu-1}(z) + K_{\nu+1}(z) = -2 K_\nu'(z).$$
Lemma 3.6. When ν > −1/2, a > 0, and x > 0, we have the integral formula [4, 8.432]
$$\int_0^\infty \frac{\cos(xt)}{(t^2 + a^2)^{\nu + \frac{1}{2}}}\, dt = \frac{\sqrt{\pi}}{\Gamma(\nu + \frac{1}{2})} \Big(\frac{x}{2a}\Big)^{\nu} K_\nu(ax).$$
Using Lemma 3.6, we obtain the following lemma.
Lemma 3.7. Formulae (8) and (9) express the Fourier transforms ψ̂_{ν,a} and φ̂_{ν,a} of the functions ψ_{ν,a}(t) = (t² + a²)^{−(ν+1/2)} and φ_{ν,a} in terms of K_ν.
Proof. According to Lemma 3.6, and then changing the variable t → 2πt, one can get formulae (8) and (9).
From Lemma 3.7, one can easily deduce the following lemma (Lemma 3.8).
Poisson summation and asymptotic expansions
To continue, we need the following version of Poisson's summation formula:
Lemma 3.9 (Poisson's summation formula [5]). If a function f(x) is integrable, tends to zero at infinity, and is sufficiently regular, then
$$\sum_{n \in \mathbb{Z}} f(n) = \sum_{k \in \mathbb{Z}} \hat f(k), \qquad \hat f(k) = \int_{\mathbb{R}} f(x)\, e^{-2\pi i k x}\, dx.$$
By this lemma we can deduce the following asymptotic expansion formulae [5]:
Lemma 3.10. When a → 0+, we have the asymptotic expansions (12) and (14), where γ is Euler's constant.
Proof. Let us consider the formula (14) first. By the equation (8), and applying Lemma 3.9 to (16) and (17), we get the asymptotic formula (14). If we replace a by 2a in formula (14), we obtain (12).
Remark. This is consistent with the formulae given in [4, 8.526], where, taking t = 0 and letting x → 0+, one recovers the formulae (12) and (14).
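The asymptotics of K_ν quoted above are easy to sanity-check numerically (an illustration, not part of the source): K_{1/2} has the closed form √(π/(2z)) e^{−z}, and the large-z behavior of Lemma 3.3 is visible as a ratio tending to 1.

```python
# Numeric check of two standard K_nu facts with SciPy.
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

z = np.array([0.5, 2.0, 10.0])
# Closed form: K_{1/2}(z) = sqrt(pi / (2z)) * exp(-z)
assert np.allclose(kv(0.5, z), np.sqrt(np.pi / (2 * z)) * np.exp(-z))

# Large-z asymptotics: K_nu(z) ~ sqrt(pi / (2z)) * exp(-z)
for zz in [5.0, 20.0, 80.0]:
    exact = kv(1.0, zz)
    asymptotic = np.sqrt(np.pi / (2 * zz)) * np.exp(-zz)
    print(zz, exact / asymptotic)  # ratio -> 1 as z grows
```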
The von Neumann entropy in the Fermionic second quantization
In the Fermionic Fock space, the von Neumann entropy of ρ is given by the function h_µ introduced below. It is worth noticing that we can still define D_µ for a general spectral triple, and when µ < 0, the difference between D and D_µ is a compact operator. Thus D_µ here plays the role of a fluctuation of D, even though there is no *-algebra here. Notice that when µ = 0, we get the same function h(x) as in [3]. The derivative of h_µ(x) can be computed as in [3], and thus we get the expansion of h_µ(x). Also, according to Proposition 4.4 in [3], we have a Laplace transform representation, where g̃_µ(t) := e^{µt} g̃(t). Now we want to compute the moments of the function h_µ(x). To this end, one can first compute the two integrals separately, and then sum them up.
Lemma 4.1. We have the integral formula (24).
Proof. According to Lemma 3.4, one has the integral representation of K_ν, and using (6) and (7) we get the formula (24).
Lemma 4.2. We have the integral formula (25).
Proof. Taking the derivative with respect to z on both sides of the formula (24), one obtains (26). Using (6), one obtains (27). Now substituting (27) into (26), finally we get the desired formula (25).
Lemma 4.3. When ν > −1, one has the formula (28).
Proof. Consider the integral (29) and substitute x by y = √(x² − µ) to get (30). Thus, using Lemma 4.1 together with (30) and (29), we finally get formula (28).
Proof. Using Propositions 4.3 and 4.4, and applying (4), we get the integral formula (32). For the second statement, we use the asymptotics and the Legendre duplication formula for the gamma function in (33); we get an expression which is the same as [3, Lemma 4.5]. We denote the a-th order spectral action coefficient of h_µ(√x) by γ_µ(a); namely, it is defined by equation (36). It is clear that for a fixed chemical potential µ < 0, the function in equation (36) is an entire function with respect to a ∈ C. According to Lemma 4.5, we can deduce that when the order a < 0, the coefficient of t^a in the heat expansion is given by (39). Now we show that for any fixed chemical potential µ < 0, the function (39) is an entire function with respect to a ∈ C, so that the function (39) can give rise to spectral action coefficients for any order a.
Proposition 4.6. For any fixed chemical potential µ < 0, the function (39) is an entire function in a ∈ C. Hence we have the formula (40) for all a.
Proof. We only need to show that the series (41) is an entire function in a ∈ C. In fact, using the integral expression for the Bessel function K_ν(z) [4, 8.432], valid for Re(z) > 0, or Re(z) = 0 and ν = 0, we see that for a fixed z > 0 the function K_ν(z) is an entire function with respect to ν ∈ C. Now we need to show that the series (41) is locally uniformly convergent. In fact, for |ν| ≤ R, since we have the asymptotic expansion of K_ν, it follows that the series
$$\sum_{n=1}^{\infty} n^{R+2} K_R\big(n\sqrt{-\mu}\big)$$
is convergent. Therefore the series (41) is locally uniformly convergent, and the function (40) is an entire function. Now according to (36), γ_µ(a) is an entire function; hence the function (40) gives the spectral action coefficients for all a. Interestingly, we can express the spectral action coefficients γ_µ(a) in a more concise way via the Poisson summation formula.
Proposition 4.7. For any fixed chemical potential µ < 0, we have the expression (42) for γ_µ(a).
Proof. Using Lemma 3.8, and using the Poisson summation formula, when ν ≥ −1/2, a > 0, we obtain the identity (43), where
$$\varphi_{\nu,a}(x) = \big((2x+1)^2 \pi^2 + a^2\big)^{-(\nu + \frac{1}{2})}.$$
Applying the formula (43) to Proposition 4.6, we then get the equation (42) when a ≥ 3/2.
Now, in Proposition 4.6 we saw that γ_µ(a) is an entire function. It follows that the function (42) has an analytic extension to the whole complex plane C, and therefore equation (42) is true for all a ∈ C. Remark. The second expression of γ_µ(a) is in the sense of analytic continuation. Next we prove that when the chemical potential µ → 0−, we can get the same coefficients given in [3]. We follow the same notation as in [3], where ξ(z) is the Riemann ξ-function.
Theorem 4.8. For all a ∈ {n/2 : n ∈ Z}, when the chemical potential µ approaches 0, the limit lim_{µ→0−} γ_µ(a) recovers the coefficients of [3].
Proof. Since the spectral action coefficients are given in terms of g(t), which is given by equation (22), we obtain the result.
Summarizing the above computations, we get the following proposition: (1) for a given chemical potential µ < 0, the coefficient of t^a in the heat expansion is given by γ_µ(a), and we have the two explicit expressions of γ_µ(a) above. Moreover, γ_µ(a) is an entire function in a ∈ C.
The average energy in the Fermionic second quantization
Now we shall consider the average energy when the one-particle Hilbert space is H = C. We denote by Z = Tr(e^{−β dΓD_µ}) the partition function. Then, according to (1), interestingly, the average energy is just the first part on the right-hand side of our von Neumann entropy formula (21). We denote this function by u_µ(x). Now let us consider the function u_0(x) first. Since we have the expansion (cf. e.g. [6]), when µ < 0, using the Fubini theorem, we can exchange the infinite sum and the integral. Then we obtain the corresponding expression for the Laplace transform of r_µ. Therefore, the function u_µ(√x) is a well-defined spectral action function. Notice that here we cannot take the chemical potential µ = 0, since the function u_0(x) is singular at x = 0. When a < 0, the spectral action coefficient of t^a is denoted by ω_µ(a). Using Lemma 4.4, we can express ω_µ(a) as follows.
Proposition 4.10. For any fixed chemical potential µ < 0, the function ω_µ(a) is given by (45); moreover, it can be extended to an entire function in a.
Proof. Taking any µ < 0, and using the same argument as in the proof of Proposition 4.6, we can show that ω_µ(a) can be extended to an entire function as well.
Now we want to find a more explicit expression for ω_µ(a) using the Poisson summation formula.
Proposition 4.11. For any fixed chemical potential µ < 0, we can express ω_µ(a) in terms of Γ(a + 1); see (46).
Proof. Using (9) and applying Poisson's summation formula, we obtain an identity for any ν > 0 and z > 0. When a > 1/2, we can combine this identity with (45) and, after simplification, we can deduce the equation (46). Now, since ω_µ(a) is an entire function, we conclude that (46) is valid in the whole complex plane.
Now we want to see how the spectral action coefficients ω_µ(a) behave when µ → 0−.
Proposition 4.12. When the order a ≤ 0, we have the limit (47). When a = 1/2, we have an asymptotic formula; when a > 1/2, an asymptotic approximation (48).
Proof. For a < 0, we apply (44). When a = 0, since ω_µ(0) = u_µ(0), we deduce the limit. When a = 1/2, we use Lemma 3.10. When a > 1/2, using Proposition 4.11, we have the limit, from which (48) follows.
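Before turning to the negative free energy, the per-mode bookkeeping can be seen numerically. For the one-particle space H = C with D = x, the mode energy is x² − µ, and the thermal state of one Fermionic mode satisfies entropy = average energy + log Z at β = 1. The identification of the negative-free-energy function with log(1 + e^{−(x²−µ)}) per mode is my reading of the text (it is consistent with lim_{µ→0−} λ_µ(0) = log 2 quoted below), not a formula stated explicitly in the source.

```python
# Sketch: for one Fermionic mode of energy eps = x^2 - mu (beta = 1),
# check entropy = average energy + log Z, i.e. h = u + v per mode.
import numpy as np

x, mu = 1.3, -0.4
eps = x**2 - mu
Z = 1 + np.exp(-eps)          # grand partition function of one mode
p = np.exp(-eps) / Z          # occupation probability
entropy = -p * np.log(p) - (1 - p) * np.log(1 - p)
avg_energy = eps * p
neg_free_energy = np.log(Z)   # log(1 + e^{-(x^2 - mu)})
assert np.isclose(entropy, avg_energy + neg_free_energy)
print(entropy, avg_energy, neg_free_energy)
# At x = 0, mu -> 0-: log Z -> log 2, matching the limit quoted below.
```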
In particular, using Proposition 4.12, we get the expansion of u_0(√x).
The negative free energy in the Fermionic Fock space
Since the free energy is the difference between the average energy and the von Neumann entropy, in the case of Fermionic second quantization it is natural to define the spectral action function with respect to the negative free energy to be v_µ(x) := h_µ(x) − u_µ(x).
Proposition 4.13. When the chemical potential µ < 0, we have the equation (49).
Proof. Since µ < 0, we can apply the Fubini theorem to get the equation (49).
Therefore the function v_µ(√x) is a well-defined spectral action function when µ < 0, while v_0(√x) is not a well-defined spectral action function, since it is singular at x = 0. We denote by λ_µ(a) the spectral action coefficient of v_µ(x) of order a. Using an argument similar to that of subsection 4.2, we obtain the following proposition; we omit the proof, which is similar to the proofs of Propositions 4.10, 4.11, and 4.12.
Proposition 4.14. For a given chemical potential µ < 0, we can get a spectral action from the negative free energy of the Fermionic second quantization, and the spectral action coefficients of v_µ(√x) are given by two functions as above. Moreover, for any fixed chemical potential µ < 0, λ_µ(a) is an entire function. When the order a < 0, we have the limit (50). When a = 0, lim_{µ→0−} λ_µ(0) = log 2. When a = 1/2, we have an asymptotic expansion; when a > 1/2, an asymptotic approximation.
As in the case of the Fermionic Fock space, we can also define the spectral actions in the case of the Bosonic Fock space. Let H = C be the 1-particle Hilbert space, and form the Bosonic Fock space F+(H). The spectrum of dΓD_µ is σ(dΓD_µ) = {n(x² − µ) : n = 0, 1, 2, 3, · · · }. Since the chemical potential µ < 0, we can define a density matrix ρ = e^{−dΓD_µ} / Tr(e^{−dΓD_µ}).
The von Neumann entropy in the Bosonic second quantization
We define the function k_µ(x) as the von Neumann entropy of ρ. In the Bosonic Fock space case, we cannot take the chemical potential µ = 0, since the function k_0(x) is singular at x = 0.
Lemma 5.1. The function k_0(x) is an even positive function of the variable x ∈ R\{0}, and its derivative can be computed explicitly. Compare this to the function h_0(x) in Section 4.1, or the function h(x) in [3]. Similar to h_0(x), we shall prove that the function k_0(√x) is also given by a Laplace transform when x > 0. To prove this, we need the following lemma (compare this with Lemma 4.2 in [3]).
Proof. We use the Eisenstein series [3] in conjunction with sinh x = −i sin(ix). Then, by the Fubini theorem, we have the formula when x > 0. Now we have the following lemma:
The function f(t) is rapidly decreasing as t → 0+.
Proof. Consider the theta function θ, and let g be defined so that f(t) = g(4πt). Thus it suffices to show that g(t) is rapidly decreasing as t → 0+. Now, using the Jacobi inversion formula, since θ′(1/t) is rapidly decreasing as t → 0+, g(t) is rapidly decreasing, and also the function f(t) is rapidly decreasing as t → 0+. Thus we have the following proposition:
Proof. According to Lemma 5.3, f(t) is rapidly decreasing as t → 0+. Thus when x > 0, the integral on the right-hand side of (54) is well-defined. We denote the integral on the right-hand side of (54) by k̃(x). Since both k_0(x) and k̃(x) approach 0 when x → ∞, we get k_0(x) = k̃(x). Thus immediately we have Proposition 5.5.
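The Bosonic trace-class condition behind these statements can also be checked numerically. Per mode, Tr e^{−(x²−µ)N} is the geometric series 1/(1 − e^{−(x²−µ)}), which converges only when x² − µ > 0; this is a sketch (not the paper's computation) illustrating why µ < 0 is required and why k_0 becomes singular at x = 0.

```python
# Sketch: the Bosonic single-mode partition function is a geometric
# series, 1 / (1 - exp(-(x^2 - mu))), defined only for x^2 - mu > 0.
import numpy as np

def bosonic_mode_Z(x, mu, n_max=10_000):
    eps = x**2 - mu
    if eps <= 0:
        raise ValueError("need x^2 - mu > 0 for a trace-class e^{-dGamma}")
    partial = np.exp(-eps * np.arange(n_max + 1)).sum()  # truncated series
    closed = 1.0 / (1.0 - np.exp(-eps))
    return partial, closed

print(bosonic_mode_Z(0.8, -0.3))       # partial sum ~ closed form
print(bosonic_mode_Z(0.05, -0.0001))   # blows up as x, mu -> 0 together
```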
When the chemical potential µ < 0, the representation holds for all x ∈ R. For the Bosonic Fock space, we can get results similar to those in the Fermionic Fock space case. The main difference between them is that we get an alternating sum in the Fermionic second quantization, while we get a plain sum in the Bosonic second quantization.
Lemma 5.6. When ν > −1, one has an integral formula analogous to the Fermionic one.
Proof. The proof of this lemma is the same as the proof of Lemma 4.5.
We denote by χ_µ(a) the a-th order spectral action coefficient of k_µ(√x), that is,
$$\chi_\mu(a) = \int_0^\infty t^{a} e^{\mu t} f(t)\, dt.$$
Similar to Propositions 4.6 and 4.7, we have the corresponding proposition, and we then get the expansion of k_0(√x). Unlike the Fermionic second quantization, here we cannot take µ = 0, as the integral on the right-hand side of the formula (62) does not converge. This is consistent with the fact that p_0(x) is singular at x = 0. When the order a < 0, we denote by α_µ(a) the spectral action coefficient of the spectral action function p_µ(√x). Using the same argument as in Section 4.2, we have that α_µ(a) can be extended to a holomorphic function on C. Thus this formula gives the spectral action coefficients of all orders. Moreover, we have yet another expression for α_µ(a), and it too can be extended to an entire function for any fixed chemical potential µ < 0.
The negative free energy in the Bosonic second quantization
Similar to the Fermionic second quantization, we define the spectral action function with respect to the negative free energy in the Bosonic second quantization to be q_µ. It is obvious that the chemical potential must be negative, µ < 0. We denote by β_µ(a) the spectral action coefficients of q_µ(√x). Using the same argument as before, we deduce the corresponding expressions.
Recall the spectral action Tr f(D/Λ), where f(x) is a non-negative even smooth function which is rapidly decreasing at ±∞, and Λ is a positive number called the mass scale, or cutoff. Note that f(D/Λ) is a trace-class operator. We denote χ(x) = f(√x), and assume that χ(x) is given as a Laplace transform, where g(s) is rapidly decreasing near 0 and ∞. We also assume that there is a heat trace expansion
$$\mathrm{Tr}\, e^{-tD^2} \sim \sum_{\alpha} a_\alpha t^{\alpha}, \qquad t \to 0^+.$$
It was proved in [2] that the spectral action has an asymptotic expansion for Λ → ∞, namely,
$$\mathrm{Tr}\big(\chi(D^2/\Lambda)\big) \sim \sum_{\alpha} a_\alpha \Lambda^{-\alpha} \int_0^\infty s^{\alpha} g(s)\, ds.$$
When α < 0, this follows by the Mellin transform, and the spectral action coefficient follows accordingly. And when α = n is a positive integer, since (∂_x)^n (e^{−sx}) = (−1)^n s^n e^{−sx}, the corresponding coefficient follows as well.
7,219.8
2019-03-22T00:00:00.000
[ "Physics" ]
Cellular and Molecular Biology: Hepatocellular carcinoma is known to be a common and predominant cancer in adults, especially in eastern countries. Immune responses and cancer-associated fibroblasts (CAFs) have significant influences on tumor development. However, the interaction between CAFs and immunotherapy is unclear in hepatocellular carcinoma. We measured the number of activated fibroblasts in hepatocellular carcinoma samples and samples taken from normal liver tissues. A total of 20 patients' fresh hepatocellular carcinoma tissues and adjacent normal tissues surrounding the tumor were obtained from surgery and used for evaluating alpha-SMA expression. We investigated the effects of CAFs on anti-tumor immunity in a hepatocellular carcinoma animal model. The effects of CAFs in inducing anti-PD-1 treatment resistance were also measured in a preclinical animal model. Activated fibroblasts were highly accumulated in hepatocellular carcinoma tissues but not in surrounding normal tissues. CAFs showed a significant tumor-promoting effect in an immunocompetent model. The infiltration and function of immune cells such as myeloid-derived suppressive cells and T-cells were altered by CAFs. CAFs also reduced the number and activation of tumor-infiltrating cytotoxic T-cells in tumor tissue. In the treatment model, tumors with a higher amount of CAFs were insensitive to therapy with anti-PD-1. CAFs are potent inducers of immunosuppression in hepatocellular carcinoma. Depleting CAFs rescued antitumor immunity in the hepatocellular model and could be a novel treatment to combine with existing immunotherapies.

Introduction
Hepatocellular carcinoma is known to be one of the most common malignant tumors in Asian countries (1). The introduction of combinational chemotherapy and surgery for localized hepatocellular carcinoma may increase the survival rate of patients (2). However, the cure rate for patients with metastatic or relapsed disease remains dismal, with poor long-term survival (2). Hence, understanding the mechanisms of hepatocellular carcinoma development is urgent. Tumors are communities of malignant cells and surrounding stromal cells, such as fibroblasts and infiltrating immune cells (3). The significance of immune cells in determining cancer patients' survival and treatment has been widely studied (4,5). Recently, the US FDA has approved immunotherapies, such as immune checkpoint blockades, to treat several types of tumors, including melanoma, lung cancer, and renal cancer (6)(7)(8). However, the efficiency of immune checkpoint blockades is determined by the overall immune cell function, which can be regulated by non-immune cells, such as cancer-associated fibroblasts (CAFs) (9). CAFs in cancerous tissues resemble myofibroblasts in morphology (10). Functionally, CAFs are perpetually activated in cancer tissue and do not undergo apoptosis like non-cancerous fibroblasts (10). Designing efficient medications for cancers requires more knowledge about CAFs. Herein, we investigated the immunoregulatory roles of CAFs in a hepatocellular carcinoma model.
Cell culture and transfection
The murine hepatocellular carcinoma cell line H22 was received from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). Cell lines were cultured in DMEM medium (Thermo Fisher Scientific, IL, USA) containing 5% fetal bovine serum (FBS), 100 µg/ml streptomycin, and 100 U/ml penicillin in a humidified 5% CO2 incubator at 37°C. Subculture took place when the growing cell layer reached 70 percent confluence. Primary cancer-associated fibroblasts (CAFs) were derived from fresh murine H22 hepatocellular tumor tissue. Normal hepatic stellate cells (HSCs) were isolated from the livers of BALB/c mice. A previously published procedure was used to separate and classify the CAFs (14). The CAFs were cultivated in DMEM medium and subcultured for up to ten passages.

Animal model
A syngeneic animal model was developed using 6-week-old female BALB/c mice (21-23 g, Shanghai SLAC Laboratory Animal Center at the Chinese Academy of Sciences, China) and H22 cells to examine the in vivo immunoregulatory function of CAFs. Each mouse was injected on the hind-leg flank with 2 × 10^5 H22 cells with or without 8 × 10^5 CAFs or 8 × 10^5 HSCs. Each group included ten mice. Tumor growth was tracked every five days. The size of the tumor was determined on the basis of the generally used formula: tumor volume = length × width^2 × π/6. For the orthotopic model, the same number of cells was injected. The detailed procedure was reported previously (11). The animal work was approved by the local Animal Care and Use Committee. Every mouse was kept in a specific pathogen-free area with free access to autoclaved water and regular food, and a 12-hour day/night cycle. The treatment plan for each experiment is included in the corresponding figure legend.

Flow cytometry
Flow cytometry was used to analyze immune infiltration in the experimental model. CD8+ T-cells (CD19−, CD3+, CD4−, and CD8+), regulatory T-cells (Treg: CD19−, CD3+, CD4+, CD8−, CD25+, and FOXP3+), helper T-cells (Th1: CD19−, CD3+, CD4+, CD8−, and IFN-γ+; Th2: CD19−, CD3+, CD4+, CD8−, and IL4+), and myeloid-derived suppressive cells (MDSCs: CD45+, CD11b+, and Gr1+) were classified and analyzed. We isolated single cells from animal tumor tissues and washed them with PBS once. Red blood cell lysis buffer was applied to remove red blood cells. Cells were then washed with PBS once and resuspended in blocking buffer for 10 min. Cell membrane staining was then performed, and cells were incubated for 15 min at room temperature. After cell membrane staining, cells were fixed with fixation/permeabilization buffer for 30 min at room temperature. Intracellular proteins were then stained at room temperature for 30 min. We used a FACSCanto II instrument (Becton Dickinson and Company, San Jose, CA) for data acquisition. FlowJo software was used to visualize the data.

Patient samples
A total of 20 hepatocellular carcinoma tissues and 20 tumor-adjacent normal liver tissues were included in this study. These patients were diagnosed from January 2016 to December 2016. All the tissues were obtained during surgery before chemotherapy or radiotherapy. This study was approved by the local ethics committee. All patients provided written informed consent.
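The volume formula used for tumor tracking is simple enough to encode directly; a minimal sketch (the measurements are hypothetical, not the study's data):

```python
# Ellipsoid approximation used in the animal model:
# volume = length * width^2 * pi / 6
import math

def tumor_volume(length_mm, width_mm):
    return length_mm * width_mm**2 * math.pi / 6

# e.g. a 12 mm x 8 mm tumor measured at one of the 5-day follow-ups
print(f"{tumor_volume(12, 8):.1f} mm^3")
```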
Western blotting
The protein content of fresh human tissue and cell lysates was calculated using the BCA protein assay (Thermo Scientific). Anti-smooth muscle actin antibody (1:1000 dilution, Abcam) was used for Western blotting to test the levels of various proteins in the lysates. Beta-actin (Abcam, 1:2000 dilution) was used as the loading control. The general Western blotting technique was implemented. Pierce ECL Western Blotting Substrate (Thermo Scientific) was used to generate the signals.

Statistical analysis
GraphPad software (CA, USA) was used for statistical analyses and data visualization. The data are shown as mean ± SEM. One-way ANOVA was used to analyze the difference of means between more than two groups. The t-test was performed for two-group comparisons. Differences with a two-tailed P-value < 0.05 were considered statistically significant.

Activated fibroblasts are accumulated in hepatocellular carcinoma
A total of 20 patients' fresh hepatocellular carcinoma and surrounding normal tissues were obtained from surgery and used for evaluating alpha-SMA expression. The hepatocellular carcinoma tissues showed higher alpha-SMA expression than tumor-adjacent normal tissues (Figure 1A and B). Alpha-SMA is expressed in activated fibroblasts, but not in quiescent fibroblasts. Thus, our data suggested that CAFs, a subtype of activated fibroblasts, were accumulated in hepatocellular carcinoma.

CAFs induced immunosuppressive cell accumulation in tumor tissue
The major innate immune cell types in tumor tissues were quantified by flow cytometry. The gating strategy for myeloid-derived suppressive cells (MDSCs) is shown in Figure 2A. The number of tumor-infiltrating MDSCs was highest in tumors containing exogenous CAFs (Figure 2B). However, the frequency of tumor-infiltrating macrophages and dendritic cells (DCs) was not changed by the exogenous CAFs (Figure 2E and F). The immunosuppressive molecules IL-10 and PD-L1 were also accumulated in tumor-infiltrating MDSCs from the tumors with a high amount of exogenous CAFs (Figure 2C and D). When dasatinib, the functional inhibitor of CAFs, was administrated, the immunosuppressive effects of CAFs were neutralized.

CAFs reduced T-cell infiltration but enhanced Treg accumulation in tumor tissue
T-cell infiltration and phenotype are key factors of antitumor immunity. We found that tumors with exogenous CAFs had fewer CD8+ T-cells (Figure 3B). We then investigated the subtypes of CD4+ T-cells: T-helper cells (Th1 and Th2) and regulatory T-cells (Treg) in the tumor tissue. The frequency of tumor-infiltrating Th1 and Th2 cells was very close in all groups (Figure 3C and E). However, Tregs were accumulated in tumors with exogenous CAFs (Figure 3D). When the function of CAFs was inhibited by dasatinib, the number of tumor-infiltrating T-cells was also rescued. These data suggested that CAFs significantly influenced the quantity and phenotype of T-cells.

CAFs suppressed tumor-infiltrating T-cell function
We further measured the function of tumor-infiltrating T-cells in each group. The expression of IFN-γ and granzyme B by CD8+ T-cells was significantly inhibited by exogenous CAFs (Figure 4A and B). However, dasatinib neutralized the inhibitory effects of CAFs on IFN-γ and granzyme B expression (Figure 4A and B).

Inhibition of functional CAFs sensitized anti-PD-1 treatment in hepatocellular carcinoma
Since we showed that CAFs were able to induce immunosuppression in the tumor microenvironment and that inhibition of CAF function can neutralize this effect, we further investigated the therapeutic role of dasatinib in hepatocellular carcinoma. As shown in Figure 5A, the administration of dasatinib delayed tumor growth in tumors with a high number of CAFs. In an orthotopic model, we found that anti-PD-1 treatment alone did not dramatically increase mouse survival time. However, when dasatinib was combined, the survival time was dramatically increased (Figure 5B).

Discussion
There is abundant evidence indicating that natural and/or therapy-induced antitumor immune responses dictate a better prognosis for patients across diverse histological types of neoplasia (12). The function of the cytotoxic T-cell, the major immune cell type that kills tumor cells, is affected by both immune and non-immune factors (9). CAFs are characterized by unchecked pro-fibrotic and pro-inflammatory signaling, which can suppress T-cell function (13). In the present study, we aimed to understand the roles of CAF activation in the anti-tumor immune response in hepatocellular carcinoma.

In our study, we first measured the amount of CAFs in hepatocellular carcinoma tissues and compared it with the adjacent normal tissues. In line with a previous report (14), the amount of activated fibroblasts (alpha-SMA positive) was much higher in tumor tissue than in normal liver. These observations provided the rationale to study the effects of CAFs on immunoregulation and to target CAFs in hepatocellular carcinoma.

Innate immune cells are major cell types of the tumor microenvironment. We investigated the frequency of tumor-infiltrating macrophages, DCs, and MDSCs in tumors with and without functional exogenous CAFs. We found that MDSCs were the only cell type affected by functional CAFs in the tumor microenvironment. MDSCs are potent suppressors of anti-tumor immunity and significant impediments to cancer immunotherapy (15). We noticed that exogenous CAFs significantly enhanced MDSC tumor infiltration. The immunosuppressive phenotype of MDSCs was also enlarged, with higher IL-10 and PD-L1 expression. Dasatinib is a functional inhibitor of activated fibroblasts (16). When the activation of CAFs was inhibited by dasatinib, the immunosuppressive phenotype of MDSCs was diminished.

T-cell infiltration and classification in tumor tissue are major indicators of antitumor immunity (17). Via flow cytometry, we systematically studied the infiltration and classification of T-cells in the hepatocellular carcinoma model. The number of CD8+ T-cells in tumor tissue was decreased by exogenous CAFs, while the number of Tregs in tumor tissue was enlarged by exogenous CAFs. These data suggested that CAFs induced immunosuppression via excluding cytotoxic T-cells and accumulating Tregs in tumor tissue. This is in line with a previous study in gastric cancer showing that activated CAFs can shift the ratio of cytotoxic T-cells to Tregs (18). We also checked the functional markers of T-cells, IFN-γ and granzyme B, which were downregulated by functional CAFs as well. These data strongly supported that functional CAFs are immunosuppressive in hepatocellular carcinoma and that reversing CAF activation may release the suppression.
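The group comparisons reported throughout these results follow the stated analysis plan (one-way ANOVA for more than two groups, t-test for two). A minimal SciPy sketch with invented measurements, not the study's data:

```python
# Sketch of the stated statistical workflow with SciPy. Values are invented.
from scipy import stats

h22      = [210, 245, 198, 260, 231]   # e.g. tumor volumes, mm^3
h22_cafs = [410, 455, 392, 480, 430]
h22_hscs = [250, 270, 233, 290, 261]

# One-way ANOVA across the three groups, then a two-group t-test.
f_stat, p_anova = stats.f_oneway(h22, h22_cafs, h22_hscs)
t_stat, p_ttest = stats.ttest_ind(h22, h22_cafs)
print(f"ANOVA p = {p_anova:.4f}; two-sided t-test p = {p_ttest:.4f}")
# A two-tailed p < 0.05 is treated as significant, as stated above.
```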
The immune checkpoint blockades have achieved impressive effects in melanoma patients (19,20). However, when tested in solid tumors, the immune checkpoint blockades alone showed limited efficacy. This is partly due to the immunosuppressive microenvironment in solid tumors such as hepatocellular carcinoma (21). Combinatory immunotherapy has been widely tested in clinical trials (22). Here, we showed that inhibiting the activation of CAFs could reduce exogenous CAF-mediated hepatocellular carcinoma growth. More importantly, the efficacy of anti-PD-1 treatment was enhanced by dasatinib treatment. These data highlighted the clinical value of targeting CAFs in hepatocellular carcinoma.

In conclusion, our study indicated that activated CAFs promoted hepatocellular carcinoma development via inducing strong immunosuppression. Inhibition of activated CAFs released the immunosuppression in the tumor microenvironment and thus might be a promising target for combination with immunotherapies.

Figure 1. CAFs in hepatocellular carcinoma. (A) alpha-SMA expression in a representative case of hepatocellular carcinoma and surrounding normal liver tissue. (B) Quantification of alpha-SMA expression in 20 hepatocellular carcinomas and surrounding normal liver tissues. The data were normalized to β-actin expression. (**** P-value < 0.0001).

Figure 3. Effects of CAFs on T-cell infiltration. (A) The flow cytometry gating plots of total T-cells and subtypes. (B) The frequency of tumor-infiltrating CD8+ T-cells was increased by inhibiting the function of CAFs. (C & E) The frequency of tumor-infiltrating Th cells was not influenced by exogenous CAFs. (D) The frequency of tumor-infiltrating Treg cells was increased by exogenous CAFs (n=8 in each group). For the groups with dasatinib treatment, CAFs were pre-incubated with dasatinib (0.5 µM) for 48 h before injection with tumor cells. (NS: not significant, ** P-value < 0.01, *** P-value < 0.001, and **** P-value < 0.0001).
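As a point of reference for the Statistical analysis subsection above (one-way ANOVA for more than two groups, a two-tailed t-test for two groups, significance at P < 0.05), the same comparisons can be reproduced with standard SciPy routines. The sketch below is purely illustrative: the data arrays and group names are placeholders, not measurements from this study.

```python
# Illustrative sketch of the statistical tests described above.
# The data arrays are hypothetical placeholders, not values from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tumor = rng.normal(1.8, 0.4, size=20)    # e.g. normalized alpha-SMA, tumor (placeholder)
normal = rng.normal(1.0, 0.3, size=20)   # e.g. adjacent normal tissue (placeholder)
group_c = rng.normal(1.2, 0.3, size=20)  # a third hypothetical group

# Two-group comparison: two-tailed independent t-test
t_stat, p_two = stats.ttest_ind(tumor, normal)
print(f"t = {t_stat:.2f}, two-tailed P = {p_two:.4f}")

# More than two groups: one-way ANOVA
f_stat, p_anova = stats.f_oneway(tumor, normal, group_c)
print(f"F = {f_stat:.2f}, P = {p_anova:.4f}")

# Data reported as mean ± SEM
print(f"tumor: {tumor.mean():.2f} ± {stats.sem(tumor):.2f} (mean ± SEM)")
```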
3,021.2
2020-05-15T00:00:00.000
[ "Medicine", "Biology" ]
Image Clustering Based on Multi-Scale Deep Maximize Mutual Information and Self-Training Algorithm Image clustering is a complex procedure that is significantly affected by the choice of the image representation. Generally speaking, image representations are generated by using handcraft features or trained neural networks. When dealing with high dimension data, these two representation methods cause two problems: i) the representation ability of the manually designed features is limited; ii) the non-representative and meaningless feature of a trained deep network may hurt the clustering performance. To overcome these problems, we propose a new clustering method which efficiently builds an image representation and precisely discovers the cluster assignments. Our main tools are an unsupervised representation learning method based on Deep Mutual Information Maximization (DMIM) system, and a clustering method based on self-training algorithm. Specifically speaking, to extract the informative representation of image data, we derive the maximum mutual information theory and propose a system to learn the maximum mutual information between the input images and the latent representations. To discover the clusters and assign each image a clustering label, a self-training mechanism is applied to cluster the learned representations. The superiority and validity of our algorithm are verified in a series of real-world image clustering experiments. I. INTRODUCTION Clustering, a vital research topic in the field of data science and unsupervised learning, which aim to classify elements into categories on the basis of their similarity [1]. The clustering problem has been extensively studied in the past decades. However, the performance of standard clustering algorithms is adversely affected when dealing with high-dimensional data [2]. Because image is a kind of high-dimensional data, image clustering is always a challenging task in computer vision and machine learning [3]. The associate editor coordinating the review of this manuscript and approving it for publication was Noor Zaman . Generally speaking, the traditional image clustering methods such as k-means++ [4], gaussian mixture model [5] and spectral clustering [6] group images on handcrafted features and treat feature extraction and clustering separately. Based on this insight, many attempts have dedicated to developing suitable clustering feature extracting techniques such as manually designed feature descriptors, including Bag of Feature (BOF) [7], Histogram of Oriented Gradient (HOG) [8], Principal Component Analysis (PCA) [9] and Scale-Invariant Feature Transform (SIFT) [10]. However, the representation ability of the manually designed feature methods is limited; the traditional clustering methods may be invalid due to the influence of some messy variable. They mostly suffer from appearance variations of scenes and objects when VOLUME 8, 2020 This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ dealing with image data. How to automatically learn an image representation to capture the image information is a crucial problem that needs to be solved in image clustering tasks. In recent years, some novel representation learning methods have emerged, such as manifold alignment [11], dictionary learning [12] and deep neural network [13]. 
Among them, deep neural networks have been successfully applied to various supervised learning tasks [14]. The reason for the success of deep neural networks is that they can automatically learn the essential representation of images by constructing networks with multiple hidden layers and training them on a large amount of data [15]. Motivated by this, some studies are devoted to clustering images based on deep neural networks, that is, deep clustering [16], [17]. Deep clustering learns deep representations that favor clustering tasks using neural networks. Most previous deep clustering studies train a deep generative model, e.g., an Auto-Encoder (AE), a Variational Auto-Encoder (VAE), or a Generative Adversarial Network (GAN), to reduce the dimension of the image data. Then, part of the trained generative model is fine-tuned by the clustering algorithm, and the generative model provides deep features to the clustering algorithm to discover clusters [18], [19]. This two-stage training scheme has been successfully applied in many clustering works [2], [20]-[22]. However, the clustering results may suffer from an unsuitable image representation: generative models use a reconstruction loss to learn the image representation, which is not an optimal choice for clustering tasks. Most recently, some deep clustering methods have tried to combine image representation learning with clustering [23], [24]. Generally, they construct neural networks and use a clustering criterion as the loss function to train the network directly. From the perspective of representation learning, the one-stage clustering scheme is more reasonable because it learns the image representation and the clustering information simultaneously. However, one-stage clustering also involves a non-representativeness problem, that is, the clustering algorithm does not match the deep learning. This may lead to a learned image representation that focuses only on clustering, lacks the essential information of the image, and causes the phenomenon of degenerate solutions. To overcome the aforementioned problems, this paper focuses on the establishment of an image clustering method based on Deep Mutual Information Maximization (DMIM) and a self-training algorithm. Specifically, we first extend the Mutual Information Maximization (MIM) theory to deep neural networks and propose a Deep Mutual Information Maximization (DMIM) system to learn an informative representation of the image data. To discover the clusters of the input images, we assign each image a clustering label and adopt a self-training algorithm to fine-tune the DMIM system to obtain a more clustering-friendly representation. We conduct a series of experiments to verify the effectiveness of our algorithm, and the proposed algorithm outperforms the newest competitors by a large margin. The main contributions of this paper can be summarized in three aspects. Firstly, we propose a novel maximum mutual information system based on statistical learning theory and use it to learn an informative image representation. Secondly, we incorporate the learned image representation into a self-training algorithm to realize image clustering. Thirdly, we conduct extensive experiments on four real-world datasets to verify the effectiveness of the proposed algorithm. The rest of the paper is organized as follows. In Section II, we introduce the related work. Section III proposes the clustering algorithm as well as some details of the algorithm.
Section IV provides a series of experiments to analyze the effect of parameters and verify the superiority of the proposed algorithm. We conclude this paper in the last section. II. RELATED WORK A. DEEP CLUSTERING Deep clustering refers to clustering with the related algorithm of deep learning, which has been widely concerned and studied in recent years. The existing deep clustering algorithms are mainly divided into two categories: (I) a two-stage work that applying clustering after a representation is learned. (II) a one-stage work that jointly optimization the representation learning and clustering. Some two-stage methods usually train a generation model at the first stage. Then, the trained generation model acts as a feature extractor and uses clustering algorithm to obtain the clustering results. For instance, Guo et al. propose Convolutional Auto-Encoder (CAE), using k-means algorithm to cluster the auto-encoder's image representation [20]. Ghasedi Dizaji et al. propose Stacked Auto-Encoder (SAE) algorithm which first train an AE, and uses relative entropy as a loss function training encoder to obtain clustering results [2]. Xie et al. propose Deep Embedded Clustering (DEC) which starts with a pre-training phase using only the reconstruction loss and then improves the clustering ability of the representation by optimizing in a self-supervised manner [25]. Peng et al. propose a novel clustering method by minimizing the discrepancy between pairwise sample assignments for each data point [26]. Gaussian Mixture Variational Autoencoders (GMVAE) is a representative generation-based clustering algorithm that incorporates gaussian distribution to variational AutoEncoder [22]. Categorical Generative Adversarial Networks (CatGAN) is another clustering algorithm that based on generative models. It is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model [19]. The disadvantage of two-stage methods is the unsuitable problem between image representation and clustering. Since the target of generation model is to make the generated image close to the input image in visualization, while clustering aims to reduce all possible variations into several templates [27], [28]. This difference makes generation model unsuitable to participate in deep clustering directly as the important discriminative information will be lost. One-stage methods combine the image representation with the clustering learning process. Joint Unsupervised Learning of Deep Representations and Image Clustering (JULE), Structured AutoEncoders for Subspace Clustering (SASC) and Deep Adaptive Clustering (DAC) are three representative image clustering methods that simultaneously learn the image representation and the clustering results. JULE proposes a recurrent framework for joint unsupervised learning of deep representations and image clusters [23]. DAC defines an effective objective and proposes an adaptive mechanism to realize image clustering [24]. The defined objective function is used to update the parameters of a convolutional network by selecting highly confidence image pairs and the cluster assignment is integrated into classification labels. SASC proposes a clustering method based on the subspace clustering theory and a local preserving scheme. 
It improves the traditional subspace clustering methods by using an autoencoder to guarantee that the learned representations preserve the local and global subspace structure. [29]. The effectiveness of these learning schemes has been proved in theory and practical experiments. However, there are two crucial factors that affect the stability and effectiveness of these two algorithms. On one hand, the initialization of convolutional network is an important factor that affects the performance of DAC and JULE. On the other hand, with the training going on, the local structure preservation of representation cannot be guaranteed. The image representation in the distorted feature space may not be suitable for clustering. B. MUTUAL INFORMATION MAXIMIZATION Since the definition of mutual information is based on the information entropy, we first briefly introduce the corresponding concepts of information entropy. Information entropy is the average rate at which information is produced by a stochastic source of data [30]. The measure of information entropy associated with each possible data value is the negative logarithm of the probability mass function for the value. Given X = {x 1 , x 2 , ..x n }, the information entropy is defined as, where p(x) is the probability density of X . If Z is the latent variable of X , mutual information (MI) is a measure of the reduction of uncertainty in X due to the knowledge of Z [31]. In information theory, given two random variables X and Z with the joint distribution p(x, z) and the marginal distribution p(x) and p(z), MI between X and Z can be calculated as follows, Therefore, MI can be defined as follows, where D KL (·) denote Kullback-Leibler divergence (KL divergence for short). Mutual Information Maximization (MIM) is based on the definition of IE and MI, and is a technique for maximizing the average mutual information between two variables [32], [33]. In this paper, we maximize the mutual information between image X ∈ R D and representation Z ∈ R d , and d. Thus, we realize the dimensionality reduction of X , and extract the informative representation of image data. III. METHOD In this section, we first derive the mutual information maximization to deep neural network and establish a deep mutual information maximization system. Then, a selftraining method is involved in the training of image representation to obtain a representation suitable for clustering. Finally, we provide the network architecture and the detailed training procedure of the proposed algorithm. A. DEEP MUTUAL INFORMATION MAXIMIZATION As mentioned above, the mutual information between inputs X and their representations Z is defined by the KL-divergence (3), which can be decomposed as follows, where p(x) denote the probability density of input data X , p(z) is the probability density of representation Z . We will often write densities like p(Z = z) as p(z) to save space. To learn the maximize mutual information between the input image and the image representation, we model p(z|x) as neural network, and assume that p(z) follows the standard normal distribution. This assumption is similar to what is done in VAE. Therefore, the objective function used to train the network p(z|x) can be defined as follows, In [33], Deep InfoMax optimizes (5) by using an adversarial scheme. However, we observe that this scheme often leads to an instability problem in the in the process of network training. 
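The entropy and mutual-information definitions referenced in Section II.B above did not survive extraction; the standard forms they correspond to are reproduced below as a reconstruction (textbook definitions, not necessarily the paper's exact typesetting).

```latex
% Standard forms of the elided definitions (reconstruction)
H(X) = -\sum_{x \in X} p(x) \log p(x)

I(X;Z) = \sum_{x,z} p(x,z) \log \frac{p(x,z)}{p(x)\,p(z)}
       = D_{KL}\!\left( p(x,z) \,\|\, p(x)\,p(z) \right)
```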
Different from Deep InfoMax, we directly add a restriction on the distribution of representation Z , and the restriction is defined as follows, where q(z) is a prior distribution, which follows the standard normal distribution. Combining (5) with (6), the whole objective function is defined as follows, where γ is the balance coefficient. To calculate the minimal optimization problem, we decompose (7) as follows, Therefore, (7) is equivalent to the following optimization problem, . (9) Because of KL-divergence is theoretically unbounded, we use Jensen-Shannon divergence (JS divergence for short) to instead of it. The objective function can be transformed as, . (10) In [34], f-GAN have proved that the minimization JSdivergence can be calculated as follows, where D(x, z) is a learnable parameter. Therefore, the objection function (10) can be rewritten as, To involve more comprehensive information of images, we combine the local and global mutual information loss, and define the following objective function, where D 1 (x, z g ) and D 2 (z l , z g ) are the learnable parameters. L g , L l and L p are the local mutual information loss function, global mutual information loss function and prior distribution of latent variable, respectively. The definitions of L g , L l and L p are as follows, , B. NETWORK ARCHITECTURE As a practical applications of the ideas described above, we will now develop a system for maximizing the mutual information between the input X and the latent variable Z to obtain an image representation. For this purpose, we model p(z|x), D 1 (x, z g ) and D 2 (z l , z g ) by three neural networks, that is, f θ (x), g φ 1 (x, z g ) and g φ 2 (z l , z g ), where θ, φ 1 and φ 2 denote the weights and biases parameters of the networks. The system we established is shown in Fig. 1. Specifically, f θ (x) maps the input x to the global latent variable z g , which can be implemented by several convolutional layers. For g φ 1 (x, z g ), we first encode x into a vector, and concatenate this vector with the global representation z g . Then, we feed the connection vectors to several fully connection layers to realize g φ 1 (x, z g ). The implementation of g φ 2 (z l , z g ) is similar to g φ 1 (x, z g ) expect the connections are the global representation z g and the local representation z l . The goal of the proposed system is to learn an encoder which maps input images to the informative representations. Next, we introduce the details of the complete loss function. C. COMPLETE LOSS FUNCTION All the aforementioned objects including global mutual information, local mutual information and prior matching loss function are jointed together. The complete loss function for learning image representation is defined as follows, and L g = E x∼p pos log g φ 1 (x, z g ) where p pos and p neg denote are the distributions of positive samples and negative samples. z l and z g are the local and global latent representations of positive samples.ẑ l andẑ g are the local and global latent representations of negative samples, respectively. The complete loss function consists of three components. The first and second terms in (15) denote the local and global mutual information loss function, which measure the information relevance between input image and representation. The third item in (15) denotes the prior loss which measure the errors between model prediction and the target variable. Next, we introduce the implementation details of the proposed deep mutual information network. 
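The Jensen-Shannon surrogate referred to around Eqs. (10)-(12) is not reproduced in the text. In the f-GAN / Deep InfoMax formulation it is usually written as the lower bound below, where sp(a) = log(1 + e^a) is the softplus function and D(x, z) is the learnable discriminator. This is a reconstruction of the standard estimator and may differ in detail from the authors' exact expression.

```latex
% Jensen-Shannon MI estimator (f-GAN / Deep InfoMax form; reconstruction)
\hat{I}^{(\mathrm{JSD})}(X;Z) =
  \mathbb{E}_{p(x,z)}\!\left[-\,\mathrm{sp}\!\left(-D(x,z)\right)\right]
  - \mathbb{E}_{p(x)p(z)}\!\left[\mathrm{sp}\!\left(D(x,z)\right)\right],
\qquad \mathrm{sp}(a) = \log\!\left(1 + e^{a}\right)
```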
For the implementations of L g and L l , the two expectations can be approximated by Monte Carlo sampling. Since the expectations of p(z|x)p(x) and p(z)p(x) can be realized by sampling on positive and negative samples, respectively [35]. To achieve this, we borrow the well known word2vec algorithm and adopt negative sampling trick to obtain positive and negative samples [36]. Specifically, we first randomly select a batch of samples from the dataset as positive samples. The positive samples follow the distribution p pos . Then, we shuffle the samples and take the disordered samples as negative samples. The negative samples follow the distribution p neg . Similar to VAE, p(z|x) follows the normal distribution of mean u k (x) and variance σ 2 Since q(z) follows the standard normal distribution q(z) ∼ N (0, I ). Thus, D KL p(z|x) q(z) can be calculated as follows, For implementation of L p , µ k (x) and σ k (x) are the outputs of network f θ (x). Note that the expectations of p pos denotes that the prior loss only calculate in positive samples. D. CLUSTERING LOSS FUNCTION By maximizing the mutual information between the input image and the representation, the image representation with the most image information can be obtained. In this section, to mine the clustering characteristics of image representation, a self-learning module is proposed and integrated into the training of deep mutual information network. The concept of self-training is derived from semi supervised learning. It first trains a classifier by using the known labeled samples, and then uses the trained classifier to evaluate the label of unlabeled samples [37]. For the unsupervised problem, the self-training can be transformed into two steps: calculating the initial label of samples and training the network with high confidence to correct the low confidence label. Therefore, it is very important to generate the initial labels as close to the real label distribution as possible. Inspired by the DEC algorithm, in this paper, we use selftraining module as a tool to discover the clusters of the learned image representation. Specifically, we first use the well known student's t-distribution and the learned maximization mutual information representation to calculate the pseudo label of each image, and get the pseudo label distribution of the image. Then, we establish a target distribution and use it to VOLUME 8, 2020 fine-tune the learned maximization mutual information network. The reasons why we incorporate the maximum mutual information representation to self-training algorithm contain two points: i) the initial pseudo labels of the self training algorithm should be as close as possible to the semantic labels of images, which means that the learned image representation should contain as much image information as possible; ii) the target distribution is established by enhancing the high confidence pseudo labels and weakening the low confidence pseudo labels. In [25], DEC proposes a student's t-distribution as a kernel to measure the similarity between the image representation z i and the cluster centroid u j as follows, where z i denotes the i-the image representation, u j is initialized by K-means on representations learned by pre-train autoencoder, and v denote the defrees of freedom of the Student's t-distribution. In particular, l ij can be regarded as a pseudo label of each input image, and denote the probability of assigning sample i to cluster j. 
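Since the Student's t kernel in (19) and the target distribution introduced just below follow DEC [25], a minimal NumPy sketch of that self-training step is given here. It assumes v = 1 and that the representations Z and centroids U are already available; all variable names are illustrative rather than taken from the authors' code.

```python
# Minimal sketch of the DEC-style self-training step described above (assumes v = 1).
# Z: (n, d) learned representations, U: (K, d) cluster centroids from k-means.
import numpy as np

def soft_assignments(Z, U, v=1.0):
    # l_ij ∝ (1 + ||z_i - u_j||^2 / v)^(-(v+1)/2)   (Student's t kernel, Eq. (19))
    d2 = np.square(Z[:, None, :] - U[None, :, :]).sum(-1)
    num = (1.0 + d2 / v) ** (-(v + 1.0) / 2.0)
    return num / num.sum(axis=1, keepdims=True)

def target_distribution(L):
    # p_ij ∝ l_ij^2 / f_j with f_j = sum_i l_ij   (DEC-style sharpening, Eq. (20))
    w = L ** 2 / L.sum(axis=0, keepdims=True)
    return w / w.sum(axis=1, keepdims=True)

def clustering_loss(P, L, eps=1e-12):
    # KL(P || L): the self-training objective (Eq. (21))
    return np.sum(P * np.log((P + eps) / (L + eps)))

# Usage with random placeholders:
Z = np.random.randn(100, 16)
U = np.random.randn(10, 16)
L = soft_assignments(Z, U)
P = target_distribution(L)
print(clustering_loss(P, L))
```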
When we obtain the clustering assignment distribution Q, Akin [25], we also propose a target distribution to optimize the image representation by learning from the high confidence pseudo labels. Specifically, the target distribution P is defined as follows, where f j = i l ij denote the pseudo label frequencies. The goal of image clustering is to align Student's t-distribution with the target distribution (20). Therefore, we define the following clustering loss function, By minimizing the distance between T and L distributions, the target distribution can guide the DMIM system to learn an image representation that is more suitable for image clustering. E. TRAINING AND CLUSTERING Given a dataset X = [x 1 , x 2 , . . . , x n ] with n samples that need to be clustered. The number of clustering K is a priori knowledge. Let the value of Z = [z 1 , z 2 , . . . , z n ] is the image representations. We first sample a batch of images from the dataset X to construct positive sample set, and shuffle the image to construct negative sample set. Then, we train the proposed DMIM system by optimizing the object proposed in (15) to obtain the image representation Z . Finally, we calculate the clustering pseudo labels by using (19), and update the DMIM system by using the self-training objective function (21). The entire training procedure of the proposed algorithm is presented in Algorithm 1. Compute the image representation Z . 6: Update θ, φ 1 and φ 2 by minimizing (15) with the learning rate ρ 1 . 7: end for 8: Self-training for the optimization of the network: 9: Cluster the image representations by using k-means++ and initialize clustering centroids u j , j = 1, 2, . . . , K . 10: Calculate T and L by using (19) and (20). 11: for t 2 in epochs 2 do 12: Update θ by minimizing (21) with learning rate ρ 2 . 13: end for 14: Calculate clustering labels: 15: for x i in X do 16: Calculate the clustering label based on f θ (x i ) and (19). 17: end for IV. EXPERIMENTS In this section, we conduct a series of experiments to verify the effectiveness of our clustering algorithm. All the experiments are performed on a desktop workstation with Inter(R) Core i7-4790 3.6GHz CPU, 32G RAM, Ubuntu 14.04 operating system and Keras environment. A. DATASETS We select four representative image datasets including MNIST, Fashion-MNIST, Cifar-10 and STL-10 datasets, to verify the effectiveness of our algorithm. Next, we briefly introduce these datasets. 1) MNIST AND FASHION-MNIST MNIST is a handwritten digits database which includes a training set of 60,000 examples and a test set of 10,000 examples. Fashion-MNIST is a dataset of Zalando's article images which is an update version of MNIST [38]. For these two dataset, each example is a 28×28 gray scale image, associated with a label from 10 classes. The STL-10 dataset is an image dataset used to develop unsupervised feature learning, deep learning and self supervised learning algorithms [40]. It is inspired by the CIFAR-10 dataset but with some modifications. The high-resolution dataset (96 × 96) will make it a challenging benchmark to develop more scalable unsupervised learning methods. The detailed statistics of these four datasets are shown in Table 1. B. EVALUATION METRICS To evaluate the performance of the clustering algorithms, we adopt three commonly metrics including clustering accuracy (ACC), Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI). These three metrics reflect the cluster performance from different perspectives. 
ACC measures the best matching between unsupervised clustering results and ground truth. NMI measures the similarity between pairs of clusters [41], [42]. Adjusted Rand Index (ARI) establishes a baseline by using the expected similarity of all pair-wise comparisons between clusterings specified by a random model [43]. All the metrics are the higher the score, the better the clustering performance. C. COMPETITORS We compare the performance to traditional clustering methods, two-stage clustering methods and one-stage clustering methods. Specifically, traditional clustering methods include K-means++ [4], Self-tuning Spectral Clustering (SSC) [6] and Density Based Spatial Clustering of Applications with Noise (DBSCAN) [44]. These methods first extract BOW feature, and then cluster the image feature. Two-stage clustering methods include Greedy Layer-Wise Training of Deep Networks (GLWTDN) [45], Deep Embedding Clustering (DEC) [25] and Improved Deep Embedding Clustering (IDEC) [46]. These methods first train an Autoencoder, and then use clustering methods to calculate clustering assignments. One-stage clustering methods include Gaussian Mixture Variational AutoEncoders (GMVAE) [22], Categorical Generative Adversarial Networks (CatGAN) [19] and Deep Adaptive Clustering (DAC) algorithm [24]. These three methods also belong to clustering algorithm based on generation model. 1) TRADITIONAL IMAGE CLUSTERING METHODS K-means++, SSC and DBSCAN: These three comparisons first use BOF algorithm to encode the images. Then, the image features are clustered to achieve image clustering. 2) TWO-STAGE DEEP CLUSTERING METHODS GLWTDN: It first trains an AE to extract image features, and then uses k-means algorithm to cluster the image features to realize image cluster [45]. DEC: It first learns image representations from an AE. Then, cluster are obtained by utilizing a self-training mechanism [25]. GMVAE: GMVAE uses gaussian mixture model as a prior distribution to improve the traditional variational autoencoder. It uses the improved latent vector as image representation, and then clusters representation to realize image clustering [22]. 3) ONE-STAGE DEEP CLUSTERING METHODS IDEC: IDEC is an improved version of DEC. It trains AE's reconstruction loss function and self-training loss function simultaneously to guarantee local structure preservation [46]. CatGAN: It uses general Generative Network Adversarial (GAN) and entropy as loss function to realize image clustering [19]. DAC: It formulates image clustering as a binary pairwise classification problem, and identifies this pairs of images which should belong to the same cluster [24]. D. EXPERIMENT SETUP For the traditional clustering algorithms, i.e., K-means, SSC and DBSCAN, we first extract the BOF features of the images. The selected image feature extraction method is Scale-Invariant Feature Transform (SIFT) [47], and the number of bins in BOF algorithm is set to 20. The parameters of other comparison methods are mostly set according to the original literature. For our algorithm, we set the parameters α = 0.01, β = 0.5, γ = 0.5, respectively. According to most of methods based on self-training scheme, the parameter of the student's t-distribution is set to v = 1. We set the learning rates as η = 0.005 and ρ = 0.0001, which are set empirically. VOLUME 8, 2020 Clustering performance and comparison, ACC (%) and NMI (%) and ARI (%), on all datasets. The results marked † are excerpted from [24], [25] and [46]. 
The best and second best results are highlighted in bold and underlined, respectively. The detailed network architectures are shown in Table 2. The weights of convolutional and fully connected layers are all initialized by Xavier approach [48]. 1) CLUSTERING PERFORMANCE COMPARISON In this part, we compare our method with many state-of-the art methods including K-means ++ [4], SSC [6], DBSCAN [44], GLWTDN [45], DEC [25], IDEC [46], GMVAE [22], CatGAN [19] and DAC [24]. For our method, we followed the implementation details and report the average results from 5 trails. For the rest, we present the best reported results either from experiment on the original codes of their papers, or from [24], [25] and [46]. We report the detailed clustering results of these methods on all the datastes in Table 3. As shown in Table 3, for each dataset, the performances of deep clustering algorithms are better than that of traditional clustering algorithms. Our clustering method outperforms traditional algorithm with a large margin, which shows the fascinating potential of the proposed method in clustering tasks. Furthermore, note that the proposed method outperforms the deep clustering methods on all the three evaluation metrics expect on MNIST dataset. Our algorithm outperforms all competitive baselines, with significant margins of 7.52%, 3.49% and 3.17% in the case of Fashion-MNIST, Cifar10 and STL10 respectively. These results show the effectiveness of our method in image clustering tasks. Fig. 2 shows the confusion matrixes of the clustering results for all the datasets. The values along the diagonal represent the percentage of samples correctly classified into the corresponding categories. We can find that all the clustering accuracies are average and stable for all the datasets. This proves that our method does not aggregate samples into a few categories or assign a cluster to outlier samples, and can effectively avoid degenerate solutions problem. 2) VISUALIZATION In this part, we use two methods to visualize the clustering results of our algorithm. In the first visualization experiment, we map the image representation Z to a 2-dimension vector by using t-SNE algorithm [49]. We report the t-SNE results of Fashion-MNIST dataset and STL-10 dataset with different clustering accuracy in Fig. 3 and Fig. 4. Different colors indicate different clusters and the corresponding clustering accuracies are reported below. The visualization results show that the proposed algorithm can effectively improve the sep-arability of data, which is helpful to improve the clustering accuracy. In the second visualization experiment, we qualitatively analysis the cluster results by the proposed method on Fashion-MNIST dataset and CIFAR-10 dataset. For each category, we randomly select one image as the original image at the first stage. Then, we pick up 5 samples which are the smallest Euclidean distance between original image from the same cluster. Finally, we pick up 5 samples which are most closest to the original image in the incorrect clustering images. All the picked images are shown in Fig. 5, and we mark the correct samples and the incorrect samples with green labels and red labels, respectively. Form the visualization results we can find that the successful cases not only depend on appearance textures, but also contain some semantic information of categories. The failure cases also contain a lot of texture contents similar to the source images. 
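For reference, the three evaluation metrics used in Table 3 (ACC with optimal cluster-to-label matching, NMI, and ARI) can be computed as in the generic sketch below, using SciPy and scikit-learn; this is not the authors' evaluation code, and the toy labels are placeholders.

```python
# Generic implementations of the clustering metrics used above (ACC, NMI, ARI).
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    # Best one-to-one matching between predicted clusters and ground-truth labels
    # (Hungarian algorithm on the contingency table).
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n_classes = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    row, col = linear_sum_assignment(-cost)   # maximize matched samples
    return cost[row, col].sum() / y_true.size

# Toy example
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])
print("ACC:", clustering_accuracy(y_true, y_pred))
print("NMI:", normalized_mutual_info_score(y_true, y_pred))
print("ARI:", adjusted_rand_score(y_true, y_pred))
```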
The visualization results imply that our method not only captures image appearance information, but also captures some abstract image information for image clustering. This is the reason why the proposed method can precisely discover the categories of the input images. 3) ON EFFECT OF THE NUMBER OF CLUSTERS In this experiment, we mainly study the effect of the number of clusters on our algorithm. For each dataset, we conduct 6 experiments on different training sets. The number of training sets varies in the range of [5,10] at equal intervals. We report the variation curves of clustering accuracy with the number of clusters in Fig. 6. As shown in Fig. 6, with the increase of the number of clusters, the accuracies of all the clustering methods decrease gradually. For the Fashion-MNIST, CIFAR-10 and STL-10 datasets, the clustering accuracy of our method is always higher than the other algorithms in different number of clusters. In addition, other two metrics results also show the superiority of the proposed algorithm. This is because our algorithm can exploit the essential information of images. In addition, the experimental results also show the stability of the our algorithm. V. CONCLUSION This paper proposes a new image clustering method based on Deep Mutual Information Maximization (DMIM) system and self-training algorithm. To make the learned image representation contains more image information, we first derive a deep mutual information maximization system, and use it to learn an unsupervised image representation. To discover the image clusters assignments, we borrow a self-training mechanism and incorporate to the learning of image representation. We evaluate our method on unsupervised clustering tasks using popular datasets, achieving competitive results compared to the current state of the art methods. Form the view of learning scheme, this paper regards an unsupervised learning problem as a semi-supervised leaning problem with enhance the high confidence pseudo labels. Future work may include exploring more self-training methods to assist encoders in obtaining image representations. Specifically, we first train a classification DMIM model and select high confidence pseudo labels as initial labels. Then, we may use some semi-supervised leaning schemes to train the model. Nevertheless, how to determine the initial label is an open problem. A optional way is to automatically select the initial labels based on more prior information. Beside, Graph Convolutional Network (GCN) has been proved to be effective in semi-supervised classification tasks [50]- [52]. A possible direction is adding some GCN layers to the proposed model to improve the clustering performance. PEIYAO WANG received the B.Sc. and M.Sc. degrees in pattern recognition and intelligent system from Liaoning Shihua University, China, in 2014 and 2017, respectively. She is currently a Teaching Section Chief with the Shenyang Institute of Technology. Her research interests include image/video representation and deep learning. YUNING WANG received the B.Sc. degree in ammunition engineering and explosive technology from the College of Equipment Engineering, Shenyang Ligong University, Shenyang, Liaoning, China. He is currently working as an Assistant Engineer with PLA 32681. His research interests include the theories and algorithms of object detection, machine learning, and intelligent vision systems. 
CHENGDONG WU (Member, IEEE) is currently the Vice President of the Faculty of Robot Science and Engineering, Northeastern University, the Director of the Institute of Artificial Intelligence, and a Professor and Doctoral Tutor with Northeastern University, Shenyang, China. He has long been engaged in teaching and research in automation engineering, artificial intelligence, and robot navigation. He is an expert in Chinese modern artificial intelligence and robot navigation, and a recipient of the Special Allowance of the State Council.
7,638.4
2020-01-01T00:00:00.000
[ "Computer Science" ]
Characterization data and kinetic studies of novel lipophilic analogues from 2,4-dichlorophenoxyacetic acid and Propanil herbicides This work describes the data collection of new lipophilic esters and amides herbicides, analogues to 2,4-dichlorophenoxyacetic acid (2,4-D) and Propanil. The data include 1H and 13C NMR spectra and UV–VIS spectroscopic experiments, from the work “Novel lipophilic analogues from 2,4-D and Propanil herbicides: Biological activity and kinetic studies”. The UV–VIS and 1H NMR spectra were employed to kinetic degradation design, and could be used to access new herbicides derivatives with better environmental properties. Specifications Organic chemistry Specific subject area Organic synthesis; Physico-Chemistry. Type of data Figures How data were acquired NMR experiments were performed on a Bruker AVANCE 400 NMR spectrometer operating at 9.4T, observing 1 H and 13 C at 400.13 MHz and 100.50 MHz, respectively, equipped with a 5 mm direct detection probe (BBO) with gradient along the z-axis in CDCl 3 or DMSO-d6 solution with TMS as the internal standard. For qNMR 1 H experiments, pulse was calculated by pulsecal. The relaxation delay for use in the acquisition of the quantitative 1 H NMR spectra was determined by T1 measurements with the aid of the pulse sequence inversion recovery, with same parameters as for 1 H spectra changing the τ values from 0.01 to 15 s. 1 H spectra were acquired by using a 30 °pulse sequence (zg) with the following parameters: 30 s of relaxation delay (D1), 16 transients, a spectral width (SW) of 4789.27 Hz ( ∼ 12.0 ppm), 64 K numbers of data (TD), and 6.84 s of acquisition time (AQ). The experiments were performed at 298 K. FIDs were Fourier transformed with line broadening (LB) = 0.3 Hz. The resulting spectra were manually phased and baseline corrected, and referenced to the TMS at δ 0.0 ppm. The kinetic studies were carried by UV-Vis spectroscopy (Agilent Cary). Infrared (IR) spectra were acquired on a Schimadzu IR PRESTIGIE-21. Data format Raw and analyzed data Parameters for data collection The kinetic UV-Vis spectroscopy (Agilent Cary) monitored the region of 190-800 nm under pseudo-first order conditions. An aliquot of 20 μL stock MeCN solution (0.01 mol. L −1 ) was added to a quartz cuvette (10 mm optical path) containing 3 mL of the reaction medium: acid solution (HCl 0.1 mol. L −1 ) or alkaline solution (NaOH 0.1 mol. L −1 ). The reactions were monitored for at least five half-life times, by following the reactant consumption and product formation. The kinetic profiles were fitted with equations, using iterative least-squares software. Description of data collection The NMR spectroscopic data were collected from isolated product, from chromatographic column. Kinetic data (UV-VIS and 1 H NMR) were collected from aliquots directly retired from reaction, under alkaline or acid conditions. Value of the Data • These data are useful or important because describe the spectroscopic data of the lipophilic amides and esters analogs from classical organochlorides herbicides. In addition, the data showed the kinetic parameters obtained in acid and alkaline hydrolysis after the incorporation of fatty long-chains in herbicides. • This dataset could be useful for other research groups interesting in the characterization of new derivatives of organochlorides herbicides and can benefit kinetic parameter studies relational to organochlorides herbicides degradation in the environmental. 
• This dataset can be used for application and in the development of experiments in agricultural practices with environmental-friendly agrochemicals. Annually, around 2.5 million tons of agrochemicals are used worldwide and this causes an impact on the environment such as water suppliers and soil. Data Description The dataset referring to lipophilic analogues from herbicides 2,4-dichlorophenoxyacetic acid (2,4-D) and Propanil that were obtained from fatty common alkyl chains. The synthesis of new lipophilic esters 6a-c was realized from esterification reaction of herbicide 2,4-D with palmitic (C16:0), stearic (C18:0) and oleic (C18:1) fatty alcohols. The experiments were performed according to previous work using sulfamic acid (H 2 NSO 3 H) catalyst [1] . After synthesis of the fatty esters, the synthesis of lipophilic amides 8a-c from 2,4-D was investigated from different methodologies. The synthesis of new fatty amines 11a-c was derived from 3,4-dichloroaniline, common core present in Propanil, Linuron and Diuron agrochemicals. The lipophilic esters and amides synthesized from 2,4-D and 3,4-dichloroaniline were characterized by 1 H and 13 C NMR, infrared spectroscopy. Afterwards, the lipophilic herbicides 6a-c, 8a-c and 11a-c were submitted to studies of kinetic behavior in aqueous medium, under basic and acid conditions. The degradation's profile was studied by kinetic UV-vis and 1 H NMR experiments. Kinetic studies by 1 H NMR and UV-vis The lipophilic herbicides and 2,4-D were submitted to studies of kinetic behavior to determine the degradation's profile in aqueous medium, under basic and acid conditions. The degradation's profile studied by kinetic 1 H NMR and UV-vis are showed in Figs. 31-38 . NMR characterization experiments The NMR characterization experiments of lipophilic herbicides 6a-c, 8a-c and 11a-c were performed in NMR 5 mm tube on a Bruker AVANCE 400 NMR spectrometer operating at 9.4T, Kinetic studies by UV-Vis The kinetic studies were carried by UV-Vis spectroscopy (Agilent Cary) monitoring in the region of 190-800 nm under pseudo-first order conditions [2] . An aliquot of 20 mL stock solution of the target compounds ( 6c, 8c and 11c ; 0.01 mol.L −1 in acetonitrile) was added to a quartz cuvette (10 mm optical path) containing 3 mL of the reaction medium: acid solution (HCl 0.1 mol.L −1 -acid hydrolysis) or basic solution (NaOH 0.1 mol.L −1 -alkaline hydrolysis). The reactions were monitored for at least five half-life times, by following the reactant consumption and product formation. The kinetic profiles (absorbance vs time) were fitted with equations, using iterative least-squares software. Kinetic studies by RMN The experiments were performed in NMR 5 mm tube using aliquot of 20 mL stock solution of the target compounds ( 6c, 8c and 11c ; 0.01 mol.L −1 in acetonitrile) containing 3 mL of the reaction medium: acid solution (HCl 0.1 mol.L −1 -acid hydrolysis) or basic solution (NaOH 0.1 mol.L −1 -alkaline hydrolysis). For qNMR 1 H experiments, pulse was calculated by pulsecal . The relaxation delay for use in the acquisition of the quantitative 1 H NMR spectra was determined by T1 measurements with the aid of the pulse sequence inversion recovery, with same parameters as for 1 H spectra changing the τ values from 0.01 to 15 s. 
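Referring back to the UV-Vis kinetic studies described above (absorbance vs. time under pseudo-first-order conditions, fitted by iterative least squares), such traces are commonly fitted to a single-exponential model A(t) = A_inf + (A_0 - A_inf)·exp(-k_obs·t). The SciPy sketch below illustrates this kind of fit; the exponential model and the synthetic trace are assumptions for illustration, since the original fitting software and raw data are not reproduced here.

```python
# Hedged sketch: pseudo-first-order fit of an absorbance-vs-time trace.
# The single-exponential model and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, a0, a_inf, k_obs):
    # A(t) = A_inf + (A_0 - A_inf) * exp(-k_obs * t)
    return a_inf + (a0 - a_inf) * np.exp(-k_obs * t)

# Synthetic "observed" trace (placeholder for a UV-Vis kinetic run)
t = np.linspace(0, 3000, 200)                       # time, s
true = first_order(t, 1.2, 0.15, 2.5e-3)
obs = true + np.random.normal(0, 0.01, t.size)

popt, pcov = curve_fit(first_order, t, obs, p0=(1.0, 0.1, 1e-3))
k_obs = popt[2]
half_life = np.log(2) / k_obs
print(f"k_obs = {k_obs:.2e} s^-1, t1/2 = {half_life:.0f} s")
```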
1 H spectra were acquired by using a 30 °pulse sequence ( zg ) with the following parameters: 30 s of relaxation delay (D1), 16 transients, a spectral width (SW) of 4789.27 Hz ( ∼ 12.0 ppm), 64 K numbers of data (TD), and 6.84 s of acquisition time (AQ). The experiments were performed at 298 K. FIDs were Fourier transformed with line broadening (LB) = 0.3 Hz. The resulting spectra were manually phased and baseline corrected, and referenced to the TMS at δ 0.0 ppm. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
1,643.6
2020-08-21T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Introduction to Extreme Seeking Entropy Recently, the concept of evaluating an unusually large learning effort of an adaptive system to detect novelties in the observed data was introduced. The present paper introduces a new measure of the learning effort of an adaptive system. The proposed method also uses adaptable parameters. Instead of a multi-scale enhanced approach, the generalized Pareto distribution is employed to estimate the probability of unusual updates, as well as for detecting novelties. This measure was successfully tested in various scenarios with (i) synthetic data, (ii) real time series datasets, and multiple adaptive filters and learning algorithms. The results of these experiments are presented. Introduction Novelty detection (ND) plays an important role in signal processing. Many research groups have dealt with both the methods and applications because there are many complex tasks where accurate ND is needed. However, the success of this method depends on the type of data, so the current methods usually give good performance and results only for specific datasets. As more data are being analyzed currently, there is a greater need for new methods of ND. Furthermore, the increasing computational power provides more possibilities and methods that were not possible to use a few decades ago, but can now be performed for real-time tasks easily. For these reasons, we consider the topic of ND to be vital. Two different approaches have been established over the last few decades. The first approach is based on the statistical features of the data [1], and some methods also use extreme value theory to estimate the novelty of the data [2][3][4][5]. The second approach uses learning systems [6][7][8]: the attributes of a learning system are used to obtain information about novelties in the data. Over the last decade, many new methods have been proposed in the field of machine learning [9]. The set membership algorithm [10][11][12] uses the prediction error for better accuracy, reducing the computational resources required and assuring a greater robustness with the proper filter, especially for data without drift. Bukovsky et al. proved that the learning effort of a learning system can be used to estimate a measure of the novelty for each data point [13,14], but a shortcoming of that method is that it is hard to interpret the ND score. A similar approach, combining the prediction error with adaptive weight increments, was proposed in [15]. That method also lacks the possibility of a meaningful interpretation of the ND score. It was also already shown that the accuracy of the learning system is not necessarily correlated with the accuracy of the ND [16] and that simple predictors are useful even for signals that are produced by complex systems (e.g., EEG, ECG). ND brings a new point of view to complex signal analysis. Research groups have started dealing with the early diagnosis of different diseases where ND plays an important role. Taoum et al. presented ND and data fusion methods to identify acute respiratory problems [17]. Rad introduced ND for gait and movement monitoring to diagnosis Parkinson's disease and autism spectrum disorders [18]. Burlina used ND algorithms in the diagnosis of different muscle diseases [19]. Other fields where ND can be found are information and mechanical engineering. Hu introduced ND as an appropriate tool for monitoring the health of mechanical systems, where it is usually impossible to know every potential fault [20]. 
Surace described the application of ND to the simulation of an offshore steel platform [21]. In this article, a new method for ND is introduced. The proposed method combines both a statistics based approach and a learning systems based approach. The changes of the adaptive parameters of the learning system obtained via an incremental learning algorithm are evaluated. A new measure, called extreme seeking entropy, is then estimated. It is shown that the proposed measure corresponds to different types of novelties in various datasets and how it may be useful for diagnostics and failure detection tasks. It also outperforms the other unsupervised adaptive ND methods. This paper is organized as follows. Section 2 describes the specifications of the learning system and learning algorithm used during the experiments. Section 3 recalls the learning entropy algorithm and an error and learning based novelty detection method. Then, the general suitable properties of learning based information are discussed. Section 4 introduces the new measure of novelty, and the ND algorithm based on this measure is presented. Section 5 describes a case study where both synthetic and real datasets are used to show the usability of the proposed algorithm and also contains the rationale behind the selection of the experiments. Section 6 contains the rate detection of the proposed algorithm in two cases, namely detection of a change in the trend and the detection of a step change of a signal generator. The last two sections are dedicated to limitations and further challenges, Section 7, and then our conclusions, Section 8. Review of the Learning Systems Used All the supervised learning systems used in the experimental analysis are introduced in this section. In general, assume that the output of the learning system is a function of weights and the input data: where y ∈ R denotes the output, w ∈ R n is the vector of its adaptable parameters, x ∈ R n is a vector that contains the input data, and f is the mapping function that maps the input data and weights to the output. The following adaptation is done in order to minimize the error: where k is a discrete time index and d(k) ∈ R is the target of the supervised learning system (the desired output). The update of the weights w is done with every new sample as follows: where ∆w ∈ R n is a vector that contains the updates of the adaptive parameters. This update depends on the learning algorithm used. The learning algorithms will be discussed later. Adaptive Models The adaptive models used during the experiments are described briefly in this section. Linear Adaptive Filter One of the simplest adaptive models is the linear adaptive filter, also known as the linear neural unit (LNU), with finite impulse response (FIR). The output of this model at a discrete time index k is given by: which is equivalent to where w T (k) = [w 1 (k), w 2 (k), . . . , w n (k)] ∈ R n is the row vector of adaptive weights and x T (k) = [x 1 (k), x 2 (k), . . . , x n (k)] ∈ R n is the column input vector. The vector of adaptive weights is updated with every new sample obtained, and the size of the update depends on the learning algorithm used. In general, x may contain the history of a single input or even the history of multiple inputs. An Adaptive Filter Based on Higher Order Neural Units The quadratic neural unit (QNU) [22][23][24] (also known as a second order neural unit) is a non-linear predictive model. The output of the QNU is: where often, x 0 = 1. 
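The model equations referenced in this section (general system output, error, weight update, and the LNU and QNU outputs) were dropped during extraction; their standard forms, consistent with the surrounding definitions, are reconstructed below.

```latex
% Reconstruction of the elided model equations (standard forms implied by the text)
y(k) = f\big(\mathbf{w}(k), \mathbf{x}(k)\big), \qquad
e(k) = d(k) - y(k), \qquad
\mathbf{w}(k+1) = \mathbf{w}(k) + \Delta\mathbf{w}(k)

% Linear adaptive filter (LNU):
y(k) = \sum_{i=1}^{n} w_i(k)\, x_i(k) = \mathbf{w}(k)\,\mathbf{x}(k)

% Quadratic neural unit (QNU), with x_0 = 1:
y(k) = \sum_{i=0}^{n} \sum_{j=i}^{n} w_{ij}(k)\, x_i(k)\, x_j(k)
```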
This is equivalent to: where the column input vector colx for n inputs has the general form: and w is a row vector of adaptive weights that has the same length as colx. Note that the first term in colx, x 0 = 1, should be used when the data have a non-zero offset. Learning Algorithms To prove the generality of the adaptive weight evaluation approach for novelty detection, different learning algorithms have been tested. Both algorithms are heavily used in signal processing and machine learning. Normalized Least Mean Squares Algorithm The normalized least mean squares (NLMS) algorithm [25] is a variant of the least mean squares algorithm. The problem with the selection of the learning rate is solved by normalizing by the power of the input. It is a stochastic gradient approach. The update of this adaptive algorithm is given by: where ∈ R is a small positive constant used to avoid division by zero, µ ∈ R is the learning rate, and e ∈ R is the error defined as in (2). According to the normalization of the learning rate shown in (9), it is necessary to choose a learning rate µ satisfying 0 ≤ µ ≤ 2 to preserve the stability of the NLMS algorithm. Generalized Normalized Gradient Descent The generalized normalized gradient descent (GNGD) [26] algorithm is another algorithm for linear adaptive FIR filters. Due to its adaptation of the learning rate based on the signal dynamics, it converges in places where the NLMS algorithm diverges. The update of this adaptive algorithm is given by: with: where η ∈ R is the adaptive learning rate, ∈ R is a compensation term, and ρ is the step size adaptation parameter, which should be chosen so as to satisfy 0 ≤ ρ ≤ 1. On the Evaluation of the Increments in the Adaptive Weights in Order to Estimate the Novelty in the Data This section recalls two ND methods that evaluate the increments in the adaptive weights, namely learning entropy, and error and learning based novelty detection. Those methods are compared with the proposed algorithm in Sections 4 and 6. Then, the general properties of the learning based information measure will be discussed. Learning Entropy: A Direct Algorithm The recent publication on Learning Entropy [14] specifies a direct algorithm to estimate the learning entropy (LE) as follows. Here, z is a special Z-score, given as follows: where |∆w M i (k − 1)| is the mean of the last M increments of w i , σ(|∆w M i (k − 1)|) is their standard deviation, and n w is the number of adaptive weights. According to Equation (15), the function f in this case corresponds to the special Z-score function z, and the function A is represented by the sum over the adaptive weights. Error and Learning Based Novelty Detection Another recently published method that evaluates the increments of the adaptive weight together with the prediction error is ELBND [15]. ELBND describes every sample with the value obtained as follows: or, alternatively, In this case, the function f is represented by multiplying the ith adaptive weight increment ∆w i by the prediction error e. The function A is the maximum of the vector in the case of ELBND given by Equation (13) and the sum over the weights in the case of the ELBND given by Equation (14). General Properties of A Suitable Learning Based Information Measure Learning entropy was proposed in [13,14]. 
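Because the NLMS update and the ELBND score are only described verbally above (their equations were lost in extraction), a compact NumPy sketch of both is given below. It follows the textual description: a normalized gradient step for NLMS, and ELBND taken as the maximum of |Δw_i · e|. The filter length, learning rate, and test signal are illustrative choices, not the authors' settings.

```python
# Sketch of an NLMS-adapted linear filter with an ELBND novelty score,
# following the textual descriptions above; parameters are illustrative.
import numpy as np

def nlms_step(w, x, d, mu=1.0, eps=1e-6):
    # y(k) = w^T x(k);  e(k) = d(k) - y(k);  dw = mu * e * x / (eps + ||x||^2)
    y = np.dot(w, x)
    e = d - y
    dw = mu * e * x / (eps + np.dot(x, x))
    return w + dw, dw, e

def elbnd(dw, e):
    # ELBND(k) = max_i |dw_i(k) * e(k)|   (max-aggregation variant, Eq. (13))
    return np.max(np.abs(dw * e))

n = 4
w = np.zeros(n)
signal = np.sin(0.1 * np.arange(600)) + 0.01 * np.random.randn(600)
scores = []
for k in range(n, len(signal)):
    x = signal[k - n:k][::-1]          # last n samples as the filter input
    w, dw, e = nlms_step(w, x, signal[k])
    scores.append(elbnd(dw, e))
```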
It is a learning based information measure L that, in general, evaluates unusually large learning increments, as follows: where A is a general aggregation function and f is a function that quantifies the irregularity in the learning effort [14]. Another form for f and A will be presented in the present paper. Firstly, the function f is presented. Assume that the value of f should be high when the increments ∆w are unusually high. Furthermore, this function also takes the history of those increments as input. As stated, some cumulative distribution function of each weight increment seems suitable. This cumulative distribution function (cdf) is discussed later in this paper. The question is how to deal with the aggregation function A. Under the assumption that each weight is independent of the others, it is possible to choose the aggregation function A as follows: The function A in the stated form is high for high cdf values of the weight updates, and hence for the values where the cdf is close to one. The function 1 − f cd f i can be viewed as the complementary cumulative distribution function (or the survival function, also known as the reliability function). This approach clearly avoids the need for a multi-scale approach. The result is that much fewer parameters are needed for detecting potential novelties. Only the crucial choice of the cdf remains. In the next section, a suitable probability distribution will be presented, together with the new novelty detection algorithm. The Generalized Pareto Distribution A normal distribution is used in some novelty detection algorithms [27][28][29]. However, the normal distribution cannot always be used, especially when the description of the data by a mean and a symmetric range of variation would be misleading [30]. Let us mention the Pickands-Balkema-de Haan theorem [31,32], which states that if we have a sequence X 1 , X 2 , . . . of independent and identically distributed random variables and F u is their conditional excess distribution function (over the threshold u), then: where GPD is the generalized Pareto distribution and F u is defined by: for 0 ≤ x ≤ x F − u, where x F is the right endpoint of the underlying unknown distribution F. The probability density function of the GPDtakes the form: where in general, µ ∈ (−∞, +∞) is a location parameter, σ ∈ (0, ∞) is the scale, and ξ ∈ (−∞, ∞) is a shape parameter. The corresponding cumulative distribution function then takes the form: Note that the support is Figure 1, we show the ability of the GPD to deal with many possible shapes of the tails of the distributions. Note that if ξ = 1, it is equivalent to the uniform distribution; if ξ = 0, it is equivalent to the exponential distribution; if ξ = −0.5, it is the triangular distribution; if −0.5 < ξ < 0, it is a light tailed distribution (e.g., the normal distribution or the Gumbel distribution); if ξ > 0, it is a heavy tailed distribution (e.g., the Pareto distribution, the log-normal distribution, or Student's t-distribution); and if ξ < −1, it is a monotonically increasing distribution with compact support (e.g., the beta distribution). As long as we do not know the distribution of increments of the adaptive weights, it is appropriate to use the GPD due to its universality in modeling the tails of other distributions [33][34][35]. As the aim is to evaluate unusually high increments of an adaptive system, the need for some threshold arises: denote this threshold by z. This threshold should divide the weight increments into two sets. 
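Before continuing with the threshold discussion, the GPD density and distribution function quoted above, together with the support statement that was garbled in extraction ("Note that the support is ... Figure 1"), take the standard forms below (reconstruction).

```latex
% Standard generalized Pareto density and distribution (reconstruction)
f(x;\mu,\sigma,\xi) = \frac{1}{\sigma}
  \left(1 + \xi \frac{x-\mu}{\sigma}\right)^{-\left(\frac{1}{\xi}+1\right)},
\qquad
F(x;\mu,\sigma,\xi) = 1 - \left(1 + \xi \frac{x-\mu}{\sigma}\right)^{-1/\xi}

% Support: x \ge \mu for \xi \ge 0, and \mu \le x \le \mu - \sigma/\xi for \xi < 0
% (the \xi = 0 case is understood as the exponential limit).
```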
An increment that is lower than the threshold should belong to the set that contains the usual high increments; denote this by L. However, an increment that is greater than or equal to this threshold should belong to the set H. Assume that both sets exist for every adaptable parameter, so for the ith adaptable parameter w i , we should set a threshold z i so the weight updates belong to the sets as follows. The increments belonging to L i will be unlikely to contain any information about a novelty in the adaptation, so we are not going to evaluate them. The set H i should contain the weight increments that are drawn from the GPD if the choice of the threshold was appropriate. The threshold z i depends on the method chosen, peaks over threshold, which will be discussed in the following subsection. The Peaks over Threshold Method The main issue in GPD fitting is the estimation of a suitable threshold, z. If the threshold is too high (i.e., there are only a few points that exceed it), then the parameters of the GPD suffer a high variance. If the threshold is too low, then the GPD approximation is not reliable. Therefore, the proper choice of threshold is crucial for the performance of the ND algorithm. There are many approaches to estimating the threshold [36]. To show the usability of the proposed ND algorithm, multiple rules of thumb [37][38][39] for the choice of the threshold have been used. Let l be the number of samples used for the GPD fitting and n s be the total number of samples available: Note that we use the highest adaptive weight increment to estimate the GPD parameters. The peaks over threshold (POT) method is crucial for deciding whether |∆w k (k)| belongs to H i or to L i . In Section 5 are presented the results with different techniques of choosing the threshold. Extreme Seeking Entropy Algorithm In this subsection, the new novelty measure and the new novelty detection algorithm are presented. We will introduce the extreme seeking entropy measure, which is given as follows: where: The proposed algorithm evaluates the value of ESE for every newly obtained weight increment. Note that if the weight increment is smaller than the threshold from the POT method, the addition to the novelty measure ESE is zero. Small probability increments, which are highly likely to contain a novelty, have a high value of ESE. To estimate the parameters of the GPD pdf, it is possible to process all available history samples, or only the n s newest samples, with the POT method. The proposed algorithm is described by the following pseudocode (Algorithm 1). Algorithm 1 Extreme seeking entropy algorithm. 1: set n s , and choose the POT method 2: initial estimation of the parameters of the GPD: ξ i , µ i , σ i for each adaptable parameter 3: for each new d(k) do 4: update the adaptive model to get ∆w(k) 5: proceed with the POT method 6: if |∆w i |(k) ∈ H i then 7: update the parameters ξ i , µ i , σ i 8: end if 9: compute ESE according to (26) 10: end for The proposed ND algorithm needs only one parameter to be set, which avoids the need for a multi-scale approach and overcomes the issues arising from setting multiple parameters. The parameter n s can also take all available samples, if needed. Furthermore, there is the need to choose the proper POT method. Choosing the POT method depends strongly on the nature of the data. The limitation of the proposed method is the need for an initial estimate of the parameters of the GPD. We need a priori information about ξ, σ, and µ for each adaptive weight. 
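Since the rules of thumb (23)-(25) and the defining Equation (26) are not reproduced in this copy, the sketch below is an illustrative reading rather than the authors' exact definition: the threshold keeps the l = ⌈√n_s⌉ largest recent increments (one common rule of thumb), and the ESE contribution of a supra-threshold increment is taken as the negative logarithm of its GPD survival probability, which matches the qualitative description that improbable increments score high and sub-threshold increments add zero.

```python
import numpy as np
from scipy.stats import genpareto

def pot_threshold(history, n_s):
    """Peaks over threshold: keep the l largest of the last n_s absolute increments.
    l = ceil(sqrt(n_s)) is one rule of thumb; the paper's rules (23)-(25) may differ."""
    window = np.asarray(history)[-n_s:]
    l = int(np.ceil(np.sqrt(len(window))))
    peaks = np.sort(window)[-l:]
    return peaks[0], peaks                        # threshold z_i and the exceedances

def ese_term(dw_abs, history, n_s):
    """Illustrative ESE contribution of one adaptive weight at time k."""
    z, peaks = pot_threshold(history, n_s)
    if dw_abs < z:
        return 0.0                                # below threshold: adds nothing to ESE
    xi, mu, sigma = genpareto.fit(peaks)          # refit the GPD on the exceedances
    p = genpareto.sf(dw_abs, xi, loc=mu, scale=sigma)
    return -np.log(max(p, 1e-300))                # improbable increments score high

# ESE(k) then aggregates the terms over all weights, e.g.
# ESE(k) = sum(ese_term(abs(dw[i]), history[i], n_s) for i in range(n_w))
```

Before the first ESE values can be produced, however, initial GPD parameters are needed for every adaptive weight.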
If there are n w adaptive weights, then we need 3 · n w parameters to start the extreme seeking entropy algorithm. If there is no a priori information about the parameters, we need at least n s samples to obtain the first results. Another problem may arise if the type of underlying unknown distribution F or its parameters are significantly varying in time. The Design of the Experiments The proposed ESE algorithm was studied in various testing schemes with synthetic data and with one real dataset. For each experiment, we also show the results of the ELBND and LE methods, for the sake of comparison. The parameter M that specifies the number of increments for the LE evaluation was set as M = n s in all experiments. The first experiment was the detection of perturbed data in the Mackey-Glass time series. This experiment was chosen due to the possibility of comparing it with the results published in [13]. The second experiment, with synthetic data, showed the ability of the ESE algorithm to detect a change in the standard deviation of the noise in a random data stream, which can be viewed as a novelty in the data. It was inspired by a problem that arises in hybrid navigation systems that use both GPS and dead-reckoning sensors [40]. The third experiment, involving a step change in the parameters of a signal generator, was an analogue to a problem that may arise in evaluating multiple stream random number generators [41], where we may detect and evaluate the probability of changes in the parameters of those generators. The fourth experiment was the detection of the disappearance of noise. This experiment was chosen as neither of the compared methods (LE, ELBND) were able to deal with this problem, where the disappearance of the noise could be also viewed as a novelty in the signal. The fifth experiment was the detection of a change in trend; this is a common problem in fault detection and diagnosis [42]. The last experiment was performed on the mouse EEG dataset. The aim of this experiment was to show that the proposed ESE algorithm was suitable even for real-world complex phenomena that are characterized by non-linear dynamics [43,44]. This dataset contained the start of an epileptic seizure, and we wanted to show that it was possible to detect this seizure with the proposed ESE algorithm. All of the experiments were carried out in the programming language Python [45], with the libraries Numpy [46], Scipy [47], and Padasip [48]. The graphs were plotted with the Matplotlib library [49]. The codes with the experiments can be obtained via email from the authors. Mackey-Glass Time Series Perturbation The first experiment was the detection of a perturbed sample in a deterministic chaotic time series. The time series data were obtained as the solution of the Mackey-Glass equation [50]. The data sample at discrete time index k = 523 contained the perturbation, as follows: The data series and detailed perturbation are depicted in Figure 2. The QNU was chosen for the data processing. The number of inputs to the QNU was set to n = 4, so the inputs are: and hence, the adaptive filter had in all 15 adaptive weights. The parameters were updated with every newly obtained sample by means of the NLMS algorithm. The setting was the same as in [13]. The learning rate during the experiment was constantly set to µ = 1. The POT method was chosen according to (23) with n s = 300. The details of the adaptive filters and prediction error are depicted in Figure 3. The results of the ND are shown in Figure 4. 
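The 15 weights quoted for the four-input QNU are consistent with a full quadratic expansion of the augmented input [1, x1, x2, x3, x4], i.e. all products z_i z_j with i ≤ j (the x0 = 1 term being included because the Mackey-Glass data have a non-zero offset). A minimal sketch of that expansion is given below; which four delayed samples serve as inputs is not spelled out in this copy, so the example simply takes the four most recent values of the series.

```python
import numpy as np
from itertools import combinations_with_replacement

def qnu_colx(x, bias=True):
    """Quadratic-neural-unit input vector: all products z_i * z_j (i <= j)
    of z = [1, x1, ..., xn] (or of [x1, ..., xn] when bias=False)."""
    z = np.asarray(x, dtype=float)
    if bias:
        z = np.concatenate(([1.0], z))
    idx = combinations_with_replacement(range(len(z)), 2)
    return np.array([z[i] * z[j] for i, j in idx])

recent = [0.93, 1.05, 1.12, 0.98]      # e.g. the four most recent Mackey-Glass samples
print(len(qnu_colx(recent)))           # -> 15 adaptive weights for n = 4
```

Fed sample by sample through the NLMS update, this 15-weight filter produces the prediction errors and novelty scores plotted in Figures 3 and 4.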
Note that the global maximum in the ESE corresponds to the perturbed sample. The global maxima of the ELBND and LE methods correspond to the biggest prediction error, and not to the perturbed sample. Change of the Standard Deviation of the Noise in a Random Data Stream The detection of a change in the standard deviation of the noise in the obtained data was carried out in the following experiment. Assume there are two inputs x 1 (k) and x 2 (k) and that the output y(k) is related to them by: where v(k) represents a Gaussian noise that is added to w(k). The Gaussian noise has zero mean and standard deviation 0.1, υ ∼ N(0, 0.1). The values of x 1 (k) and x 2 (k) are drawn from a uniform distribution, so that x(k) ≥ 0 and x(k) ≤ 1 for every k. At the discrete time index k = 500, the standard deviation of the noise changes to 0.2, so υ ∼ N(0, 0.2). The QNU was chosen for the data processing. The number of inputs to the QNU was set to n = 2, so the inputs are: and hence, the adaptive filter had three adaptive weights in all. The structure of the QNU corresponds to the structure of the data generator described by Equation (31). The parameters were updated with every newly obtained sample using the GNGD algorithm. The learning rate during the experiment was set to µ = 1. The POT method was chosen according to (24) with n s = 500. The results of the novelty detection and details about the adaptive filters are depicted in Figure 5. The a priori values of GPD for ESE and for LE were obtained using 500 samples, which are not shown in Figure 5. Note that the global maximum of the ESE corresponded to the change in standard deviation. The detection by the ELBND and LE was delayed. Step Change in the Parameters of a Signal Generator The scheme of this experiment was similar to the previous one. Assume there are two inputs x 1 (k) and x 2 (k) and one output y(k), related by: where v(k) represents a Gaussian noise that is added to y(k). The Gaussian noise has zero mean and standard deviation 0.1, υ ∼ N(0, 0.1). The values of x 1 (k) and x 2 (k) are drawn from a uniform distribution, so x(k) ≥ 0 and x(k) ≤ 1 for every k. At the discrete time index k = 500, the equation is changed to the following one: The QNU was chosen for the data processing. The number of inputs to the QNU was set to n = 2, so the inputs are: and hence, the adaptive filter had three adaptive weights in all. Note that the structure of the QNU corresponded to the structure of the signal generator. The parameters were updated with every newly obtained sample, using the GNGD algorithm. The learning rate during the experiment was constantly set to µ = 1. The POT method was chosen according to (23) with n s = 500. The a priori values of GPD for ESE and for LE were obtained using 500 samples, which are not shown in Figure 6. The results of the novelty detection and details about the adaptive filters are depicted in Figure 6. Note that the ESE successfully detected the change in the parameters of the signal generator. The LE failed to detect this change, and the detection by ELBND was delayed. Furthermore, the value of the peak in ESE was significantly higher than that in the ELBND case. Noise Disappearance In this experiment, it was shown that the slightly reformulated algorithm could also deal with an immediate decrease of the learning effort. Assume that instead of an unusually high learning effort, we want to focus on an unusually low learning effort. 
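One way to express this reversal in code, reusing the POT/GPD sketch from above, is shown below; mapping "unusually small" onto the upper tail by negating the increments is our illustrative choice rather than the authors' exact formulation.

```python
import numpy as np
from scipy.stats import genpareto

def pot_threshold_low(history, n_s):
    """Lower-tail POT: keep the l smallest of the last n_s absolute increments."""
    window = np.asarray(history)[-n_s:]
    l = int(np.ceil(np.sqrt(len(window))))
    troughs = np.sort(window)[:l]
    return troughs[-1], troughs

def ese_term_low(dw_abs, history, n_s):
    """Score unusually *small* weight increments instead of unusually large ones."""
    z, troughs = pot_threshold_low(history, n_s)
    if dw_abs > z:
        return 0.0
    # negate so that "smaller than usual" becomes the upper tail of the fitted GPD
    xi, mu, sigma = genpareto.fit(-troughs)
    p = genpareto.sf(-dw_abs, xi, loc=mu, scale=sigma)
    return -np.log(max(p, 1e-300))
```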
The only change in the proposed algorithm was that we used the POT method to get l the smallest weight updates, and based on those, the parameters of the GPD would be estimated. The scheme of this experiment was similar to the previous one. We assumed there were two inputs x 1 (k) and x 2 (k) and one output y(k), which were related by (31). However, in this case, at discrete time index k = 500, the noise was removed, so Equation (31) for k ≥ 500 takes the form: The QNU was chosen for the data processing. The number of inputs to the QNU was set to n = 2, so the inputs are: and so, the adaptive filter had three adaptive weights in all. The structure of the adaptive filter was chosen to correspond to the structure of the signal generator. The parameters were updated with every newly obtained sample using the GNGD algorithm. The learning rate during the experiment was constantly set to µ = 1. The POT method was chosen according to (23) with n s = 500. Figure 7 shows that the peak in ESE corresponded to the disappearance of the noise. The LE and ELBND methods failed to detect the disappearance of the noise. For ELBND, these results were to be expected, as the values of the ELBND were high for a high prediction error and high adaptive weight increments. For the discrete time index k ≥ 500, the noise is removed from the signal, which corresponds to the peak in ESE. Graphs (e) and (f) contain the results of the ELBND and LE methods. Trend Change The last experiment with artificial data was the detection of a change in trend. Assume that there are two inputs x 1 (k) and x 2 (k) and one output y(k), related by: where v(k) represents a Gaussian noise that is added to y(k). The Gaussian noise had zero mean and standard deviation 0.1. At the discrete time index k = 500, there was a change in the trend, so Equation (38) changes to: where k ≥ 500. The LNU was chosen for the data processing. The number of inputs to the LNU was set to n = 3, so the inputs are: and the adaptive filter had three adaptive weights in all. The structure of the adaptive filter was chosen in accordance with the structure of the signal generator. The parameters were updated with every newly obtained sample by means of the GNGD algorithm. The learning rate during the experiment was constantly set to µ = 1. The POT method was chosen according to (23) with n s = 500. Figure 8 shows that the peak in ESE corresponded to the trend change point, which was the same as the peak in LE and ELBND. Note that the value of the peak in ESE was significantly higher than in LE and ELBND. The graph (d) shows the ESE novelty score. At discrete time index k = 500, there is a step change in the trend, which corresponds to the peak in ESE. Graphs (e) and (f) contain the results of the ELBND and LE methods and peaks corresponding to the trend change. Detection of Epilepsy in Mouse EEG The last experiment was with a mouse EEG signal. Three channels of the EEG data were chosen, which contained a significant seizure. According to the expert, the seizure started at about k ≈ 1700, as is shown in Figure 9, which shows the z-scores of the EEG data. The LNU was chosen for the data processing. The number of inputs to LNU was set to n = 10, so the inputs are: and the adaptive filter had 10 adaptive weights in all. The number of inputs and filter structure were chosen experimentally. The parameters were updated with every newly obtained sample using the NLMS algorithm. The learning rate during the experiment was set to µ = 1. 
The POT method was chosen according to (25) with n s = 1000. Figure 10 shows that the peak in ESE approximately corresponded to the beginning of the seizure. Especially in channel C3, the peak in ESE was significant. The position of the peaks was at k = 1735 for channel C3, k = 1698 for channel Pz, and k = 1727 for channel Fp1. Figure 10. ESE value for mouse EEG data channels containing a seizure. The peaks approximately correspond to the beginning of the seizure. Note that channel C3 contains a significant peak in ESE compared to the other channels. Evaluation of the ESE Detection Rate This section is dedicated to evaluating the detection rate in two different cases. The first case was a step change in the parameters of a signal generator (similar to the experiment described in Section 5.4). The second case was the detection of a change in trend. Step Change in the Parameters of a Signal Generator: Evaluation of the Detection Rate Assume there are two inputs x 1 (k) and x 2 (k), one output y(k), and weights a 1 , a 2 , and a 3 , related by: where v(k) represents a Gaussian noise that is added to y(k). The Gaussian noise had zero mean and standard deviation σ. The initial values of a 1 , a 2 , a 3 were drawn from the uniform distribution U (−1, 1). At discrete time index k = 200, there was a step change in a 1 , a 2 , and a 3 , and their new values were drawn again from U (−1, 1). The structure of the adaptive filter was the same as described in Section 5.4. The parameters were updated with every newly obtained sample using the GNGD algorithm. The POT method was chosen according to (23) with n s = 1200. The performance of the ESE algorithm was compared with those of LE, ELBND, and plain prediction error evaluation. The a priori values of GPD for ESE and LE were obtained using 1200 samples with initial values for the parameters a 1 , a 2 , a 3 . For each experiment, the signal-to-noise ratio (SNR) was evaluated as follows: where σ s is the standard deviation of the output of the system and σ is the standard deviation of the noise. The evaluation of the rate detection was performed as follows: 1. choose noise standard deviation σ 2. for given noise standard deviation σ, perform 1000 experiments, and at the beginning of each experiment, choose new parameters a 1 , a 2 , and a 3 3. successful detection was when the global peak in ESE, LE, ELBND, or prediction error was between discrete time index k ≥ 200 and k ≤ 210; compute the detection rate 4. compute the SNR for each experiment according to (43), and compute the average SNR for all experiments for given noise standard deviation σ The evaluation of the detection rate was performed for the inputs x 1 , x 2 whose values were drawn from the uniform distribution U(−1, 1) and from the normal distribution N(0, 1). The results for the inputs drawn from the uniform distribution are depicted in Figure 11. The corresponding table with results for various SNRs is Table A2 (see Appendix A). The results for inputs drawn from the normal distribution are depicted in Figure 12. The corresponding table with results for various SNRs is Table A3 (see Appendix A). N(0, 1). For SNR > 8 dB, the ESE algorithm outperforms in the detection rate the LE, ELBND, and error evaluation. For SNR > 34 dB, the ESE achieved a 100% detection rate. Detection of a Change in Trend: Evaluation of the Detection Rate Assume there are two inputs x 1 (k) and x 2 (k) and one output y(k), related by: where v(k) represents a Gaussian noise that is added to y(k). 
The Gaussian noise has zero mean and standard deviation σ. At discrete time index k, the trend changed, so the output of the system y(k) for k ≥ 200 is given by: where a is drawn from the uniform distribution U(−0.02, 0.02). The structure of the adaptive filter was the same as in the experiment described in Section 5.6. The parameters were updated with every newly obtained sample using the GNGD algorithm. The POT method was chosen according to (23) with n s = 1200. The performance of the ESE algorithm was compared with LE, ELBND, and plain prediction error evaluation. The a priori values of the GPD for ESE and LE were obtained using 1200 samples where the output of the system was described by Equation (44). For each experiment, the SNR was evaluated according to (43). The evaluation of the rate detection was performed as follows: 1. choose noise standard deviation σ 2. for given noise standard deviation σ, perform 1000 experiments where at k = 200, there is a change in trend 3. successful detection is when the global peak in ESE, LE, ELBND, or prediction error is between discrete time index k ≥ 200 and k ≤ 210; compute the detection rate 4. compute the SNR for each experiment according to (43), and compute the average SNR for all experiments for given noise standard deviation σ The evaluation of the detection rate was performed for inputs x 1 , x 2 whose values were drawn from the uniform distribution U (−1, 1). The results are depicted in Figure 13. The corresponding table with the results for various SNRs is Table A1 (see Appendix A). Limitations and Further Challenges There is a significant limitation to using the ESE algorithm. As was already mentioned in Section 4, before we could obtain the first results, we needed to get a priori information about the parameters of the GPD or obtain a suitably large sample size to compute those parameters. This limitation arose from the nature of using the probability distribution and is common to many statistical approaches to ND. This was the main drawback compared to, e.g., the ELBND method, which was able to produce the results immediately. Another limitation of the presented algorithm is the selection of a suitable POT method, as the estimation of the parameters of the GPD and the selection of the threshold were strongly related to this. To avoid this issue, it was possible to implement some sophisticated parameter estimator that could deal with the optimal threshold selection (e.g., Zhang's method [51], an estimator based on generalized probability weighted moment equations [52], or a method that combines the method of moments and the likelihood moment [53]), but these are outside the scope of this article. Another challenge was how to combine the ESE of unusually low and unusually high increments together, because both could correspond to a novelty in the data. Further work will be oriented toward using adaptive filters whose adaptive parameters are non-linearly related to the output, e.g., fuzzy adaptive filters or non-linear adaptive Kalman filters. Furthermore, more learning algorithms should be tested. Another topic, which was not mentioned in this article, is that of deciding whether the value of the ESE implies a novelty in the data or not, so we need some threshold. To evaluate the precision of the classification, the area under the receiver operating characteristics [54,55] should be estimated. Due to the scope of this article, this was omitted, but it will be part of further work on the ESE. 
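Since Equation (43) is not reproduced in this copy, the recap below assumes the usual decibel definition SNR = 20 log10(σ_s/σ); the four-step detection-rate protocol of Section 6 then condenses into a short routine, with `run_detector` a hypothetical stand-in for any of ESE, LE, ELBND, or plain error evaluation.

```python
import numpy as np

def snr_db(sigma_s, sigma):
    """Assumed SNR definition (Equation (43) is elided here): 20*log10(sigma_s / sigma)."""
    return 20.0 * np.log10(sigma_s / sigma)

def detection_rate(run_detector, sigma, n_trials=1000, change_at=200, tol=10):
    """Fraction of trials whose global novelty peak lies in [change_at, change_at + tol]."""
    hits, snrs = 0, []
    for _ in range(n_trials):
        scores, output = run_detector(sigma)     # novelty score per sample, system output
        k_peak = int(np.argmax(scores))
        hits += change_at <= k_peak <= change_at + tol
        snrs.append(snr_db(np.std(output), sigma))
    return hits / n_trials, float(np.mean(snrs))
```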
Conclusions This paper introduced a new measure of data novelty, called extreme seeking entropy, together with a detection algorithm that uses this measure, and presented an experimental study. The algorithm evaluates unusually high absolute values of the increments of the adaptive system weights. The generalized Pareto distribution was used to model those increments, and we tested whether a low probability of a weight increment corresponded to a novelty in the data. It was also shown that the prediction error need not be correlated with a novelty in the data, so relatively simple, even inaccurate, adaptive models can be used. Five experiments with synthetic data containing novelties and one experiment with a real mouse EEG signal were presented. The proposed novelty detection algorithm was able to detect novelties in both kinds of data (real and synthetic), and the proposed approach using simple adaptive models appears suitable for adaptive novelty detection. The detection rate of the proposed algorithm was evaluated for various SNRs in the scenarios of trend change detection and of a step change in the parameters of a signal generator. These scenarios were also tested with LE, ELBND, and prediction error evaluation. For higher SNRs, the proposed ESE algorithm outperformed the other tested algorithms in terms of successful detection rate in both scenarios. Acknowledgments: Jan Vrba would like to thank Matouš Cejnek for developing PADASIP (the Python Adaptive Signal Processing library) and Ivo Bukovský for helpful discussions about learning entropy and learning systems. Conflicts of Interest: The authors declare no conflict of interest. Appendix A table captions: Table A2. Step change detection rates for inputs drawn from the uniform distribution U(−1, 1). Table A3. Step change detection rates for inputs drawn from the normal distribution N(0, 1).
9,027.2
2020-01-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
Multi-level computational methods for interdisciplinary research in the HathiTrust Digital Library We show how faceted search using a combination of traditional classification systems and mixed-membership topic models can go beyond keyword search to inform resource discovery, hypothesis formulation, and argument extraction for interdisciplinary research. Our test domain is the history and philosophy of scientific work on animal mind and cognition. The methods can be generalized to other research areas and ultimately support a system for semi-automatic identification of argument structures. We provide a case study for the application of the methods to the problem of identifying and extracting arguments about anthropomorphism during a critical period in the development of comparative psychology. We show how a combination of classification systems and mixed-membership models trained over large digital libraries can inform resource discovery in this domain. Through a novel approach of “drill-down” topic modeling—simultaneously reducing both the size of the corpus and the unit of analysis—we are able to reduce a large collection of fulltext volumes to a much smaller set of pages within six focal volumes containing arguments of interest to historians and philosophers of comparative psychology. The volumes identified in this way did not appear among the first ten results of the keyword search in the HathiTrust digital library and the pages bear the kind of “close reading” needed to generate original interpretations that is the heart of scholarly work in the humanities. Zooming back out, we provide a way to place the books onto a map of science originally constructed from very different data and for different purposes. The multilevel approach advances understanding of the intellectual and societal contexts in which writings are interpreted. Introduction Just as Britain and America have been described as two nations separated by a common language, different academic disciplines often use the same words with divergent meanings [1].Interdisciplinary research thus poses unique challenges for information retrieval (IR).Word sense disambiguation [2,3], differing publication practices across disciplines [4][5][6] and disjoint authorship networks [7] pose special challenges to information retrieval for interdisciplinary work.When the dimension of time is added, terminological shifts [8,9], changing citation standards [10][11][12][13], and shifting modes of scholarly communication [4,5,14,15] all amplify the challenges for IR to serve the need of interdisciplinary scholars. 
Widespread digitization of monographs and journals by HathiTrust [16,17] and Google Books [18,19] enable new longitudinal studies of change in language and discourse [8,9,12,[20][21][22], an approach known as "distant reading" [23].These data-driven distant readings contrast with "close readings", in which short passages and particular details are emphasized for scholarly interpretation.Newly digitized materials, which enable distant reading, differ from born-digital scholarly editions in three key ways: First, the reliance on optical character recognition (OCR) over scanned page images introduces noise into the plain-text representations of the text.Second, the unstructured text does not contain any markup that may differentiate page header and footer information, section headings, or bibliographic information from the main text.Finally, metadata is often automatically extracted and lacks the provenance information important to many humanities scholars.Researchers seeking to marry these "distant readings" to more traditional "close readings" are impacted by these factors [24]. Our goal is to develop computational methods for scholarly analysis of large-scale digital collections that are robust across both the technological inconsistency of the digitized materials and the variations of meaning and practice among fields and across time.A further goal of our approach is that these methods should inform interdisciplinary research by suggesting novel interpretations and hypotheses.These methods should support scholars who wish to drill down from high level overviews of the available materials to specific pages and sentences that are relevant for understanding the various responses of scientists to contentious issues within their fields. In this paper, we focus on meeting these challenges within the interdisciplinary field of history and philosophy of science (HPS).HPS must not only bridge the humanities and the sciences, but also the temporal divide between historically-significant materials and the present [25][26][27][28].We show how faceted search using a combination of traditional classification systems and mixed-membership models can move beyond keyword search to inform resource discovery, hypothesis formulation, and argument extraction in our test domain, delivering methods that can be generalized to other domains. Using a novel approach of drill-down topic-modeling, we demonstrate how a set of 1,315 fulltext volumes obtained by a keyword search from the HathiTrust digital library is reduced to 6 focal volumes that did not appear in the top 10 HathiTrust search results.Topic modeling of these volumes at various levels, from whole book down to individual sentences, provides the contexts for word-sense disambiguation, is relatively robust in the face of OCR errors, and ultimately supports a system for semi-automatic identification of argument structure.We show how visualizations designed for macroanalysis of disciplinary scientific journals can be extended to highlight interdisciplinarity in arguments from book data [29].This guides researchers to passages important for the kind of "close reading" that lies at the heart of scholarly work in the humanities, supporting and augmenting the interpretative work that helps us understand the intellectual and societal contexts in which scientific writings are produced and received. 
While the extension of computational methods to various questions in the humanities may eventually provide ways to test specific hypotheses, the main focus of such research is likely to remain exploratory and interpretative, in keeping with the humanities themselves [24,30].This approach nevertheless shares something with the sciences: it is experimental to the extent that it opens up a space of investigation within which quantitatively defined parameters can be systematically varied and results compared.Such exploratory experimentation is common not just in the social sciences, but also in the natural sciences [31,32]. Our study consisted of six stages.(1) We used a keyword search of the HathiTrust collection to generate an initial search space for the faceted search.(2) We constructed probabilistic topic models for the volumes in the initial search results.This model is a type of mixed-membership model, which captures the multiple contexts of the selected volumes and allows us to reduce the original search space even further.Topic models are also a type of bag-of-words model, making them well-suited for the unstructured text found in the HT.(3) Third, we used drill-down topic modeling to construct page-level models of the reduced set of volumes selected at the previous stage.(4) Using the page-level results to select pages for close-reading analysis, we thus supported semi-automatic argument extraction to showcase the interpretive results of our search process.(5) We exploited the close reading of arguments for exploratory investigation of sentence-level topic modeling in a single volume.(6) We used scientific mapping to find relevant volumes [33].As current science maps represent journal data and data overlays are created based on journal names, we used a classification crosswalk from the UCSD Map of Science to the Library of Congress Classifications of these journals, allowing us to project books onto the science map. Materials HathiTrust Digital Library The HathiTrust Digital Library is a collaboration between over ninety institutions to provide common access and copyright management to books digitized through a combination of Google, Internet Archive, and local initiatives.As of October 24, 2016, it consisted of over 14.7 million volumes represented both as raw page images and OCR-processed text 1 . Due to copyright concerns, access is given only to pre-1928 materials, which are assumed to be in the public domain in the United States. 2 When the work described in this paper was initiated in 2012, the public domain portion of the HathiTrust consisted of approximately 300,000 volumes.At the end of the funding period in 2014, the public domain consisted of 2.1 million volumes.That number is now 5.7 million volumes, as of October 24, 2016. While the corpus size has increased 20-fold, the methods presented in this paper are aimed to reduce the portion of the corpus for analysis.For example, the first step described below is keyword search, with our initial results returning 1,315 volumes (referred to as the HT1315 corpus).Using the same query on October 24, 2016, we returned 3,497 volumes.Both of these datasets are computationally-tractable on modern workstations, in contrast to (for example) the 1.2 terabyte HTRC Extracted Features Dataset, derived from 4.8 million volumes [35]. From the HT1315 corpus, we selected 86 volumes to model at the page-level (the HT86 corpus).This corpus was then further reduced to a 6-volume collection for argument mapping (HT6). 
Stop Lists Before analyzing the texts, it is common to apply a 'stop list' to the results, which omits words that are poor index terms [36].Frequently, these are high-frequency words such as articles ('a', 'an', 'the'), prepositions ('by', 'of', 'on'), and pronouns ('he', 'she', 'him'), which contain little predictive power for statistical analysis of semantic content [37].We use the English language stop list in the Natural Language Toolkit, which contains 153 words [38].Additionally, we filtered words occurring 5 or fewer times, which both excludes uncommon words and infrequent non-words generated by OCR errors. UCSD Map of Science For our macroanalysis, we want to see how our selected texts divide among the different academic disciplines.As a base map for the disciplinary space (analogous to a world map for geospatial space), we use the UCSD Map of Science [29] which was created by mining scientific and humanities journals indexed by Thomson Reuters' Web of Science and Elsevier's Scopus and laying them out as a map of 554 sub-disciplines -e.g., Contemporary Philosophy, Zoology, Earthquake Engineering -that are further aggregated into 13 core disciplines -e.g., Biology, Earth Sciences, Humanities.Each of the 554 sub-disciplines has a set of journals and keywords associated with it. Library of Congress Classification Outline (LCCO) The Library of Congress Classification Outline (LCCO) is a system for classifying books, journals, and other media in physical and digital libraries.It is different from the Library of Congress Control Number (LCCN), which provides an authority record for each volume.The HathiTrust stores the LCCN, which we then use to query the Library of Congress database for the call number, which contains the LCCO, providing us with a disciplinary classification for each volume in the HT1315, HT86, and HT6 datasets. Target Domain: History and Philosophy of Scientific Work on Animal Cognition Our specific test domain is the history and philosophy of scientific work on animal cognition [39][40][41].We aimed to identify and extract arguments about anthropomorphism from a relevant subset of the scientific works published in the late 19th and early 20th century.This period represents a critical time for the development of comparative psychology, framed at one end by the work of Charles Darwin and at the other end by the rise of the behaviorist school of psychology (see [42] for a full historical review).Using the methods described in this paper, we progressively narrowed the 300,000 volumes to a subset of 1,315 selected for topic modeling at the full-volume level, then 86 of these selected for page-level topic modeling, and then 6 specific volumes selected for manual analysis of the arguments. The term "anthropomorphism" itself illustrates the problem of word sense disambiguation.In the theological context, anthropomorphism refers to the attribution of human-like qualities to gods.In the animal cognition context, it refers to the projection of human psychological properties to animals.Given the theological controversy evoked by Darwin, our inquiry demands our system be robust in partitioning these separate discourses. 
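The corpus preparation described under Stop Lists above can be written compactly, combining the NLTK English stop list with the low-frequency cutoff (words occurring five or fewer times are dropped); the sketch assumes the NLTK stop-word corpus has already been downloaded and that each volume is a list of lower-cased tokens.

```python
from collections import Counter
from nltk.corpus import stopwords        # requires nltk.download('stopwords')

def filter_corpus(tokenized_volumes, min_count=6):
    """Drop English stop words and any word occurring 5 or fewer times in the corpus."""
    stop = set(stopwords.words('english'))
    counts = Counter(tok for doc in tokenized_volumes for tok in doc)
    return [[t for t in doc if t not in stop and counts[t] >= min_count]
            for doc in tokenized_volumes]
```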
Methods and Results Keyword Search: From Library to Reading List Methods We began by conducting a keyword search in the HathiTrust collection using the HathiTrust's Solr index.We searched using terms intended to reduce the hundreds of thousands of public domain works to a set of potentially relevant texts that could be efficiently modeled with the available computing resources.Specifically, we searched for "Darwin", "comparative psychology", "anthropomorphism", and "parsimony".While the specificity of our query may be seen as too restrictive, we emphasize that we are following an exploratory research paradigm -we are not narrowing in on a particular fact, but rather surveying the available literature at the intersection of our interest in the history and philosophy of animal mind and cognition. Results The search yielded a set of 1,315 books published between 1800 and 1962.We refer to this set of results as HT1315. 3The same query conducted in August 2015 yielded 3,027 full-text results.Notably, it took Charles Darwin 23 years to read a number of books comparable in size to HT1315, as documented in his Reading Notebooks [43].Even at the unlikely rate of one book a day, it would take nearly four years to read this set of books in its entirety.About one fifth of the volumes retrieved were course catalogs, but even eliminating those would leave a daunting, if not quite Olympian, reading task.As the majority of the volumes selected by keyword search were not directly relevant to the research project, the potential payoff made possible by more sophisticated computational analysis of the full texts is critical for information retrieval tasks. Probabilistic Topic Modeling of Volumes: Narrowing the Reading Lists Methods Probabilistic topic models [44] are a family of mixed-membership models that describe documents as a distribution of topics, where each topic is itself a distribution over all words in a corpus.Topic models are generative models, that we interpret as providing a theory about context blending during the writing process [43]. To construct the topic models used in this study, we use Latent Dirichlet Allocation (LDA - [45]) with priors estimated via Gibbs sampling [46] as implemented in the InPhO Topic Explorer [47]. We initially modeled the HTRC1315 set at four different values for the number of topics, k = {20, 40, 60, 80}.We applied cosine-similarity measures to the topic mixtures attributed to each volume by the model. Results Manual inspection of the topics generated for the different values of k showed that while all four of the models produced interpretable results, we judged that k = 60 provided the best balance between specificity and generality for our HPS goals. Table 1 shows the top ten topics related to the word 'anthropomorphism' in the k=60topic model.Inspection of this list indicates that 'anthropomorphism' relates to a theological topic (38), a biological topic (16), a philosophical topic (51), an anthropological topic (58), etc.The topic model checking problem [44] -i.e., how to assess the quality of the model's topics -remains an important open problem in topic modeling.Nevertheless, most of the topics in the model can be quickly summarized, with the second topic (16) being the most obvious attractor for researchers interested in comparative psychology.The second-to-last topic (1) is targeted on bibliographic citations, and is dominated by common German words that were not in the English language stop list used during initial corpus preparation. 
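The volume-level models in this study were built with the InPhO Topic Explorer; as a rough, library-agnostic illustration of the same workflow, the sketch below trains an LDA model with gensim (which uses variational inference rather than the Gibbs sampling mentioned above) and ranks topics by the probability they assign to a query word such as 'anthropomorphism'. It is a stand-in, not the project's actual code.

```python
import numpy as np
from gensim import corpora
from gensim.models import LdaModel

def train_volume_model(tokenized_volumes, k=60):
    """k-topic LDA model over full volumes (illustrative stand-in for the InPhO Topic Explorer)."""
    dictionary = corpora.Dictionary(tokenized_volumes)
    bows = [dictionary.doc2bow(doc) for doc in tokenized_volumes]
    lda = LdaModel(bows, num_topics=k, id2word=dictionary, passes=10, random_state=0)
    return lda, dictionary, bows

def topics_for_word(lda, dictionary, word):
    """Rank topics by the probability they assign to a single query word."""
    topic_word = lda.get_topics()                   # (k, vocabulary) probability matrix
    wid = dictionary.token2id[word]
    return list(np.argsort(topic_word[:, wid])[::-1])
```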
We use the topic model to narrow the search by querying topics with a combination of words.We do this by finding the topic or topics with the highest sum of the probabilities for each word.For example, Table 2 shows the top ten topics returned using 'anthropomorphism', 'animal', and 'psychology' as input.This new query reveals two relevant topics (numbers 26 and 10) that were not returned using 'anthropomorphism' alone. Subsequently, we used all three topics (10, 16, and 26) to filter relevant books from the original set of 1,315 books.We took the cosine distance between each of the three topics to each book in HT1315.We took the sum of these three distances and filtered them at the Drill-down Topic Modeling: From Books to Pages Methods We re-modeled the HT86 set at the level of individual pages, moving towards our goal of identifying arguments in text by "zooming in " to select books which had a high number of apparently relevant pages.These reduced sets of pages become appropriate targets for manual argument identification by a human reader. The notion of a "document" in LDA topic modeling is flexible.One can consider a full volume as a single document with a particular topic distribution.However, finer-grained models can also be made, in which each page, paragraph, or sentence receives its own topic distribution.Since OCR document scans in the HathiTrust have very little structural information -there is no encoding for section headings or paragraph breaks, let alone chapter breaks -page-level was the next level below the full volume that we could reliably recover. Results For the sake of direct comparison to results reported above with the HT1315 model, we probed the k = 60 page-level model with 'anthropomorphism' as the query term.Results are shown in Table 4.Note that topic numbers do not correlate across the HT86 and HT1315 models.Although a theological topic (18) is at the top of the list, it is clear that biological and psychological topics have become more prevalent.Even within topic 18, 'evolution' and 'science' are now among the ten highest probability words indicating that the topic is closer to a "religion and science" topic than the more general religion topic 38 from the HT1315 model (Table 1), and reflecting the tighter range of books in the HT86 subset.Using 'anthropomorphism', 'animal' and 'psychology' in combination as the query, topic 1 is the highest ranked topic (Table 5).In comparison to the earlier topics 10 and 16 from the HT1315 results in Table 2, this topic has more terms relevant to psychology (i.e., stimulus, experience, instinct, reaction), suggesting that for the purposes of locating specific pages in HT86 collection relevant to our initial interests, topic 1 provides the best starting point.Table 6 shows the first rows of a list of 800 highest ranked pages from HT86 using topic 1 as the query. Document Distance The animal mind, 1st ed., p. 43 1.00000The animal mind, 2nd ed., p. 47 1 Wesley Mills, physiologist, physician and veterinarian.6. Progress of Science in the Century, 1908, a book on the history of science for general readers by J. Arthur Thomson, naturalist.These books provide a broad array of perspectives on animal intelligence and psychology, from specialist monographs to textbooks to general-audience nonfiction.The texts were written by two Americans (Washburn and Needham), two Scots (Reid and Thomson), a Canadian (Mills), and an Austrian (Wasmann). 
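Continuing the gensim stand-in from above, a word-combination query simply sums the per-topic probabilities of the query words, and the drill-down steps (filtering volumes, then ranking pages or sentences) all reduce to cosine comparisons in topic space. How a single topic is represented as a vector for those comparisons is not spelled out here, so the sketch uses a one-hot topic vector, and the direction of the 1.25 cutoff (keeping the closest volumes) is likewise an assumption.

```python
import numpy as np

def topics_for_words(lda, dictionary, words):
    """Rank topics by the summed probability they assign to a set of query words."""
    topic_word = lda.get_topics()
    ids = [dictionary.token2id[w] for w in words]
    return list(np.argsort(topic_word[:, ids].sum(axis=1))[::-1])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_by_topics(doc_topics, topic_ids, threshold=1.25):
    """Keep documents whose summed cosine distance to the selected topics is below threshold.
    doc_topics: (n_docs, k) topic proportions per volume, page, or sentence."""
    k = doc_topics.shape[1]
    units = [np.eye(k)[t] for t in topic_ids]        # one-hot stand-in for each topic
    kept = []
    for d, theta in enumerate(doc_topics):
        total = sum(1.0 - cosine(theta, u) for u in units)
        if total < threshold:
            kept.append(d)
    return kept

def rank_by_topic(doc_topics, topic_id):
    """Rank documents (e.g. pages) by cosine similarity to a single topic."""
    unit = np.eye(doc_topics.shape[1])[topic_id]
    sims = [cosine(theta, unit) for theta in doc_topics]
    return list(np.argsort(sims)[::-1])
```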
Argument Extraction: From Pages to Arguments Methods From the HT6 collection, we selected 108 pages for further analysis (Table 7).These pages were annotated using the Argument Interchange Format ontology (AIF - [48]), which defines a vocabulary for describing arguments and argument networks.We generated 43 argument maps using AIF annotated documents, providing a visual representation of the structure of each argument (e.g., Figure 1). The argument content was marked up with OVA+4 , an application which links blocks of text using argument nodes.OVA+ provides a drag-and-drop interface for analyzing textual arguments.It also natively handles AIF structures.Each argument, as selected in the previous section, was divided into propositions and marked up as a set of text blocks.These text blocks containing propositions were linked to propositions that they support, or undercut, to create an argument map. Results We performed two types of argument analysis: Pass 1 aimed to summarize the arguments presented in each volume.Pass A aimed to sequence the arguments presented in each volume. All argument maps can be found at http://bit.ly/1bwJwF9.A full description of the study, including analysis of the arguments can be found in [50]. As a proof of concept, these arguments show the utility of new techniques for faceted search enabling access from a library of over 300,000 books to volume-level analysis of a subset of 1,315 books all the way down to page-level analyses of 108 pages for the purpose of identifying, encoding, modeling, and visualizing arguments.These argument diagrams function as a type of close reading, common in the humanities, and drawing on a rich tradition of philosophical literature (reviewed in [51]). Drilling Down Again: From Arguments to Sentences Methods To further investigate the utility of combining distant reading methods with close reading, we applied topic modeling to the relatively small units of text comprising the sentences within a single volume.We selected Washburn's The Animal Mind text book because of its historical significance, and modeled its 17,544 sentences as a collection of documents.To explore the power of topic modeling to identify latent but meaningful relationships at the micro-level, we arbitrarily chose a sentence from Argument 15 in the Washburn set and used it to query the sentence-level model of The Animal Mind for the most similar sentences using the cosine of the sentence-topic vectors. Results The query sentence and the first half dozen results (with their similarity scores) are shown below. 5uery: Every statement that another being possesses psychic qualities is a conclusion from analogy, not a certainty; it is a matter of faith.(1.0000) 1.If any consciousness accompanies it, then the nearest human analogy to such consciousness is to be found in organic sensations, and these, as has just been said, must necessarily be in the human mind wholly different in quality from anything to be found in an animal whose structure is as simple as the Amoeba's.(0.8413) 2. Fancy, for example, one of us entering a room in the dark and groping about among the furniture.(0.8239) 3. This, of course, does not refer to the power to judge distance.(0.8235) 4. 
Again, a bodily structure entirely unlike our own must create a background of organic sensation which renders the whole mental life of an animal foreign and unfamiliar to us.(0.8224) 5.She disposes of the psychic learning by experience theory of Nagel by saying that the only experience upon which the animal could reject the filter paper must be experience that it is not good for food.(0.8198) 6.We speak, for example, of an "angry" wasp (0.7924) Sentence 1 is obviously related in meaning to the query sentence: they overlap in some words, and directly express related ideas.But the relevance of the other examples is less direct.Sentence 6 provides a nice illustration of anthropomorphic attribution with no word overlap whatsoever.The inclusion of sentences 2 and 3 is, more puzzling.However, in the context of where these sentences appear in Washburn's book, the relationship become plainer.Sentence 2 comes in the context of the discussion of what it might be like to be an amoeba.It is thus related to sentence 1, and it is used by Washburn to make the point that our experience in the dark, which still involves visual imagination and memories of what we touch, must be "wholly different in quality" (per sentence 1) from what an amoeba might experience.Sentence 3 occurs in a footnote on page 238, and it is worth quoting the footnote in full: Porter observed that the distance at which spiders of the genera Argiope and Epeira could apparently see objects was increased six or eight times if the spider was previously disturbed by shaking her web (612).This, of course, does not refer to the power to judge distance.[Italics in original.] Here, then, we see the author cautioning the reader not to jump to a high-level interpretation of the spider behavior.The spiders may perceive objects at various distances but they don't judge it.The term 'judge' here is philosophically interesting, as it suggests an influence of Immanuel Kant on framing the debate.While Kant's name does not appear in Washburn's book, the term 'judgment' is important to Kant's theory of cognition, and fundamental to the cognitive divide he posits between humans and animals.We emphasize that this is just a speculative suggestion about Washburn's influences, but it does show how the topic modeling process can bring certain interpretive possibilities to the fore, moving the digital humanities another step closer to the goal of generating new insight into human intellectual activity. Zooming Out Again: Macroanalysis by Science Mapping Methods We created visualization of the retrieved books overlaid on a map of science [33] to help understand the distribution of the retrieved books with respect to scientific disciplines. New datasets are overlaid on this map by matching records via journal names or keywords to the 554 sub-disciplines.However, the present work is the first instance of using book data on a science map.We constructed a classification crosswalk to align the journal-based subdisciplines with a book classification system.The Library of Congress Classification Outline (LCCO) provides a disciplinary taxonomy similar to that of the UCSD Map of Science.By using the Library of Congress Control Numbers (LCCN) assigned to each of the 25,258 journal sources in the UCSD Map of Science, we were able to assign likelihoods of each LCCN belonging to each subdiscipline. 
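A minimal sketch of the crosswalk construction, and of the book assignment described next, might look as follows; the likelihoods here are plain relative frequencies and the 'QL'/'Zoology' pair is only an illustrative example, since the exact estimation procedure is not given in the text.

```python
from collections import Counter, defaultdict

def build_crosswalk(journal_records):
    """Estimate P(sub-discipline | LCC class) from the journals behind the UCSD map.
    journal_records: iterable of (lcc_class, subdiscipline) pairs, e.g. ('QL', 'Zoology')."""
    counts = defaultdict(Counter)
    for lcc_class, sub in journal_records:
        counts[lcc_class][sub] += 1
    return {c: {s: n / sum(subs.values()) for s, n in subs.items()}
            for c, subs in counts.items()}

def classify_book(crosswalk, lcc_class):
    """Most likely UCSD sub-discipline for a book's LCC class (None if uncatalogued)."""
    subs = crosswalk.get(lcc_class)
    return max(subs, key=subs.get) if subs else None
```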
We assigned each book in our HathiTrust collection a UCSD sub-discipline based on its LCCN.A number of items in the HathiTrust collection never receive LCCNs.For example, university library collections frequently contain course bulletins that are not catalogued by the Library of Congress.We removed the uncatalogued items and projected the remaining volumes onto the UCSD map of science. Results Using the LCCO classification crosswalk, we located 776 out of 1,315 books on the UCSD Map of Science, as shown in Figure 2. In general, the map confirms that the initial keyword-based selection from the HathiTrust retrieved books that are topically positioned below the "equator" of the map, with particular concentrations in the life sciences and humanities, as was to be expected.The map provides additional visual confirmation that the further selections via topic modeling to a subset of 86 and then six of the original collection of 1,315 managed to target books in appropriate areas of interest.In the interactive online version, nodes can be selected, showing which volumes are mapped and providing the title and links to various external sources of metadata. Ultimately, the map overlay provides a grand overview and a potential guide to specific books that were topic modeled, although without further guidance from the topic models, Figure 3: Schematic rendering of the drill-down modeling process.The approximate order of magnitude is listed below each bar, which is scaled logarithmically.the map does not fully meet the desired objective of linking a high-level overview to more detailed textual analysis. General Discussion The notion of "distant reading" [23] has captured the imagination of many in the digital humanities.But the proper interpretation of large-scale quantitative models itself depends on having a feel for the texts, similar to Barbara McClintock's stress on having a "feeling for the organism" [52] or Richard Feynman on the importance for nascent physicists of developing "a 'feel' for the subject" beyond rote knowledge of the basic laws [53].The interpretation of data and models, whether in science or the humanities, is itself (as yet, and despite a few small successes in fields such as medical diagnosis) a task at which humans vastly outperform machines.For this reason, the digital humanities remain a fundamentally hermeneutic enterprise [30], and one in which distant readings and close readings must be tightly linked if anything is to make sense. In this paper we have motivated, introduced, and exemplified a multi-level computational process for connecting macro-analyses of massive amounts of documents to micro-level close reading and careful interpretation of specific passages within those documents.Thus we have demonstrated how existing computational methods can be combined in novel ways to go from a high-level representation of many documents to the discovery and analysis of specific arguments contained within documents. 
We have also shown how to zoom out to a macro-level overview of the search results.We presented a novel classification crosswalk between the Library of Congress Classification Outline (LCCO) and the UCSD Map of Science, which was constructed using only journal data, to extend the data to books.Because of the mismatch between the book data and the journal metadata, the crosswalk is not perfect, and the method of averaging locations places many books in uninterpretable regions of the map.Nevertheless, the visualization provides some useful information about the effectiveness of a simple keyword search in locating items of interest within a collection of hundreds of thousands of books. That our method succeeded in discovering texts relevant to a highly specific interdisciplinary inquiry shows its robustness to inconsistent and incomplete data.The HathiTrust Digital Library had OCR errors in 2.4% of volumes as of May 2010 [54].While the quality of the HathiTrust has increased in the intervening years, it is still a pervasive issue in digital archives [55]. Multi-level topic modeling combined with an information-theoretic measure of distance can efficiently locate materials that are germane to a specific research project, going from more than a thousand books, to fewer than a hundred using book-level topic models, and further narrowing this set down to a small number of pages within a handful of books using page-level topic models.The similarity measure we used is mediated by the topics in the model, and because every topic assigns a probability to every word in the corpus, this approach is highly adept at finding implicit relationships among the documents.Typical applications of topic modeling, such as graphing the rise and fall of topics through time, may show large-scale trends, but do not mediate the interplay between distant reading and close reading that leads to deeper understanding.By connecting abstract, machine-discovered topics to specific arguments within the text, we have shown how topic modeling can bridge this gap. Conclusion The process and results of our iterative drill-down method are summarized in Figure 3, showing the reduction of the 300,000 public domain volumes in the HathiTrust in August 2012 to the HT1315 collection, to the roughly 32,000 pages in the HT86 collection, to the over 17,000 sentences of the HT6 collection, to smaller set of the 108 pages selected for close reading and argument markup.This reduction allowed us to identify key elements of late 19th and early 20th Century arguments about anthropomorphizing of nonhuman organisms, and to uncover the surprising taxonomic range of these arguments to include consideration even of consciousness in amoebae.The alternative approach of simply counting the occurrence of species names within these books would only have hinted at the presence of such discussions whereas, by putting words into context, topic modeling enabled researchers to zero in on passages worthy of detailed analysis and humanistic interpretation. Figure 1 : Figure 1: An argument map derived from The Animal Mind, represented in OVA+. 
Figure 2: UCSD Map of Science with overlay of HathiTrust search results shows topical coverage of humanities and life science data. The basemap of science shows each sub-discipline denoted by a circle colored according to the 13 core disciplines. Links indicate journal co-citations from the basemap. The 776 volumes of HT1315 with LCCN metadata are shown on the map as circles. Volumes also in HT86 are shown with thicker circles, and those in HT6 are shown with the thickest circles. An online, interactive version can be explored at http://inpho.cogs.indiana.edu/scimap/scits.
Table 1: Topics ranked by similarity to 'anthropomorphism' in the HT1315 corpus. Topic 16 (highlighted with bold text) was used to derive the HT86 corpus, as it was most relevant to the inquiry.
Table 2: Topics ranked by similarity to 'anthropomorphism', 'animal', and 'psychology' in the HT1315 corpus. Topics 26, 16, and 10 (highlighted with bold text) were used to derive the HT86 corpus, as they were most relevant to the inquiry.
Table 3: Documents ranked by similarity to topics 10, 16, and 26 in the HT86 corpus. The summed distances were filtered at the threshold of 1.25, yielding a smaller corpus of 86 volumes, which we refer to as the HT86 collection; the top ten volumes identified in this way are shown in Table 3.
Table 6: Pages ranked by similarity to Topic 1. We selected the six volumes from the HT86 collection which had the most pages among the top 800 highest ranked pages. None of these volumes were in the top 10 keyword search results. These volumes formed the HT6 collection:
1. The Animal Mind: A Textbook of Comparative Psychology, 1908 (first edition), by Margaret Floy Washburn, psychologist. Washburn's textbook was foundational for comparative psychology, and she is notable as the second woman to be president of the American Psychological Association.
2. Comparative studies in the psychology of ants and of higher animals, 1905, a monograph by Erich Wasmann, an entomologist who only partly accepted evolution within species, rejecting common descent, speciation via natural selection, and human evolution.
3. The Principles of Heredity, 1906, a scientific monograph by G. Archdall Reid, a physician who argued against the Lamarckian idea of inheritance of acquired characteristics.
4. General Biology, 1910, a textbook by James G. Needham, entomologist and limnologist.
5. The Nature and Development of Animal Intelligence, 1888, a compilation of articles by Wesley Mills.
Table 7: Pages for which OVA+ argument maps were created.
7,435.4
2017-02-03T00:00:00.000
[ "Computer Science" ]
Investigation of laser annealing mechanisms in thin film coatings by photothermal microscopy We study the evolution of the absorptance of amorphous metal oxide thin films when exposed to intense CW laser radiation, measured using a photothermal microscope. The evolution of the absorptance is characterized by a nonexponential decay. Different models that incorporate linear and nonlinear absorption, free carrier absorption, and defect diffusion are used to fit the results, with constraints imposed on the fit parameters to scale with power and intensity. The model that best fits is one in which two types of interband defects are passivated independently, one by a one-photon process and the other by a two-photon process. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement Introduction One of the challenges in engineering multilayer dielectric interference coatings (IC) for near infrared high-energy lasers is to create structures that perform consistently under intense laser illumination, showing no signs of discoloration or catastrophic failure. The failure mechanisms in the ICs may be of different origin depending on the duration of the laser pulse. Interference coatings typically consist of stacks of two alternating amorphous materials, which have the largest variation in refractive index and the minimum extinction at the laser wavelength. The amorphous thin films are deposited by physical vapor deposition onto transparent substrates. These nearly stoichiometric amorphous layers inherently contain a family of interband point defects that affect their absorption and play a role in the coating's damage under illumination [1,2]. Shallow level defects are known to contribute to absorption at 1064 nm. Markosyan et al. [3] showed that the 1064 nm absorption loss of Ta2O5 thin films can be altered with simultaneous illumination at shorter wavelengths, an indication of the activation of electronic defects. Langston et al. [4] showed that the deposition conditions affect the density of structural point defects in Sc2O3, which were identified as oxygen interstitials. Post-treatment, such as annealing and laser conditioning, reduces the density of interband electronic states [5]. Moreover, improvements in the laser damage performance of thin films exposed to near infrared nanosecond pulses have also been attributed to the annealing out of structural defects [6,7]. It was argued that these improvements result from passivating interband electronic states by laser conditioning [7]. In this work, we study the evolution of the absorptance of amorphous metal oxide thin films, and show a permanent reduction. The evolution of the absorptance at 1064 nm, measured using a photothermal microscope, is characterized by a nonexponential decay. Different models that incorporate linear and nonlinear absorption, free carrier absorption and defect diffusion are used to fit the results, with constraints imposed on the fit parameters to scale with power and intensity. The model that best fits the behavior of the absorptance is one in which two types of interband defects are passivated independently, one by a one-photon process and the other by a two-photon process. The results of our analysis support previous findings and analysis of laser annealing in HfO2/SiO2 and ZrO2/SiO2 with nanosecond pulses [7]. They are more rigorous in that they identify a dominant mechanism for the passivation of shallow interband electronic states.
The fact that the reduction in absorptance in Sc2O3-SiO2 stacks is permanent indicates complete passivation. Setup The photothermal microscope used in this work is a modification of the focus error thermal lens technique previously demonstrated [8,9]. Figure 1 shows a diagram of the experimental setup. A 20 W fiber-coupled IPG Photonics laser (model YLR-20-1064-LP) with TTL current modulation was used as the pump laser. The laser delivers up to 6.7 W (average power) on the sample after losses in the optical system and the 50% loss due to modulation are accounted for. A 13 mW collimated He-Ne laser was used as the probe beam. The pump beam was used to heat the sample for the absorptance measurement and was also responsible for the thin film annealing. A collimating lens (f0) directs the pump beam towards two 1064 nm mirrors (M1 and M2); the beam reaches the sample after passing through a long-pass filter (LF: cutoff 750 nm) and the objective lens (Ob: focal length fOb = 10 mm). These two lenses (f0 and the objective) form a telescope that focuses the pump beam to a beam radius of σpump = 1 µm on the sample (we define σ as the beam radius, i.e., half the beam waist). The probe beam is reflected by two mirrors (M3 and M4) and the beam splitter BS1 (90% probe reflectivity). The tube lens (TL, focal length fTL = 200 mm) focuses the probe beam onto the back focal plane (BFP) of the objective, after being reflected off the long-pass filter (LF), in such a way that it is collimated at the sample surface with a size σprobe = 11.5 µm. The focused pump beam is partially absorbed by the sample surface and the collimated probe beam passes through the substrate. Two identical cylindrical lenses (CL: focal length f = 75 mm), whose axes are perpendicular to each other, are placed on the optical axis separated by a distance Dz, introducing astigmatism in the probe beam. Due to the thermal lensing effect, the probe beam focus position at the four-quadrant detector (4Q) changes at the pump modulation frequency. An operational-amplifier-based circuit (OPAMP) builds the unbalance signal (C1 + C3) − (C2 + C4), where the quadrants of the 4Q have been denoted as C1, C2, C3 and C4 in the clockwise direction. We use a lock-in amplifier (Stanford SR810) to detect the component of this signal at the pump modulation frequency. The circuit also generates the sum signal C1 + C2 + C3 + C4 (signal SUM). A short-pass filter (SF: cutoff 850 nm) was located before the detector in order to prevent unwanted contributions from the pump beam, and neutral density (ND) filters were used to attenuate the power of the probe beam depending on the sample transmission to avoid saturating the four-quadrant detector. The alignment procedure and focusing of the pump beam on the sample were simplified by adding a camera to the experimental setup. The objective together with the tube lens (TL) also forms a telescope that images the sample surface onto the camera. A beam splitter (BS1) allows part of the reflected beams to reach the camera. A second beam splitter (BS2: transmission 50%, reflection 50%) was used to allow illumination of the sample with an LED source. The surface of the sample is positioned at the focal plane of the pump beam and swept by means of a micrometric translation stage (XYZ). The focal spot size of the pump beam defines the lateral resolution of the system.
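As a rough illustration of the detection chain just described, the sketch below forms the unbalance and sum signals from the four quadrant readings and extracts the component at the pump modulation frequency with a software lock-in. The quadrant naming follows the text; the sample rate, signal amplitudes, and modulation depth are made-up values for illustration only.

```python
# Minimal sketch (assumed values) of the focus-error / SUM signal formation and
# lock-in style demodulation at the pump modulation frequency.
import numpy as np

fs, f_mod, T = 100_000, 1650.0, 1.0                 # sample rate (Hz), modulation (Hz), duration (s)
t = np.arange(0, T, 1.0 / fs)

# toy quadrant signals: a small modulated imbalance riding on a common background
mod = 1e-3 * np.sin(2 * np.pi * f_mod * t)
C1, C2, C3, C4 = 1.0 + mod, 1.0 - mod, 1.0 + mod, 1.0 - mod

fe = (C1 + C3) - (C2 + C4)                          # unbalance (focus-error) signal
total = C1 + C2 + C3 + C4                           # SUM signal

# project onto in-phase and quadrature references and average (software lock-in)
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
fe_amplitude = 2 * np.hypot(np.mean(fe * ref_i), np.mean(fe * ref_q))
print(fe_amplitude / np.mean(total))                # normalized FE amplitude at f_mod
```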
The size of the probe beam defines the pump beam modulation frequency (fpump), since it is chosen to equate the beam size to the diffusion distance (μ) of the thermal wave generated by the modulated pump beam inside the substrate. In this way, the chosen frequency corresponds to the time necessary for the heat to diffuse a distance of the order of the size of the probe beam in the sample. The contribution to the signal from the material is proportional to the temperature dependence of the refractive index dn/dT and the length of the heat-affected zone. Hence, in order to neglect the film's contribution to the signal and have a common calibration for all samples, the probe beam diameter should be selected much larger than the film thickness. The film affects the calibration by a correction factor (Eq. (1)); by selecting a probe beam diameter much larger than the film thickness, the path of the probe beam through the heat-affected zone is dominated by the path through the substrate. As the probe beam diameter was selected much larger than the thin film thickness, the material properties relevant to the determination of the signal are the film absorptance and the substrate thermal and optical properties. This fact allows the calibration of the signal in absorptance units irrespective of the film material. For fused silica, the thermal diffusivity is D = 0.008 cm²/s, which sets the corresponding modulation frequency. With the above considerations, the focus error signal FE at the modulation frequency of the pump beam can be expressed in terms of the film absorptance and the substrate properties. Calibration of the photothermal microscope was carried out using samples whose absorption was known. The calibration samples were also deposited by ion beam sputtering. The PCI is calibrated using a sample with high absorptance that can be determined from spectrophotometry data, or by fitting the transmission curve versus wavelength with a program like Optilayer and using zero as the second point to draw a straight line for detector voltage signal versus loss. The mean value of the signal obtained along spatial scans of 50 µm × 50 µm with a step of 5 µm was compared with the absorptance value of the same sample obtained by the PCI technique [8]. This process was carried out in two zones of the five samples, taking as the uncertainty of the measurement the standard deviation of these values in each map. This uncertainty corresponds to the inhomogeneity of the sample and not to fluctuations of the technique. The calibration curve is presented in Fig. 2. For all cases the modulation frequency was 1650 Hz, the lock-in amplifier integration time was 100 ms, and the time between measurements was 1 s. The probe beam size was σprobe = 11.5 µm and the minimum pump beam size was σpump = 1.0 µm. The pump beam size was measured by imaging the beam reflected off the substrate onto a camera previously calibrated and checked for linearity; the image was fit by a Gaussian. The probe beam was larger and could be measured by scanning a sharp knife edge. The maximum pump power was 6.7 W, which yielded a maximum pump intensity of 0.23 GW/cm². To demonstrate the sensitivity of the photothermal microscope and the permanent reduction in the absorptance due to laser irradiation, we used a SiO2 film deposited on a fused silica substrate with an average absorptance of 5 ppm. The measurement consisted of determining focus error signal maps subsequently converted to absorptance using the calibration of Fig. 2. To reject spurious peaks from the signal and reduce the noise, the same area was repeatedly measured and averaged pixel by pixel.
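The frequency choice described above can be checked with a quick back-of-the-envelope calculation. The relation used below, μ = sqrt(D/(πf)) for the thermal diffusion length of a wave modulated at frequency f, is the standard thermal-wave expression and is assumed here since the original equation is not reproduced in this text; the numbers are those quoted above.

```python
# Back-of-the-envelope check of the pump modulation frequency from the thermal
# diffusion length relation mu = sqrt(D / (pi * f))  (assumed standard form).
import math

D = 0.008e-4      # fused-silica thermal diffusivity: 0.008 cm^2/s expressed in m^2/s
mu = 11.5e-6      # target diffusion length ~ probe beam radius (m)

f_mod = D / (math.pi * mu ** 2)
print(f"{f_mod:.0f} Hz")   # ~1.9 kHz, the same order as the 1650 Hz used in the experiment
```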
A first repeated scan of a 40 µm by 40 µm region was performed (not shown), and a day later a larger region including the previously scanned one was measured (Fig. 3(a)). In this larger scan, the reduction in absorptance in the previously scanned region is evident, confirming the permanent nature of the changes induced in the absorptance. In Fig. 3(b) the average absorptance over 22 pixels as a function of time is presented, showing the laser-induced annealing phenomenon. Notice that this microscope is capable of identifying changes in absorptance as small as 0.5 ppm. The temperature rise due to the absorption of the pump beam can be estimated from the model for a modulated Gaussian pump presented in [10], yielding a temperature rise at the beam center of about 10 K. Hence, a temperature annealing mechanism can be discarded as an explanation of the observed decay in the absorptance, since thermal annealing is typically performed for about 10 hours at 300°C. Samples For the annealing study, a multilayer stack designed as an antireflection coating for 1064 nm deposited onto a UV fused silica substrate was used. This sample consisted of two layers of SiO2 intercalated with two layers of Sc2O3. The first layer, in contact with the substrate, is Sc2O3 with a thickness of 50 nm. It is followed by a 400 nm thick SiO2 layer, then a 180 nm Sc2O3 layer and finally a 160 nm SiO2 layer. This sample was deposited by reactive ion beam sputtering. During the deposition of Sc2O3 from a metal target, the oxygen flow was selected to obtain a film with an absorption loss of ∼50 ppm [4]. The SiO2 instead was deposited from an oxide target, which typically results in an amorphous thin film with an absorption loss of a few parts per million at 1064 nm. For this case the total Sc2O3 thickness was 230 nm, 50 times smaller than the probe beam radius, which guarantees that neglecting the correction factor for the film contribution to the signal given in Eq. (1) is satisfactory. The average absorptance of this sample at 1064 nm was 113 ppm. Two different pump beam diameters and different powers were used. For each set, the time evolution of the absorptance was obtained. Table 1 summarizes the beam power, area, and the number of points measured in the different runs. The measured average absorptance as a function of time for each condition is presented in Fig. 4. The variations in the absorptance are of the order of 10 ppm. The time evolution of both the SiO2 shown in Fig. 3 and the SiO2-Sc2O3 multilayer of Fig. 4 cannot be characterized by a single exponential decay and will be analyzed in terms of different models. Again, the direct thermal annealing mechanism is discarded, as the temperature rise (around 200 K for the highest pump intensity) and the time span of this study are below the values for the thermal annealing treatments used for this type of sample. The absorptance as a function of time shows that the annealing process is characterized by an apparent fast process in the first minute followed by a slower process lasting tens of minutes. In order to test whether water adsorbed on the surface could have any influence on the observed evolution of the absorptance shown in Fig. 4, the experiments were repeated while flowing dry argon. No detectable difference was encountered. Models Several models were used to fit the absorptance decay of Fig. 4 and gain insight into the annealing mechanisms.
The antireflection coating design used for this study yields an electric field distribution that is essentially uniform across the coating, and hence the absorption can occur in any layer or at an interface. Absorption from the SiO2 layers can be discarded, as it was shown before that these layers have only around 5 ppm absorptance. In Fig. 5 a schematic description of the possible mechanisms to be analyzed is drawn. Shallow native defects absorbing at around λ = 1 µm and λ = 500 nm [4] are probably related to oxygen interstitials, as the absorption increases with increased O2 partial pressure [11]. We assume transitions to the conduction band by one-photon absorption from the shallowest state and two-photon absorption from the second state. Electrons in the conduction band can absorb photons, relaxing within the band by collisions with the lattice vibrations and heating the film. Multiphoton excitations from deeper states are included. The relaxation of the electrons to a deep trap state, depleting the electrons from the shallow states and hence reducing the absorptance, is also considered. Diffusion of defects within the heated region, not included in the schematics of Fig. 5, is also discussed. The pump intensity in our experiments was as high as 0.23 GW/cm², a high value for CW irradiation without damage. Hence, the experiments were probably carried out very close to the damage threshold of Sc2O3. At such high intensities, free carrier absorption in the conduction band becomes important and was therefore included in the models. One possibility is to assume that the local heating gives rise to diffusion of the defects until they are annealed by collision with another type of defect. The other possibility is that the defects simply anneal by the temperature rise. As direct absorption from shallow states cannot heat the material to a high enough temperature [12], a significant temperature rise, if present, must originate in free carrier absorption. Other options are multiphoton absorption from deeper-lying interband states, and combinations of several of the mentioned mechanisms. The reduction in the absorptance can be explained by the annealing out of the shallow defects or the capture of the electrons by deep-lying interband states. For each measurement made at different pump powers and pump beam sizes the data were fitted, and the quality of the fit, which was limited by the noise of the measurement, was evaluated based on the consistency of the trends and dependences of the model parameters with the pump power and beam size. This allowed us to select the possible mechanism responsible for the laser annealing among the different proposals tried. The expected analytical dependence of the absorptance decay on the pump beam parameters for the different models is derived next. Stretched exponential relaxation (SER) Since the seminal paper by Kohlrausch in 1854 [13], many systems have been shown to relax following a common equation of the form c(t) = c0 exp[−(t/τ)^β], where c is a concentration and β is a stretching factor between 0 and 1. In particular, Devine [14] has shown that the annealing of defects in amorphous SiO2 follows such a law. This behavior has been found in more than 70 different systems as diverse as glasses, polymers and spin glasses, and has been explained as being due to the presence of traps frozen in the amorphous matrix that capture excitations diffusing freely within an otherwise homogeneous medium [15][16][17]. This trapping process annihilates the defects, depleting their concentration.
This model provides a value for the stretching parameter, β = d/(d + 2), where d is the effective dimension for the diffusion within the matrix and is expected to be between 1 and 3. For d = 3 the value β = 3/5 is expected, which would drop to β = 1/3 for d = 1. If a SER process were responsible for the laser-induced annealing observed in the deposited dielectric thin films, one would expect the rate coefficient 1/τ to be proportional to the beam intensity and the stretching parameter β to be constant, at least within a single sample. Multiphoton defect annealing For this laser-induced annealing model, we assume that the defect concentration evolves locally following an exponential decay, c(r, t) = c0 exp[−t/τ(r)], where the rate 1/τ(r) accounts for the laser-induced reaction and hence changes from site to site following a power law in the local intensity, 1/τ(r) ∝ I(r)^n, where n is the number of photons simultaneously absorbed for the transition to take place and a Gaussian beam profile has been assumed. The photothermal signal is proportional to the absorbed power, which results from the overlap integral of the incident beam intensity with the remaining defect concentration. After a change of variables the result can be written in terms of the incomplete gamma function Γ, with simple closed forms for the special cases of single-photon transitions (n = 1) and two-photon transitions (n = 2). Values n > 2 are less likely to occur due to the low cross sections for multiphoton processes [18], but we included them in the fit because, as will be shown, a higher n gives rise to better fits of the temporal behavior of the experimental results, making it convenient to include these mechanisms for completeness. One type of defect excited by one or two photons In this case, the excitation rate has two terms (Eq. (14)), and the resulting signal contains two rate coefficients, where b1 is proportional to the intensity and b2 to the intensity squared. One particular case of this mechanism is shallow-state absorption followed by intraband free carrier absorption. Assuming the initial occupied shallow-state density is N0 and N is the free carrier concentration, then (N0 − N) is the steady-state occupied shallow-state density. In this case, the absorptance α is proportional to σ(N0 − N), where σ is the shallow-state absorption cross section, I the pump intensity and τ the decay time from the conduction band back to the shallow state. The approximate solution for unsaturated transitions (low power) predicts an absorption coefficient that varies linearly with intensity, as found in this case of the combined one- and two-photon mechanism of Eq. (14). Two types of defects excited independently by one or two photons This is simply a linear combination of the single-defect situation presented in section 3.2. Three cases will be analyzed: (a) both defects anneal by the action of a single-photon mechanism, (b) one defect is annealed by a one-photon mechanism and the other by a two-photon mechanism, and (c) both defects are annealed independently by two-photon mechanisms. Fits and discussion The results were fitted using the trust-region method for nonlinear minimization of the least absolute residual (LAR) (uncertainties presented are the 95% confidence bounds) with the models described in section 3 and combinations of two of them. For each case, we show one fit and a table of the fitted parameters with confidence intervals for the entire set of measurements. The parameters from the fits for each model are contrasted with the expected dependences on pump intensity as discussed in section 3. Figure 6 shows the result of the fit using SER.
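The model functions described in this section lend themselves to a compact numerical sketch. The snippet below is not the authors' code: it assumes a Gaussian pump profile and a local annealing rate proportional to I^n, which, after integrating over the beam, gives a lower-incomplete-gamma form; the stretched exponential and the two-defect (one-photon plus two-photon) combination are written alongside it, and the data generated for the example fit are synthetic.

```python
# Sketch (assumed forms, synthetic data) of the model functions used to fit the
# absorptance decay: SER, an n-photon beam-averaged decay, and the M_1+2 combination.
import numpy as np
from scipy.special import gammainc, gamma
from scipy.optimize import curve_fit

def ser(t, A, tau, beta, offset):
    """Kohlrausch stretched exponential relaxation."""
    return A * np.exp(-(t / tau) ** beta) + offset

def n_photon_signal(t, A, tau0, n, offset=0.0):
    """Beam-averaged signal for defects annealed locally at a rate (1/tau0)*(I/I0)^n.
    Integrating c0*exp(-t/tau(r)) over a Gaussian beam gives
    S(t) ~ (tau0/t)**(1/n) * gamma_lower(1/n, t/tau0)."""
    s = 1.0 / n
    a = np.maximum(np.asarray(t, dtype=float) / tau0, 1e-12)
    # scipy's gammainc is the regularized lower incomplete gamma; un-regularize with Gamma(s)
    return A * a ** (-s) * gammainc(s, a) * gamma(s) + offset

def m_1_plus_2(t, A1, b1, A2, b2, offset):
    """Two independent defect populations: one annealed by a 1-photon process (rate b1),
    the other by a 2-photon process (rate b2)."""
    return (n_photon_signal(t, A1, 1.0 / b1, 1)
            + n_photon_signal(t, A2, 1.0 / b2, 2) + offset)

# Illustrative fit on synthetic data (hypothetical rates and amplitudes, ppm vs seconds)
t = np.linspace(1, 3600, 300)
data = m_1_plus_2(t, 30, 1 / 20.0, 40, 1 / 600.0, 50) + np.random.normal(0, 1.0, t.size)
popt, _ = curve_fit(m_1_plus_2, t, data, p0=(20, 0.05, 20, 0.002, 40))
print(popt)
```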
The parameters obtained from the fit are summarized in Table 2; the fitting expression is the stretched exponential of section 3.1. From the model one should expect the time constant τ to be inversely proportional to the power and β (which depends on the dimension) to remain constant. The drastic reduction of β with decreasing power is inconsistent with these assumptions and, although the power dependence of τ is not conclusive (due to the large uncertainties from the fit), this mechanism can be ruled out as responsible for the annealing. n-photon process In this case we assume a single type of defect that is annealed by the simultaneous absorption of n photons. In Fig. 7 the fits for n = 1, 2, 3, 4, and 10 are shown for one example (a detail for short times is presented in Fig. 7(b)). As mentioned before, the high values of n were included because they reproduce the short-time behavior of the measured data better. In Table 3 the retrieved time constant and R² of the fit are presented. The fit appears to improve for short times as the number of photons increases, but the retrieved parameters are inconsistent (see Table 4). In fact, the parameter 1/τ0 in Eq. (7) should scale with the n-th power of the intensity, whereas it actually decreases with power. This inconsistency rules out any single n-photon process. Single type of defect annealed by one and two photons As discussed in section 3.3, two rate parameters appear in this process, b1 and b2, which should scale with the intensity and the intensity squared, respectively. The equation used for the fit follows from Eq. (15). A single fit is shown in Fig. 8 (fit of the absorptance decay for sample A1 for one type of defect with an excitation process that combines one- and two-photon absorption). The fit is not adequate at short times. From the model one would expect c1 to remain constant with the intensity, because b1 scales linearly and b2 quadratically with intensity. This is consistent with the measurements and fits, but c2 decreases when it should scale linearly with the intensity. This inconsistency rules out this mechanism. Two types of defects In this case we assume that there are two types of defects with different and independent concentrations that anneal either both by a one-photon mechanism (M_1 + 1), or one by a one-photon mechanism and the other by a two-photon mechanism (M_1 + 2), or both by a two-photon mechanism (M_2 + 2). For the three processes, one particular fit is shown in Fig. 9 and the parameters are listed in Table 5 for M_1 + 1, Table 6 for M_1 + 2 and Table 7 for M_2 + 2. The fitting equations are combinations of the single-defect expressions of Eq. (11) for the respective cases. Fig. 9. (a) Two types of defects that each anneal by a one-photon mechanism (M_1 + 1). (b) Two types of defects, one annealing by a one-photon process and the other by a two-photon process (M_1 + 2). (c) Two types of defects that each anneal by a two-photon process (M_2 + 2). From the results, the mechanism M_2 + 2 is ruled out because the rate coefficients b1 and b2 do not scale as the square of the intensity, as expected for a two-photon process. For M_1 + 1 the fits correspond to a slow and a fast process. The fast process coincides, within the uncertainties of the fit, with the one-photon process in mechanism M_1 + 2. The rate parameters depend on the intensity, not the power, and the set of measurements labeled A used a pump beam area twice as large as the set labeled B.
Hence, for the test of consistency with pump power, a new parameter k was used, defined as the rate coefficient multiplied by the beam area for one-photon processes (k = b·area) and by the beam area squared for two-photon processes (k = b·area²). The retrieved parameters were plotted as a function of pump power for the M_1 + 1 case and fitted with a linear function passing through the origin (Fig. 10). For the M_1 + 2 process, the k1 parameter was plotted as a function of the power and k2 as a function of the power squared (Fig. 11). The R² of the fits was acceptable within the uncertainties of the measurement for the M_1 + 2 process, while the linear correlation was found to be too poor for the M_1 + 1 process. Fig. 10. Rate coefficients as a function of the pump power for the two-defect, one- and one-photon annealing mechanism (M_1 + 1). (a) k1 = beam area × b1 for the fast one-photon contribution and linear fit (dashed lines indicate the confidence bounds from the linear fit). (b) k2 = beam area × b2 for the slow one-photon contribution and linear fit (dashed lines indicate the confidence bounds from the linear fit). Fig. 11. Rate coefficients as a function of the pump power for the two-defect, one- and two-photon annealing mechanism (M_1 + 2). (a) k1 = beam area × b1 for the one-photon contribution and linear fit (dashed lines indicate the confidence bounds from the linear fit). (b) k2 = (beam area)² × b2 for the two-photon contribution and linear fit (dashed lines indicate the confidence bounds from the linear fit). Conclusions It has been found that defects in dielectric thin films responsible for the residual absorption are annealed under 1064 nm laser beam illumination with an intensity of up to 0.23 GW/cm². Changes in the absorptance of less than 10 ppm were detected using a photothermal microscope based on a focus error signal. Several models were developed to analyze possible annealing mechanisms, including stretched exponential relaxation, multiphoton processes, combinations of such processes with one defect type and two mechanisms (one- and two-photon absorption, which includes free carrier absorption), and two types of defects annealed independently by one- or two-photon absorption. The choice of the most probable process was made on the basis of the consistency of the retrieved parameters with the expected dependence on the pump intensity. This analysis showed that the most probable mechanism responsible for the annealing is one in which two types of defects are present that anneal independently, one by a one-photon process and the other by a two-photon process. There is a family of shallow defects within the bandgap of amorphous oxides. In the case of amorphous SiO2, energy states 1-2 eV below the conduction band are associated with positively charged defects such as O3+ or Si3+ [1]. Non-bridging oxygen (O1−) is identified as an acceptor state with a binding energy of ∼2 eV [1]. As one of the processes ruled out is the heating of the material by free carrier absorption, the temperature-driven annealing predicted by full atomistic simulations of the annealing of amorphous SiO2 [19] has been discarded. The passivation of the defects and reduction of the absorptance can be attributed to the trapping of the electrons in deep states, leaving the shallow states unoccupied, as suggested in [12]. The fact that the shallow defects are not completely passivated by this mechanism might be explained in this model by the fact that not enough unoccupied deep traps are available.
6,687.8
2019-02-18T00:00:00.000
[ "Physics", "Materials Science" ]
Synthesis of Promising Cathode Material for Lithium Polymer Batteries. An original method for the synthesis of lithium vanadium phosphate was developed. The method includes two stages: 1st, synthesis of vanadium phosphate from a mixture of ammonium dihydrophosphate and metal oxide; and 2nd, synthesis of lithium vanadium phosphate by thermal lithiation of the product obtained in the 1st stage, with mechanical activation of the precursor in the course of plastic deformation. Our results provide some basis for further improvement of Li3V2(PO4)3 electrode materials for advanced lithium-ion batteries. Introduction Modern energy, in particular hydrogen energy, requires the development of new efficient systems for energy generation and storage. The accumulation of electricity for power plants based on renewable energy sources (RES) is relevant due to the variability of the derived energy. Storing the energy generated by small power plants for the subsequent smoothing of peak loads is a very important task. In addition to small- and large-scale energy, there is a significant need for highly efficient electricity generators and batteries in transport, portable technology (mobile phones, gadgets, laptops, etc.), aviation, space and other fields [1][2][3]. For autonomous wind and solar power sources it is appropriate to use electrochemical batteries. They largely limit the cost performance, reliability and efficiency of wind and solar power plants with a capacity of up to 100 kW. At the moment, lithium polymer batteries are the most promising rechargeable chemical current sources: they dominate due to their light weight and high density of electrical energy. Recently, the demand for lithium polymer batteries has increased, due both to the tendency toward miniaturization of electronic boards and to the increased requirements imposed by power consumers [4][5][6][7]. The development of lithium polymer batteries substantially expands the opportunities of modern miniature devices, such as smart cards, implanted medical devices, memory units, various sensors, and converters. One of the main difficulties in the creation of film batteries is the development of efficient cathode materials. In particular, monoclinic Li3V2(PO4)3 has emerged as one of the promising cathode candidates for high-power lithium-ion batteries due to its high theoretical capacity, high operating voltage (3.6 V, 4.1 V) and good ion mobility [10]. The large interstitial spaces created by the framework units allow fast ion migration in three dimensions, and three lithium ions can be reversibly extracted from the lattice of Li3V2(PO4)3 within the range of 3.0 to 4.8 V, with the highest theoretical capacity of 197 mA·h/g obtained. However, the power performance of Li3V2(PO4)3 is seriously limited by its poor electronic conductivity (2.4 × 10⁻⁷ S/cm). Up to now, numerous effective approaches have been investigated to overcome these obstacles by minimizing the particle size, coating with carbon and doping with metal ions [10]. In the methods known from the literature, the synthesis of lithium metal phosphates is a two-stage thermal synthesis of ternary mixtures: ammonium dihydrophosphate, metal oxide, and lithium compounds. However, it has been found that its mechanism is rather complicated and presumably includes several parallel processes.
Therefore, the following two-stage process model has been suggested: 1st, synthesis of the metal phosphate from a mixture with ammonium dihydrophosphate; and 2nd, synthesis of the lithium metal phosphate by thermal lithiation of the product obtained in the 1st stage [6,8]. It has been shown previously that the mechanical activation of a precursor in a high-pressure apparatus of the Bridgman anvil type can be successfully used to synthesize highly dispersed cathode materials for lithium batteries [11][12][13]. Therefore, we studied the effect of mechanical activation on the synthesis and electrochemical properties of lithium vanadium phosphate. Experimental section We chose NH4H2PO4, Li2CO3, and V2O3 of chemically pure grade as objects of study. Starting mixtures of powdered components were prepared by mixing in a mortar. In the first stage of synthesis, a mixture of NH4H2PO4 and V2O3 was annealed in a muffle furnace at a temperature of 750°C for 6 h. In the second stage, 20% Li2CO3 was added to the product obtained and the mixture was thermally treated at temperatures of 600, 700, and 800°C for 4-10 h. The plastic deformation of the precursors under a pressure of 1.5 GPa was performed at room temperature on anvils made of a VK6 hard alloy, with the working surfaces of the anvils having a diameter of 15 mm and an anvil rotation angle equal to 300°. The heat effects that occur in the prepared materials at temperatures from room temperature to 800°C were studied by differential scanning calorimetry on a TA Instruments model Q100 instrument at a scanning rate of 20 K·min⁻¹; the sample weight was from 1 to 3 mg. Thermogravimetric analysis was performed on a TA Instruments model Q500 thermogravimetric analyzer; the scan rate was 20 K·min⁻¹ and the sample weight was 2-4 mg. X-ray diffraction (XRD) measurements were performed on an Empyrean diffractometer using Cu Kα radiation (two wavelengths, 1.5406 and 1.5444 Å, were used for calculations, considering a 2:1 ratio of their intensities in the doublet) and scanning over a 2θ range of 5°-100°. The calculated phase composition was verified by dual phase Rietveld refinement using the MRIA software program [14]. A composite electrode was prepared by mixing 80 wt % Li3V2(PO4)3, 10 wt % PVDF and 10 wt % acetylene black with NMP (1-methyl-2-pyrrolidone) to form a slurry, which was then spread onto an aluminum foil and dried at 120°C for 24 h in a vacuum oven. The batteries were assembled in an argon-filled glove box, in which the oxygen and moisture levels were less than 1 ppm, and the electrolyte was 1 M LiPF6 in a mixture of EC (ethylene carbonate), DMC (dimethyl carbonate) and EMC (ethyl methyl carbonate) (1:1:1 by weight). Typically, a working electrode of 1.5 cm² was prepared with an active material mass loading of 3.0 mg per cm². The coin cell was fabricated using lithium metal as the counter electrode. Electrochemical measurements were conducted with galvanostatic charge and discharge on an Elins P-20X8 cell testing apparatus in the voltage range of 3.0-4.3 V at room temperature. The discharge rate ranged from 0.5 to 1 C down to 3.0 V, and charging was performed at 0.5 C up to 4.3 V. The C-rates and storage capacities were calculated from the mass of Li3V2(PO4)3 with the amount of carbon subtracted (1C = 130 mA·h/g). Cyclic voltammetry (CV) measurements were performed on an Elins P-20X8 electrochemical workstation. CVs were conducted in the cut-off voltage range of 3.0-4.3 V versus Li/Li+ at a scan rate of 0.1 mV/s.
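Two of the numbers quoted in this paper can be cross-checked with elementary arithmetic: the 197 mA·h/g theoretical capacity of Li3V2(PO4)3 (three extractable Li per formula unit) and the current corresponding to a given C-rate for the stated electrode loading. The short sketch below does both; the only assumptions are standard atomic masses and the Faraday constant.

```python
# Cross-check of the theoretical capacity of Li3V2(PO4)3 and of the galvanostatic
# current implied by the 1C = 130 mA*h/g definition and the stated electrode loading.
F = 96485.0                                           # Faraday constant, C/mol
M = 3 * 6.94 + 2 * 50.94 + 3 * (30.97 + 4 * 16.00)    # molar mass of Li3V2(PO4)3, g/mol
theoretical_capacity = 3 * F / (3.6 * M)              # 3 e- per formula unit; 1 mA*h = 3.6 C
print(round(theoretical_capacity, 1))                 # ~197 mA*h/g, as quoted in the text

def current_mA(c_rate, loading_mg_cm2=3.0, area_cm2=1.5, spec_cap_mAh_g=130.0):
    """Current (mA) to (dis)charge the active mass at a given C-rate (1C empties it in 1 h)."""
    active_mass_g = loading_mg_cm2 * area_cm2 / 1000.0
    return active_mass_g * spec_cap_mAh_g * c_rate

print(round(current_mA(0.5), 2))                      # ~0.29 mA for a 0.5 C discharge
```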
Results and discussion The thermogravimetric curves for the initial mixture of V2O3 and NH4H2PO4 and for the same mixture after being subjected to mechanical activation were similar. In both cases, we observed a process spanning the temperature range of 100-800°C featuring a characteristic exothermic peak (Fig. 1). The enthalpy associated with the exothermic peak was 2177 and 2519 J/g for the initial mixture and for the mechanically activated one, respectively. The exothermic processes in the specimens being heated were accompanied by a decrease in sample weight of 30.9 and 32.1% for the initial mixture and for the mechanically activated one, respectively (Fig. 2). Dual phase Rietveld refinement [8] for the phases VPO4:V2O3 yielded a ratio of 11:1 (Fig. 3). The thermogravimetric curves for the initial mixture of VPO4 and Li2CO3 and for the same mixture after being subjected to mechanical activation were similar. In both cases, we observed a process spanning the temperature range of 150-800°C featuring a characteristic exothermic peak (Fig. 4). The enthalpy associated with the exothermic peak was 3796 and 3946 J/g for the initial mixture and for the mechanically activated one, respectively. The exothermic processes in the specimens being heated were accompanied by a decrease in sample weight of 17.4 and 15.6% for the initial mixture and for the mechanically activated one, respectively (Fig. 5). For the initial mixture, weight loss occurred as a one-step process, and a major fraction of the sample weight was lost within the temperature range of 200 to 650°C. For the mechanically activated sample, weight loss started 30°C lower and ended 50°C lower compared to the initial mixture. This difference can be explained by the fact that the chemical reactions partially occurred during mechanical activation. Structurization processes, which may not exhibit associated heat effects, can proceed in parallel with the physicochemical processes in our samples. We identified peaks of the Li3V2(PO4)3 and LiVP2O7 phases in the XRD patterns of the initial VPO4-Li2CO3 mixture annealed at 700°C for 10 h. By applying dual phase Rietveld refinement to our XRD data [8], we established that the phase ratio Li3V2(PO4)3:LiVP2O7 was 8:2 for the initial mixture and 9:1 for the mixture annealed at 800°C for 6 h. It was found that the plastic deformation of the precursor is effective in the second stage of Li3V2(PO4)3 synthesis. XRD patterns of samples subjected to mechanical activation and annealing at 600°C for 7 h featured peaks due to the Li3V2(PO4)3 and LiVP2O7 phases, as shown in Fig. 6. Applying dual phase Rietveld refinement yielded the phase ratio Li3V2(PO4)3:LiVP2O7 = 9:1. With the annealing temperature raised to 750°C (4 h), the phase ratio was 11:1 [5,9,10]. We thus see that the mechanical activation of the precursor in the Bridgman anvil apparatus shortened the annealing time required to achieve the desired nanodispersed material. We explain these results by considering the following processes. Plastic deformation induces numerous structural defects in individual solids with different chemical natures. These processes are particularly active in binary mixtures; namely, mass transfer processes resulting in the formation of solid solutions are very intense under these conditions. As was established earlier, structure formation processes proceed considerably more readily in mixtures subjected to deformation [15].
Fig. 6. X-ray diffraction pattern of lithium vanadium phosphate. Electrochemical tests were carried out for electrodes prepared using the active mass based on the VPO4-Li2CO3 mixture subjected to mechanical activation followed by annealing at 750°C. The tests showed that our cathodes based on lithium vanadium phosphate displayed reversible cycling at current densities of 0.2-1.0 mA/cm². Figure 7 shows the CV curves of the Li3V2(PO4)3 electrode at a scan rate of 0.1 mV/s from 3.0 to 4.3 V. Fig. 7. Results of cyclic voltammetry analysis. Each of the CV curves includes three oxidation peaks and three reduction peaks, which is consistent with the galvanostatic charge/discharge curves. The good overlap of the CV cycles and the symmetry of the oxidation and reduction peaks in the CV curves indicate the good reversibility of the lithium insertion/deinsertion reactions. A comparison of the results we obtained with published data demonstrated that electrodes based on Li3V2(PO4)3 synthesized in this study compare well in specific capacity and stability with the known foreign and domestic analogues [5,9,10]. Conclusions An original method for the synthesis of lithium vanadium phosphate was developed. The method includes two stages: 1st, synthesis of vanadium phosphate from a mixture of ammonium dihydrophosphate and metal oxide; and 2nd, synthesis of lithium vanadium phosphate by thermal lithiation of the product obtained in the 1st stage, which includes mechanical activation of the precursor in the course of plastic deformation. Our results provide some basis for further improvement of Li3V2(PO4)3 electrode materials.
2,917.6
2019-04-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
Spatial correction improves accuracy of catheter positioning during ablation of premature ventricular contractions: differences between ventricular outflow tracts and other localizations Background Hybrid activation mapping is a novel tool to correct for spatial displacement of the mapping catheter due to asymmetrical contraction of the myocardium during premature ventricular contractions (PVC). The aim of this study is to describe and improve our understanding of spatial displacement during PVC mapping as well as options for correction using hybrid activation mapping. Methods and results We analyzed 5798 hybrid mapping points in 40 acquired hybrid maps of 22 consecutive patients (age 63 ± 16 years, 45% female) treated for premature ventricular contractions (PVCs). Median PVC-coupling interval was 552 ms (IQR 83 ms). Spatial displacement was determined by measuring the dislocation of the catheter tip during PVC compared to the preceding sinus beat. Mean spatial displacement was 3.8 ± 1.5 mm for all maps. The displacement was 1.3 ± 0.4 mm larger for PVCs with non-outflow-tract origin compared to PVCs originating from the ventricular outflow tracts (RVOT/LVOT; p = 0.045). Demographic parameters, PVC-coupling interval and chamber of origin had no significant influence on the extent of spatial displacement. Conclusion Ectopic activation of the ventricular myocardium during PVCs results in spatial displacement of mapping points that is significantly larger for PVCs with non-outflow-tract origin. The correction for spatial displacement may improve the accuracy of radiofrequency current (RFC) application in catheter ablation of PVCs. Introduction During the last decades, catheter ablation of premature ventricular contractions (PVCs) has developed into a standard treatment for symptomatic patients [1][2][3][4]. Technical and procedural advances have improved outcome and safety to such an extent that catheter ablation is now recommended as a primary treatment option for outflow tract PVCs [5]. In light of these developments, the role of catheter ablation in clinical practice is expected to increase in the near future. To guide the catheter for mapping and ablation, three-dimensional electroanatomic mapping of PVCs is a routine method during interventional treatment [6]. During PVCs, the ectopic origin of myocardial activation results in an asymmetrical contraction sequence, commencing during the diastolic phase of the cardiac cycle. Depending on the contraction sequence and the time of PVC onset, a spatial shift of myocardial tissue occurs during PVC compared to its location during normal sinus rhythm, as first described by Andreu et al. [7]. In conventional activation mapping, the location of each mapping point is recorded after the predefined pattern in the surface ECG has been matched to the PVC morphology. It therefore represents the catheter position after complete myocardial activation. Since catheter ablation is usually performed during sinus rhythm, this phenomenon can lead to imprecise localization of ablation targets. Correcting for this shift may facilitate the precise localization of the origin of PVCs and therefore allow radiofrequency current (RFC) impulses to be directed with higher accuracy.
A novel mapping software tool (CARTO III, Software Version 7 Carto Prime, Biosense Webster) integrates an algorithm for correction of the aforementioned shift: for each registered PVC, the electrogram during the ectopic beat is paired with the location of the preceding sinus beat. This novel mapping algorithm is referred to as "hybrid mapping" [8]. Recently, first clinical results have been published for mapping and ablation with correction for the spatial displacement [8,9]. However, more data are needed to evaluate the extent and underlying mechanisms of the myocardial shift. Patient-specific factors and anatomical characteristics of different myocardial areas might affect spatial displacement. The aim of this study is to describe the extent and influencing factors of spatial displacement in hybrid activation mapping of PVCs. Patient selection For this monocentric study, all patients who underwent catheter ablation for PVCs between April 2019 and April 2020 using the novel mapping feature were analyzed. Procedures in which pace mapping was used due to an insufficient intraprocedural incidence of PVCs were excluded. Clinical characteristics were obtained by review of the medical records and charts. Structural heart disease was defined as coronary artery disease leading to interventional treatment, history of myocarditis, significant valvular disease leading to ventricular dysfunction, dilated/hypertrophic cardiomyopathy or systemic disease with cardiac manifestation (e.g. sarcoidosis). Mapping and ablation Ablation procedures were performed under conscious sedation using propofol and fentanyl. Orciprenaline was administered to provoke PVCs during ablation procedures when required. Catheters were positioned via femoral venous and/or retrograde arterial access depending on the respective chamber of interest. Systemic heparinization to achieve an activated clotting time of 250-300 s was performed for left-sided procedures. Three-dimensional mapping was performed using CARTO III (Software Version 7 Carto Prime, Biosense Webster, Diamond Bar, CA, USA) with the integrated hybrid mapping module: both the clinical PVC and a normal sinus beat were saved as ECG patterns. When a PVC matching the predefined pattern was recorded, the local activation time (LAT) during the PVC was projected onto the catheter position recorded during the preceding sinus beat. In that way, catheter movement due to unphysiological contraction is corrected and the mapping point represents the position of the corresponding myocardium in sinus rhythm. The distance between the catheter location during PVC and the location during the preceding sinus beat was defined as spatial displacement. The threshold for matching a PVC to the predefined pattern was set to 98%. As reference for local activation time, a precordial lead with a well-defined, stable R-peak during PVCs was selected. The standard catheter for mapping and ablation was a 3.5 mm irrigated-tip catheter (Carto NaviStar ThermoCool, 8 French, D-Curve, Biosense Webster). Ablation was performed in the area of earliest activation using radiofrequency current with a power between 20 and 50 W depending on the target area. Adequate lesion formation was secured by monitoring of local impedance. Contact force sensing was not used routinely, as in our experience such catheters can show a slight but sometimes significant increase in catheter stiffness. Quantification of spatial displacement During the ablation procedure, we carefully reviewed the annotations to obtain correct data for spatial displacement.
Offline, all non-hybrid points and floating points were deleted in order to create a map composed exclusively of hybrid points. For each point, spatial displacement and PVC-coupling interval were analyzed. Additionally, the mean spatial displacement was calculated for all recorded hybrid points. In each map, the exact location of PVC origin, chamber of PVC origin, number of mapping points, number of hybrid mapping points and median spatial displacement were analyzed. The timing of PVC onset during the cardiac cycle was represented by the median PVC-coupling interval, as this period is most relevant for the acquisition of hybrid mapping points. Statistical analysis Descriptive statistics are presented as count and percentage for categorical and ordinal variables, as mean ± standard deviation for continuous variables if normally distributed, and as median (interquartile range) otherwise. To analyze the association between spatial displacement and potential influencing variables, linear regressions were calculated using a mixed effects model to account for repeat measurements (multiple maps) in several cases; a schematic sketch of this model specification is given below. For regression analyses, maps of the great cardiac vein and aorta were excluded. Demographic parameters, antiarrhythmic medication, mapped cardiac chamber, left-ventricular ejection fraction, origin of PVCs and median coupling interval were defined as fixed effects. Patient ID was defined as a random effect. The regression coefficient calculated in the linear mixed effects model was used to determine the change in spatial displacement depending on the change in the predictive variable. Two-sided p < 0.05 was considered statistically significant. The reported p-values are used as descriptive measures only. All statistical calculations were performed in IBM SPSS Version 26.0.0.0. Results Baseline parameters are shown in Table 1. Twenty-two patients were included in the study (55% male, age 63 ± 16 years). Twenty-four ablation procedures were performed using hybrid mapping. In two cases, a second ablation procedure was performed: one patient with cardiac sarcoidosis developed PVCs of a different morphology, which was treated via catheter ablation 9 months after the first procedure. Another patient showed PVCs originating close to the His bundle. After careful RFC application and initial suppression of PVCs in the first procedure, early recurrence of the targeted PVCs necessitated a second catheter ablation, which resulted in lasting suppression of PVCs. In all other cases, acute suppression of PVCs was achieved. In total, 40 three-dimensional maps containing 5798 hybrid points were analyzed. An example of the impact of the novel mapping modality is shown in Fig. 1. Twenty-four maps (60%) were recorded in the left ventricle (LV), 12 maps (30%) in the right ventricle (RV), 2 maps (5%) in the great cardiac vein and 2 maps (5%) in the proximal aorta. In 12 procedures (50%), the origin of the mapped PVC morphology was confirmed in the ventricular outflow tracts (6 RVOT, 6 LVOT; outflow-tract PVCs). Non-outflow tract PVCs originated from the LV in 9 cases (39%; high LV septum: 3, inferoseptal LV: 2, free LV wall: 1, posteromedial papillary muscle: 1, posterior mitral annulus: 1, septal LV: 1) and from the basal RV in 1 case. Two patients (8%) showed PVCs with an epicardial origin, which were treated by ablation via the great cardiac vein. Mean correction for spatial displacement was 3.8 ± 1.5 mm for all mapping points. Median PVC-coupling interval was 552 ms (IQR 83 ms).
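Returning to the statistical approach described in the Methods, the following sketch shows how such a linear mixed-effects specification (fixed effects for patient and map covariates, a random intercept per patient) might look in code. It is not the study's actual analysis, which was run in SPSS; the data here are synthetic and the column names are hypothetical.

```python
# Schematic mixed-effects model (synthetic data): displacement ~ fixed effects,
# with a random intercept per patient to account for repeated maps per patient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_maps = 40
df = pd.DataFrame({
    "patient_id": rng.integers(0, 22, n_maps),
    "age": rng.integers(40, 85, n_maps),
    "coupling_interval": rng.normal(552, 40, n_maps),
    "outflow_tract": rng.integers(0, 2, n_maps),      # 1 = RVOT/LVOT origin
})
df["displacement_mm"] = (4.5 - 1.3 * df["outflow_tract"]
                         + rng.normal(0, 1.0, n_maps))  # toy effect sizes only

model = smf.mixedlm("displacement_mm ~ age + coupling_interval + outflow_tract",
                    data=df, groups=df["patient_id"])
print(model.fit().summary())
```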
On average, the spatial displacement of mapping points was 1.3 ± 0.4 mm larger for non-outflow-tract PVCs (4.5 ± 1.5 mm vs. 3.2 ± 1.2 mm; p = 0.045 in the linear mixed effects model). The impact of the site of PVC origin on spatial displacement is shown in Fig. 2. A slight tendency towards larger displacement values with a longer PVC coupling interval was observed (p = 0.252). Patient age, BMI, sex, antiarrhythmic medication, left-ventricular ejection fraction and the mapped cardiac chamber had no influence on spatial displacement (Table 2). The spatial displacement had no effect on total ablation energy or procedure time. To evaluate the influence of a mapping point's exact location on its spatial displacement, a subanalysis of three anatomically complete maps was performed. We chose PVCs with a similar area of earliest activation to minimize the effect of PVC origin on spatial displacement. The maps comprised 1310 hybrid mapping points. Two maps were obtained from the right ventricle and one map from the left ventricle. All patients suffered from structural heart disease (dilated cardiomyopathy/ischemic cardiomyopathy) and showed PVCs originating from the septal myocardium. The results are shown in Table 3. In total, we observed a comparatively low median spatial displacement of 2.9 (IQR 1.9-4.1) mm for all mapping points. There were no significant differences in spatial displacement between the outflow tracts and the ventricle in general (p = 0.96). However, we observed significant differences between distinct locations within the ventricular myocardium. The displacement was significantly larger for points acquired in the RV free wall and the LV free wall (Table 3). Conversely, significantly lower values for spatial displacement were recorded for the inferior and septal ventricular walls. While a trend towards more displacement in the LVOT was observed, the small number of points (n = 27) does not allow a conclusive statement. Discussion For catheter ablation of PVCs, accurate mapping is essential for correct localization of ablation targets and efficient suppression of PVCs. Even small inaccuracies in mapping might lead to unsuccessful ablation attempts, which may induce edema in the myocardial area of interest and complicate further ablation attempts [10]. The main finding of this study was that the spatial displacement of mapping points, determined by movement of the catheter tip during PVC compared to the preceding sinus beat, is larger in activation mapping of non-outflow tract PVCs compared to outflow tract PVCs. This has implications for mapping procedures targeting, e.g., the Purkinje system and the LV wall including the papillary muscles. Hybrid activation mapping is a novel tool to correct for spatial displacement of the mapping catheter during premature ventricular contractions (PVC). First clinical results have been published [8], but more data are needed to thoroughly assess the myocardial displacement and the factors influencing it. Our results show that the average mapping point is displaced by ca. 4 mm, with a 33% larger displacement for non-outflow tract PVCs. The extent of spatial displacement observed in our study is supported by the observation of Steyers et al. in their study of 21 patients and 606 mapping points. The authors found a mean displacement of 4.4 ± 2.4 mm [9]. Andreu et al. described a median spatial displacement of 9.42 mm (IQR 6.19-12.85) in a study including 55 patients, analyzing 6923 mapping points [7].
In that study, the authors manually corrected for the shift by re-annotating each point to the location of its preceding sinus rhythm beat. De Potter et al. found a mean spatial displacement of 8.9 ± 5.5 mm in a small sample of 127 hybrid points using automatic correction for spatial displacement [8]. The cranial portions of the heart containing the outflow tracts and the valvular plane are more fixed by adjoining structures, such as the great vessels, in comparison to non-outflow-tract myocardium, which is surrounded by the pericardial space [7]. Their physiological displacement is less prominent than in the apical portions or the free wall of the ventricles. This might be a factor limiting the extent of spatial displacement in PVCs originating from the ventricular outflow tracts, especially in patients with preserved ventricular function. Additionally, parts of the outflow tracts are located close to the AV node and His bundle. Ectopic electrical activity might therefore enter the His-Purkinje system earlier than in non-outflow tract PVCs. The resulting myocardial contraction might resemble the physiological contraction more closely, contributing to a smaller spatial displacement. Similarly, small differences in spatial displacement between mapping points located in different myocardial areas were observed in an earlier study [7]. However, the authors differentiated between the locations of mapping points, while we differentiated between different areas of PVC origin. The aforementioned anatomical conditions for the outflow tracts might play a role in both findings. However, an earlier entry of electrical activity into the His-Purkinje system depends on the PVC origin, not solely on the location of mapping points. Therefore, the results are not entirely comparable. Reported data concerning influencing factors for spatial displacement are heterogeneous. Since movement of the myocardium during diastolic filling is expected, the phase of the cardiac cycle at which the point is acquired during PVC and in sinus rhythm, which depends on the coupling interval, represents an important factor in our analysis. Steyers et al. reported a trend towards larger displacement with a lower PVC-coupling interval that did not reach statistical significance, while this correlation reached statistical significance in the work published by Andreu et al. [7,9]. In contrast, we saw a slight trend towards larger displacement for PVCs with a longer coupling interval. The difference in the reported results could be explained by disparities in the patient collectives: the vast majority of patients in the study of Andreu et al. had PVCs originating from the ventricular outflow tracts [7], while the origin of PVCs was distributed almost equally between the ventricular outflow tracts and non-outflow tract myocardium in our study population. Since the ventricular outflow tracts are more fixed by adjoining vessels, inadequate ventricular filling associated with a shorter coupling interval might be more relevant for the spatial displacement of PVCs originating in the cranial portions of the heart [7]. With a short coupling interval, the ectopic activity commences early in diastole with a short time for relaxation and ventricular filling after the preceding sinus beat, leading to a lower stroke volume and inotropy for the premature beat.
It is therefore conceivable that in our population with more non-outflow-tract PVCs, the better ventricular filling and inotropy associated with a longer coupling interval might accentuate the spatial displacement as one can expect a more unphysiological contraction sequence in non-outflow-tract origins. The previously reported influence of the mapped heart chamber on spatial displacement [7] could neither be confirmed by our study nor by Steyers et al. [9]. Again, this discrepancy might be explained by differences in the study populations: PVCs originating outside the outflow tracts are associated with structural heart disease [11], which was more prevalent in the left ventricle in our study population. That these PVCs showed significantly more displacement, might balance out the association between spatial displacement and mapped chamber previously reported for PVCs originating around the valvular plane [7]. The subanalysis yields interesting results that nicely demonstrate the multifactorial genesis of the spatial displacement: The free walls of both ventricles are prone to a wider displacement, which may be explained by the fact that they are not fixed to adjoining structures and 1.9-4.0 0.14 therefore have a low resistance to movement. The inferior wall and the ventricular outflow tracts are more constrained by large vessels and fixation on the diaphragm, leading to a lower spatial displacement during a PVC. However, the comparatively low displacement values for these maps in general cannot solely be explained by cardiac anatomy. The fact that all maps showed a septal PVC-origin is in line with our considerations that the point of origin also has an influence on spatial displacement: More constrained locations of origin close to the His-purkinje-system show lower displacement in general. Although first clinical results show the efficacy of catheter ablation using hybrid mapping [8,9,12], a clinical benefit compared to conventional activation mapping is yet to be demonstrated. Since standard ablation catheters have a 3-to 4-mm tip, it seems plausible that a correction for spatial displacement might improve the accuracy of RFC-delivery in a clinically significant extent. According to our results this might be especially relevant for nonoutflow tract PVCs, as those showed a significantly larger displacement. With broader use of the novel mapping software, we can expect to learn more about the spatial displacement of mapping points in PVCs. A randomized, controlled study comparing results of ablation procedures with hybrid activation mapping against procedures with conventional activation mapping would be desirable to evaluate the potential clinical benefit of the novel mapping feature. Limitations In some maps, singular points with a very high value for spatial displacement were observed. Since even small catheter displacement is supposed to be recorded, stability filters are disabled during hybrid mapping. Therefore, movement of the catheter between PVC and the preceding sinus beat may lead to false-high values for spatial displacement. Although we carefully checked all annotations during the procedure, singular false-high measurements for spatial displacement cannot be completely ruled out. Since we used median values for our analyses, we believe this bias to be of minor relevance for our results. Secondly, even though our analysis covered a total of 5798 individual mapping points, the studied patient population of 22 constitutes a small sample size. 
Thirdly, all analyzed data was recorded using the CARTO system. Although the described spatial displacement should be detectable irrespective of the mapping system, small differences between the platforms cannot be ruled out. There are several possible factors influencing the spatial displacement of myocardium during PVCs. Whether changes in diastole or systole are the dominant factors cannot be answered conclusively by the here presented study. Lastly, without a control group, the effect of spatial displacement on clinical outcome cannot be evaluated with certainty based on this study. Conclusion Ectopic activation of the ventricular myocardium during PVCs results in a spatial displacement of mapping points. We observed a mean spatial displacement of 3.8 ± 1.5 mm that was dependent on the location of PVCorigin: mapping points of PVCs with non-outflow-tract origin showed a larger displacement than PVCs originating from the outflow tracts. The correction for spatial displacement may help to improve accuracy of RFC-delivery in catheter ablation of PVCs. Our results suggest that this is especially relevant for PVCs with non-outflow-tract origin.
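As a point of reference for the analysis described above, the displacement metric (catheter-tip movement between the PVC and the preceding sinus beat) and the mixed-model comparison between outflow-tract and non-outflow-tract origins can be outlined in a short sketch. This is a minimal illustration rather than the study's actual pipeline; the data frame, column names (patient, origin, coordinate fields) and all numerical values are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def spatial_displacement(p_pvc, p_sinus):
    # Euclidean distance (mm) between the catheter-tip position at the PVC
    # and at the preceding sinus beat, computed per mapping point.
    return np.linalg.norm(p_pvc - p_sinus, axis=1)

# Hypothetical data set: 20 patients, 50 mapping points each, with slightly
# larger displacement for non-outflow-tract origins (as reported above).
rows = []
for pid in range(20):
    origin = "non_outflow_tract" if pid % 2 else "outflow_tract"
    spread = 4.5 if origin == "non_outflow_tract" else 3.2
    p_sr = rng.normal(0.0, 10.0, size=(50, 3))
    p_pvc = p_sr + rng.normal(0.0, spread / np.sqrt(3.0), size=(50, 3))
    rows.append(pd.DataFrame({
        "patient": f"P{pid:02d}",
        "origin": origin,
        "displacement": spatial_displacement(p_pvc, p_sr),
    }))
points = pd.concat(rows, ignore_index=True)

# Linear mixed-effects model: fixed effect of PVC origin, random intercept per patient.
fit = smf.mixedlm("displacement ~ origin", points, groups=points["patient"]).fit()
print(fit.summary())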
4,523.4
2021-06-16T00:00:00.000
[ "Medicine", "Biology" ]
Gut dysbiosis and bacterial translocation in the aneurysmal wall and blood in patients with abdominal aortic aneurysm Inflammation plays a part in the development of abdominal aortic aneurysm (AAA), and the gut microbiota affects host inflammation by bacterial translocation. The relationship between abdominal aortic aneurysm and the gut microbiota remains unknown. This study aimed to detect bacterial translocation in the aneurysmal wall and blood of patients with abdominal aortic aneurysm, and to investigate the effect of the gut microbiota on abdominal aortic aneurysm. We investigated 30 patients with abdominal aortic aneurysm from 2017 to 2019. We analysed the aneurysmal wall and blood using highly sensitive reverse transcription-quantitative polymerase chain reaction, and the gut microbiota was investigated using next-generation sequencing. In the 30 patients, bacteria were detected by reverse transcription- quantitative polymerase chain reaction in 19 blood samples (detection rate, 63%) and in 11 aneurysmal wall samples (detection rate, 37%). In the gut microbiota analysis, the Firmicutes/Bacteroidetes ratio was increased. The neutrophil-lymphocyte ratio was higher (2.94 ± 1.77 vs 1.96 ± 0.61, P < 0.05) and the lymphocyte-monocyte ratio was lower (4.02 ± 1.25 vs 5.86 ± 1.38, P < 0.01) in the bacterial carrier group than in the bacterial non-carrier group in blood samples. The volume of intraluminal thrombus was significantly higher in the bacterial carrier group than in the bacterial non-carrier group in aneurysmal wall samples (64.0% vs 34.7%, P < 0.05). We confirmed gut dysbiosis and bacterial translocation to the blood and aneurysmal wall in patients with abdominal aortic aneurysm. There appears to be a relationship between the gut microbiota and abdominal aortic aneurysm. Introduction Abdominal aortic aneurysm (AAA) is one of the most common aortic diseases, and the rupture of AAA is an important cause of death. AAAs are generally asymptomatic, and the mortality rate in patients with ruptured AAAs is approximately 75% [1]. The risk factors for AAA include smoking, male sex, age, and hypertension [2,3]. There is only invasive treatment for a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 AAA, such as open repair or endovascular repair. Medical treatments for AAA have not been developed. There are several causes of the development of AAA, such as atherosclerosis, and infectious and inflammatory diseases. Previous studies have suggested pathophysiological mechanisms of the development and progression of AAA, such as atherosclerosis, degeneration of connective tissue, an effect of inflammatory cells (e.g., lymphomonocytes and macrophages), and the role of matrix metalloproteinases [4]. However, an obvious mechanism of AAA remains unknown. Inflammation may play a part in the development of AAA in animal models. Inflammatory cell types and markers have also been detected in human AAA [5]. An inflammatory reaction causes degeneration of collagen and elastic fibres of the aortic wall, which is an important characteristic of AAA lesions [4]. The gut microbiota affects host inflammation and the immune system [6]. Recent studies have suggested an association between the gut microbiota and various diseases, such as cardiovascular diseases [7,8]. Bacterial translocation (BT) is an important aspect involved in the association of the gut microbiota with various diseases. 
BT is defined as the passage of viable bacteria from the gastrointestinal tract to extraintestinal sites, and BT can be the cause of sepsis and organ dysfunction [9]. Examples of BT include the detection of gut bacteria in blood of patients with type 2 diabetes, which is relevant to AAA, and its relation to insulin resistance, and the detection of bacteria in atherosclerotic lesions of patients with coronary heart disease [10,11]. Although the relationship between AAA and the gut microbiota has previously been reported [12], there have been no reports on BT in patients with AAA. This study aimed to detect BT in the aneurysmal wall and blood of patients with AAA using the highly sensitive reverse transcription-quantitative polymerase chain reaction (RT-qPCR) method, and to investigate the effect of the gut microbiota on AAA. Patients In this study, performed a census survey conducted at Kyushu University Hospital between 2017 and 2019. Thirty patients with AAA who had open repair performed at our hospital and National Hospital Organization Kyushu Medical Center between May 2017 and April 2019 were enrolled in this study. The sample size was determined as 30 because approximately 30 patients with AAA undergo surgery each year in our hospital. All patients provided written informed consent prior to enrolment in the study. Patients who underwent surgery for ruptured AAA, impending ruptured AAA, or inflammatory AAA were excluded. We also excluded the patients who did not consent to inclusion in this study. The study protocol was approved by the institutional review board of Kyushu University (approval number: 29-74). This research complies with the Declaration of Helsinki. RNA extraction from aneurysmal wall, thrombus, and blood samples After adding the RNAprotect Bacteria Reagent nine times, aneurysmal wall and thrombus samples were homogenised and suspended. Blood samples stored in the RNAprotect Bacteria Reagent were centrifuged (14000 × g, 10 min), and the pellet was used for RNA extraction. RNA extraction from aneurysmal wall, thrombus, and blood samples was performed by modified methods as previously described [13,14]. Briefly, RNA was isolated using a modified acidic guanidinium thiocyanate-phenol-chloroform extraction method. Samples were resuspended with 346.5 μL of RLT lysis buffer (Qiagen, Hilden, Germany), 3.5 μL of β-mercaptoethanol, and 100 μL of Tris-EDTA buffer, and 300 mg of glass beads (diameter, 0.1 mm) were added to the suspension. The mixture was then vortexed for 5 min using a FastPrep FP 120 (MP Biomedicals, Irvine, CA, USA) at a power level of 5.0, and 500 μL of acid phenol was added and the mixture was incubated. After 100 μL of chloroform-isoamyl alcohol was added and centrifugated, the supernatant was collected and subjected to isopropanol precipitation. Finally, the nucleic acid fraction was suspended in nuclease-free water. RT-qPCR The detection of bacteria was performed using the sensitive bacterial ribosomal RNA (rRNA)targeted RT-qPCR method as previously described [13]. The RT-qPCR analysis was conducted using a Qiagen OneStep RT-PCR kit. Group-or species-specific primers were used for RT-qPCR. Twenty-two types of target bacteria are shown in the supporting information online (S1 Table). These bacteria cover � 70% of the entire bacterial populations in healthy adults' faeces. The results of RT-qPCR were assessed by a calibration curve, which was obtained from the corresponding number of bacteria. 
A standard curve was generated with the RT-qPCR data using the threshold cycle value for dilution series of the reference strains. Threshold cycle values of RNA extracted from the samples were applied to the standard curve to obtain the corresponding number of bacterial cells in a sample. DNA extraction from faecal samples DNA extraction from faecal samples was performed using methods described previously with slight modifications [13]. Briefly, faecal samples were suspended to 10 times with phosphatebuffered saline, and 200 μL of the suspensions were re-suspended in 250 μL of extraction buffer (200 mM Tris-HCl, 80 mM EDTA, pH 9.0) and 50 μL of 10% sodium dodecyl sulphate. A total of 300 mg of glass beads (diameter, 0.1 mm) and 500 μL of Tris-EDTA buffer-saturated phenol (Nacalai Tesque, Inc., Kyoto, Japan) were added to the suspension, and the mixture was vortexed for 60 s by using a FastPrep FP 120 (MP Biomedicals, Irvine, CA, USA) at a power level of 5.0. After centrifugation (20380 × g for 5 min), 400 μL of the supernatant was collected. Phenol-chloroform extractions were performed, and 250 μL of the supernatant was subjected to isopropanol precipitation. Finally, the DNA was suspended in 1 mL of TE buffer (10 mM Tris-HCl, 1 mM EDTA, pH 8.0). Gut microbiota analysis The V3-V4 region of the 16S rRNA gene was amplified by PCR, and amplicons were sequenced using Miseq (Illumina, Inc., San Diego, CA, USA), as previously described [15]. Sequencing data were then processed, and diversity trends were analysed using QIIME (www.qiime.org). Microbial α-diversity was evaluated by calculating the Shannon index and Chao1. Alpha diversity is described in terms of the richness or evenness. The Shannon index shows both richness and evenness of the species. Chao1 assesses the number of species in a community and represents richness. Gut dysbiosis was assessed by the ratio of the phyla Firmicutes/Bacteroidetes (F/B). Data analysis We measured the neutrophil-lymphocyte ratio (NLR) and the lymphocyte-monocyte ratio (LMR) as indicators of inflammation from the peripheral blood cell count 2 days before the operation. The NLR and LMR were calculated from the absolute value of blood neutrophils, lymphocytes, and monocytes. The intraluminal thrombus thickness was determined at the point of maximal thickness. Which was measured in a slice of a preoperative contrast computed tomography (CT) scan, with the maximal aneurysmal diameter determined by axial imaging. The intraluminal thrombus volume was obtained by the ratio of intraluminal thrombus and the aneurysmal lumen at the same slice of contrast CT scan imaging. Statistical analysis Categorical variables were assessed using Fisher's exact test. Continuous variables were assessed using Student's t-test, the paired t-test, or the Mann-Whitney U-test. A P value of < 0.05 was considered statistically significant. Statistical analysis was performed using the JMP software program, version 14.0 (SAS Institute, Inc., Cary, NC, USA). Characteristics of the patients Sixty-nine surgical aortic repairs were performed, and 30 (43%) patients were included in this study. Tables 1 and 2 show the baseline characteristics of these patients. The median age of the patients was 66.9 years (range, 44-88 years), and most (93%) of the patients were men and had a smoking history (93%). We found that 10% of patients took intestinal drugs. No patients had antibiotics preoperatively. Gut microbiota analysis We analysed the gut microbiota from faecal samples of the patients. 
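The quantitative steps described in this section can be summarised in a short sketch: conversion of threshold-cycle values to bacterial cell counts via the dilution-series standard curve, the F/B ratio used as the dysbiosis marker, and the NLR/LMR inflammation indices. This is an illustrative outline only; the function and variable names are hypothetical and the numerical inputs are placeholders, not study data.

import numpy as np

def cells_from_ct(ct_sample, ct_dilutions, cells_dilutions):
    # Standard curve: the threshold cycle (Ct) is linear in log10(cell count) for the
    # dilution series of a reference strain; invert the fit for an unknown sample.
    slope, intercept = np.polyfit(np.log10(cells_dilutions), ct_dilutions, 1)
    return 10.0 ** ((ct_sample - intercept) / slope)

def firmicutes_bacteroidetes_ratio(relative_abundance):
    # Gut dysbiosis marker: ratio of the two dominant phyla (relative abundances in %).
    return relative_abundance["Firmicutes"] / relative_abundance["Bacteroidetes"]

def inflammation_indices(neutrophils, lymphocytes, monocytes):
    # Neutrophil-lymphocyte ratio (NLR) and lymphocyte-monocyte ratio (LMR)
    # from absolute peripheral blood counts.
    return neutrophils / lymphocytes, lymphocytes / monocytes

# Placeholder usage
print(cells_from_ct(ct_sample=28.0,
                    ct_dilutions=np.array([18.0, 21.3, 24.6, 27.9, 31.2]),
                    cells_dilutions=np.array([1e6, 1e5, 1e4, 1e3, 1e2])))
print(firmicutes_bacteroidetes_ratio({"Firmicutes": 85.0, "Bacteroidetes": 3.0}))
print(inflammation_indices(neutrophils=4.2e3, lymphocytes=1.5e3, monocytes=0.4e3))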
The relative abundance of the phyla is shown in S1 Fig. The median of the Shannon index and Chao1 of all patients were 6.2 (range: 4.5-7.6; Fig 1A) and 2545 (range: 1143-4617; Fig 1B), respectively. The median abundance of the phylum Bacteroidetes abundance was 3.0% and the F/B ratio was 39.7 ( Fig 1C), which indicated that the gut microbiota of patients with AAA was disturbed. Detection of bacteria by RT-qPCR We collected aneurysmal wall and blood samples from all 30 participants. Thrombus samples were assessed in eight patients. In 19 blood samples (detection rate, 63%) and in 11 aneurysmal wall samples (detection rate, 37%), bacteria were detected by RT-qPCR (Tables 3 and 4). No bacteria were detected in the eight intraluminal thrombus samples. We assessed 22 primers of bacteria (S1 Table). Eight types of bacteria were detected from 19 blood samples (Table 3). The most common type of bacteria was Streptococcus, which was detected in eight blood samples (Table 3). No Enterococcus species was detected in blood samples (Table 3). Eleven types of bacteria were detected from 11 aneurysmal wall samples ( Table 4). The most common type of bacteria was Staphylococcus, which was detected in seven aneurysmal wall samples (Table 4). Enterobacteriaceae was detected in four aneurysmal wall samples (Table 4). Bacteria were detected from blood and aneurysmal wall samples in eight patients. Among these eight patients, the types of bacteria were different between blood and the aortic wall in six patients. These findings indicated the diversity of BT and an association between the gut microbiota and aneurysmal wall. Comparison of characteristics of the patients with or without bacteria We next examined the association between bacterial carriage and the patients' characteristics (Tables 5 and 6). In patients who had bacteria detected in blood, there were significantly higher neutrophil (P = 0.03) and monocyte counts (P = 0.02, Table 5), but a lower lymphocyte count, in the bacterial carrier group than in the bacterial non-carrier group (P = 0.04, Table 5). Consequently, in blood, the NLR was higher (P = 0.04, Table 5) and the LMR was lower (P<0.01, Table 5) in the bacterial carrier group than in the bacterial non-carrier group. In patients who had bacteria detected in the aneurysmal wall, there was a significantly higher volume of intraluminal thrombus in the bacterial carrier group than in the bacterial non-carrier group (P = 0.04, Table 6). Additionally, in these patients, the median intraluminal thrombus volume in the bacterial carrier group was 64.0% (range, 6%-77%) and that in the bacterial non-carrier group was 34.7% (range, 0%-76%). Discussion In this study, we found gut dysbiosis in patients with AAA, with a decrease in the abundance of the phylum Bacteroidetes and an increase in the F/B ratio. Other previous reports evaluated the gut microbiota in patients with atherosclerotic diseases as follows [16][17][18]. Emoto et al. reported that the F/B ratio was higher in patients with coronary artery disease (F/B ratio: 1.6 ± 1.0) than in age-and sex-matched controls with no coronary artery disease (F/B ratio: 1.3 ± 2.0) and healthy volunteers (F/B ratio: 1.1 ± 1.4) [16]. Szabo et al. showed that an increased F/B ratio was associated with an increased carotid intima-media thickness (mean F/B ratio of intima-media thickness > 0.9 vs intima-media thickness < 0.9 groups: 2.299 vs 1.436, P = 0.031) [17]. 
Additionally, gavage of some species of Bacteroidetes prevented the formation of atherosclerotic plaques [18]. Another study showed that mice with more atherosclerotic plaques had a higher F/B ratio [19]. Atherosclerosis appears to be related to the progression of AAA. Our results are consistent with these studies, and they suggest that an increase in the F/B ratio in the gut microbiota is an important aspect of the development of atherosclerosis and AAA. To the best of our knowledge, this is the first study to detect bacteria from the aneurysmal wall and blood using the RT-qPCR method and to confirm BT in patients with AAA. Our highly sensitive RT-qPCR method enabled the detection of the presence of bacteria in the AAA wall and in blood of patients with AAA. RT-qPCR used in this study followed the method for targeting rRNA molecules that was developed by Matsuda et al. [13].
Table 3. Results of RT-qPCR-positive blood samples.
rRNA is a universal constituent of bacterial ribosomes and shows high copy numbers in a single bacterial cell. The rRNA-targeted RT-qPCR is 100- to 1000-fold more sensitive than conventional PCR targeting DNA. A study reported that, in blood samples of patients with neutropenia and fever, the bacterial detection rate (69.6%) by bacterial rRNA-targeted RT-PCR was higher than that by blood culture (17.4%, P < 0.001) [14]. This finding suggested the usefulness and reliability of this method. Moreover, Sato et al. reported that the detection rate of bacteria in blood samples was 28% (14/50) in patients with type 2 diabetes and 4% (2/50) in controls by a similar method [10]. The detection rates in our study were 63% in blood and 37% in the aneurysmal wall. Although 8 (28%) of our 30 patients had diabetes, our detection rates were remarkably high. This finding suggested an association between bacterial detection and AAA. The types of bacteria detected from aneurysmal wall samples were the Clostridium coccoides group, Clostridium leptum subgroup, genus Bifidobacterium, Lactobacillus gasseri subgroup (Lactobacillus), Lactobacillus ruminis subgroup (Liquorilactobacillus and Ligilactobacillus), Atopobium cluster, genus Prevotella, Enterobacteriaceae (former taxonomic nomenclature), genus Staphylococcus, genus Streptococcus, and Clostridium perfringens. Detection of this wide range of typical gut microbiota suggested that nonspecific bacteria translocated to the blood or aneurysmal wall, and that commensal bacteria of the skin or oral cavity were unlikely to have contaminated the samples. The former five types of bacteria include intestinal resident bacteria and some of them are used as probiotics. There have been few clinical cases in which these five bacteria caused adverse events, and only several reports relating a relative reduction of these bacteria in the gut microbiota to the development of inflammatory diseases or describing their anti-inflammatory effects [20][21][22][23][24]. In contrast, the latter six types of bacteria, which were detected at a higher frequency than the former five types, are often pathogenic. Sato et al. reported that Atopobium cluster was detected at a significantly higher rate in patients with type 2 diabetes than in controls (detection rate: 14% vs 0%, P < 0.05) [10].
In our study, Atopobium cluster was detected from three blood samples of patients with AAA, and two of them had type 2 diabetes. The detection rate of Atopobium cluster in all eight patients with type 2 diabetes was 25%, which appeared to be a reasonable result compared with Sato et al.'s report [10]. The detection of Atopobium cluster from blood samples of patients with type 2 diabetes may be affected by that disease. In addition, an association between Atopobium and tuboovarian abscess or bacterial vaginosis has been suggested [25,26]. Prevotella is related to inflammatory periodontal diseases [27], and Enterobacteriaceae causes enteric infection. Staphylococcus produces enterotoxin and is one of the most common types of bacteria that result in food poisoning [28]. Streptococcus is associated with tonsillitis and acute glomerulonephritis. Clostridium perfringens causes gas gangrene. These bacteria also cause the development of an inflammatory response in various sites in vivo. Regarding the progression of AAA, previous studies have shown that an inflammatory response in the arterial wall plays an important role [29]. In this study, we found bacteria in the blood and aneurysmal wall. Blood samples in the bacterial carrier group showed a high differential count of neutrophils and monocytes, high NLR, and low LMR. The NLR and LMR are objective parameters, which indicate an inflammatory response, and are predictors of various diseases [30][31][32]. In vascular disorders, the LMR is related to the severity of coronary artery disease [33]. Xie et al. [34] suggested that a low LMR indicates a greater inflammatory response in the aortic wall, and that patients with thoracic aortic aneurysms with a high LMR are more likely to have type I endoleak during thoracic endovascular aortic repair. The present study showed that there was an association between detected bacteria and inflammation in the blood. In addition, bacterial detection in the aneurysmal wall suggested that these bacteria affect the progression of AAA by involving inflammation in aneurysmal wall. Although there have been many studies regarding the role of intraluminal thrombus in the progression and rupture of AAA, many details of this process are still unclear. The development of intraluminal thrombus is mainly caused from the activation of platelets associated with a turbulent and stagnant blood flow [35], and there is a potential benefit of antiplatelet treatment of medium-sized AAAs [36]. Haller et al. reported that intraluminal thrombus of AAA might be a marker of aortic wall weakening and it was associated with early rupture [37]. In our study, the intraluminal thrombus volume tended to be higher in the bacterial carrier group than in the bacterial non-carrier group in aneurysmal wall samples. However, the rate of antiplatelet medication was not different between the groups. The presence of bacteria in the aneurysmal wall may contribute to the progression of intraluminal thrombus and affect early rupture. There are several limitations to this study that should be considered. First, the number of patients was small. Therefore, additional larger studies are required to verify our findings. Second, we could not obtain samples from patients without AAA because this was a census study of patients with AAA only. Non-AAA controls were not included in this study. 
Furthermore, collecting the aortic wall of patients without AAA was impossible because only patients with AAA underwent aortic surgery in the hospital and centre included in this study. Third, the primers for RT-qPCR in this study did not cover all bacterial strains. Accordingly, other bacteria not targeted in this study might be important. However, the types of primers used in our examination appear to be satisfactory. Conclusion In conclusion, this study shows gut dysbiosis and bacterial translocation to the blood and aneurysmal wall in patients with AAA. Our findings suggest a relationship between the gut microbiota and AAA. The next step in this research protocol is to determine the localization of detected bacteria by using fluorescence in situ hybridization method. Additionally, further analyses are required to investigate the specific association of translocated bacteria and inflammation in the aneurysmal wall and blood. Furthermore, intervention of the gut microbiota may contribute to preventing the progression of AAA in the future.
4,695.6
2022-12-14T00:00:00.000
[ "Medicine", "Biology" ]
Utilisation of plasma centrifuges for life support systems on Mars In this paper the possibility of utilising a plasma centrifuge for oxygen generation in outer space is discussed. It is proposed that a plasma centrifuge can not only create oxygen for human consumption very efficiently but is also able to produce useful by-products. Special emphasis is given to life support systems working in the atmosphere of Mars, where oxygen and carbon raw materials can be obtained directly from the atmosphere. The system under consideration in this work is a plasma centrifuge with axial circulation that contains a fully ionised plasma. Under these conditions the carbon dioxide from the Mars atmosphere will be entirely dissociated. Thus, the atomic oxygen and carbon can easily be separated. © G-Labs 2018 I. Introduction Using centrifugal force and plasma to dissociate and separate molecules from the Mars atmosphere or from CO 2 that is exhaled by humans, has been proposed by different authors over the last view years [1][2][3][4][5]. While these authors suggested classical gas centrifuges with a mechanical rotor, the work presented here will focus on the use of plasma centrifuges (PC). The physical principal of isotope separation with a plasma was experimentally demonstrated by Bonnevier in the 1970'ies [6]. Since this pioneering work PCs have been examined for partially [7,8] and fully ionised [9][10][11] gases. The most recent advancement in the field of PCs has been the introduction of an additional axial flow. It has been shown that a PC with such an additional circulation has a much higher separation power than conventional ones [12][13][14] or PCs with only radial separation. These papers by previous authors show that there are some limitations for conventional gas centrifuges that are no issue by PCs. It is impossible, for example, to separate isotopes with a low vapour pressure in classical centrifuges. Hence, the PC separation of calcium isotopes was put forward in Ref. [14]. Another issue of purely mechanical centrifuges is the high cost connected with the separation of isotopes of light noble gases such as helium or neon. PCs have yet another additional feature that has to the knowledge of the author not been discussed before. They are able to turn at least a part of the separated gas particles into useful byproducts via chemical vapour deposition (CVD) or plasma enhanced chemical vapour deposition (PECVD). This is particularly the case for carbon containing gases such as CO and CO 2 , which can be used to deposit numerous carbon phases such as graphene, graphite or diamond. These deposition processes are used already frequently here on earth and it was proposed in Refs. [2,3] to use plasma and centrifugal forces to obtain oxygen and carbon based raw materials from the Mars atmosphere. The objective of this paper is to investigate the possibility of applying a PC instead of conventional centrifuges in this previously suggest life support technology. It shall also be noted that such a hybrid system of PC and deposition reactor can, in principle, not only be used for manned Mars missions but also for life support on long term space missions and here on Earth for coating/deposition technologies and for CO 2 sequestration from the Earth atmosphere. Another interesting field of application might be the usage of such a system for chemical treatment of exhaled air in submarines, where the need for compact air recycling systems is very high. II. Physical and Technical Parameters The following Fig. 
1 depicts an exemplary schematic drawing of a PC with axial flow separation. It has to be emphasised that in this paper the means of plasma creation (e.g. rf, microwave, etc.) are not considered. This paper is intended only as a starting point for the construction of plasma-technology-based life support systems and is focused on the calculation of the most important technical and physical parameters. The comparison between different mechanisms of plasma creation and the exact way of producing axial flows is left for future work. However, rf waves or high-frequency rotating magnetic fields for inducing axial flows have been proposed in previous work [12,13]. For this work the creation of axial flows by a travelling magnetic field as described in [13] is considered. For the following calculations we assume a fully ionised plasma in a cylindrical centrifuge chamber. This has the advantage that all molecules (i.e. CO2 from the Mars atmosphere) are fully dissociated into carbon and oxygen atoms, so that the content of the PC is a binary mixture. Furthermore, it is assumed that the axial separation flows are fully established and have reached a steady-state condition. The cylindrical PC has an inner radius R0 and a height denoted by L. The feed gas is introduced via a pipe with slits, which is situated at the central axis of the chamber. As shown by Borisevich and Potanin [13], the axial flows will transport the heavier species to the top of the vessel while the lighter species will accumulate at the bottom. Hence, the oxygen and carbon plasma flows are denoted by FO and FC, respectively. The concentration of the separated particles is written as c(0) for the lower part of the machine and c(L) at the ceiling. As oxygen has a higher molar mass M (~16 g/mol) than carbon (~12 g/mol), it can easily be extracted from the top of the PC vessel. Carbon, on the other hand, can be deposited on a temperature-controlled substrate at the bottom of the chamber. The temperature control is highly important since the substrate temperature determines which carbon phase is deposited. This enables the on-site production of a large variety of carbon materials, which will be extremely beneficial in future long-term space missions or even the colonisation of Mars. Since the latter is seriously considered by a growing number of companies, NGOs and governments, the PC might play an important role in such endeavours. The basic technical and physical parameters of such a system will be presented in this section. The first quantity to consider is the plasma rotation frequency Ω. If there is no diamagnetic drift, Ω is given by the relation of Ref. [12] (Eq. (1)), with Ωi and Ωe denoting the ion cyclotron and the ExB rotation frequency, respectively. They are defined via Ωi = qB/mi (2) and Ωe = E/(rB) (3), where q is the ion charge, B the magnetic flux density, E the electric field and r the radial position. The radial position is to be taken in the vicinity of the chamber wall; hence r = R0 = 0.25 m, according to [3]. Inserting an electric field of E = 5000 V/m and the carbon ion mass mC (12 u, i.e. about 1.99 x 10^-26 kg) yields the rotation frequencies plotted in Fig. 2. It is evident that for high magnetic field strength the curves converge, because the ion cyclotron frequency dominates and the resulting plasma rotation frequency becomes essentially independent of the ion mass. However, it has to be noted that the technically most relevant range of magnetic field strength is depicted in the inlay of Fig. 2, between 0 and 0.4 T, which yields a plasma rotation frequency of the order of 10^5 Hz. This high rotation frequency can be achieved by suitable radiofrequency waves.
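As a quick numerical check of the quantities just introduced, the two constituent frequencies can be evaluated directly for carbon and oxygen ions. The sketch below assumes the standard definitions of the ion cyclotron frequency, Ωi = qB/mi, and of the ExB rotation frequency, Ωe = E/(rB); combining them into the net rotation frequency Ω is left to the relation of Ref. [12]. The field, radius and ion-mass values are those quoted above, and all variable names are illustrative.

import numpy as np

Q = 1.602e-19                  # elementary charge [C]
U = 1.661e-27                  # atomic mass unit [kg]
E_FIELD = 5.0e3                # electric field [V/m], value used in the text
R0 = 0.25                      # radial position near the chamber wall [m]
M_C, M_O = 12.0 * U, 16.0 * U  # carbon and oxygen ion masses [kg]

B = np.array([0.1, 0.2, 0.33, 0.4])   # magnetic flux density [T]

omega_i_carbon = Q * B / M_C          # ion cyclotron frequency, carbon ions [rad/s]
omega_i_oxygen = Q * B / M_O          # ion cyclotron frequency, oxygen ions [rad/s]
omega_exb = E_FIELD / (R0 * B)        # E x B rotation frequency (mass independent) [rad/s]

for b, wc, wo, we in zip(B, omega_i_carbon, omega_i_oxygen, omega_exb):
    print(f"B = {b:4.2f} T  Omega_i(C) = {wc:9.3e}  Omega_i(O) = {wo:9.3e}  Omega_ExB = {we:9.3e}")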
As discussed by Fetterman and Fisch [12], the upper power limit for the excitation of such a wave is connected to the gas flow F into the PC via Eq. (4). It was pointed out in Ref. [2] that the oxygen consumption of an average human is about 4 m³ per day, which corresponds to a flow of 3000 sccm or 0.0315 g/s. Thus, a life support system for Mars missions based on a PC would require 1.7 kW of input power to induce a wave-driven plasma rotation in an exemplary magnetic field of 0.33 T. This has to be added to the energy requirements for the plasma generation itself, so that the total is considerably higher than the energy consumption of a classical centrifuge (about 4.5 kW [3]). At first glance this would seem disadvantageous, but one also has to take into account the maximum separation power of a PC compared to a regular mechanical one. This can be estimated by the expression given by Fetterman and Fisch [12] (Eq. (5)), in which Ti is the ion temperature in eV, ln(ND) is the natural logarithm of the Debye number ND, va = ωcentrifuge x R0 is the peripheral speed of the gas centrifuge and τi is the ion-ion collision time. The ion-ion collision time and the Debye number are calculated according to Ref. [15], where ne has to be taken in m^-3 while Te is in eV. The electron density and temperature as well as the ion temperature and mass are determined as follows: the Mars atmosphere has an average pressure of 636 Pa at 240 K [16], which corresponds to a neutral gas density of 1.9 x 10^23 m^-3. As a fully dissociated and ionised gas is assumed for our calculations, the charged-particle densities follow directly from this value. With these parameters inserted into Eq. (5), the ratio of the maximal separation powers is 22. This indicates that a single PC can replace 22 conventional centrifuges in terms of radial separation power. It also has to be noted that, due to the heating of the ions, the particle density will drop by about 2-3 orders of magnitude once the PC has reached steady-state operation; this is taken into account in the following considerations. For the following calculations the magnetic Reynolds number Rem and the dimensionless parameter χ will be needed. The former is defined via Rem = μ0σω0r², where μ0 is the vacuum permeability, σ is the plasma conductivity and ω0 is the magnetic field rotation frequency. χ is given by an expression in Ref. [14] involving the Kelvin functions ber0 and bei0. The plasma conductivity can be calculated from the electron density and the electron-electron collision frequency νee via a Drude-type relation, σ = ne e²/(me νee). For Rem < 1 the eddy current braking is similar to that of an induction motor, but if Rem > 30 much more energy is dissipated in the plasma and the eddy braking torque decreases much more slowly with increasing rotation speed than in an induction motor [17]. The critical radii for a PC chamber are depicted in Fig. 3 for high rotation frequencies of the magnetic field. A rotation frequency of 10^4 rad/s corresponds to a critical radius of 78.6 cm, while 5 x 10^4 rad/s corresponds to a critical radius of 35.5 cm. It is therefore evident that the higher ω0, the smaller the radius of the chamber. Hence, for an assumed Rem ~ 0.5, χ = 1.121. In accordance with Refs. [13,14] the radial and axial enrichment factors εr and εz are calculated from expressions in which ΔM is the difference in the molar masses, η is the dynamic viscosity of the working gas and ℜ is the ideal gas constant. The dynamic viscosity can be calculated as the product of the kinematic viscosity ξ and the particle density.
ξ is given as a function of the thermal velocity vt for a fully ionised plasma by the expression of Ref. [18]. In the following relations, G is the compressibility of the working gas, ρ* is the gas mass density at elevated temperature (1.4 x 10^-4 kg/m³, assumed to be constant over the whole volume in steady state), D is the self-diffusion coefficient of the working gas (1.56 x 10^-3 m²/s [3]), <…> denotes the mean value, y = r/R0 is a normalised radial position, ψ0 is a stream function and λ is a dimensionless parameter. Vz0 is defined by Eq. (20), in which f1 and f2 are dimensionless parameters as defined in [13], both of order 1. ω1 is the angular frequency of the travelling magnetic field, which is considered to be of the order of 10^4 rad/s, and kz = ω1/vph = 0.2 m^-1. The phase velocity of the wave is taken to be about 5 x 10^4 m/s for our example. With Eq. (20) this yields Vz0 = 589.1 m/s. Since we consider a fully ionised plasma with strong Coulomb interactions and spatially constant density, the compressibility will be G = 0 and <ρD> = ρD. If G vanishes, λ = 4/3 [13] and the second fraction on the r.h.s. of Eq. (18) converges to 2. Hence, the stream function from Eq. (19) takes the simplified form of Eq. (21). The resulting stream function as a function of the normalised radial position is shown in Fig. 4. For the present example this gives N1 = 1.26 x 10^-3 and N2 = 0.62. With these data the longitudinal enrichment factor for a PC with a length of 1 m becomes εz ~ 1.1 x 10^-2. This is below the radial enrichment factor in our example, but the value of εz can be improved considerably by increasing the ratio L/R0. This enables not only the efficient production of oxygen but also the creation of high-purity carbon phases as part of the process. Under optimal operating conditions, the logarithm of the separation factor, ln(α), can be obtained via the relation of Ref. [14] (Eq. (23)). In Eq. (23), N denotes the ratio N1/N2 as defined in [14]; in our case N = 2.3 x 10^-3. L1 = LP/R0, where LP is the length of the enriching part of the reactor. h = P/(<ρD>R0), where P denotes the product flow rate, i.e. the breathable oxygen mass flow created by the life support system. An exemplary calculation of the logarithm of the separation factor is shown in Fig. 5 for L1 = 5. Although ln(α) is rather small in this example, it has to be kept in mind that the calculations in this paper have been carried out for a life support system with a rather large radius and short height, as proposed in Refs. [2,3], in order to allow a direct comparison. Thus, it is emphasised that there is a lot of room for optimisation by choosing longer, thinner PC chambers, which will enhance the enrichment and separation coefficients considerably. III. Conclusion The usage of PCs for oxygen and carbon raw material creation, with a strong focus on Mars life support systems, has been presented in this work. It was demonstrated that this type of centrifuge has much higher separation and enrichment factors than classical, mechanical centrifuges. It is thus very reasonable to argue that this kind of life support system can play a crucial role in future long-term space missions but may also be used here on Earth, for example in submarines. The combination of plasma-enhanced CO2 dissociation, separation and deposition will be extremely beneficial in space, especially in manned Mars habitats, where a high degree of on-site production technology will be needed.
A direct comparison between a classical centrifuge and a PC shows that the main advantage of the latter so far is the high separation power. The enrichment and separation factors, on the other hand, can be extremely improved by adapting the geometry of the PC vessel. Especially the aspect ratio L/R 0 has to be substantially increased to obtain an even better separation in a plasma centrifuge. This is a striking difference to an ordinary, mechanical centrifuge, where it has been shown in previous work by the author that a bulky ellipsoidal gas centrifuge with a height of 1 m and a diameter of 0.5 m can deliver enough breathable oxygen for about 10 average humans. On the contrary, a PC must be designed a long, rather thin pipe in order to obtain an optimised output, but if done properly one PC may be able to replace some dozens of mechanical centrifuges, while producing quite substantial amounts of carbon based raw materials directly from the Mars atmosphere. It shall also be pointed out that such a plasma based life support system can be easily combined with other life support systems, especially with those that use microorganisms and create hydrogen as a byproduct. A combination of such reactors enables an even broader range of raw materials that can be produced. Further considerations of technical details and engineering problems are left for future work.
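A small consistency check can be made on the critical-radius values quoted in Section II: if the magnetic Reynolds number has the form Rem = μ0σω0r² assumed above, the critical radius scales as ω0^(-1/2), which reproduces the quoted pair of values to within a few per cent. In the sketch below the plasma conductivity and the Rem threshold are free assumptions (they are not stated explicitly in the text); only the scaling itself is grounded in the quoted numbers.

import numpy as np

MU0 = 4.0e-7 * np.pi        # vacuum permeability [H/m]

def critical_radius(omega0, sigma, re_m_threshold):
    # Solve Re_m = mu0 * sigma * omega0 * R^2 (assumed form) for R at a chosen threshold.
    return np.sqrt(re_m_threshold / (MU0 * sigma * omega0))

sigma = 4.0e3               # plasma conductivity [S/m] (assumption, not from the text)
re_m_threshold = 30.0       # eddy-braking threshold Re_m quoted in the text

for omega0 in (1.0e4, 5.0e4):   # magnetic-field rotation frequencies [rad/s]
    r_crit = critical_radius(omega0, sigma, re_m_threshold)
    print(f"omega0 = {omega0:.0e} rad/s -> R_crit = {100.0 * r_crit:.1f} cm")

# Independent of sigma and the threshold, R_crit ~ omega0**-0.5 gives
# 78.6 cm * (1e4 / 5e4) ** 0.5 = 35.2 cm, close to the quoted 35.5 cm.
print(f"scaling check: {78.6 * (1.0e4 / 5.0e4) ** 0.5:.1f} cm")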
3,803.8
2018-12-10T00:00:00.000
[ "Physics" ]
Observation and interpretation of motional sideband asymmetry in a quantum electro-mechanical device Quantum electro-mechanical systems offer a unique opportunity to probe quantum noise properties in macroscopic devices, properties which ultimately stem from the Heisenberg Uncertainty Principle. A simple example of this is expected to occur in a microwave parametric transducer, where mechanical motion generates motional sidebands corresponding to the up and down frequency-conversion of microwave photons. Due to quantum vacuum noise, the rates of these processes are expected to be unequal. We measure this fundamental imbalance in a microwave transducer coupled to a radio-frequency mechanical mode, cooled near the ground state of motion. We also discuss the subtle origin of this imbalance: depending on the measurement scheme, the imbalance is most naturally attributed to the quantum fluctuations of either the mechanical mode or of the electromagnetic field. expected imbalance between up and down converted sidebands 14,15 . Here, we demonstrate the analogous physics in a quantum circuit, where it is now microwave photons (not optical photons) which probe the mechanical motion. We also address a subtlety about these measurements which originates from their use of linear detection of the scattered electromagnetic field: they measure the field amplitude (e.g. via heterodyne detection). This is contrast to measurements based employing direct photodetection, where one filters the output light and counts photons associated with a motional sideband. Although the predicted and measured motional sideband asymmetry obtained using either detection method are identical [14][15][16] , the interpretation is more nuanced when one employs linear field detection. As discussed by Khalili et al. 16 , the asymmetry in this case can be fully attributed to the detector, namely the presence of a precisely tuned correlation between the backaction noise generated by the measurement device and its imprecision noise (see SI). We provide a simple exposition of this physics using standard input-output theory, which lets us easily track the scattering of incident vacuum fluctuations. In the case of linear detection of the cavity output field, the imbalance is naturally attributed to the input electromagnetic field fluctuations (classical and quantum); the intrinsic quantum fluctuations of the mechanical mode contribute equally to the up and down-converted spectrum. In contrast, in experiments which employ direct photodetection 8 , the imbalance in the output spectrum (in the absence of thermal electromagnetic noise) is naturally attributed to asymmetric quantum noise of the mechanical motion. After a brief discussion of these theoretical issues, we present measurements of the imbalance in a microwave-frequency electromechanical device. Theory We begin with the Hamiltonian of our electro-mechanical system, whereâ â † is the annihilation (creation) operator of the microwave resonator mode with frequency ω c ,b b † is the annihilation (creation) operator of the mechanical resonator with frequency ω m , and g 0 is the parametric coupling strength between the two modes. We consider the standard regime of a cavity strongly driven at frequency ω p , where dissipation is treated as per standard input-output theory 17 ; we also consider a two-sided cavity, which corresponds to our experimental setup. 
The operatorsd σ,in (t),ĉ in (t) describe noise incident on the microwave and mechanical resonator, respectively, and satisfy: Here, n th m (n th σ ) denotes the amount of thermal fluctuations incident on the mechanical resonator (microwave resonator from port σ), and α, β describe the quantum vacuum fluctuations driving the microwave and mechanical resonators, respectively; we have α = β = 1, consistent with the uncertainty principle and the canonical commutation relation of the noise operators. In what follows, we keep α and β unspecified in order to clearly track the contributions of both mechanical and electromagnetic vacuum noise to the measured noise spectrum. We further specialize to the case where a single microwave cavity drive is applied at ω p = ω c − ∆ with ∆ either ±ω m , and consider the up-and down-converted sidebands generated by the mechanical motion. For simplicity, we ignore any internal loss of the cavity, consider the system to be in the sideband resolved regime (κ ω m ), and also consider the limit of a weak cooperativity 4G 2 /κ γ m . This last condition implies that the backaction effects on the mechanics are minimal: the mechanical linewidth and temperature are set by its coupling to its intrinsic dissipative bath. For amplitude detection either with a linear amplifier as in this experiment, or optical heterodyne detection 14,15 , the symmetric noise spectrum is: with the amplitude of the output fieldÎ tot =d R,out +d † R,out and whered R,out =d R,in + √ κ Rd . The output spectrum near the cavity resonance for the two choices of drive detuning are found to bē S II,tot [ω] where for ∆ = ω m (∆ = −ω m ), the up-(down-) converted sideband is centered on the cavity resonance. The noise floor for both cases is given byS 0 = α/2 + n th R + 4κ R (n th c − n th R )/κ, and we have defined n th eff = 2n th c − n th R (where n th c = (κ L n th L + κ R n th R )/κ is the effective cavity thermal occupancy). In Fig.1(c), we illustrate the underlying components of this spectrum. One sees explicitly that the sideband imbalance,S II,tot [ω]| ∆=−ωm −S II,tot [ω]| ∆=+ωm , is proportional to (2n th eff + α), and hence is entirely due to fluctuations in the microwave fields driving the cavity. This is true both when this noise is thermal, and when it is purely quantum (i.e. n th R = n th L = 0). These terms in the spectrum result from the interference between the two ways the incident field noise can reach the output: either by directly being transmitted through the cavity, or by first driving the mechanical resonator whose position then modulates the amplitude quadrature of the outgoing microwaves (see SI for further insights based on a scattering approach). This is the basic mechanism of noise squashing, which in the case of thermal noise was previously observed in a cavity electromechanical system 4 . This mechanism can also be fully described using a general linear measurement formalism 16 , where it is attributed to the presence of correlations between the backaction and imprecision noises of the detector, correlations which are out-of-phase and have magnitudeh/2 in the zero-temperature limit. Interestingly, this precise value plays a special role in the theory of quantum limits on linear amplification 7 (see SI for more details). The above calculation also shows that both thermal and zero-point force noise emanating from the mechanical bath (i.e. terms ∝ n th m + β/2) contribute symmetrically to Eqs. 
(3) and (4), and hence play no role in determining the asymmetry of the sidebands. In the weak-cooperativity limit, it is the mechanical bath which almost entirely determines the mechanical oscillator fluctuations. This suggests that the sideband asymmetry observed using linear detection of the scattered field is not directly probing the asymmetric quantum noise spectrum of the mechanical mode. 3 In contrast, direct measurement of the sideband signal via photon counting yields the normal ordered spectrum, with output spectra given by Note that when one sets α = β = 1, the asymmetry of these normal-ordered spectra, S N II,tot [ω]| ∆=−ωm −S N II,tot [ω]| ∆=+ωm , is identical to that obtained from the linear measurement (where spectra are calculated using Eq. (2)). In this case, however, the asymmetry is naturally attributed to both the mechanical quantum fluctuations, β, and to the thermal microwave fluctuations described by n th eff ; this is illustrated in Fig.1(b). Note that in direct photodetection, one cannot attribute the zero-temperature sideband asymmetry to a correlation between backaction-driven position fluctuations and imprecision noise, as there is no imprecision noise floor. While the above simple calculations suggest that the sideband asymmetry measured using linear detection versus direct photodetection have different origins, it is no accident that the magnitudes of the asymmetry are the same in both schemes. This follows directly from the fact that the canonical commutation relation of the output field is the same as . It necessarily follows that the spectra in Eqs. (2) and Eqs. (5) will differ only by a frequency-independent noise floor of magnitude α/2 16 . If one assumes this commutation relation, then one can legitimately say that both spectra essentially measure the same thing. However, on a formal level, this involves an additional assumption on the value of β: (if β = α, then the output commutator would not be the same as the input, see SI). Having explored the interpretation subtleties associated with sideband asymmetry, we now turn to presenting our main result: the experimental observation of this imbalance in a microwave-cavity based electromechanical system. Experiment Our system is composed of a superconducting microwave resonator, also referred to as "cavity", where the resonance frequency is modulated by the motion of a compliant membrane 13 . This frequency modulation leads to the desired parametric coupling between microwave field and mechanical motion ( Fig.2(a)). Measurements of the cavity response below 100 mK yield the resonance frequency ω c = 2π × 5.4 GHz, total loss rate κ = 2π × 860 kHz, output coupling rate κ R = 2π × 450 kHz, and input coupling rate κ L = 2π × 150 kHz. The capacitor top gate is a flexible aluminum membrane (40µm×40µm×150nm) with a fundamental drumhead mode with resonance frequency ω m = 2π × 4.0 MHz and intrinsic loss rate γ m = 2π × 10 Hz at 20mK. Motional displacement of the top gate modulates the microwave resonance frequency with an estimated coupling rate of g 0 = ∂ωc ∂x x zp = 2π × 16 Hz. In Fig. 2(c), we present a schematic of the measurement circuit. Tunable cavity filters at room temperature reduce the source phase noise to the thermal noise at 300K; cryogenic attenuators further reduce the noise down to the shot noise level 4 . A pair of microwave switches at the device stage select between the device or a bypass connection for high precision noise floor calibration of the cryogenic amplifier. 
The output signal passes through two cryo-circulators at ∼100mK followed by a cryogenic low-noise amplifier at 4.2K, and finally to a room temperature circuits for analysis. The occupation factor of the microwave resonator, n th c , which is expected to thermalize below 5 × 10 −3 at temperatures below 50mK, can be increased and controlled by the injection of microwave frequency noise from amplified room temperature Johnson noise. From careful measurements of the noise power emanating from the cavity at zero pumping and comparing this to power spectra with the bypass switched in place (see SI), we conclude that there is a small contribution to n th c due to thermal radiation from the isolated port of the cryogenic circulators, given by the occupation factor n th R = 0.34 ± 0.03. When a single microwave tone is applied to the device at ω p , the parametric coupling converts mechanical oscillations at ω m to up and down-converted sidebands at ω p ± ω m . In this experiment, we apply microwave tones at frequencies near ω c ± ω m and at powers given by the mean number of photons in the resonator, n p . The microwave resonance suppresses motional sidebands outside of the linewidth and we consider only the contributions of signals converted to frequencies near ω c . These are the Lorentzian components of the noise power spectra of Eqs. (3) and (4), which for the remainder of the paper are denoted by "+" and "-", respectively, and are labeled in Fig.1(c). Throughout the measurement, we simultaneously apply three microwave tones. We place a cooling tone at ω c − ω m − δ c to control the effective mechanical damping rate, γ M , and mode occupation,n m , via back-action cooling 18 . Two additional probe tones, placed at ω c ± (ω m + δ), produce up and down converted sidebands symmetrically detuned from cavity center ( Fig.3(a)). The detunings are chosen to ensure no interference between the sidebands (δ c = 2π × 30 kHz, δ = 2π × 5 kHz) so that we may consider the probe sidebands as independent measurements of the dressed mechanical mode as validated by theory. To summarize the main differences between the simplified theory model presented above and our actual experiment, we measure the mechanical sidebands produced in a two-port microwave resonator with limited sideband resolution and a noisy output port, and in the presence of multiple injected tones with a range of detunings and powers. From further analysis (see SI), we estimate corrections to the sideband asymmetry that are 1 and far below the measurement resolution of our system. To convert the motional sideband powers into equivalent mechanical occupation, we turn off the cooling tone and measure the probe sidebands (δ = 2π × 500 Hz) with low optical damping (n + p = n − p 5 × 10 2 ) and high mechanical occupation set by the cryostat temperature. Regulating the temperature to calibrated levels between 20 to 200mK, we calculate the integrated noise power under the sideband Lorentzians, P ± m , normalized by the respective microwave probe power transmitted through the device, P ± thru . In the limit of high thermal occupation, the normalized power is directly proportional ton m . 19 As we vary the cryostat temperature, T , we compare the normalized power to the thermal occupation factor [exp(h ωm k B T ) − 1] −1 ( Fig.2(b)). A linear fit yields the conversion factors for the up-converted (n + m ) and down-converted (n − m ) sidebands: n + m = (9.9±0.2)×10 8 ·P + m /P + thru and n − m = (5.4±0.1)×10 8 ·P − m /P − thru . 
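The temperature-sweep calibration described above amounts to a linear fit of the thermal (Bose-Einstein) occupation against the normalised sideband power. A minimal sketch of that fit is given below; the data points are hypothetical stand-ins for the measured values in Fig. 2(b), and only ωm is taken from the text.

import numpy as np

HBAR = 1.055e-34            # reduced Planck constant [J s]
KB = 1.381e-23              # Boltzmann constant [J/K]
OMEGA_M = 2 * np.pi * 4.0e6 # mechanical resonance frequency from the text [rad/s]

def thermal_occupation(T):
    # Bose-Einstein occupation [exp(hbar*omega_m / (k_B*T)) - 1]^-1 of the mechanical mode.
    return 1.0 / np.expm1(HBAR * OMEGA_M / (KB * T))

# Hypothetical calibration data: normalised sideband power P_m / P_thru measured at
# regulated cryostat temperatures (stand-ins for the points in Fig. 2(b)).
T = np.array([0.02, 0.05, 0.10, 0.15, 0.20])                     # [K]
p_norm = np.array([1.05e-7, 2.6e-7, 5.3e-7, 7.9e-7, 1.05e-6])    # dimensionless

slope, offset = np.polyfit(p_norm, thermal_occupation(T), 1)
print(f"conversion factor: n_m ~= {slope:.2e} * P_m / P_thru (offset {offset:.1f})")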
Further detuning the probe tones (δ = 2π × 5 kHz) and turning on the cooling tone (δ c = 2π × 30 kHz), we explore the sideband ratio, n − m /n + m , over various the mechanical and microwave occupations. To reducen m to values approaching 1, we increase the cooling tone power up to n cool p = 4 × 10 5 . For sideband characterization, the probe tone powers are set to n − p = n + p = 10 5 and the probe sideband spectra are analyzed using the conversion factors described above. The imbalance between n − m and n + m is clearly evident in the noise spectra ( Fig.3(b)). As further demonstration of the asymmetry with respect to n th eff , we plot n − m /n + m as a function of n + m in Fig. 3(c). Each curve corresponds to one setting of injected microwave noise. The data shows excellent agreement to the expected ratio, n − m /n + m = 1 + (2n th eff + 1)/n + m . This relationship highlights the combined effect of quantum and classical noise in Eqs. (3) and (4) (see SI). By fitting each curve to a two parameter model: a + b/n + m , we find an average constant offset a = 0.99 ± 0.02 for all curves, accurately matching the model and confirming our calibration techniques. Fitting for b, the data indicates n th eff spanning 0.71 to 4.5 with uncertainty all within ±0.09 quanta. To quantify the contributions due to quantum fluctuations and classical cavity noise, we fix the cooling tone power at n cool p = 4 × 10 5 (γ M = 2π × 360 Hz) and measure the imbalance n − m − n + m as we sweep n th eff . At each level, we measure the average noise power density, η, over a 250 Hz window centered at ω c and away from any motional sideband. Over this range, η contains two contributions: the noise radiating out of the microwave resonator, proportional to n th eff , and the detector noise floor, set by the noise temperature of the cryogenic amplifier (T N ≈ 3.6K). We directly measure the detector noise floor by switching from the device to an impedance-matched bypass connection and measure the noise power density, η 0 , over the same window with matching detected tone powers. In Fig. 3(d), we plot the sideband imbalance against the noise floor increase, ∆η = η − η 0 , which is expected to follow: n − m − n + m = 2n th eff + 1 = 4λ · ∆η + 1, where λ is the conversion factor for ∆η in units of cavity quanta, n th c . The data clearly follows a linear trend with a slope of λ = (2.7 ± 0.1) × 10 −1 (aW/Hz) −1 . More importantly, we observe an offset of 1.2 ± 0.2, in excellent agreement with the expected quantum imbalance of "+1" from the quantum fluctuations of the microwave field. As an additional check, we also consider the sideband average, (n + m + n − m )/2, as a function of ∆η. Averaging Eqs. (3) and (4), we see that the resulting occupation,n m + β 2 , does depend on n th eff due to the coupling between the mechanical and microwave modes,n m = γm γtot n th m + γopt γtot (2n th c +α)+ γ cool opt γtot n th c , where γ opt (γ cool opt ) is the optical coupling rate for the individual probe (cooling) tones. Accounting for this so-called back-action heating of the mechanical mode 13, 18 , we recover λ = (2.5 ± 0.2) × 10 −1 (aW/Hz) −1 , consistent with the imbalance results above. Notably, the average sideband occupation does contain contributions from mechanical zero-point fluctuations. 
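The two fits quoted in the preceding paragraphs, the sideband-ratio model n-m/n+m = a + b/n+m and the linear dependence of the imbalance on the noise-floor increase, can be reproduced with a few lines of fitting code. The arrays below are hypothetical placeholders rather than the measured data; only the functional forms are taken from the text.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sideband occupations (mechanical quanta) at one injected-noise setting.
n_plus = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
n_minus = np.array([4.1, 6.0, 10.1, 18.0, 34.1])

def ratio_model(n_p, a, b):
    # n-/n+ = a + b/n+, with a expected to approach 1 and b = 2*n_eff + 1.
    return a + b / n_p

(a, b), _ = curve_fit(ratio_model, n_plus, n_minus / n_plus)
print(f"a = {a:.2f} (expected ~1), n_eff = {(b - 1.0) / 2.0:.2f}")

# Imbalance versus noise-floor increase: n- - n+ = 4*lambda*d_eta + 1.
d_eta = np.array([0.0, 2.0, 4.0, 8.0])        # [aW/Hz], hypothetical
imbalance = np.array([1.2, 3.3, 5.5, 9.6])    # [quanta], hypothetical
slope, offset = np.polyfit(d_eta, imbalance, 1)
print(f"lambda = {slope / 4.0:.2f} (aW/Hz)^-1, zero-noise imbalance = {offset:.2f}")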
Future experiments could infer the mechanical quantum contribution of β/2 with a method to independently calibrate n̄_m to high accuracy, for example with a passively cooled, high-frequency mechanical mode thermalized to a primary low-temperature thermometer. In summary, we report the quantum imbalance between the up- and down-converted motional sideband powers in a cavity electro-mechanical system measured with a symmetric, linear detector. We show that for linear detection of the microwave field, the imbalance arises from the correlations between the mechanical motion and the quantum fluctuations of the microwave detection field. For normal-ordered detection of the microwave field, however, the imbalance arises directly from the quantum fluctuations of the mechanics. By further assuming that the output microwave field satisfies the canonical commutator, which also determines the quantum fluctuations of the mechanical mode, the measurement can be interpreted as performing either symmetric or normal-ordered detection regardless of the type of detector utilized. In both scenarios, the imbalance in motional sidebands is a fundamental quantity originating from the Heisenberg uncertainty relations and provides a quantum-calibrated thermometer for mesoscopic mechanical systems. Figure 1: Comparison between photodetection and linear detection. a. Pump scheme. We consider a single microwave cavity (dotted line) pumped at ω_c ± (ω_m + δ) (green). The up-converted (red) and down-converted (blue) motional sidebands are placed tightly within the cavity linewidth. For figure clarity, the occupations of the microwave and mechanical modes are assumed to be zero. b. Normal-ordered detection. Photodetection is sensitive to the asymmetric motional noise spectrum, S_xx. The photodetector is not sensitive to microwave shot noise, and the noise floor (S_II) is from detector non-idealities (light grey), analogous to dark counts for a photodetector. c. Linear detection. The contribution from the symmetrized motional noise, S̄_xx, is present in both sidebands. Microwave shot noise (dark grey) and amplifier noise (light grey) combine to form the imprecision noise S̄_II. This measurement is sensitive to noise correlations between the microwave and mechanical modes (S̄_IF), which result in asymmetric squashing (red) and anti-squashing (blue) of the noise floor. Though the source is different, the sideband imbalance is identical in both photodetection and linear detection. For a mathematical description of S̄_II and S̄_IF, refer to the SI. Figure 2: Device, calibration, and measurement scheme. a. Electron micrograph of the measured device. A suspended aluminum (grey) membrane patterned on silicon (blue) forms the electro-mechanical capacitor. It is connected to the surrounding spiral inductor to form a microwave resonator. Out of view, coupling capacitors on either side of the inductor couple the device to input and output co-planar waveguides. b. Motional sideband calibration. The cryostat temperature is regulated while the mechanical mode is weakly probed with microwave tones set at ω_c + ω_m + δ (blue) and ω_c − ω_m − δ (red) detunings, with δ = 2π × 500 Hz. The observed linear dependence provides the calibration between the normalized sideband power and the mechanical occupation factor. Inset: up-converted motional sideband spectra collected at 20 mK (top) and 200 mK (bottom), with ∆ω = ω − (ω_c − δ). c. Schematic of the microwave measurement circuit. Three tones are placed about the microwave resonance.
Two probe tones generate up-converted (red) and down-converted (blue) sidebands. An additional tone (purple) cools the mechanical mode. b. Sideband spectra.S II,tot (ω) measured at n th eff = 0.60 (blue) and 2.5 (purple) withn m = 4.7 ± 0.1. c. Sideband asymmetry. The ratio n − m /n + m vs. n + m is plotted for increasing noise injection. d. Sideband imbalance (blue) and sideband average (purple) vs. the measured noise increase, ∆η. Sideband imbalance, n − m − n + m , and average, (n − m + n + m )/2, exhibit a linear trend with ∆η. The imbalance at ∆η = 0 is the quantum imbalance due to the squashing of fluctuations of the microwave field. Supplementary Information for "Observation and interpretation of motional sideband asymmetry in a quantum electro-mechanical device" In this section, we give a framework to calculate the output noise spectrum of an opto/electro-mechanical system with arbitrary pump configuration by utilizing the input-output theory. As a first example, we analyze an ideal (without intrinsic losses) two-port opto/electro-mechanical system with a single pump tone either at frequency ω p = ω c − ω m or ω p = ω c + ω m , and discuss the origin of the sideband asymmetry in the output noise spectrum. We then use this method to study the system in our experiment, i.e., a two-port electro-mechanical system with three pumps (balanced detuned two tones and a cooling tone). We start with the standard Hamiltonian of an opto/electro-mechanical system whereâ (â † ) is the annihilation (creation) operator of the cavity field.b (b † ) is the annihilation (creation) operator of the phonon, g 0 is the coupling strength between the cavity and the mechanical oscillator. We assume an external driving, described byĤ drive , which is applied on the input port on the left side of the cavity. The optical and the mechanical system are both coupled to dissipative baths, described byĤ diss , giving rise to the decay rates γ m for the mechanical and κ for the optical system. The total cavity linewidth κ consists of the contributions from the different decay channels, namely the right (R) and the left (L) port, as well from intrinsic losses (I) inside of the cavity, i.e., κ = κ R + κ L + κ I . For large pumping fields, we may split the fields into classical and quantum components,â →ā +d andb →b +ĉ, whered andĉ describe the quantum fluctuations of the cavity photon and the phonon. By using input-output theory and neglecting the second order contributions from the quantum fluctuations, the linearized quantum Langevin equations arė where ω c = ω c + g(b +b * ) ω c . Including the possibility of multiple drives at frequencies ω n , we obtain a(t) = nā n e −iωnt as the driving field inside the cavity, withā n = √ κ L αn κ 2 −i(ωn−ωc) . Without loss of generality, we takeā n to be real. In Equations (S.1.2),d σ,in describes the input fluctuations to the cavity from channel σ with damping rate κ σ , andĉ in describes the input fluctuations to the mechanical oscillator. The input 1 field operators satisfy the following commutation relations where α σ = β = 1, n th σ is the photon occupation in port σ, and n th is the thermal occupation factor of the bath responsible for the intrinsic mechanical dissipation. The total thermal occupation of the cavity is the weighted sum of the contributions from different channels, n th c = σ κσ κ n th σ . Note, that the relations in Eq. (S.1.3) are only valid if we deal with frequencies close to cavity resonance. 
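As a small worked example of the weighted-sum relation n_c^th = Σ_σ (κ_σ/κ) n_σ^th quoted above: the total linewidth and the right-port occupation below follow values reported elsewhere in this work, while the split between the left-port and intrinsic rates and the remaining channel occupations are assumptions.

```python
# Total cavity thermal occupation as the decay-rate-weighted sum over channels:
#   n_th_c = sum_sigma (kappa_sigma / kappa) * n_th_sigma
# kappa = 2*pi*860 kHz and kappa_R = 2*pi*450 kHz are quoted; the split of the
# remaining 410 kHz between left port and intrinsic loss, and the left/intrinsic
# occupations, are assumed for illustration.
kappa_R, kappa_L, kappa_I = 450e3, 300e3, 110e3   # kHz units cancel in the ratios
kappa = kappa_R + kappa_L + kappa_I               # 860 kHz total linewidth
n_th = {"R": 0.34, "L": 0.0, "I": 0.0}            # per-channel occupations
n_th_c = (kappa_R * n_th["R"] + kappa_L * n_th["L"] + kappa_I * n_th["I"]) / kappa
print(f"n_th_c = {n_th_c:.2f}")                   # ~0.18 for these numbers
```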
Single Tone We start with the case of a single pump tone at frequency ω p = ω c − ∆, where the drive detuning ∆ is chosen to either be ±ω m ; our goal is to make the origin of the asymmetry between the spectra measured for these two cases clear. For maximum clarity, we also consider the good-cavity limit ω m κ and work within the rotating-wave approximation. In this limit, we can describe the relevant spectra in terms of a 3 × 3 scattering matrix, involving the fieldsD +(−) ≡ (d R ,d L ,ĉ ( †) ) T , where the +(−) refers to a driving on the red (blue) sideband, i.e., ∆ = ±ω m . By using the input-output relationsd σ,out =d σ,in + √ κ σdσ and c out =ĉ in + √ γ mĉ and solving the corresponding quantum Langevin equations, we obtain in frequency space (working in a rotating frame at the cavity frequency) 1 For frequencies close to the cavity resonance (i.e., |ω − ∆| κ), the scattering matrix s[ω] is Here, the denominator N ± [ω] describes the mechanical response including optical damping / anti-damping: with G = g 0āp being the many-photon optomechanical coupling rate. Our interest is on the output field leaving the right port of the cavity, and hence on the first row of s ± [ω]. For a weak optomechanical cooperativity, we can ignore the modification of the mechanical damping by the cavity, and approximate γ m ± γ opt γ m . The only remaining differences in the first row of s + versus s − are in the overall sign of the mechanical contributions (terms ∝ γ opt ) in the elements s 11 and s 12 . These elements describe how the incident microwave fluctuations show up in the output; the sign difference of the mechanical term directly mirrors the fact that for the red (blue) detuned drive, the cavity provides positive (negative) optical damping on the mechanics. Note finally that for weak coupling, the coefficient s 13 describing the transmission of mechanical bath fluctuations to the output is identical for both choices of drive detuning. The normal ordered noise spectral density and the symmetrized noise spectral density of the output field on the right side of the cavity are defined as whereÎ (t) =d R,out (t) +d † R,out (t) is defined in terms of lab frame output operators. This definition makes the symmetrized noise spectral density consistent with that measured by a classical voltage spectrum analyzer, as used in the experiment. Note that to describe a homodyne measurement, one should instead takeÎ to be defined in terms of output operators in the rotating frame. This difference only affects the frequencyindependent noise floor. In our case, we will focus on frequencies near the cavity resonance frequency (i.e. ω ω c in the lab frame). For such frequencies, terms in the spectra involving the output operator d † R,out (t) will not contribute, as these operators only have spectral weight at negative frequencies in the lab frame (see, e.g., Appendix D in [2].) We can thus replace {Î (t) ,Î (0)} by {d R,out (t),d † R,out (0)} in the definition of the symmetrized spectrum. Correspondingly, we can replace the expectation value in the definition of the normal ordered spectrum by d † R,out (0)d R,out (t) . Having established the definition of the noise spectra, we now return to our rotating frame, where the cavity frequency is situated at ω = ∆. By using the correlators defined in Eq. (S.1.3), we can calculate these noise spectral densities and express them in terms of the elements of the scattering matrix Eq. (S.1.5). 
We obtain for the symmetrized spectra: while the normal-ordered spectra take the form: Note crucially that for a given drive detuning, the scattering matrix elements appear identically in both the symmetrized and normal-ordered spectra. The only difference is how these elements are weighted by the input noise. For the symmetrized spectra, it is always the symmetrized bath noise which enters (i.e., n th σ + 1/2), irrespective of the drive detuning. In the normal ordered case, we see that the only contribution from vacuum noise is from the mechanical bath, and only for the case of a blue-detuned drive. We also note that the form of the symmetrized spectra given above could be obtained from a completely classical set of Langevin equations, as the input noise correlators enter the same way for both detunings. This is not true for the normal ordered case, as the effective mechanical bath correlator is different for ∆ = ω m versus ∆ = −ω m . Setting α R = α L ≡ α for clarity, the imbalance of the spectra (i.e., the difference between the output spectra for the two choices of detuning) δS = S| ∆=−ωm − S| ∆=ωm , become δS N II,tot = |s − 11 | 2 − |s + 11 | 2 n th R + |s − 12 | 2 − |s + 12 | 2 n th L + |s − 13 | 2 − |s + 13 | 2 n th m + |s − 13 | 2 β. (S.1.10) We have omitted writing the explicit frequency dependence of the elements of s ± for clarity. Finally, we insert the explicit elements of the scattering matrix in Eq. (S.1.5) into the expressions for the different output spectra derived above. The symmetrized noise in the rotated frame becomes where we define n th eff = 2n th c − n th R , γ tot = γ m ± γ opt and the noise floor For the normal-ordered spectra, we obtain: (S.1.14) For a weak optomechanical cooperativity, γ tot γ m . If assume this case, take α L = α R , and transform back into the lab frame, we recover the spectral densities given in the main text, cf., Eq. (2,3,5,6). It is also useful to characterize the asymmetry of the ∆ = ±ω m spectra in terms of the total integrated weight of the mechanical feature. Defining δI = dω 2π δS[ω] and taking γ opt γ m , we find We thus see that the asymmetry of the symmetrized spectra (corresponding to linear field measurement) are most naturally interpreted as being due to the contribution of fluctuations of the incident microwave fields, whereas the asymmetry in the normal ordered spectra are most naturally attributed to the fluctuations of the mechanical oscillator. Finally, note that the output spectra are linked via the commutation relation of the output field, which must be the same as those of the corresponding input field: Calculating the commutator using the scattering matrix in Eq. (S.1.5) and keeping α L , α R and β unspecified, we obtain for both detuning cases (S.1.16) We thus see that preserving the commutation relation of the output R fields requires in general α L = α R = β. The fact that the commutator of the output field is a constant means that for any detuning, the symmetrized spectrum will be equal to the normal ordered spectrum plus a frequency-independent noise background. Balanced Detuned Two Tones with Cooling In our actual experiment, we have a two-port electro-mechanical system, which we pump during our measurement simultaneously with three microwave tones. 
These tones are all detuned from the cavity resonance, and in a frame rotated at the cavity frequency ω c the drive Hamiltonian readŝ the first term describes the balanced detuned two tones: one is in the amount of δ detuned below the red sideband (ν = −) and the other one is with the same amount detuned above the blue sideband (ν = +). The second term corresponds to the cooling tone, which we assume to be sufficiently detuned below the red sideband (δ c > δ γ m ), so that the cooling tone acts independently from the probe tones. Now, we start with the driving scheme in Eq.(S.1.17) and the standard Hamiltonian in Eq. (S.1.1), which we rotate in a frame at the cavity frequency and the mechanical frequency ω m . Additionally, we perform a rotating wave approximation as usual, where we neglect non-resonant processes (ω m κ). As before we use input-output theory to include the dissipative environment and derive the quantum Langevin equations for the fluctuation operators of the microwave (mechanical)d(ĉ) system. By solving these Langevin equations for the noise operatorĉ[ω] of the mechanical oscillator, we can derive the symmetrized noise spectral density of the mechanical motion (x = ĉ +ĉ † x zp ) with the total damping γ tot = γ M + γ + opt − γ − opt , where the γ ± opt = 4G 2 ± /κ corresponds to the optical damping/antidamping induced by the red/blue tone. The optical damping γ cool opt , associated with the cooling tone at ω = ω c − ω m − δ c , is included in the enhanced mechanical linewidth γ M = γ m + γ cool opt , as well as in the modified mechanical occupation n th M = (γ m n th m + γ cool opt n th c )/γ M . In the calculation of the output spectra we assume that the anti-Stokes sideband created by the red tone and the Stokes sideband created by the blue tone, can be treated independently. The distance between the two sideband in frequency space is 2δ, thus for δ γ tot we have two well separated Lorentzians and we can neglect a direct coupling of the drives in the Langevin equations; see the next section for further discussions. The noise in the output field near the Stokes sideband (ω = ω + δ) and anti-Stokes sideband (ω = ω − δ) becomeŝ with the effective mechanical input noisê . Though here we have also a contribution from the cooling tone and a coupling tod † σ,in , arising from the fact that the mechanical oscillator sees both drives and thus, mediates an indirect coupling between the two sidebands. With the noise correlators and commutation relations given in Eq. (S.1.3) and setting α σ = β = 1, the symmetrized noise spectral densities arē For equal coupling strengths G + = G − the amount of optical damping and antidamping is the same, i.e., γ + opt = γ − opt = γ opt , and thus the total damping contains only the enhanced mechanical linewidth γ tot = γ M . For the case G + = G − the asymmetries in terms of their integrated weights become Thus, the observed sideband asymmetry of the symmetrized spectrum and the sideband asymmetry for the normal order coincide. For balanced optical damping rates γ + opt = γ − opt , we obtain the expected result for the asymmetry, which scales with 2n th eff + 1. Effects of asymmetric parameters and next sideband contributions In our main analysis, we assumed that the direct coupling between the cavity fields at the two mechanicallygenerated sidebands near the cavity resonance is negligible. 
By this, we mean that the Lorentzian-shaped resonance around ω c − δ (lab frame), created due to the drive on the red sideband, does not overlap with the one created from the drive on the blue sideband at frequency ω c + δ. Within this approximation we could derive two independent noise spectral densities, one valid for frequencies close to the Stokes sideband Eq. (S.1.21b), and the other for frequencies close to the anti-Stokes sideband Eq. (S.1.21a). From these expressions follows, that the width of each Lorentzian is given by the total damping rate γ tot . Thus, the detuning of the control lasers from the sideband frequencies should fulfill the condition δ γ tot . Briefly, we want to confirm the validity of this condition by calculating the complete RWA solution. In this case, the noise in the output field near the cavity resonance becomes (G − = G + and rotated frame) For simplicity, we focus on the symmetrized noise spectral density, which in this case yields Here, we have written the noise spectral density in a frame rotating at the cavity resonance frequency. This expression contains both Lorentzian near the anti-Stokes (AS) and Stokes (S) sideband, as well as the noise floorS 0 and a mixing termS mix II,tot [ω]. Figure S1(a) shows a plot of this output spectrum for the parameters used in the experiment. Both resonances are clearly separated and each well described by the spectra calculated without a coupling of the fields (dashed red/blue lines). By decreasing the detuning δ the distance between the peaks decreases and they start to overlap. Without any detuning we end up with a Lorentzian at the cavity resonance, with an integrated weight containing solely the mechanical bath. To study the influence of the detunings, we compare the symmetrized output spectrum Eq. (S.1.26) to the single Lorentzian approximationsS Fig. S1(b). Hence, in this regime we can describe our spectra as two individual resonances. Note, that the actual chosen detuning in the experiment lays clearly in the regime where both peaks are well separated, cf. Fig. S1(a,b). Finally, we want to briefly comment on the influence of higher-order mechanical sidebands. The general linearized interaction Hamiltonian for our setup readŝ where the counter-rotating terms inĤ CR describe the strongly non-resonant Stokes and anti-Stokes processes generated by the two control lasers. The coupling strengths G ± contain the drive amplitudes as usual, but we assume again that they can be different in magnitude, which leads to a total mechanical damping of γ opt = γ M + γ + opt − γ − opt with γ ± opt = 4G ± /κ. Note, for the system to be stable the total damping has to be positive, which roughly translates into the condition G + > G − . The inclusion of the counter-rotating terms leads to a time-dependent problem, which can not be solved exactly. In principle,Ĥ CR generates an infinite number of sidebands at multiples from ±ω m . If one is not too far from the resolved sideband limit, a perturbative approach keeping track of only the leading-order sidebands created byĤ CR is sufficient. Figure S1(c) depicts the maximum of the symmetrized spectral density as a function of ω m /κ including the next sidebands at frequencies ω = ω c ± 2ω m . As expected, when one even modestly approaches the resolved sideband regime, i.e., ω m > κ, the contributions from the counter-rotating terms are negligible. Moreover, for the given experimental setup we are far in the resolved side-band regime as indicated in the graph. 
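The effective rates introduced in this section (γ_opt^± = 4G_±²/κ, γ_M = γ_m + γ_opt^cool, n_M^th) and the separation condition δ ≫ γ_tot can be checked numerically. The sketch below does so with assumed coupling strengths and bath occupations chosen only to land near the quoted γ_M ≈ 2π × 360 Hz; it is not a fit to the data.

```python
import numpy as np

# Effective rates for the three-tone scheme (SI notation).  Only kappa and the
# target gamma_M are quoted; the couplings and bath occupations are assumed.
kappa = 2 * np.pi * 860e3
gamma_m = 2 * np.pi * 10.0                  # intrinsic mechanical linewidth (assumed)
G_red = G_blue = 2 * np.pi * 2.0e3          # balanced probe tones (G+ = G-)
G_cool = 2 * np.pi * 8.7e3                  # cooling tone (assumed)
n_th_m, n_th_c = 40.0, 0.1                  # bath occupations (assumed)

g_opt = lambda G: 4 * G**2 / kappa          # optical (anti-)damping per tone
gamma_M = gamma_m + g_opt(G_cool)
gamma_tot = gamma_M + g_opt(G_red) - g_opt(G_blue)            # = gamma_M when balanced
n_th_M = (gamma_m * n_th_m + g_opt(G_cool) * n_th_c) / gamma_M

# Sideband-separation check: the Stokes / anti-Stokes Lorentzians sit 2*delta apart
# and each has width gamma_tot, so we need delta >> gamma_tot (delta = 2*pi*5 kHz).
delta = 2 * np.pi * 5.0e3
print(f"gamma_M/2pi = {gamma_M/(2*np.pi):.0f} Hz, n_th_M = {n_th_M:.2f}, "
      f"delta/gamma_tot = {delta/gamma_tot:.1f}")
```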
Linear Response Theory In this section we briefly review the linear response approach to understand the sideband asymmetry observed using linear field detection; this explanation was first discussed in Ref. [3]. For linear field detection the observed asymmetry can be fully attributed to noise correlations in the detector (in this case, the driven cavity), correlations that could exist classically. We generalize the discussion of Ref. [3] to include thermal noise driving the cavity, showing the same backaction-imprecision correlations allow one to understand the squashing of thermal noise seen in previous experiments [6]. We also show that the particular value of the backaction-imprecision correlator, required to account for the zero-temperature sideband asymmetry, plays a special role in the linear-response approach to quantum measurements [1,2]: it is precisely the value needed to ensure there is no additional constraint on the detector's symmetrized noise correlators besides what would exist classically. Following Ref. [2], the general linear response approach starts by assuming a linear coupling between the detector and the observable to be measured (in this casex, the mechanical position): Here,F is the detector quantity which couples to the measured system, and plays the role of a backaction force. In our optomechanical case, we haveF = −(g/x zp )â †â . A is a dimensionless coupling constant that we will use to track the order at whichĤ int appears in expressions; we will set it to one at the end of the calculation. Next, consider the detector output observableÎ. We assume that this quantity responds linearly to the mechanical position, where χ IF [ω] is the response coefficient or "forward-gain" of the detector; it is given by a standard Kubo formula. We are interested in understanding the fluctuations of the detector output. Quantum linear response theory tells us that these can be completely understood within an equivalent classical stochastic model [1,2], where we now replace the operatorsÎ(t),F (t) andx(t) by classical random variables. The fluctuations of the output in this model are written: 3) The first term here represents the intrinsic fluctuations of the output in the absence of any coupling to the mechanics (the imprecision noise). δx 0 [ω] describes the position fluctuations of the mechanics in the absence of any backaction, whereas δx BA [ω] describes the additional backaction-driven fluctuations of the mechanical resonator. δx 0 [ω] is due to the intrinsic mechanical dissipation. Assuming this dissipation to be in thermal equilibrium, these fluctuations are described by the spectral density where χ xx [ω] is the mechanical force susceptibility, Similarly, the backaction-driven position fluctuations are described by The spectral density of the output fluctuations are then given by For our optomechanical system, the needed detector correlation functions are easily computed from the linearized Heisenberg-Langevin equations. As in the main text, we assume a two-sided cavity, and measure the quantityÎ defined below Eq. (S.1.7). One finds where for the cross-correlator, we have introduced the functions We are again interested in the symmetrized output spectrum of the detector in a narrow range (∼ γ m ) near the cavity resonance frequency, for a drive detuning ∆ = ±ω m ; as always, we consider the good-cavity limit ω m κ. 
Over this range of frequencies, we can neglect the frequency dependence of the cavity correlators, and evaluate them on resonance (i.e., ω = ∆ in the rotating frame). Of particular interest is the cross-correlator. One finds where the − sign (+ sign) corresponds to the drive detuning ∆ = +ω m (∆ = −ω m ). We see thatS zF is purely imaginary, and changes sign for the two choices of detuning; in contrast, one can confirm that |χ IF |,S II andS F F at resonance are the same for ∆ = ±ω m . It immediately follows that the asymmetry between the spectra obtained at ∆ = −ω m and ∆ = ω m is entirely due to the detector backaction-imprecision correlations described byS zF . Returning to Eq. (S.2.7) for the output spectrum, we further note that for a sufficiently weak detectorsystem coupling, the term S xx,BA will be negligible to the term S xx,0 , as the backaction term is second-order in the coupling (i.e. ∝ A 2 ). However, the last correlation term remains significant: its contribution relative to S xx,0 is independent of coupling strength. In our case, where S IF /χ IF ≡ S zF is purely imaginary, we can combine the leading mechanical contributions to the output spectrum as We see that the mechanics will give rise to a Lorentzian signature in the output spectrum, but that the presence of imaginary back-action imprecision correlations modifies the weight of the Lorentzian -it no longer simply reflects the mechanical temperature. This results in the well known phenomenon of noise squashing. Using Eq. (S.2.17) for the cross-correlator, we see that this linear-response calculation reproduces the asymmetry found earlier between spectra obtained for ∆ = ±ω m . This approach emphasizes the fact that the asymmetry can be completely attributed to the detector, namely the presence of backaction-imprecision correlations. These correlations are purely imaginary; the only difference between the cases is the sign of the correlator. For ∆ = ω m , the correlations are positive, and serve to decrease the weight of the mechanical Lorentzian; they completely cancel the contribution if the mechanics is at zero temperature. For ∆ = −ω m , they instead serve to increase the mechanical contribution. In the absence of thermal cavity noise, the effect of the noise correlations is to cause the weight of the mechanical Lorentzian in the output spectrum to have the expected form for phonon emission or absorption: for ∆ = ω m , we have the emission factor n th m , for ∆ = −ω m we have n th m + 1. We thus see that the asymmetry can be interpreted in terms of a finely tuned backaction-imprecision noise correlation. We stress that a completely classical detector could have an identical noise correlation. Nonetheless, this value of correlation plays an extremely special role in the theory of quantum limits on linear quantum detectors and amplifiers [2]. Quantum limits on such detectors (e.g., on their added noise or noise temperature) follow from a fundamental Heisenberg-like inequality on their noise properties at each frequency. These take the form: Device calibrations Measurement parameters are deduced from two calibrations. First, we place a single pump tone at ideal "red" detuning, ω c − ω m , and monitor the linewidth of the up converted mechanical sideband via weak homodyne detection (n p ≤ 5 × 10 2 ). Sweeping the pump power over n red p = 10 3 − 10 7 , we explore the sideband linewidth, γ tot , as a function of detected pump power, P red thru . 
In the resolved sideband regime (ω m κ) with weak coupling (G κ), the effective linewidth follows γ tot = γ m + 4g 2 0 κ n red p . Fitting to this model, we extract the natural linewidth of the mechanics, γ m , and the optical damping as a function of P red thru . Second, two pump tones, denoted as "+" and "-", are placed at ω ± = ω c ∓ (ω m + δ), with δ = 2π × 500 Hz, and balanced at relatively low powers (γ − op = γ + op γ m ). We then measure the integrated noise power of each sideband, P ± m , as we sweep the cryostat over calibrated temperatures. This measurement is performed for both detunings to account for two issues: asymmetric cavity transmission about ω c and gain fluctuations at frequencies separated by ∼ 2ω m . The source of this skewed cavity transmission is addressed in Sec.5. For small detunings (δ κ) and high mechanical occupation factor (n th m 1), the integrated Lorentzian weights of Eqs. S.1.21a, S.1.21b simplify to where G(ω) is the system gain between the device output and the room temperature analyzer at frequency ω. We remove the pump power dependence of γ ± opt by normalizing by the detected tone power, given by P ± thru = G(ω ± ) ·hω ± · [1 + ∆(ω ± )] · κ R · n ± p . We include the term ∆(ω) to incorporate corrections to the microwave transmission mentioned above (See Sec.5). The resulting ratio is, For the prescribed cryostat temperatures, the two pump powers are kept low enough (n ± p ≈ 10 2 ) to ensure that classical noise in the microwave resonator and mechanical bath heating effects are negligible, so that the occupation factor inferred from the sideband areas quantifies the thermal occupation factor of the mechanical mode: n ± m = n th m = 1/(exp(h ωm kBT ) − 1). Furthermore, the pump detunings used in the calibration and measurement routines are small enough so that detuning corrections, on the order of ( 2δ κ ) 2 can be ignored [4]. Noise floor calibration The increase in the device noise floor at cavity resonance is measured relative to the noise floor of an impedance matched through connection with matching amplifier conditions. Following Eq. (S.1.12) and the noise floor treatment of previous work [7], the noise floor increase is proportional to n th eff and n th R , where n th eff = 2n th c − n th R as above, and where λ is the conversion factor for ∆η in units of n th c . To see how this behavior affects our measurements, we consider the sideband powers in the presence of classical noise n th c , n th R . Integrating the noise power under the Lorentzian of Eqs. where we follow the notation of Eq. (S.1.18) and have set α σ = β = 1, γ + opt = γ − opt = γ opt . The n th R contribution does not affect the slope of either data set. For sideband imbalance and average measurements, we expect linear dependence on ∆η with slope proportional to λ. The n th R factor does, however, add fixed offsets to both data sets. For the sideband difference, the contribution is suppressed relative to the quantum offset of "+1". With the experimental parameters n th R = 0.34 ± 0.03, κ = 2π × (860 ± 10) kHz, and κ R = 2π ×(450±30) kHz, we estimate an offset correction of 2κ R −κ κ R n th R ≈ (3±4)×10 −2 , well within the measurement uncertainty for sideband imbalance. This is not the case for the sideband average, where we expect a correction to the offset that is significant when compared to the mechanical quantum contribution of "+1/2". 
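Two of the calibration steps above reduce to one-line computations: a linear fit of γ_tot against pump photon number to extract γ_m and g_0, and the offset-correction estimate (2κ_R − κ)/κ_R · n_R^th. The sketch below illustrates both; the value of g_0 and the synthetic linewidth data are assumptions, while κ, κ_R, and n_R^th follow the quoted numbers.

```python
import numpy as np

# --- Back-action linewidth calibration: gamma_tot = gamma_m + (4*g0^2/kappa)*n_p ---
kappa = 2 * np.pi * 860e3                                     # quoted total linewidth
g0_true, gamma_m_true = 2 * np.pi * 30.0, 2 * np.pi * 10.0    # assumed values
n_p = np.logspace(3, 7, 9)                                    # red-pump photon numbers
gamma_tot = gamma_m_true + 4 * g0_true**2 / kappa * n_p       # synthetic "data"

slope, gamma_m_fit = np.polyfit(n_p, gamma_tot, 1)            # linear in n_p
g0_fit = np.sqrt(slope * kappa / 4)
print(f"g0/2pi = {g0_fit/(2*np.pi):.1f} Hz, gamma_m/2pi = {gamma_m_fit/(2*np.pi):.1f} Hz")

# --- Offset correction quoted above: (2*kappa_R - kappa)/kappa_R * n_th_R ---
kappa_R = 2 * np.pi * 450e3
n_th_R = 0.34
print(f"offset correction ~ {(2*kappa_R - kappa)/kappa_R * n_th_R:.3f}  (paper: ~3e-2)")
```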
Output port occupation We estimate the occupation factor of the output port, n th R , by measuring the microwave noise spectrum absent any microwave pumping. In this setup, we assume that n th c is solely due to noise radiating into the device from the the isolated port of a cryogenic circulator thermalized to an elevated temperature, so that n th c = n th R κ R /κ. Following [7] and Eqs. (S.1.12), (S.4.1), the detected noise floor, now spanning frequencies over the cavity linewidth and also including the noise contribution from the cryogenic amplifier,S HEMT , followsS Taking κ R κ from independent calibration measurements and λ from the sideband imbalance measurements, we fit the observed Lorentzian to find n th R = (3.4 ± 0.3) × 10 −1 . A typical noise floor spectrum is shown in Fig. S2. Figure S2: Example spectra of microwave noise taken at zero pumping (light blue) with Lorentzian fit (dark blue). Asymmetric microwave transmission We find that the driven response of the microwave circuit noticeably deviates from a Lorentzian lineshape at frequencies outside the resonator linewidth. One distinct feature of the observed spectrum is an antiresonance (Fig.S4), indicating interference of multiple current channels at the output of the microwave circuit. As a first step to understand this behavior, we model the input and output transmission line discontinuities with shunt capacitors [5], C s,in and C s,out , as presented in Fig. S3. Figure S3: Equivalent microwave circuit model with shunt capacitors. We recover the circuit transmission, S s 21 = Vamp V0 , by applying Kirchoff's Circuit Law over the equivalent circuit model and solving the resulting system of equations. Based on estimates of the circuit parameters, the transmission can be approximated as S s 21 (ω) = S 0 21 (ω) + 2R L · jω c C s,out , (S.6.1) where S 0 21 (ω) = − √ κ R κ L j(ω−ωc)+κ/2 is the Lorentzian transmission for the case of C s,in , C s,out → 0, j = √ −1, and R L = 50Ω is the source impedance. Notably, we also find that the voltage at the resonator, V smr (ω), is negligibly modified by these shunt capacitors. This shows that the additional channels can be treated solely as modification to the cavity output scattering rate. Fitting S 21 data from our device to this model, we estimate C out = C s,out = 2.7 fF. We believe that these values are realistically acceptable given the geometry of our device and the proximity of the output coupler to both the tank circuit and the ground plane. Additionally, shunt capacitance from wirebonds will also contribute to this effect.
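The shunt-capacitor transmission model of Eq. (S.6.1) can be evaluated directly to see the antiresonance. In the sketch below, κ, κ_R, and C_s,out follow the quoted values, but the left-port coupling κ_L and the resonance frequency ω_c are assumed, so the predicted antiresonance position is illustrative only.

```python
import numpy as np

# Two-port transmission with an output shunt capacitor (SI Eq. S.6.1):
#   S21(w) = S21_0(w) + 2*R_L * j*w_c*C_out,  S21_0 = -sqrt(k_R*k_L)/(j*(w - w_c) + k/2)
kappa = 2 * np.pi * 860e3
kappa_R = 2 * np.pi * 450e3
kappa_L = 2 * np.pi * 300e3          # assumed left-port coupling
omega_c = 2 * np.pi * 7.5e9          # assumed resonance frequency
R_L, C_out = 50.0, 2.7e-15           # source impedance, fitted shunt capacitance

delta = 2 * np.pi * np.linspace(-40e6, 40e6, 4001)   # detuning from resonance
S21_0 = -np.sqrt(kappa_R * kappa_L) / (1j * delta + kappa / 2)
S21 = S21_0 + 2 * R_L * 1j * omega_c * C_out

i_min = np.argmin(np.abs(S21))       # the antiresonance from interference of channels
print(f"antiresonance at detuning {delta[i_min] / (2*np.pi*1e6):+.1f} MHz, "
      f"|S21|_min = {np.abs(S21[i_min]):.2e}")
```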
A Statistical Analysis of Summarization Evaluation Metrics using Resampling Methods The quality of a summarization evaluation metric is quantified by calculating the correlation between its scores and human annotations across a large number of summaries. Currently, it is unclear how precise these correlation estimates are, nor whether differences between two metrics' correlations reflect a true difference or if it is due to mere chance. In this work, we address these two problems by proposing methods for calculating confidence intervals and running hypothesis tests for correlations using two resampling methods, bootstrapping and permutation. After evaluating which of the proposed methods is most appropriate for summarization through two simulation experiments, we analyze the results of applying these methods to several different automatic evaluation metrics across three sets of human annotations. We find that the confidence intervals are rather wide, demonstrating high uncertainty in the reliability of automatic metrics. Further, although many metrics fail to show statistical improvements over ROUGE, two recent works, QAEval and BERTScore, do in some evaluation settings. Introduction Accurately estimating the quality of a summary is critical for understanding whether one summarization model produces better summaries than another. Because manually annotating summary quality is costly and time consuming, researchers have developed automatic metrics that approximate human judgments (Lin, 2004;Tratz and Hovy, 2008;Giannakopoulos et al., 2008;Zhao et al., 2019;Deutsch et al., 2021, among others). Currently, automatic metrics themselves are evaluated by calculating the correlations between their 1 Our code is available at https://github.com/ CogComp/stat-analysis-experiments. scores and human-annotated quality scores. The value of a metric's correlation represents how similar its scores are to humans', and one metric is said to be a better approximation of human judgments than another if its correlation is higher. However, there is no standard practice in summarization for calculating confidence intervals (CIs) for the correlation values or running hypothesis tests on the difference between two metrics' correlations. This leaves the community in doubt about how effective automatic metrics really are at replicating human judgments as well as whether the difference between two metrics' correlations is truly reflective of one metric being better than the other or if it is an artifact of random chance. In this work, we propose methods for calculating CIs and running hypothesis tests for summarization metrics. After demonstrating the usefulness of our methods through a pair of simulation experiments, we then analyze the results of applying the statistical analyses to a set of summarization metrics and three datasets. The methods we propose are based on the resampling techniques of bootstrapping (Efron and Tibshirani, 1993) and permutation (Noreen, 1989). Resampling techniques are advantageous because, unlike parametric methods, they do not make assumptions which are invalid in the case of summarization ( §3.1; §4.1). Bootstrapping and permutation techniques use a subroutine that samples a new dataset from the original set of observations. 
Since the correlation of an evaluation metric to human judgments is a function of matrices of values (namely the metric's scores and human annotations for multiple systems across multiple input texts; §2), this subroutine must sample new matrices in order to generate a new instance, in contrast to standard applications of bootstrapping and permutation that sample vectors of numbers. To that end, we propose three different bootstrapping ( §3.2) and permutation ( §4.2) techniques for resampling ma-trices, each of which makes different assumptions about whether the systems or inputs are constant or variable in the calculation. In order to evaluate which resampling methods are most appropriate for summarization, we perform two simulations. The first demonstrates that the bootstrapping resampling technique which assumes both the systems and inputs are variable produces CIs that generalize best to held-out data ( §5.1). The second shows that the permutation test which makes the same assumption has more statistical power than the equivalent bootstrapping method and Williams' test (Williams, 1959), a parametric hypothesis test that is popular in machine translation ( §5.2). Finally, we analyze the results of estimating CIs and applying hypothesis testing to a set of summarization metrics using annotations on English single-and multi-document datasets (Dang and Owczarzak, 2008;Fabbri et al., 2021;Bhandari et al., 2020). We find that the CIs for the metrics' correlations are all rather wide, indicating that the summarization community has relatively low certainty in how similarly automatic metrics rank summaries with respect to humans ( §6.1). Additionally, the hypothesis tests reveal that QAEval (Deutsch et al., 2021) and BERTScore (Zhang et al., 2020) emerge as the best metrics in several of the experimental settings, whereas no other metric consistently achieves statistically better performance than ROUGE ( §6.2; Lin, 2004). Although we focus on summarization, the techniques we propose can be applied to evaluate automatic evaluation metrics in other text generation tasks, such as machine translation or structure-totext. The contributions of this work include (1) a proposal of methods for calculating CIs and running hypothesis tests for summarization metrics, (2) simulation experiments that provide evidence for which methods are most appropriate for summarization, and (3) an analysis of the results of the statistical analyses applied to various summarization metrics on three datasets. Preliminaries: Evaluating Metrics Summarization evaluation metrics are typically used to either argue that a summarization system generates better summaries than another or that an individual summary is better than another for the same input. How similarly an automatic metric does these two tasks with respect to humans is quantified as follows. Let X be an evaluation metric that is used to approximate some ground-truth metric Z. For example, X could be ROUGE and Z could be a human-annotated summary quality score. The similarity of X and Z is evaluated by calculating two different correlation terms on a set of summaries. First, the summaries from summarization systems S = {S 1 , . . . , S N } on input document(s) D = {D 1 , . . . , D M } are scored using X and Z. We refer to these scores as matrices X, Z ∈ R N ×M in which x j i and z j i are the scores of X and Z on the summary output by system S i on input D j . 
Then, the correlation between X and Z is calculated at one of the following levels: where CORR(·) typically calculates the Pearson, Spearman, or Kendall correlation coefficients. 2 These two correlations quantify how similarly X and Z score systems and individual summaries per-input for systems S and documents D. The system-level correlation r SYS calculates the correlation between the scores for each system (equal to the average score across inputs), and the summarylevel correlation r SUM calculates an average of the correlations between the scores per-input. 3 The correlations r SYS and r SUM are also used to reason about whether X is a better approximate of Z than another metric Y is, typically by showing that r(X, Z) > r(Y, Z) for either r. Correlation Confidence Intervals Although the strength of the relationship between X and Z on one dataset is quantified by the correlation levels r SYS and r SUM , each r is only a point 2 For clarity, we will refer to rSUM and rSYS as correlation levels and Pearson, Spearman, and Kendall as correlation coefficients. 3 Other definitions for the summary-level correlation have been proposed, including directly calculating the correlation between the scores for all summaries without grouping them by input document (Owczarzak and Dang, 2011). However, the definition we use is consistent with recent work on evaluation metrics (Peyrard et al., 2017;Zhao et al., 2019;Bhandari et al., 2020;Deutsch et al., 2021) Our work can be directly applied to other definitions as well. estimate of the true correlation of the metrics, denoted ρ, on inputs and systems distributed similarly to those in D and in S. Although we cannot directly calculate ρ, it is possible to estimate it through a CI. The Fisher Transformation The standard method for calculating a CI for a correlation is the Fisher transformation (Fisher, 1992). The transformation maps a correlation coefficient to a normal distribution, calculates the CI on the normal curve, and applies the reverse transformation to obtain the upper and lower bounds: where r is the correlation coefficient, n is the number of observations, z α/2 is the critical value of a normal distribution, and b and c are constants. 4 Applying the Fisher transformation to calculate CIs for ρ SYS and ρ SUM is potentially problematic. First, it assumes that the input variables are normally distributed (Bonett and Wright, 2000). The metrics' scores and human annotations on the datasets that we experiment with are, in general, not normally distributed (see Appendix A). Thus, this assumption is violated, and we expect this is the case for other summarization datasets as well. Second, it is not clear whether the transformation should be applied to the summary-level correlation since its final value is an average of correlations, which is not strictly a correlation. 5 Bootstrapping A popular nonparametric method of calculating a CI is bootstrapping (Efron and Tibshirani, 1993). Bootstrapping is a procedure that estimates the distribution of a test statistic by repeatedly sampling with replacement from the original dataset and calculating the test statistic on each sample. Unlike the Fisher transformation, bootstrapping is a very flexible procedure that does not assume the data is normally distributed nor that the test statistic is a correlation, making it appropriate for summarization. 4 b = 3, 3, 4 and c = 1, 1 + r 2 /2, √ .437 for Pearson, Spearman, and Kendall, respectively (Bonett and Wright, 2000). 
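For concreteness, the two correlation levels of §2 can be written as a few lines of code operating on the N × M score matrices; the sketch below uses Pearson's coefficient, but any of the three coefficients could be substituted, and the toy matrices are synthetic.

```python
import numpy as np
from scipy.stats import pearsonr

def system_level(X, Z, corr=pearsonr):
    """r_SYS: correlation between per-system average scores (systems = rows)."""
    return corr(X.mean(axis=1), Z.mean(axis=1))[0]

def summary_level(X, Z, corr=pearsonr):
    """r_SUM: average over inputs (columns) of the per-input correlation across systems."""
    return np.mean([corr(X[:, j], Z[:, j])[0] for j in range(X.shape[1])])

# Toy example: 5 systems scored on 10 inputs by a metric X and by humans Z.
rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 10))
X = Z + rng.normal(scale=0.5, size=Z.shape)   # metric = noisy human scores
print(f"r_SYS = {system_level(X, Z):.2f}, r_SUM = {summary_level(X, Z):.2f}")
```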
5 Correlation coefficients cannot be averaged because they are not additive in the arithmetic sense, however it is standard practice in summarization. However, it is not clear how to perform bootstrap sampling for correlation levels. Consider a more standard bootstrapped CI calculation for the mean accuracy of a question-answering model on a dataset with k instances. Since the mean accuracy is a function of the k individual correct/incorrect labels, each bootstrap sample can be constructed by sampling with replacement from the original k instances k times. In contrast, the correlation levels are functions of the matrices X and Z, so each bootstrap sample should also be a pair of matrices of the same size that are sampled from the original data. There are at least three potential methods for sampling the matrices: 1. BOOT-SYSTEMS: Randomly sample with replacement N systems from S, then select the sampled system scores for all of the inputs. 2. BOOT-INPUTS: Randomly sample with replacement M inputs from D, then select all of the system scores for the sampled inputs. 3. BOOT-BOTH: Randomly sample with replacement M inputs from D and N systems from S, then select the sampled system scores for the sampled inputs. Once the samples are taken, the corresponding values from X and Z are selected to create the sampled matrices. An illustration of each method is shown in Figure 1. Each sampling method makes its own assumptions about the degrees of freedom in the sampling process that results in different interpretations of the corresponding CIs. BOOT-INPUTS assumes that there is only uncertainty on the inputs while the systems are held constant. CIs derived from this sampling technique would express a range of end for 10: Append r(X s , Z s ) to samples 11: end for 12: , u ← (α/2)×100 and (1−α/2)×100 percentiles of samples 13: return , u values for the true correlation ρ between X and Z for the specific set of systems S and inputs from the same distribution as those in D. The opposite assumption is made for BOOT-SYSTEMS (uncertainty in systems, inputs are fixed). BOOT-BOTH, which can be viewed as sampling systems followed by sampling inputs, assumes uncertainty on both the systems and the inputs. Therefore the corresponding CI estimates ρ for systems and inputs distributed the same as those in S and D. Algorithm 1 contains the pseudocode for calculating a CI via bootstrapping using the BOOT-BOTH sampling method. In §5.1 we experimentally evaluate the Fisher transformation and the three bootstrap sampling methods, then analyze the CIs of several different metrics in §6.1. Significance Testing Although CIs express the strength of the correlation between two metrics, they do not directly express whether one metric X correlates to another Z better than Y does due to their shared dependence on Z. This statistical analysis is performed by hypothesis testing. The specific one-tailed hypothesis test we are interested in is: Williams' Test One method for hypothesis testing the difference between two correlations with a dependent variable that is used frequently to compare machine translation metrics is Williams' test (Williams, 1959). It uses the pairwise correlations between X, Y , and Z to calculate a t-statistic and a corresponding p-value. 6 Williams' test is frequently used to compare machine translation metrics' performances at the system-level (Mathur et al., 2020, among others). 
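Before continuing with Williams' test, here is a minimal sketch of Algorithm 1's BOOT-BOTH procedure: systems and inputs are resampled with replacement and a percentile interval is taken over the recomputed correlations. Degenerate resamples are not handled, and the correlation function is assumed to follow the interface of the sketch above.

```python
import numpy as np

def boot_both_ci(X, Z, corr_fn, k=1000, alpha=0.05, seed=0):
    """BOOT-BOTH percentile CI: resample systems (rows) and inputs (columns)
    with replacement, recomputing the correlation level on each sample."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    samples = []
    for _ in range(k):
        rows = rng.integers(0, N, size=N)     # sample systems with replacement
        cols = rng.integers(0, M, size=M)     # sample inputs with replacement
        Xs, Zs = X[np.ix_(rows, cols)], Z[np.ix_(rows, cols)]
        samples.append(corr_fn(Xs, Zs))
    lower, upper = np.percentile(samples, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper

# Example with the system-level correlation defined earlier:
# lo, hi = boot_both_ci(X, Z, system_level)
```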
However, the test faces the same issues as the Fisher transformation: It assumes the input variables are normally distributed (Dunn and Clark, 1971), and it is not clear whether the test should be applied at the summary-level. Permutation Tests Bootstrapping can be used to calculate a p-value in the form of a paired bootstrap test in which the sampling methods described in §3.2 can be used to resample new matrices from X, Y , and Z in parallel (details omitted for space). However, an alternative and closely related nonparametric hypothesis test is the permutation test (Noreen, 1989). Permutation tests tend to be used more frequently than paired bootstrap tests for hypothesis testing because they directly test whether any observed difference between two values is due to random chance. In contrast, paired bootstrap tests indirectly reason about this difference by estimating the variance of the test statistic. Similarly to bootstrapping, a permutation test applied to two paired samples estimates the distribution of the test statistic under H 0 by calculating its value on new resampled datasets. In contrast to bootstrapping, the resampled datasets are constructed by randomly permuting which sample each observation in a pair belongs to (i.e., resampling without replacement). This relies on assuming the pair is exchangeable under H 0 , which means H 0 is true for either sample assignment for the pair. Then, the p-value is calculated as the proportion of times the test statistic across all possible permutations is greater than the observed value. A significant p-value implies the observed test statistic is very unlikely to occur if H 0 were true, resulting in its rejection. In practice, calculating the distribution of H 0 across all possible permutations is intractable, so it is instead estimated on a large number of randomly sampled permutations. 7 For example, a permutation test applied to testing the difference between two QA models' mean : An illustration of the three permutation methods which swap system scores, document scores, or scores for individual summaries between X and Y . Algorithm 2 Permutation Hypothesis Test if random Boolean is true then swap 8: if δ s > δ then 17: c ← c + 1 18: end if 19: end for 20: return c/k accuracies on the same dataset would sample a permutation by swapping the models' outputs for the same input. Under H 0 , the models' mean accuracies are equal, so randomly exchanging the outputs is not expected to change their means. In the case of evaluation metrics, each permutation sample can be taken by randomly swapping the scores in X and Y . There are at least three ways of doing so: 1. PERM-SYSTEMS: For each system, swap its scores for all inputs with probability 0.5. 2. PERM-INPUTS: For each input, swap its scores for all systems with probability 0.5. 3. PERM-BOTH: For each summary, swap its scores with probability 0.5. To account for differences in scale, we standardize X and Y before performing the permutation. Fig. 2 contains an illustration of each method, and the pseudocode for a permutation test using the PERM-BOTH method is provided in Alg. 2. Similarly to the bootstrap sampling methods, each of the permutation methods makes assumptions about the system and input document underlying distribution. This results in different interpretations of how the tests' conclusions will generalize. 
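The PERM-BOTH test of Algorithm 2 can likewise be sketched in a few lines; the per-matrix standardization below is one reasonable reading of "standardize X and Y," and the correlation function is again assumed to follow the earlier interface.

```python
import numpy as np

def perm_both_test(X, Y, Z, corr_fn, k=1000, seed=0):
    """PERM-BOTH test of H0: corr(X,Z) - corr(Y,Z) <= 0.  Scores are standardized,
    then each summary's X/Y scores are swapped with probability 0.5."""
    rng = np.random.default_rng(seed)
    zscore = lambda A: (A - A.mean()) / A.std()
    Xs, Ys = zscore(X), zscore(Y)
    delta = corr_fn(Xs, Z) - corr_fn(Ys, Z)           # observed difference
    count = 0
    for _ in range(k):
        swap = rng.random(X.shape) < 0.5              # per-summary swap mask
        Xp = np.where(swap, Ys, Xs)
        Yp = np.where(swap, Xs, Ys)
        if corr_fn(Xp, Z) - corr_fn(Yp, Z) > delta:
            count += 1
    return count / k                                  # estimated p-value

# p = perm_both_test(X, Y, Z, system_level); reject H0 if p < 0.05.
```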
Since PERM-SYSTEMS randomly assigns system scores for all documents in D to either sample, we only expect the test's conclusion to generalize to a system distributed similarly to those in S evaluated on the specific set of documents D. The opposite is true for PERM-INPUTS. The results for PERM-BOTH (which can be viewed as first swapping systems followed by swapping inputs) are expected to generalize for both systems and documents distributed similarly to those in S and D. In §5.2 we run a simulation to compare the different hypothesis testing approaches, then analyze the results of hypothesis tests applied to summarization metrics in §6.2. Simulation Experiments We run two sets of simulation experiments in order to determine which CI ( §5.1) and hypothesis test ( §5.2) methods are most appropriate for summarization metrics. The datasets used in the simulations are the multi-document summarization dataset TAC'08 (Dang and Owczarzak, 2008) and two subsets of the single-document summarization CNN/DM dataset (Nallapati et al., 2016) et al., 2019) scores, respectively, by human annotators. The scores of the automatic metrics are correlated to these human annotations. Confidence Interval Simulation In practice, evaluation metrics are almost always used to score summaries produced by systems S on inputs D which are disjoint (or nearly disjoint) from and assumed to be distributed similarly to the data that was used to calculate the CI, S and D. It is still desirable to use the CI as an estimate of the correlation of a metric on S and D , however this scenario violates assumptions made by some of the bootstraping sampling methods (e.g., BOOT-SYSTEMS assumes that D is fixed). This simulation aims to demonstrate the effect of violating these assumptions on the accuracy of the CIs. Setup. The simulation works as follows. The systems S and inputs D are each randomly partitioned into two equally sized disjoint sets S A , S B , D A , and D B . Then the submatrices X A , Z A , X B , and Z B are selected from X and Z based on the system and input partitions. Matrices X A and Z A are used to calculate a 95% CI using one of the methods described in §3, and then it is checked whether sample correlation r(X B , Z B ) is contained by the CI. The entire procedure is repeated 1000 times, and the proportion of times the CI contains the sample correlation is calculated. It is expected that a CI which generalizes well to the held-out data should contain the sample correlation 95% of the time under the assumption that the data in A and B is distributed similarly. The larger the difference from 95%, the worse the CI is at estimating the correlation on the held-out data. The results of the simulation calculated on TAC'08 and CNN/DM using both the Fisher trans-formation and the different bootstrap sampling methods to CIs for QAEval-F 1 (Deutsch et al., 2021) are shown in Table 1. 8 BOOT-BOTH generalizes the best. Among the bootstrap methods, BOOT-BOTH produces CIs that come closest to the ideal 95% rate. Any deviations from this number reflect that the assumption that all of the inputs and systems are distributed similarly is not true, but overall violating this assumption does not have a major impact. The other bootstrap methods, which sample only systems or inputs, captures the correlation on the held-out data far less than 95% of the time. For instance, the CIs for ρ SYS on Bhandari et al. (2020) only successfully estimate the held-out correlation on 80% and 68% of trials. 
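The split-half simulation described in the setup above is itself only a small loop around the CI routine. The sketch below mirrors that setup; it assumes the CI function and correlation level follow the interfaces of the earlier sketches and omits the repetition over metrics and datasets.

```python
import numpy as np

def coverage_trial(X, Z, corr_fn, ci_fn, rng):
    """One trial: compute a CI on partition A and check whether it contains the
    sample correlation on the held-out partition B (disjoint systems and inputs)."""
    N, M = X.shape
    rows, cols = rng.permutation(N), rng.permutation(M)
    rA, rB = rows[: N // 2], rows[N // 2:]
    cA, cB = cols[: M // 2], cols[M // 2:]
    lo, hi = ci_fn(X[np.ix_(rA, cA)], Z[np.ix_(rA, cA)], corr_fn)
    r_held_out = corr_fn(X[np.ix_(rB, cB)], Z[np.ix_(rB, cB)])
    return lo <= r_held_out <= hi

# rng = np.random.default_rng(0)
# hits = np.mean([coverage_trial(X, Z, system_level, boot_both_ci, rng)
#                 for _ in range(1000)])   # should be close to 0.95 for a good CI
```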
This means that a 95% CI calculated using BOOT-INPUTS is actually only a 68% CI on the held-out data. This pattern is the same across the different correlation levels and datasets. The lower values for only sampling inputs indicates that more variance comes from the systems rather than the inputs. Fisher analysis. The Fisher transformation at the system-level creates CIs that generalize worse than BOOT-BOTH. The summary-level CI captures the held-out sample correlation 100% of the time, implying that the CI width is too large to be useful. We believe this is due to the fact that as the absolute value of r(X, Z) decreases, the width of the Fisher CI increases. Summary-level correlations are lower than system-level correlations (see §6.1), and therefore Fisher results in a worse CI estimate at the summary-level. Conclusion. This experiment presents strong evidence that violating the assumptions that either the systems/inputs are fixed or that the data is normally distributed does result in worse CIs. Hence, the BOOT-BOTH method provides the most accurate CIs for scenarios in which summarization metrics are frequently used. Power Analysis The power of a hypothesis test is the probability of accepting the alternative hypothesis given that it is actually true (equal to 1.0 -the type-II error rate). It is desirable to have as high of a power as possible in order to avoid missing a significant difference between metrics. This simulation estimates the power of each of the hypothesis tests. Setup. Measuring power requires a scenario in which it is known that ρ is greater for one metric than another (i.e., H 1 is true). Since this is not known to be true for any pair of proposed evaluation metrics, we artificially create such a scenario by adding randomness to the calculation of ROUGE-1. 9 We define R k to be ROUGE-1 calculated using a random k% of the candidate summary's tokens. We assume that since R k only evaluates a summary with k% of its tokens, it is quite likely that it is a worse metric than standard ROUGE-1 for k < 100. To estimate the power, we score summaries with ROUGE-1 and R k for different k values and count how frequently each hypothesis test rejects H 0 in favor of identifying ROUGE-1 as a superior metric. This trial is repeated 1000 times, and the proportion of significant results is the estimate of the power. Since the various hypothesis tests make different assumptions about whether the systems and inputs are fixed or variable, it is not necessarily fair to directly compare their powers. Because the assumptions of BOOT-BOTH and PERM-BOTH most closely align with the typical use case of summarization, we compare their powers. We additionally include Williams' test because it is frequently used for machine translation metrics and it produces interesting results, discussed below. PERM-BOTH has the highest power. the CNN/DM annotations by Fabbri et al. (2021). We find that PERM-BOTH has the highest power among the three tests for all values of k. As k approaches 100%, the difference between ROUGE-1 and R k becomes smaller and harder to detect, thus the power for all methods approaches 0. BOOT-BOTH has lower power than PERM-BOTH both at the summary-level and system-level, in which it is near 0. This result is consistent with permutation tests being more useful for hypothesis testing than their bootstrapping counterparts. We believe the power differences in both levels are due to the variance of the two correlation levels. 
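The power estimate described in the setup above is simply the rejection rate over repeated trials. The sketch below captures that loop; rouge1_scores, rouge1_on_random_tokens, and human_scores are hypothetical stand-ins for the actual scoring functions, which are not shown here.

```python
def estimate_power(score_X, score_Y, Z, test_fn, trials=1000, alpha=0.05):
    """Fraction of trials in which the test detects X > Y (a proxy for power).
    score_Y() is assumed to re-score summaries with fresh randomness each trial,
    as R_k does when it samples a new k% of tokens."""
    rejections = 0
    for _ in range(trials):
        X = score_X()            # e.g. ROUGE-1 scores, fixed across trials
        Y = score_Y()            # e.g. R_k scores, re-randomized each trial
        if test_fn(X, Y, Z) < alpha:
            rejections += 1
    return rejections / trials

# power = estimate_power(lambda: rouge1_scores,
#                        lambda: rouge1_on_random_tokens(k=0.8),
#                        human_scores,
#                        lambda X, Y, Z: perm_both_test(X, Y, Z, summary_level))
```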
As we observe in §6.1, the system-level CIs have significantly larger variance than at the summary-level, making it harder for the paired bootstrap to reject the system-level H 0 . Williams' test has low power. Interestingly, the power of Williams' test for all k is ≈ 0, implying the test never rejects H 0 in this simulation. This is surprising because Williams' test is frequently used to compare machine translation metrics at the system-level and does find differences between metrics. We believe this is due to the strength of the correlations of ROUGE-1 to the ground-truth judgments as follows. The p-value calculated by Williams is a function of the pairwise correlations of X, Y , and Z and the number of observations. The closer both r(X, Z) and r(Y, Z) are to 0, the higher the pvalue. The correlation of ROUGE-1 in this simulation is around 0.6 and 0.3 at the system-and summary-levels. In contrast, the system-level correlations for the metrics submitted to the Workshop on Machine Translation (WMT) 2019's metrics shared task for de-en are on average 0.9 (Ma et al., 2019). Among the 231 possible pairwise metric comparisons in WMT'19 for de-en, Williams' test yields 81 significant results. If the correlations are shifted to have an average value of 0.6, only 3 significant results are found. Thus we conclude that Williams' test's power is worse for detecting differences between lower correlation values. Because this simulation is performed with summarization metrics on a real summarization dataset, we believe it is faithful enough to a realistic scenario to conclude that Williams' test does indeed have low power when applied to summarization metrics. However, we do not expect Williams' test to have 0 power when used to detect differences between machine translation metrics. Conclusion. Since PERM-BOTH has the best statistical power at both the system-and summarylevels, we recommend it for hypothesis testing the difference between summarization metrics. Summarization Analysis We run two experiments that calculate CIs ( §6.1) and run hypothesis tests ( §6.2) for many different summarization metrics on the TAC'08 and CNN/DM datasets ( §5). Each experiment also includes an analysis which discusses the implications of the results for the summarization community. Confidence Intervals Confidence intervals are large. The most apparent observation is that the CIs are rather large, es- The size of the CIs has serious implications for how trustable existing automatic evaluations are. Since Kendall's τ is a function of the number of pairs of systems in which the automatic metric and ground-truth agree on their rankings, the metrics' CIs can be translated to upper-and lower-bounds on the number of incorrect rankings. Specifically, ROUGE-2's system-level CI on Fabbri et al. (2021) implies it incorrectly ranks systems with respect to humans 9-54% of the time. This means that potentially more than half of the time ROUGE ranks one summarization model higher than another on CNN/DM, it is wrong according to humans, a rather surprising result. However, it is consistent with similar findings by Rankel et al. (2013), who estimated the same result to be around 37% for top-performing systems on TAC 2008-2011. We suspect that the true ranking accuracy of ROUGE (as well as the other metrics) is not likely to be at the extremes of the confidence interval due to the distribution of the bootstrapping samples shown in Fig. 4. 
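As a brief aside on the Williams' test behaviour discussed above, the sketch below computes the test statistic in its commonly used form for comparing two dependent correlations that share one variable, with n − 3 degrees of freedom. The exact formulation and implementation used in the WMT evaluations may differ in details, so treat this as an illustrative approximation rather than the authors' procedure.

```python
import numpy as np
from scipy.stats import t as t_dist

def williams_test(r_xz, r_yz, r_xy, n):
    """One-sided Williams' test of H1: r_xz > r_yz; returns (t statistic, p-value)."""
    K = 1 - r_xz**2 - r_yz**2 - r_xy**2 + 2 * r_xz * r_yz * r_xy
    r_bar = (r_xz + r_yz) / 2
    num = (r_xz - r_yz) * np.sqrt((n - 1) * (1 + r_xy))
    den = np.sqrt(2 * K * (n - 1) / (n - 3) + r_bar**2 * (1 - r_xy)**3)
    t_stat = num / den
    return t_stat, 1 - t_dist.cdf(t_stat, df=n - 3)

# Same gap between correlations, weaker absolute values -> much larger p-value.
print(williams_test(0.95, 0.90, 0.8, 25))
print(williams_test(0.65, 0.60, 0.8, 25))
```

Plugging in correlations around 0.9 versus around 0.6 with the same gap reproduces the qualitative effect described above: the identical difference yields a far larger p-value when the shared correlations are weaker.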
However, this experiment highlights the uncertainty around how well automatic metrics replicate human annotations of summary quality. An improved ROUGE score does not necessarily mean a model produces better summaries. Likewise, not improving ROUGE should not disqualify a model from further consideration. Conse-quently, researchers should rely less heavily on automatic metrics for determining the quality of summarization models than they currently do. Instead, the community needs to develop more robust evaluation methodologies, whether it be task-specific downstream evaluations or faster and cheaper human evaluation. Comparing CNN/DM annotations. The CIs calculated on the annotations by Bhandari et al. (2020) are in general higher and more narrow than on Fabbri et al. (2021). We believe this is due to the method of selecting the summaries to be annotated for each of the datasets. Bhandari et al. (2020) selected summaries based on a stratified sample of automatic metric scores, whereas Fabbri et al. (2021) selected summaries uniformly at random. Therefore, the summaries in Bhandari et al. (2020) are likely easier to score (due to a mix of high-and low-quality summaries) and are less representative of the real data distribution than those in Fabbri et al. (2021). Hypothesis Testing Although nearly all of the CIs for the metrics are overlapping, this does not necessarily mean that no metric is statistically better than another since the differences between two metrics' correlations could be significant. In Fig. 5, we report the p-values for testing H 0 : ρ(X , Z) − ρ(Y, Z) ≤ 0 using the PERM-BOTH permutation test at the system-and summary-levels on TAC'08 and CNN/DM for all possible metric combinations (see Azer et al. (2020) for a discussion about how to interpret p-values). The Bonferroni correction (which lowers the significance level for rejecting each individual null hypothesis such that the probability of making one or more type-I errors is bounded by α; Bonferroni, 1936;Dror et al., 2017) was applied to test suites grouped by the X metric at α = 0.05. 10 A significant result means that we conclude that ρ(X , Z) > ρ(Y, Z). The metrics which are identified as being statistically superior to others at the system-level on TAC'08 and CNN/DM using the annotations from Fabbri et al. (2021) are QAEval and BERTScore. Although they are statistically indistinguishable from each other, QAEval does improve over more metrics than BERTScore does on TAC'08. At the summary-level, BERTScore has significantly better results than all other metrics. Overall, none of the other metrics consistently outperform all variants of ROUGE. Results using either the Spearman or Kendall correlation coefficients are largely consistent with Fig. 5, although QAEval no longer improves over some metrics, such as ROUGE-2, at the system-level on TAC'08. The results on the CNN/DM annotations provided by Bhandari et al. (2020) are less clear. The ROUGE variants appear to perform well, a conclusion also reached by Bhandari et al. (2020). The hypothesis tests also find that S3 is statistically better than most other metrics. S3 scores systems using a learned combination of features which includes ROUGE scores, likely explaining this result. Similarly to the CI experiment, the results on the annotations provided by Bhandari et al. (2020) and Fabbri et al. (2021) are rather different, potentially due to differences in how the datasets were sampled. Fabbri et al. 
(2021) uniformly sampled summaries to annotate, whereas Bhandari et al. (2020) sampled them based on their approximate quality scores, so we believe the dataset of Fabbri et al. (2021) is more likely to reflect the real data distribution. Figure 5: The results of running the PERM-BOTH hypothesis test to find a significant difference between metrics' Pearson correlations. A blue square means the test returned a significant p-value at α = 0.05, indicating the row metric has a higher correlation than the column metric. An orange outline means the result remained significant after applying the Bonferroni correction. Limitations The large widths of the CIs in §6.1 and the lack of some statistically significant differences between metrics in §6.2 are directly tied to the size of the datasets that were used in our analyses. However, to the best of our knowledge, the datasets we used are some of the largest available with annotations of summary quality. Therefore, the results presented here are our best efforts at accurately measuring the metrics' performances with the data available. If we had access to larger datasets with more summaries labeled across more systems, we suspect that the scores of the human annotators and automatic metrics would stabilize to the point where the CI widths would narrow and it would be easier to find significant differences between metrics. Although it is desirable to have larger datasets, collecting them is difficult because obtaining human annotations of summary quality is expensive and prone to noise. Some studies report having difficulty obtaining high-quality judgments from crowdworkers (Gillick and Liu, 2010; Fabbri et al., 2021), whereas others have been successful using the crowdsourced Lightweight Pyramid score (Shapira et al., 2019), which was used in Bhandari et al. (2020). Further, it is unclear how well our experiments' conclusions will generalize to other datasets with different properties, such as documents coming from different domains or different length summaries. The experiments in Bhandari et al. (2020) show that metric performance depends on which dataset is used for evaluation, whether it be TAC or CNN/DM, which is supported by our results. However, our experiments also show variability in performance within the same dataset when using different quality annotations (see the differences in results between Fabbri et al. (2021) and Bhandari et al. (2020)). Clearly, more research needs to be done to understand how much of these changes in performance is due to differences in the properties of the input documents and summaries versus how the summaries were annotated. Related Work Summarization CIs and hypothesis testing have been applied to summarization evaluation metrics over the years in a relatively inconsistent manner, if at all. To the best of our knowledge, the only instances of calculating CIs for summarization metrics are at the system-level using a bootstrapping procedure equivalent to BOOT-SYSTEMS (Rankel et al., 2012; Davis et al., 2012). Some works do perform hypothesis testing, but it is not clear which statistical test was run (Tratz and Hovy, 2008; Giannakopoulos et al., 2008). Others report whether or not the correlation itself is significantly different from 0 (Lin, 2004), which does not quantify the strength of the correlation nor allow for comparisons. Some studies apply Williams' test to compare summarization metrics.
For instance, Graham (2015) use it to compare BLEU (Papineni et al., 2002) and several variants of ROUGE, and Bhandari et al. (2020) compares several different metrics at the system-level. However, our experiments demonstrated in §5.2 that Williams' test has lower power than the suggested methods due to the lower correlation values. As an alternative to comparing metrics' correlations, Owczarzak et al. (2012) argue for comparison based on the number of system pairs in which both human judgments and metrics agree on statistically significant differences between the systems, a metric also used in the TAC shared-task for summarization metrics (Dang and Owczarzak, 2009, i.a.). This can be viewed similarly to Kendall's τ in which only statistically significant differences between systems are counted as concordant. However, the differences in discriminative power across metrics was not statistically tested itself. More broadly in evaluating summarization systems, Rankel et al. (2011) argue for comparing the performance of summarization models via paired t-tests or Wilcoxon signed-rank tests (Wilcoxon, 1992). They demonstrate these tests have more power than the equivalent unpaired test when used to separate human and model summarizers. Machine Translation The summarization and machine translation (MT) communities face the same problem of developing and evaluating automatic metrics to evaluate the outputs of models. Since 2008, the Workshop on Machine Translation (WMT) has run a shared-task for developing evaluation metrics (Mathur et al., 2020, among others). Although the methodology has changed over the years, they have converged on comparing metrics' system-level correlations using Williams' test (Graham and Baldwin, 2014). Since Williams' test assumes the input data is normally distributed and our experiments show it has low power for summarization, we do not recommend it for comparing summarization metrics. However, human annotations for MT are standardized to be normally distributed, and the metrics have higher correlations to human judgments, thus Williams' test will probably have higher power when applied to MT metrics. Nevertheless, the methods proposed in this work can be directly applied to MT metrics as well. Conclusion In this work, we proposed several different methods for estimating CIs and hypothesis testing for summarization evaluation metrics using resampling methods. Our simulation experiments demonstrate that assuming variability in both the systems and input documents leads to the best generalization for CIs and that permutation-based hypothesis testing has the highest statistical power. Experiments on several different evaluation metrics across three datasets demonstrate high uncertainty in how well metrics correlate to human judgments and that QA-Eval and BERTScore do achieve higher correlations than ROUGE in some settings. A Normality Testing To understand if the normality assumption holds for summarization data we ran the Shapiro-Wilk test for normality (Shapiro and Wilk, 1965), which was reported to have the highest power out of several alternatives (Razali and Wah, 2011;Dror et al., 2018Dror et al., , 2020. The results of the tests for the ground-truth responsiveness scores and automatic metrics are in Table 2. Most of the p-values are significant, i.e., applying a statistical test which assumes normality is incorrect in general. For r SUM , the percent of the per-input document tests which had a significant result at α = 0.05. 
A significant p-value means H 0 (the data is distributed normally) is rejected; for r SUM , the larger the percentage, the more the data appear to deviate from normality. We also repeated the hypothesis tests using the Spearman and Kendall correlation coefficients for each dataset, correlation level, and metric shown in Fig. 5. The results are overall very similar, with only a handful of results becoming not significant.
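The normality checks reported in this appendix can be reproduced in outline with SciPy's Shapiro-Wilk implementation. The sketch below tests the per-system mean scores (for the system level) and each input document's column of scores (for the summary level), reporting the fraction of significant results at α = 0.05; the matrix layout and function names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import shapiro

def normality_report(scores, alpha=0.05):
    """scores: systems x input-documents matrix of metric (or human) scores."""
    # System level: test the vector of per-system mean scores.
    sys_p = shapiro(scores.mean(axis=1)).pvalue
    # Summary level: test each input document's column of scores separately
    # and report the fraction of significant (i.e., non-normal) results.
    doc_p = np.array([shapiro(scores[:, j]).pvalue for j in range(scores.shape[1])])
    return sys_p, float((doc_p < alpha).mean())

rng = np.random.default_rng(0)
scores = rng.lognormal(size=(20, 40))   # skewed toy data, clearly non-normal
print(normality_report(scores))
```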
Autonomous Vision-Based Aerial Grasping for Rotorcraft Unmanned Aerial Vehicles Autonomous vision-based aerial grasping is an essential and challenging task for aerial manipulation missions. In this paper, we propose a vision-based aerial grasping system for a Rotorcraft Unmanned Aerial Vehicle (UAV) to grasp a target object. The UAV system is equipped with a monocular camera, a 3-DOF robotic arm with a gripper and a Jetson TK1 computer. Efficient and reliable visual detectors and control laws are crucial for autonomous aerial grasping using limited onboard sensing and computational capabilities. To detect and track the target object in real time, an efficient proposal algorithm is presented to reliably estimate the region of interest (ROI), then a correlation filter-based classifier is developed to track the detected object. Moreover, a support vector regression (SVR)-based grasping position detector is proposed to improve the grasp success rate with high computational efficiency. Using the estimated grasping position and the UAV?Äôs states, novel control laws of the UAV and the robotic arm are proposed to perform aerial grasping. Extensive simulations and outdoor flight experiments have been implemented. The experimental results illustrate that the proposed vision-based aerial grasping system can autonomously and reliably grasp the target object while working entirely onboard. Introduction There is increasing interests in unmanned aerial vehicles (UAVs) within both the industrial and academic communities. Vertical takeoff and landing (VTOL) unmanned rotorcrafts with onboard lightweight visual sensors have broad applications including surveillance, monitoring, rescue and search, traffic control, etc. [1,2]. With the high 3-D mobility, UAVs act like smart flying cameras in passive observation applications. A UAV equipped with a robotic arm can perform aerial manipulation tasks like grasping, placing and pushing objects [3]. Integrating the high mobility of UAVs as well as the manipulation skills of robotic arms, UAVs mounted with robotic arms will actively interact with environments and have widely potential applications in transportation, building, bridge inspection, rotor blade repairing, etc. [4]. Vision-based aerial manipulation for micro UAVs poses challenges due to the inherent instability of the UAVs, limited onboard sensing and computational capabilities, and aerodynamic disturbances in close contact. Modeling and control, motion planning, perception, and mechanism design are crucial for aerial manipulations [5][6][7]. There are some challenges for UAVs to perform autonomous vision-based aerial grasping. These challenging problems mainly come from the following aspects: (1) the limitation imposed by the high-order underactuated control systems; (2) the limited onboard vision-based sensing; (3) highly computational efficiency of visual detection, estimation of grasping points of the target object, and control of the UAV equipped with a robotic arm are required for onboard implementation using a low-cost embedded controller; (4) coupling between perception and control of the aerial manipulation system. Motived by the challenging problems, we systematically investigate a vision-based strategy to perform aerial grasping by an UAV. The contributions of this paper are presented as follows: 1. A new learning module is proposed for real-time target object detection and tracking. 
Concretely, the proposed scheme extends the kernelized correlation filters (KCF) algorithm [8] by integrating the frequency-tuned (FT) salient region detection [9], the K-means and the correlation filter algorithms, which is able to detect the target object autonomously before tracking without human involvement. 2. To increase the success rate of grasp, a computationally efficient algorithm based on support vector regression (SVR) is proposed to estimate appropriate grasping positions of the visually recognized target object. 3. A control strategy is proposed to perform aerial grasping, which consists of approaching and grasping phases. During the approaching phase, a nonlinear control law is presented for an UAV to approach the target object stably; while during the grasping phase, simple and efficient control schemes of the UAV and the robotic arm are presented to achieve the grasping based on the estimated relative position between the UAV and the target object. 4. A computationally efficient framework implemented on an onboard low-cost TK1 computer is presented for UAVs to perform aerial grasping tasks in outdoor environments. The proposed visual perception and control strategies are systematically studied. Simulation and real-world experimental results verify the effectiveness the proposed vision-based aerial grasping method. The rest of the paper is organized as follows. Section 2 describes the related work. In Section 3, the system configuration is described. In Section 4, detection and recognition of target object, as well as an estimation of its grasping points, are proposed. The grasping strategy and control of the aerial grasping system is presented in Section 5. Experimental results are presented in Section 6. Concluding remarks and future work are discussed in Section 7. Related Work Aerial manipulation is a challenging task, and some of the pioneering works in this area appeared in the literature [10][11][12][13][14][15]. Visual perception, control and motion planning of UAVs, and mechanism design of the end-effector, are essential for an aerial manipulation system. Real-time target object detection is vital to perform autonomous grasping of a target object. Currently, deep learning-based algorithms [16][17][18] achieve excellent detection performance, which usually require high computational complexities and power consumptions. However, the computational capacities of an onboard computer are limited due to the payload of the micro UAVs, and the deep learning-based approaches are not suitable for real-time aerial grasping. Traditional manual feature detection algorithms [19] are highly computational efficiency, but it is still not enough to run in real time on the low-cost onboard computer of an UAV. Estimating grasping points of the target object is beneficial to improving the grasping performance. In [20], a target pose estimation algorithm is proposed to estimate the optimal grasping points using the manual threshold. Pose estimation helps to estimate the grasping points, but the manual threshold brings difficulties when applying it to various target objects. In [21][22][23], different markers are used to perform real-time target detection, while target objects cannot be detected in the absence of artificial markers. 
To guide the UAV to autonomously perform grasping of the target object, with the target object detection information, the relative position between the UAV and the target object should be continuously estimated to guide the motion of the UAV and the onboard robotic arm. In [24][25][26][27], various aerial grasping approaches are presented, where the relative position of the target object is obtained by high performance indoor positioning systems. It hinders the aerial grasping in environments without positioning systems. Real-time target tracking need to be performed during the aerial grasping process. Discriminative correlation filter (DCF)-based approaches as well as deep learning-based methods [28] are two major categories of visual object tracking . The computational efficiency of the DCF-based approaches is much higher than that of the deep learning-based algorithms. In our previous work [29], the Kernelized Correlation Filter (KCF) tracker [8] is adopted for an UAV to track the moving target, where the object of interested region is chosen manually at the first frame. In this paper, the KCF tracker is applied for visual tracking of the autonomously detected target for its computational efficiency and impressive performance. Stable control of the UAV is important for an aerial grasping system. In [21], the traditional PID controller is modified by adding nonlinear terms which usually require experimental or accurate measurements. The parameters of the proposed controller are difficult to set, also it is difficult to adapt the controller to different mechanical structures. In [24], a PID controller is employed for the UAV to follow the planned path. However, the parameters tuning of the PID controller is difficult for high-order underactuated UAV control systems. In this paper, a nonlinear and computationally efficient controller is proposed to guide the UAV stably approaching the target object based on the estimated relative position information. In this paper, using onboard sensing and computational capabilities, we aim to investigate the problem to autonomous grasp the target object without manually choosing the object of interested region in advance. A visual-based aerial grasping scheme is presented, where computationally efficient approaches are proposed for target detection, grasping points estimation and relative position estimation. Moreover, efficient control laws are presented for the UAV and the onboard robotic arm to perform stably aerial grasping. Figure 1 illustrates the configuration of an autonomous vision-based aerial grasping system. The yellow box is the hardware part of the system, and the green box is the software part of the system. A DJI Matrice 100 is used as an experimental platform, which is equipped with a DJI Manifold embedded Linux computer, a monocular gimbal camera and a 3-DOF robotic arm. The gimbal camera provides the video stream for the embedded computer. The target object is detected, recognized and tracked in real time. The grasping points of the recognized target object are then estimated to increase the grasping success rate. To perform stably aerial grasp, using the relative position between the UAV and the target object, the grasping process is divided into the approaching and the grasping phases. In these two phases, different control strategies are developed for the aerial grasping system. 
Vision System In this section, a computationally efficient visual object detection and tracking scheme is presented to continuously locate the target position in the image. Moreover, a novel real-time algorithm is proposed to estimate the grasping positions of the target object to improve the grasping performance. Object Detection To reduce the computational complexity, the visual object detection scheme is separated into two steps, i.e., region proposal as well as classification. Firstly, all regions of interest (ROIs) are detected in the image using the region proposal algorithm. Then the target object in all ROIs is recognized with the designed classifier. Region Proposal Algorithm Because of high computational efficiency in the Fourier domain, the Frequency-Tuned (FT) saliency detection [9] is adopted to obtain the saliency map, which can be used to extract ROIs. The quality of the image captured from the onboard gimbal camera is affected by factors such as illumination, unstable hovering of UAV and so on. It deteriorates the robustness of the method combining the FT and the K-means in outdoor applications. In this paper, an improved region proposal algorithm integrating by the FT and the K-means is presented. Firstly, summing continuously n frames of the saliency map to obtain the cumulative image I RSsum , i.e., where I RS i is the output of the FT algorithm for the ith frame. Denote I RSBW the binarization of I RSsum as I RSBW . I RSBW represent the contours and the centroids of the connected components, and are calculated to obtain the initial model of the current scene. The model M s is represented as where C e are the contours of the connected components and C c are the centroids of the connected components. These steps are implemented repeatedly at every n frames of the saliency map. The old model M s of the current scene is updated with the new models at every n frames, and the convolution is used for the update. Specifically, K candidate contours in the new model are employed to update the old model by convolution. The candidate contours are chosen by the nearest neighbor between the new model and the old model. The contours and centroids are updated simultaneously according to Define a set B = {C e , C c ∈ M s } describing contours and centroids to denote the region of all possible target objects. Algorithm 1 describes the flow of the region proposal algorithm. Algorithm 1: Region Proposal Algorithm Input: image: I, frames: n. Output: The set B which may contains the target object. Classification The computationally efficient KCF algorithm [8] is applied for tracking the target when it is detected. It is obvious that the efficiency of combination between the target detection and the KCF algorithm should be considered. Therefore, a KCF-based target classifier is presented in this section. The training and classification process of the algorithm are shown in Figure 2. The framework of the algorithm is similar to [30]. Firstly, we train a model in the same way for each class. These models are used to classify new samples. Response values represent the evaluation of new samples by these models. As shown in Figure 2, the depth of the font "response" color represents the strength of the response. For example, a new sample through model A∼N. The response I is the strongest response value, thus the new sample is classified to class I. The algorithm of classification is described as follows. 
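To make the per-class correlation-filter classifier concrete, here is a simplified single-channel sketch in the spirit of the KCF formulation [8] and the description above: one Gaussian-kernel ridge-regression filter is trained per class in the Fourier domain, and a new sample is assigned to the class whose filter produces the strongest response. Multi-channel HOG features, cosine windowing, the stitched filter vector, and the weight coefficients are omitted, and all names are illustrative; the closed-form equations this sketch approximates are given next in the text.

```python
import numpy as np

def gaussian_kernel_correlation(x, z, sigma=0.5):
    """Kernel correlation k^{xz} of two single-channel 2-D patches (KCF-style)."""
    xf, zf = np.fft.fft2(x), np.fft.fft2(z)
    cross = np.real(np.fft.ifft2(np.conj(xf) * zf))          # circular cross-correlation
    d2 = (np.sum(x**2) + np.sum(z**2) - 2 * cross) / x.size  # squared distances per shift
    return np.exp(-np.maximum(d2, 0) / sigma**2)

def train_filter(x, y, lam=1e-4):
    """Closed-form kernel ridge regression in the Fourier domain: alpha_hat = y_hat / (k_hat + lam)."""
    k = gaussian_kernel_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def response(alpha_hat, x_model, z):
    """Correlation response of a new patch z against a trained model."""
    k = gaussian_kernel_correlation(x_model, z)
    return np.real(np.fft.ifft2(np.fft.fft2(k) * alpha_hat))

def classify(sample, models):
    """models: {class_name: (alpha_hat, training_patch)}; pick the strongest peak response."""
    scores = {c: response(a, x, sample).max() for c, (a, x) in models.items()}
    return max(scores, key=scores.get)

# Toy usage: one training patch per class, Gaussian regression target at the center.
size = 64
yy, xx = np.mgrid[:size, :size]
label = np.exp(-((yy - size / 2) ** 2 + (xx - size / 2) ** 2) / (2 * 4.0**2))
rng = np.random.default_rng(0)
patches = {"car": rng.random((size, size)), "box": rng.random((size, size))}
models = {c: (train_filter(p, label), p) for c, p in patches.items()}
print(classify(patches["car"], models))   # should prefer "car"
```

In practice the filters would be trained on HOG feature maps of the detected region proposals rather than raw pixels.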
where λ is a regularization parameter that controls overfitting, as in the Support Vector Machines (SVM) method [31]. Mapping the inputs of a linear problem to a non-linear feature-space φ(x) with the kernel trick, the ω can be calculated [32] by where φ is the mapping to the a non-linear feature-space induced by the kernel κ, defining the inner Thus, the variables under optimization are α, instead of ω. The coefficients α in Equation (5) can be calculated by where F is the DFT (Discrete Fourier Transform) operator, Y is the DFT of y, U x is the DFT of u x and u x = κ( f (x m,n ), f (x)) is the output of the kernel function κ. For the off-line training, the model is trained according to Equation (6) for each sample. All models of one class are stitched into a vector: where F is a filter vector whose element f i is a filter which obtained by training the ith sample, and n p is the number of samples. Each filter f i is applied for evaluating the other positive sample by correlation operation beside the sample which is trained for itself. The evaluation matrix is shown below where f i (x j ) is the correlation evaluation of the ith sample and the jth sample. There are n p − 1 evaluation values for each filter, and they can be written as a vector. All the elements of the vector are summed as the evaluation value for the filter. Thus, there are n filters so that the number of the evaluation values is n. Finally, all the evaluation values of each filter can be written as a normalized vector and all the elements of this vector are called the weight coefficient of the corresponding filter. Its vector form is Then the final model of target is written as: Algorithm 2 describes the training flow of the correlation filter based on ridge regression. Algorithm 2: The training algorithm of the KCF-based target classifier Input: Training set, the size of the training set n p Output: The model of correlation filter f cls for i ∈ n p , do do Grasp Position Estimation In this section, a real-time estimation algorithm of the grasping position is presented based on support vector regression (SVR). A grasping position estimate is beneficial to improve the grasping performance because of the significant shape feature of the target object. Lenz et al. show that the feature of grasping position can be easily described by the depth image provide by the RGB-D camera [33]. However, the performance will degenerate greatly in outdoor environments as the RGB-D camera is accessible to the lighting interference. In this paper, RGB images are used for grasping position estimation because (1) the HOG features [19] can represent the magnitude and direction of the gradient at the same time, (2) the feature of symmetry is apparent in the HOG features, and (3) the consumption of computation in the HOG features can be ignored, the HOG features are extracted for grasping position estimation from RGB images. Figure 3 shows the flow of the grasping position detection algorithm. According to the symmetry of gradient value and direction of the grasping point of the target, the model training can be divided into two parts, one part is to learn a root model from the whole points of the grasping position, while another part is to train a side model from the edge feature of the target object. The same training method is used for the root model and the side model. The root and size models are denoted as S and R, respectively. 
They can be trained to optimize Equation (11) with SVR: where C is the penalty factor, ξ i and ξ * i are used to construct soft margin, and l is the number of the samples. The HOG feature map of the input image, which is part of the whole image, is denoted as G. The edges information and the response map T about the shape information of the target object can be obtained as follows: where is the size of the soft margin of SVR and F is edge response map. Then the response map T is split into two components which are represented as {z p1 } and {z p2 }, according to the character of symmetry. Every component is also split into n parts and written as a set z pi , i = 1, 2. The combinations between the elements {z p1 } and {z p2 } are evaluated as follows: where z i p1 is the ith part in the set z p1 ; z j p2 is the jth part in the set z p2 ; F sum (z i p1 ) is the sum of the z i p1 in the respone map. The response strength of the side model F sum and the Euclidean distance between two elements are considered to be the evaluation metric. It is obvious that the grasping position algorithm is more likely to locate in two elements which provide a high response through the side model and shorter distance. According to their evaluation scores in S side (z i p1 , z j p2 ), the largest m(m ≤ n) combination is obtained. All these combinations apply the operation of dot product with the root model R to obtain the combination with the maximum score as the grasping positions: Grasping Strategy and Control In this section, an autonomous grasping strategy and control laws of the grasping system are proposed to perform the aerial grasping task. The center of mass of the UAV with the manipulator changes when the robotic arm moves, it makes the UAV unstable. To achieve stable grasping performance of the aerial grasping system, the grasping process is divided into the approaching phase and the grasping phase. The main task of the approaching phase is to control the UAV quickly and stably reach above the target object. In the grasping phase, the UAV equipped with the 3-DOF robotic arm perform autonomous target grasping. Approaching Phase The approaching phase aims to guide the UAV to move the target object quickly. In this phase, the 3-DOF robotic arm remains stationary. The gimbal is controlled by the PD controller [29]. The controller of the UAV is designed according to the Lyapunovs second theory. The position relationship between the UAV and the target on the two-dimensional plane is shown in Figure 4, where four circles denote the UAV, whose position can be written as P t = [x,ŷ] T . The position P t of UAV can be estimated by Equation (27). Letd be the estimation of the distance between the target object and the UAV, it can be calculated bŷ Let ψ d be the desired rotation angle of the yaw, it can be calculated by Then the estimation of velocity˙d and the angular velocityψ d can be written as: In real-world applications, there exists an error between the actual velocity and the desired velocity of the UAV. The error consists of two parts, one is the error between the desired linear velocity and the actual linear velocity in the horizontal direction v , while another is the angle error between the desired yaw angle and the actual yaw angle ψ . In addition, let d denote the error between the actual distance and the desired distance. According to Figure 4, it can be obtained by: where v rx and v ry are the actual velocities of the UAV in the X and Y directions, respectively. 
The time derivative of Equation (18) is where ψ r is the yaw rotation angle and ω d is the yaw angular velocity of the UAV. In the approaching phase, the velocity v x , v y and angular velocity ω d of the UAV are controlled to ensure that the distance error d , velocity error v and angular error ψ converge to zero. The control law of the UAV is designed as: where k 1 and k 2 are coefficients less than zero, and v crx and v cry are the actual velocities of the UAV at the current moment in the X and Y directions, respectively. The stability of the system can be proved using Lyapunov's second method. The Lyapunov function candidate can be formulated as: The accelerations in the X and Y directions can be calculated by: Using Equations (20), (22) and (23), we simplify the time derivative of V(x) as Equation (24). Equation (24) ensures that V̇(x) ≤ 0 for the chosen k 1 and k 2 ; thus, the control system is Lyapunov stable with the designed control law. Grasping Phase When the pitch angle of the gimbal is 90°, the UAV is just above the target and the grasping phase starts. In this phase, we control the height of the UAV and the robotic arm to grasp the target object vertically. Figure 5 shows the relationship among the UAV, the camera and the target, where F b denotes the body frame of the UAV with axes X b , Y b and Z b , and F c denotes the camera's reference frame with axes X c , Y c and Z c . The rotation matrix R bc from F c to F b can be calculated by: where R wb is the transformation matrix from the world frame to the body frame and R wc is the transformation matrix from the world frame to the camera's reference frame. The position of the target object in F b can be calculated by: where T = [x b , y b , z b ] is the position of the target object in F b ; K is the intrinsic matrix of the camera; P is the permutation matrix; and A = [u, v, 1] indicates the position of the target on the image plane in homogeneous coordinates. According to the standard pinhole imaging model, the position of the target object P = [x, y, z] can be estimated by: where h is the height of the UAV, which is measured by the ultrasonic sensor. A PID controller is used to control the position and height of the UAV. The position error can be calculated by: where e x and e y are the errors in the X and Y directions, respectively, and x b and y b are the positions of the target in F b . The desired height of the UAV can be calculated by: where h d is the desired height of the UAV, l is the maximum reach of the robotic arm, and h is the height of the UAV measured by the ultrasonic sensor. The joints of the arm are controlled to keep the robotic arm vertical, and the gripper at the end of the robotic arm grasps the target object when the UAV hovers at the desired height.
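The grasping-phase geometry above can be summarized in a few lines: with the gimbal pointing straight down, the target pixel is back-projected through the camera intrinsics using the height measured by the ultrasonic sensor, and the resulting horizontal offsets feed the PID position controller. The sketch below is an illustration under those assumptions, not the paper's exact equations; the intrinsic matrix and frame handling are simplified.

```python
import numpy as np

def backproject_target(u, v, K, height):
    """Back-project pixel (u, v) to a 3-D point in the camera frame.

    Assumes the camera looks straight down, so the depth of the ground
    target equals the UAV height measured by the ultrasonic sensor.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * height / fx
    y = (v - cy) * height / fy
    return np.array([x, y, height])

def horizontal_errors(target_cam, R_bc):
    """Rotate the camera-frame target into the body frame and return (e_x, e_y)."""
    target_body = R_bc @ target_cam
    return target_body[0], target_body[1]

# Toy usage with an assumed intrinsic matrix and an identity camera-to-body rotation.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
p_cam = backproject_target(350, 260, K, height=1.5)
print(horizontal_errors(p_cam, np.eye(3)))
```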
Experimental Results To verify the autonomous vision-based aerial grasping system, extensive flight experiments are performed in outdoor environments. First, the performance of the target object detection and recognition scheme is verified and analyzed. Second, the elapse time and performance of the grasping position detection algorithm is examined. The designed control laws are then verified by the simulation and real-world flight experiments. Finally, experimental results of the autonomous vision-based aerial grasping in real-world are presented. Experimental Setup A DJI Matrice 100 UAV is used as an experiment platform, as shown in Figure 6. Airborne equipment includes a DJI Manifold embedded Linux computer (NVIDIA Tegra TK1 processor, an NVIDIA 4-Plus-1 quad-core A15 CPU of 1.5 GHz), a GPS receiver, a 3-DOF robotic arm, a monocular Zenmuse X3 gimbal camera, a barometer, an Inertial Measurement Unit (IMU) and a DJI Guidance visual sensing system. Object Detection and Recognition Experiment The purpose of this experiment is to test the performance of the computationally efficient object classification correlation filter based on ridge regression. The dataset used in this experiment is the extended ETHZ dataset [34] that is extended from five classes to six classes. The new dataset includes six classes, of which the classes toy cars is entirely and newly collected by ourselves. The sample number of each category is shown in Table 1. The reason for adopting the small dataset is that the KCF algorithm learning module has the feature of increasing samples through circular displacement. The evaluation criteria of the experiment is the average correlation value of the model to the positive and negative samples after performing 10 times a 5-fold cross validation for each category model. Figure 7 shows the experiment results. As shown in Figure 7, each class model obtained by training has a higher response value to the positive samples in the test set, and the response value is basically much larger than the response value to other categories. It shows that this type of classifier has better classification performance for simple objects. At the same time, correlation detection is performed in the frequency domain. Thus, its detection operation time is also fast with the help of fast Fourier transform (FFT). In the experiment, the average detection time of each sample is 0.02s. Grasping Position Detection Experiment The purpose of this experiment is to verify the accuracy and elapse time of the grasping position detection algorithm. The dataset is from the research of [35]. The resolution of the root model is set to 80 × 80 × 31. Furthermore, separating the resized image into two components for training the side model. Therefore, the resolution of the side model is set to 80 × 40 × 31. The results of the grasping position detection experiment is shown in Tables 2 and 3. As shown in Table 2, the accuracy of the grasping position model, which is the combination of side model and root model, is acceptable. As shown in Table 3, the algorithm of grasping position detection is real-time within the range of 0.3 million. The largest computational cost is to use the side model to detect the shape of object. Therefore, it is necessary to restrict the resolutions of input image for real-time grasping position detection. According to Figure 8a,b, we can see that the adjustment trend of the control law become more obvious when we set higher parameter value. 
The velocity of the UAV gradually converges to the desired value, and the errors between the desired values and the simulated values also gradually converges. The parameter k 2 is adjusted in simulation by the same method. We set k 2 = −0.1, k 2 = −0.3 and the desired of UAV yaw angle is 90 • . The error of the simulation angular velocity is shown in Figure 8c. Similar to the error of the velocity control, when the parameter value is larger, the initial desired angular velocity of the UAV controller is larger as well. As the rotation angle reaches the target angle, it gradually converges. The greater the parameter is, the faster the convergence velocity is. Experiments of Flight Tests In the flight experiments, we select two parameters k 1 = −0.2 and k 2 = −0.3. The maximum speed of the aircraft is restricted to 1m/s, and the attitude data of the UAV are measured by the onboard IMU module. The flight experimental results are shown in Figure 8d-f. The experimental results show that the actual velocity values converge to the desired velocity in 0.5s and follows the desired velocity very well. The error curve of the yaw angular velocity in actual flight test is shown in Figure 8f. The yaw angle errors decrease gradually from a relatively large value to the desired zero value. Autonomous Aerial Grasping Experiments The proposed algorithms and the developed aerial grasping system are systematically investigated in flight experiments. In the experiments, as shown in Figure 9, the target object, a toy car, will be detected among some other objects within the visual view of the gimbal camera. The parameters of PID controller is shown in Table 4. Snapshots of the grasping process are illustrated in Figure 9, where Figure 9a is the approaching phase, Figure 9b,c are the grasping phase, and Figure 9d is the UAV to complete the grasping task and ascent to the specified height. A demo video of the proposed aerial grasping system in outdoor environments can be seen in the supplementary video. Limitation and discussion: to examine the grasping performance, 10 successive grasping experiments are conducted in outdoor environments. The achieved success rate of the aerial grasping of the toy car is 50%. Vision-based autonomous aerial grasping is a systematically work, and the performance of each part of the visual perception as well as control of the UAV and the robotic arm will affect the grasping performance. For the visual perception part, according to Figure 7, the trained classifier has good performance; however, the accuracy of the grasping point estimate algorithm is 74.1%. It is noted that in the grasping phase, there is a lag in the position control of the UAV. Moreover, mechanical instability and low response of the robotic arm and the end gripper also deteriorate the grasping performance. In future work, the grasping points estimate will be further studied, and the mechanical design of the robotic arm will also be considered to improve the grasping performance. (d) the UAV to complete the grasping task and ascent to the specified height. Conclusions In this paper, an autonomous vision-based aerial grasping system for a rotorcraft UAV is presented, where the target object is fully autonomously detected and grasped. The proposed visual perception and control strategies are systematically studied. An efficient object detection and tracking method is addressed to improve the KCF algorithm. 
A grasping positions estimate of the target object is proposed based on the edge and root model thereof, to increase the grasping success rate. Based on the estimated relative position between the target object and the UAV as well as the grasping points of the target object, control laws of the UAV and the robotic arm are proposed to guide the UAV to approach to and grasp the target. The visual perception and control are implemented on an onboard low-cost computer. Experiment results illustrate that the proposed autonomous vision-based aerial grasping system achieves stable grasping performance. In future work, the grasping points estimate will be further studied to improve the estimate accuracy. Mechanical design of a stable and light weight robotic arm will be considered. Autonomous grasping of a moving target object is also worth investigation.
An Innovation Perspective to Explore the Ecology and Social Welfare Efficiencies of Countries This study aims to measure the ability of 29 countries in producing competitive products and services that fulfill individual needs and improve the level of welfare with less utilization of natural resources. We build a two-stage network production process model to investigate the ecology efficiency and social welfare efficiency of the countries and then further discriminate the efficient countries in post-analysis. The two-stage network directional distance function is applied to assess the efficiencies of countries, and the network-based ranking approach is used to further discriminate the efficient countries following the panel data between the years 2013 and 2016. Results show that Poland and Spain are strongly referenced by other countries in the ecology stage, whereas Bulgaria, the United States, and Sweden are leaders in the social welfare stage. A remarkable observation is an absence of countries’ efficiency in both ecology and social welfare efficiencies. Most of the 29 countries have lower efficiency in the social welfare stage than in the ecology stage. This study suggests the strengths and highlights the weaknesses of the countries to help the governments efficiently improve and operate their countries. Introduction Over the last few decades, we have seen a participatory tendency in both environmental governance and knowledge production [1]. Environmental awareness is an essential component of both public and private decision-making [2]. Capturing the most economic gains while utilizing the fewest resources and resulting in the least damage to the environment is a critical issue for social development [3]. As society becomes ever more developed, units from different levels, that is, human beings, companies, and government, all are starting to pay attention to the importance of the environment and social welfare. Many cities throughout the world have set climate change mitigation targets, but activities to implement these targets have proven ineffective thus far. There may be confusion about who is accountable for acting, how to connect with a diverse variety of stakeholders, how to define goals, and how to measure performance [4]. Recently, Jones, Donaldson [5] vigorously encourage researchers related to management to consider social welfare in their empirical research. The idea of ecology efficiency and social welfare efficiency offers a comprehensive view for policymakers and government to achieve better national performance with the sustainable development goal [3]. The study of ecology efficiency has been previously performed on a national scale [6][7][8][9]. Moraes, Wanke [10] have recently studied social welfare and labor efficiency at a regional level. Remarkably, Lefebvre, Perelman [11] assess the overall welfare state performance of the 28 European Union countries based on eight-year (2005-2012) period data. Although efficiency measurement in the public sector is traditionally long, and there is an immense number of researchers who publish the results of productivity comparisons of countries, it is not easy to identify and correctly evaluate the outcomes [11]. Balancing ecology efficiency and social welfare efficiency can better attain equilibrium and sustainable development [12]. 
Whereas ecology efficiency refers to the ability of countries to produce goods and services with less effect on the environment and lower levels of natural resources consumption [13], social welfare efficiency refers to poverty reduction and inequality alleviation, and protection against disease, unemployment, and ignorance [11]. Management performance evaluation is a difficult task because it involves multiple inputs and outputs [14]. Designing, evaluating, and monitoring activities, programs, and policies aimed at improving countries' growth at both the national and international levels is a difficult process that necessitates the use of a range of instruments. The requirement to measure economic, social, and environmental dimensions adds to the complexity of progress assessment [15]. Measurements of ecology efficiency and social welfare efficiency have been performed by many previous authors using the ratio approach, stochastic frontier analysis (SFA), or data envelopment analysis (DEA) [3,6,7,11,12,[16][17][18][19]. Robaina-Alves, Moutinho [16] measured the environmental and resource efficiency of European countries by using data from two separate periods that can perceive the difference in the efficiency level before and after the achievement of the Kyoto protocol in 2005. Robaina-Alves, Moutinho [16] used the stochastic frontier approach in their study. However, DEA seems to be the most widely applied method because of the advantages of processing multiple inputs and outputs. Moreover, the previous studies measured the efficiency of countries without considering and analyzing the intermediate products and linking activities [8,12,13]. Unlike traditional DEA, which treats a system as a "black box," network DEA considers its underlying structure to get more insightful conclusions [20]. Quality development is not the objective pursued by economic development, but an instrument to accomplish sustainable economic and social development [12]. A multi-stage DEA model that links the ecology efficiency and social welfare efficiency to measure the overall efficiency of a country is suggested, as the overall efficiency can be obtained only when all subsequent processes work well [21,22]. For the conventional DEA model, if decision-making units (DMUs) are simultaneously effective, no differentiation exists for efficient leaders [23]. As noted in [16], a suggestion for future research is to uncover factors that are the reasons for efficient or inefficient countries. To further measure and explore the merits of efficient leaders, previous authors have applied different ranking methods including the super-efficiency DEA model [12], cross efficiency evaluation method [24,25], TOPSIS technique [26], rough set approach [27], and network-based ranking approach [28,29]. Especially, Liu, Lu [30] have suggested a network-based ranking approach as a useful and powerful efficiency ranking tool to distinguish the benchmark and highlight the strengths and weaknesses of DMUs (Liu et al. 2009). Perceived from the current literature review, this study aims to measure the capacity of the countries to produce competitive products and services that satisfy individual needs and improve the level of well-being with less use of natural resources. 
We explore the ecology efficiency and social welfare efficiency of countries as two subsequent processes of a network production process structure to determine the best nation for benchmarking by applying a directional distance function (DDF) based model for efficiency measurement in two-stage network DEA. Inefficient countries may learn from pioneers to improve their efficiency. In addition, this study combines a network-based ranking approach [28][29][30] to further distinguish the benchmark countries. At a macro level for the countries, the findings are of great relevance to help policymakers set policies and plan budgets to implement these policies and achieve better performance. In summary, the current study contributes to the related literature review as follows: First, a novel network production process framework in two-stage network DEA is produced for measuring the ecology efficiency and social welfare efficiency of countries by using DDF based model with consideration of undesirable outputs. Second, this study is the first to use a network-based approach, which is a unique and powerful method, to further discriminate the benchmark countries in the context of ecology efficiency and social welfare efficiency. The results suggest the strengths and highlight the weaknesses of the countries that help the government efficiently improve and operate their countries. Literature Review Climate change is one of the most difficult issues confronting the globe today, and it is critical to have effective policies in place to handle its consequences [31,32]. Countries in the world are seriously dealing with the challenges and pressures from creating waste and pollution by many firms [9]. The governments need to consider integrating the economic, environmental, and social dimensions in their policy-making process to reach sustainable development, which requires minimizing the environmental concerns and maximizing economic and social indicators [9,33]. Economic efficiency together with environmental efficiency create ecological efficiency [7]. Tena Medialdea, Prieto Ruiz [34] recognized the requirement for ecological studies that address the role of humans as ecosystem members. Ecological efficiency (abbreviated eco-efficiency) has aroused increasing attention from the government, practitioners, and scholars in recent years [3,6,9,16,35]. Schaltegger and Sturm [36] proposed the concept of "eco-efficiency" as "a business link to sustainable development," and the World Business Council revealed the term in 1992 as the index of economic and environmental efficiency, namely as a management strategy that links financial and environmental performance to create more value with less ecological impact [37]. According to Dyckhoff and Allen [38], the best-known definition of eco-efficiency is from World Business Council for Sustainable Development (WBCSD) "Eco-efficiency is achieved by the delivery of competitively priced goods and services that satisfy human needs and bring the quality of life, while progressively reducing ecological impact and resource intensity throughout the life-cycle to a level at least in line with the Earth's estimated carrying capacity ". Zhou, Ang [39] provide a nonradial DDF approach to evaluate the energy and CO 2 performance of electricity production by using data in 2005 from 126 countries. In terms of CO 2 performance, OECD countries surpassed non-OECD countries, and OECD countries were equivalent to non-OECD countries in terms of energy performance [39]. 
Robaina-Alves, Moutinho [16] measured the eco-efficiency of European countries by applying the stochastic frontier approach using data in two separate periods including before (2000)(2001)(2002)(2003)(2004) and after (2005-2011) the Kyoto Protocol. The efficiency levels of European countries between two periods before and after the creation of environmental targets are compared in the study [16]. Liu and Liu [40] measured the low carbon economy efficiency with a three-stage model to compare the largest 20 CO 2 emitting nations from 2000 to 2012. First, they applied DEA, using energy consumption, capital stock, and labor force as input factors, and GDP and CO 2 emissions as (undesirable) output factors, to get efficiency for each nation and compute the slack at the input and output, then applied SFA to remove the influence of external environmental variables on the slack. Finally, they recalculated the efficiency using updated input and output components to reflect the government's ability to establish a low-carbon economy. According to their results, during the studied period, the performance was getting worse in these low carbon economies. Wu, Yin [41] used a two-stage DEA model to assess environmental efficiency for China's 30 provinces and eight regions, with the production subsystem as the first stage and the pollution treatment subsystem as the second. Interestingly, both of the papers included undesirable outputs in their models. The recently published article that is related to ecology efficiency of Yang and Zhang [6] suggested an extended DEA approach, which incorporates global benchmark technology, DDF, and a bootstrapping method to explore the dynamic trends of Chinese regional eco-efficiency in the 2003-2014 period. Pais-Magalhães, Moutinho [17] applied the DEA approach to measure the eco-efficiency of 15 European countries by using data in the 2001-2015 period. The countries, including Belgium, Luxembourg, Sweden, the Netherlands, and the United Kingdom, show better ecology performance in comparison with the other European countries. The connection between ecology and human social welfare have gained visibility in the past few years [5,10,12,17]. It is important to put emphasis on human welfare at the social level and integrate social and economic objectives in the research [42]. However, the context of social welfare is rather complex. The satisfaction of basic and secondary needs experienced by individuals in a community is referred to as social welfare [43]. Social welfare is a normative term that various persons or social groups use to reflect on the ends-the "greater good"-that public policy should pursue to better society's status quo. Importantly, when it comes to many issues of public policy, people mean different things based on their self-and other-regarding preferences, as well as socio-demographic variables such as education, income, wealth, and influence [44]. As noted in the research work of Hall, Giovannini [45], the ecosystem is equally important as the human well-being system, as the resources and services of human activities are provided by the ecosystem. Nissi and Sarra [46] based their research work on Hall, Giovannini [45], and address the measure of well-being in the context of Italian urban areas using an integrated DEAentropy approach. Their findings show significant dualism between northern and southern cities, revealing significant variations in many facets of human and ecological well-being. 
Lefebvre, Perelman [11] provide a definition of and a technique for evaluating the efficiency of the public sector. The authors then measure the efficiency of European welfare states and their development over time by applying the DEA approach. Wang and Feng [12] used super-efficiency DEA and the Malmquist index approach to measure the ecology welfare efficiency of China in the 2006-2018 period. Recently, Moraes, Wanke [10] revealed the endogeneity between labor efficiency and social welfare by applying a two-stage network DEA approach using data from 2013 to 2016 in Brazil.

The selection of input, intermediate, and output variables (Table 1) is based on related research listed in the Social Science Citation Index (SSCI). The initial selection of the variables is explained as follows. For the first stage, ecology efficiency, a nation requires land, capital, and labor and consumes energy to generate gross domestic product (GDP) and undesirable gas emissions (i.e., CO2). For the second stage, social welfare efficiency, government expenditure on general public services, economic affairs, health, and education, along with the first-stage output GDP as an intermediate, generate outputs including the employment population, the population aged above 65, and the tertiary school enrollment population. Figure 1 shows the two stages of the internal structure, namely the ecology efficiency and social welfare efficiency stages. The operational definition of each variable is shown in Table 1.

Table 1. Operational definitions of the variables (definitions, units, and sources).
Input for stage 1 - Land: land area is the overall area of a country, excluding inland water bodies, national claims to the continental shelf, and exclusive economic zones; in most situations, significant rivers and lakes are included in the concept of inland water bodies.
Output for stage 1 - CO2 emission (undesirable): greenhouse gases emitted by the combustion of fossil fuels; million tons (BP).
Additional input for stage 2 - Government expenditure on general public services: government spending on executive and legislative bodies, financial and fiscal affairs, external affairs, public debt transactions, general services, foreign economic aid, basic research and R&D, and transfers of a general nature between different levels of government; million USD (IMF).
Additional input for stage 2 - Government expenditure on economic affairs: government spending covering general economic, commercial, and labor affairs, agriculture, forestry, fishing and hunting, fuel and energy, mining, manufacturing and construction, transportation, communication, other industries, and R&D in economic affairs; million USD (IMF).
Additional input for stage 2 - Government expenditure on health: government spending on medical products, appliances, and equipment, outpatient services, hospital services, public health services, and R&D in health; million USD (IMF).
Additional input for stage 2 - Government expenditure on education: total general (local, regional, and national) government education spending (current, capital, and transfers), expressed as a percentage of GDP; it includes government spending funded by transfers from international sources; million USD (IMF).
Output for stage 2 - Employment population: the employment-to-population ratio denotes the percentage of a country's population that is employed. Employment is defined as persons of working age who were engaged in any activity to produce goods or provide services for pay or profit during a short reference period, whether at work during the reference period (i.e., who worked in a job for at least one hour) or not at work due to temporary absence from a job or working-time arrangements. Working-age people are generally considered to be those aged 15 and up.

The correlations among the variables (Table 3) satisfy the isotonic condition employed to determine the efficient level. Table 2 indicates that most of the variables have a non-normal distribution (Kolmogorov-Smirnov test significant). This finding shows that using the DEA technique is the right option because the method requires no assumption of normality for the data [47]. Notes: **, * correlations are significant at level 0.05, 0.01, respectively. X1 is land; X2 is capital; X3 is labor; X4 is energy; Z1 is GDP; UEY1 is CO2 emission; EX1 is government expenditure on general public services; EX2 is government expenditure on economic affairs; EX3 is government expenditure on health; EX4 is government expenditure on education; Y1 is employment population; Y2 is population aged above 65; Y3 is tertiary school enrollment population.

Research Method

This article uses a multivariate evaluation approach that simultaneously measures various dimensions of countries' efficiency to overcome the single-dimension shortcoming of the traditional approach. This article uses the two-stage network DDF to evaluate the internal network production structure and to understand the countries' ecology and social welfare efficiencies [13,35]. To examine the merits of each country under different circumstances, this article incorporates multiple DEA specifications and a social network approach to determine the strengths and weaknesses of the countries [30]. The linear programming problems are given below.

Let us consider a set of n countries (k = 1, ..., n). For a decision-making unit k, m inputs x_ak (a = 1, ..., m) are used to produce l intermediate outputs z_bk (b = 1, ..., l) in the first stage; then z_bk plus a new set of factors z_ck (c = 1, ..., g) produce h final outputs y_dk (d = 1, ..., h) in the second stage. Assume that the set of production possibilities for both inputs and outcomes is convex. The two-stage network DDF is defined as

DDF(x, z, y; g_x, g_y) = max{ δ + β : (x − δ g_x, z, y + β g_y) ∈ T(x, z, y) }.  (1)

The technology set T(x, z, y) is defined such that x_ak can produce the intermediate outputs z_bk in the first process, and z_bk together with z_ck can produce the final outputs y_dk in the second process. According to Fried, Lovell [48], the direction vector g = (g_x, g_y) should be chosen by the researcher before evaluating the DDF. In this paper, we take the direction to be g = (g_x, g_y) = (x, y). As a result, the inefficiency measure of the target country under convex constraints can be described by linear programs (Model (2)), where λ_ko and µ_ko are the intensity variables corresponding to the first and second processes for a given country. The optimal solution λ*_ko for an observed country indicates whether country k serves as a role model for the observed country in the first stage; the optimal solution µ*_ko has the same interpretation in the second stage.
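Model (2) itself is not reproduced here. The following is a minimal sketch of a standard formulation of such a two-stage network DDF program under the stated direction g = (x, y) and constant returns to scale, written with scipy.optimize.linprog; the data arrays, the omission of the undesirable output, and the function name are illustrative assumptions rather than the paper's exact model.

```python
# Minimal sketch of a two-stage network DDF linear program with direction
# g = (g_x, g_y) = (x, y).  Decision vector: [delta, beta, lambda_1..n, mu_1..n].
import numpy as np
from scipy.optimize import linprog

def two_stage_ddf(X, Z, Z2, Y, o):
    """X: (n,m) stage-1 inputs, Z: (n,l) intermediates, Z2: (n,g) additional
    stage-2 inputs, Y: (n,h) final outputs, o: index of the evaluated country."""
    n, m = X.shape
    l, g, h = Z.shape[1], Z2.shape[1], Y.shape[1]
    nvar = 2 + 2 * n
    c = np.zeros(nvar); c[0] = -1.0; c[1] = -1.0      # maximize delta + beta
    lam, mu = slice(2, 2 + n), slice(2 + n, 2 + 2 * n)
    A_ub, b_ub = [], []

    def row():
        return np.zeros(nvar)

    for a in range(m):            # sum_k lambda_k * x_ak <= (1 - delta) * x_ao
        r = row(); r[lam] = X[:, a]; r[0] = X[o, a]
        A_ub.append(r); b_ub.append(X[o, a])
    for b in range(l):            # stage-1 intermediates: sum_k lambda_k * z_bk >= z_bo
        r = row(); r[lam] = -Z[:, b]
        A_ub.append(r); b_ub.append(-Z[o, b])
    for b in range(l):            # stage-2 link: sum_k mu_k * z_bk <= z_bo
        r = row(); r[mu] = Z[:, b]
        A_ub.append(r); b_ub.append(Z[o, b])
    for cdx in range(g):          # additional stage-2 inputs
        r = row(); r[mu] = Z2[:, cdx]
        A_ub.append(r); b_ub.append(Z2[o, cdx])
    for d in range(h):            # final outputs: sum_k mu_k * y_dk >= (1 + beta) * y_do
        r = row(); r[mu] = -Y[:, d]; r[1] = Y[o, d]
        A_ub.append(r); b_ub.append(-Y[o, d])

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * nvar, method="highs")
    delta, beta = res.x[0], res.x[1]
    return 1.0 - delta, 1.0 / (1.0 + beta)   # the stage efficiencies EE_o and SE_o
```

Solving this program for each country o yields δ_o and β_o, from which the stage efficiencies defined below are computed.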
As a result, the first stage's production efficiency is EE_o = 1 − δ_o, the ecology efficiency, which ranges between 0 and 1. The efficiency of the second stage is defined as SE_o = 1/(1 + β_o), the social welfare efficiency, which also lies between 0 and 1. The target country is efficient in the first and second stages if EE_o and SE_o, respectively, are equal to unity.

The concept of the reference-share measure is introduced below. With high probability, many DEA specifications are used in the efficiency evaluation. Using a variety of DEA specifications allows the merits of each DMU to be examined under different situations, thus laying the foundation for further differentiation. For any DEA specification t, the linear programming problem (2) is re-solved with the corresponding subset of factors (Model (3)). Each specification t can be thought of as a round of a competition game; the initial DEA problem is thus expanded from a one-round competition to a multi-round competition. Because efficiency scores are tied in the first round of this competition, extra game rounds may be required to allow each DMU to demonstrate its worth in a variety of conditions. The champion is then determined based on the cumulative results. The efficiency calculation accounts for all conceivable input/output combinations. The values λ^t*_ko and µ^t*_ko denote the optimal solution of Model (3). In the DEA setting, small efficient countries with lower input/output levels are likely to achieve higher λ^t*_ko and µ^t*_ko than large efficient countries. Normalizing λ^t*_ko and µ^t*_ko removes the effect of country size and renders the approach applicable to both the constant and variable returns to scale models. Let E_t be the index set of the observed country's reference set. Under DEA specification t, the contribution of the kth country's ath input to the oth country in the reference set is specified as

Ix^t_ako = λ^t*_ko x^t_ak / Σ_{k∈E_t} λ^t*_ko x^t_ak, with 0 < Ix^t_ako ≤ 1, a = 1, ..., m.

Similarly, under DEA specification t, the contribution of the kth country's bth intermediate to the oth country in the reference set is defined as

MIz^t_bko = λ^t*_ko z^t_bk / Σ_{k∈E_t} λ^t*_ko z^t_bk, with 0 < MIz^t_bko ≤ 1, b = 1, ..., l.

Under DEA specification t, the contribution of the kth country's cth additional input to the oth country in the reference set is specified as

Iz^t_cko = µ^t*_ko z^t_ck / Σ_{k∈E_t} µ^t*_ko z^t_ck, with 0 < Iz^t_cko ≤ 1, c = 1, ..., g.

Under DEA specification t, the contribution of the kth country's dth output to the oth country in the reference set is defined as

Oy^t_dko = µ^t*_ko y^t_dk / Σ_{k∈E_t} µ^t*_ko y^t_dk, with 0 < Oy^t_dko ≤ 1, d = 1, ..., h.

The components of a normalized reference weight are then obtained by averaging these contributions, with Formula (8) aggregating the first-stage components and Formula (9) the second-stage components. The value T = (2^m − 1)(2^l − 1)(2^g − 1)(2^h − 1) is the number of input/output combinations tested by the DEA model, whereas A1 and A2 are square matrices of size n × n. The elements of A1 and A2 represent the combined power of the oth unit supporting the kth unit, that is, the cumulative effect of the oth unit endorsing the kth unit. We observe that A1 and A2 can be viewed as adjacency matrices of a directed and weighted network, where nodes are DMUs and links express the amount of endorsement from one unit to another. Bonacich and Lloyd [49] proposed alpha-centrality, an eigenvector-like metric, to distinguish the significance of nodes in a directed network. The significance of each node is embedded in the solutions I1 and I2 of

I1 = (I − α A1^T)^(−1) e, I2 = (I − α A2^T)^(−1) e,  (10)

where e is a unit vector and α is an arbitrary constant indicating the relevance of endogenous versus exogenous influences.
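As a concrete illustration, the following sketch applies the Bonacich-Lloyd alpha-centrality formula above to an aggregated reference matrix; the toy matrix and the value of α are placeholders, and α must be smaller than the reciprocal of the matrix's largest eigenvalue for the inverse to exist.

```python
# Alpha-centrality (Bonacich and Lloyd) for a directed, weighted reference network.
import numpy as np

def alpha_centrality(A, alpha=0.1):
    """Solve (I - alpha * A^T) x = e, so that x = (I - alpha * A^T)^(-1) e."""
    n = A.shape[0]
    e = np.ones(n)                       # exogenous status vector
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

# Toy 3-country endorsement matrix standing in for A1 (stage-1 references).
A1 = np.array([[0.0, 0.6, 0.1],
               [0.2, 0.0, 0.5],
               [0.7, 0.3, 0.0]])
scores = alpha_centrality(A1)
print(np.argsort(-scores))               # countries ordered by stage-1 centrality
```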
Each vector element, I1_k and I2_k, provides the score used to distinguish the efficient units in the first and second stages, respectively. The efficient units for each I/M/O factor can also be differentiated. When Formula (10) is rearranged, the contributions of the individual factors can be separated into Ix^t_ako (a = 1, ..., m), MIz^t_bko (b = 1, ..., l), Iz^t_cko (c = 1, ..., g), and Oy^t_dko (d = 1, ..., h). It is worth noting that A1I_a, A1M_b, A2I_c, and A2O_d are square matrices of order n. Given that A1M_b and A2M_b are the aggregated reference matrices for the same intermediate factors in the first and second stages, the actual contribution of each intermediate component should be averaged; one can define AM_b = (A1M_b + A2M_b)/2. The matrices A1I_a, AM_b, A2I_c, and A2O_d are thus the reference matrices for each I/M/O factor. Each matrix element indicates the aggregated endorsement of an observed unit to the kth unit in the reference set via a specific I/M/O factor. Applying the alpha-centrality notion to these matrices yields the column vectors I1I_a, IM_b, I2I_c, and I2O_d, which hold the centrality scores of each unit when each I/M/O factor is regarded as the standard for that specific factor among all units.

Internally, the unit strength of these I/M/O factors can also be compared. The sum of the elements in each row of the matrices A1I_a, AM_b, A2I_c, and A2O_d indicates the overall endorsement a unit obtains from its peers as a result of the contribution of a specific I/M/O factor. Accordingly, the endorsement from all other units over all specifications to an efficient unit k via a specific I/M/O factor w, denoted IMOS^k_w (Formula (17)), is the corresponding row sum, where w is a combined I/M/O factor index running over the inputs (w = 1, ..., m), intermediates (w = m + 1, ..., m + l), additional inputs (w = m + l + 1, ..., m + l + g), and outputs (w = m + l + g + 1, ..., m + l + g + h). For an efficient unit k, the higher the IMOS^k_w, the larger the contribution of the wth factor to the unit's efficiency. To simplify comparison, the relative intensity of an I/M/O factor w defined in Formula (17) is rescaled (Formula (18)) so that IMO^k_w denotes the relative strength of factor w among all factors within an efficient unit k.

Ecology Efficiency and Social Welfare Efficiency for Countries

Initially, this study conducts a preliminary analysis of the ecology efficiency and social welfare efficiency of countries by running the full specification, including inputs, intermediates, the undesirable output, additional inputs, and outputs. Table 4 shows the efficiency of each country at each stage. There are six efficient countries at the ecology efficiency stage (Czech Republic, New Zealand, Poland, Spain, Switzerland, and Turkey) and four at the social welfare efficiency stage (Bulgaria, Italy, Sweden, United States). No country is efficient at both stages, and 19 countries are inefficient at both stages. As shown in Table 4, the average efficiency scores are 0.8315 and 0.4949 for ecology efficiency and social welfare efficiency, respectively, which points to a potential improvement of social welfare efficiency for the countries. These preliminary results present an overview of the efficiencies of each country, but further differentiation is required to determine the best performer.

Analysis of Benchmarking of Production Factors

Next, this study uses the network-based ranking approach to discover the most efficient country in each stage and for each factor (inputs, intermediate, undesirable output, additional inputs, and outputs), as sketched below. The strengths of each country are also confirmed.
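The factor-strength computation just described reduces to row sums followed by a within-unit normalization. A small sketch, assuming the factor-specific reference matrices have already been assembled into a hypothetical list of n × n arrays, is:

```python
# Factor-level strengths: IMOS_w for unit k is the row sum of the reference
# matrix for factor w; IMO_w expresses it relative to all factors of that unit.
import numpy as np

def factor_strengths(factor_matrices):
    """factor_matrices: list of n x n arrays (A1I_a, AM_b, A2I_c, A2O_d), one per factor."""
    imos = np.stack([M.sum(axis=1) for M in factor_matrices], axis=1).astype(float)  # (n, W)
    totals = imos.sum(axis=1, keepdims=True)
    imo = np.divide(imos, totals, out=np.zeros_like(imos), where=totals > 0)
    return imos, imo     # absolute and relative strengths per unit and factor
```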
Figures 2-4 show the analysis results, aggregated from a total of 1575 DEA runs. In Figure 2, the ecology efficiency of countries is visualized by the accrued reference networks. The endorsing connections are identified by the thickness and darkness of the lines in the figure. Typically, if the node representing a country receives more incoming lines, its ranking is higher. Poland and Spain (Bulgaria, the United States, and Sweden) are strongly referenced by other countries in the ecology stage (social welfare stage) (Figures 2 and 3). Our findings differ from those of Robaina-Alves, Moutinho [16], who consider ecology efficiency in two distinct periods (2000-2004 and 2005-2011); the positions of countries in terms of ecology efficiency thus change with the period considered. Our study uses data from when the Kyoto Protocol had been adopted for a second commitment period, whereas the study of Robaina-Alves, Moutinho [16] used data from the first commitment period (2005-2011) and from before the Kyoto Protocol entered into force (2000-2004). These findings show that the ecology efficiency ranking of some European countries has evolved across periods. One example of such awareness is the adoption of the Kyoto Protocol: Kutlu [50] demonstrated that the Kyoto Protocol's adoption and implementation aided the environment by reducing GHG emissions relative to (real) GDP. It can therefore be argued that the Kyoto Protocol helped improve the ecology efficiency of the European countries [50]. Similarly, top benchmarks for overall efficiency are assigned to Bulgaria, the United States, and Sweden (Figure 4). The factor-level benchmarks for the ecology stage are reported in Tables 5 and 6. In terms of the energy and CO2 factors, our study confirms the results of Zhou, Ang [39], in which China and the United States are found to have low performance; those authors attributed this to the relatively poor electric power generation efficiency and the coal-dominated fuel input in electric power generation of these large countries [39]. Our results show that the countries that are inefficient in the energy and CO2 factors have enormous potential to decrease energy consumption and CO2 emissions. Finland, New Zealand, and Spain are the best performers in the intermediate factor of GDP (Table 6). At the social welfare stage, nine countries, namely Belgium, Finland, France, New Zealand, Norway, Poland, Spain, Sweden, and the United Kingdom, are the leaders in the input and output factors of government expenditure on general public services, economic affairs, health, and education, population aged above 65, employment population, and tertiary school enrollment population (Tables 7 and 8).

Conclusions

Ecology efficiency and social welfare efficiency improvement are the most important policy options for countries. This study aims to measure the ability of the 29 countries to produce competitive products and services that fulfill individual needs and improve the level of welfare with less utilization of natural resources. This study builds a two-stage network production process model to investigate the ecology efficiency and social welfare efficiency of the countries.
In the preliminary analysis of efficiencies, the scores obtained from the two-stage network DDF model show six efficient countries at the ecology stage and four efficient countries at the social welfare stage. No country is efficient at both stages. The findings also demonstrate that most of the 29 countries have lower efficiency in the social welfare stage than in the ecology stage. The empirical results provide policymakers with a better awareness of the ecology efficiency and social welfare efficiency of the countries. Furthermore, we examine the efficient countries and confirm the leading countries from which others can learn. The findings identify the strengths and highlight the weaknesses of the countries in terms of input, undesirable output, intermediate, additional input, and output factors, assisting governments in improving and operating their countries efficiently. It may be argued that nations producing large undesirable outputs may not operate eco-efficiently and hence have the greatest scope to save energy, whereas nations with low energy consumption may be more eco-efficient and have less capacity to reduce undesirable outputs. For example, countries such as Norway, Switzerland, China, the United States, Italy, and the United Kingdom have considerable opportunities for reducing energy usage and CO2 emissions. Norway, the United States, Lithuania, Turkey, and China still have enormous potential to increase GDP. At the social welfare stage, countries including Turkey, Lithuania, China, Bulgaria, and Denmark have room to improve their social welfare efficiency by reducing government expenditure on general public services, economic affairs, health, and education while increasing the population aged above 65, the employment population, and the tertiary school enrollment population. From a macro perspective, both the ecology efficiency and the social welfare efficiency determine a country's overall efficiency level. According to the empirical findings, policymakers' varying degrees of interest and preference affect the ecology efficiency and social welfare efficiency.

The limitations of this study suggest guidance for future research. First, this study takes data from various sources; although this cross-database approach is a contribution of the study, the sample mixes countries from different regions, and future research may consider studying countries within the same region. Second, although this study uses panel data to investigate the ecology efficiency and social welfare efficiency of the countries, it does not compare the efficiency levels of countries across distinct periods. Future research may consider dividing the data into distinctive periods (i.e., before the Kyoto Protocol entered into force, the first commitment period of the Kyoto Protocol, and the second commitment period of the Kyoto Protocol).
The Roles of Coenzyme A Binding Pocket Residues in Short and Medium Chain Acyl-CoA Synthetases

Short- and medium-chain acyl-CoA synthetases catalyze similar two-step reactions in which acyl substrate and ATP bind to form an enzyme-bound acyl-adenylate, then CoA binds for formation of the acyl-CoA product. We investigated the roles of active site residues in CoA binding in acetyl-CoA synthetase (Acs) and a medium-chain acyl-CoA synthetase (Macs) that produces 2-methylbutyryl-CoA. Three highly conserved residues, Arg193, Arg528, and Arg586 of Methanothermobacter thermautotrophicus Acs (AcsMt), are predicted to form important interactions with the 5′- and 3′-phosphate groups of CoA. Kinetic characterization of AcsMt variants altered at each of these positions indicates these Arg residues play a critical role in CoA binding and catalysis. The predicted CoA binding site of Methanosarcina acetivorans Macs (MacsMa) is structurally more closely related to that of 4-chlorobenzoate:coenzyme A ligase (CBAL) than to that of Acs. Alteration of MacsMa residues Tyr460, Arg490, Tyr525, and Tyr527, which correspond to CoA binding pocket residues in CBAL, strongly affected CoA binding and catalysis without substantially affecting acyl-adenylate formation. Both enzymes discriminate between 3′-dephospho-CoA and CoA, indicating that interaction between the enzyme and the 3′-phosphate group is important. Alteration of MacsMa residues Lys461 and Lys519, located at positions equivalent to AcsMt Arg528 and Arg586, respectively, had only a moderate effect on CoA binding and catalysis. Overall, our results indicate the active site architecture in AcsMt and MacsMa differs even though these enzymes catalyze mechanistically similar reactions. The significance of this study is that we have delineated the active site architecture with respect to CoA binding and catalysis in this important enzyme superfamily.

Introduction

Acetyl-CoA synthetase (Acs) plays fundamental roles in the metabolism and physiology of cells from all three domains of life [1,2], and its regulation by acetylation is well studied [3]. Acs and other short and medium chain acyl-CoA synthetases catalyze a two-step reaction in which the first step (Equation (1)) requires acyl substrate and ATP but not CoA for formation of an enzyme-bound acyl-AMP intermediate with release of inorganic pyrophosphate (PPi) as a product. In the second step (Equation (2)), the acyl group is transferred to the sulfhydryl group of CoA and the acyl-CoA and AMP products are released.

acyl substrate + ATP → [acyl-AMP] + PPi  (1)

[acyl-AMP] + CoA → acyl-CoA + AMP  (2)
Crystal structures of Acs [4,5] indicate that the C-terminal domain rotates 140° toward the N-terminal domain in the transition between the two steps of the reaction. This domain alternation has been proposed to form the complete active site for proper positioning of CoA for nucleophilic attack on the acyl group of the intermediate during catalysis of the second half-reaction (Equation (2)) [5,6].

The 2.1 Å crystal structure of MacsMa (PDB ID 3ETC), a medium chain acyl-CoA synthetase from Methanosarcina acetivorans [7], revealed that in the absence of substrate this enzyme is in a conformation similar to that for thioester formation. This was surprising, given that the AcsSe structure in this same conformation was obtained from enzyme crystallized in the presence of adenosine-5′-propylphosphate, which mimics the acetyl-adenylate intermediate, and CoA [5]. Recently, the structure of the Lathyrus sativus oxalyl-CoA synthetase was solved in the presence of ATP and oxalate but not CoA and was also found to adopt the thioester-forming conformation [8].

Our characterization of the Methanothermobacter thermautotrophicus Acs (AcsMt) and Archaeoglobus fulgidus Acs (AcsAf) revealed that these enzymes are more diverse in substrate utilization than previously thought [9]. Whereas the acyl substrate range of AcsMt is limited to acetate and propionate with a strong preference for acetate, AcsAf has a broader acyl substrate range that includes butyrate, valerate, and the branched-chain isobutyrate, and has only a slight preference for acetate over propionate. The Pyrobaculum aerophilum Acs likewise has an expanded acyl substrate range [10].

Characterization of MacsMa revealed that the preferred acyl substrate is the branched-chain 2-methylbutyrate [11]. The enzyme has a broad acyl substrate range for the acyl-adenylate-forming step of the reaction, with the ability to utilize propionate (C3) to octanoate (C8) as well as certain branched-chain substrates; however, the acyl-adenylate formed with many of these substrates was not suitable for the thioester-forming second step of the reaction and was released in the absence of CoA. CoA inhibited acyl-AMP release and instead promoted its breakdown to AMP and the acyl group, which were released along with PPi [11]. In the presence of 2-methylbutyrate, MacsMa did not release the acyl-AMP intermediate in the absence of CoA, and in the presence of CoA it completed the two-step reaction and released 2-methylbutyryl-CoA, AMP, and PPi as products [11].
As Acs and Macs catalyze similar two-step reactions that differ only in the acyl substrate, it was expected that these enzymes would have similar active site architecture in which the acyl substrate binding pocket is expanded to accommodate larger substrates. We have shown that Trp416 in AcsMt (Trp414 in AcsSe) plays an essential role in determining acyl substrate range and preference [12]. This Trp is almost completely conserved among Acs sequences but is replaced by Gly in medium chain acyl-CoA synthetases. Based on our results, other labs have engineered the acyl substrate pocket of Acs to utilize novel substrates to generate alternative acyl-CoA substrates for metabolic engineering [13][14][15].

Inspection of the AcsSe and MacsMa crystal structures [5,7] and our analysis of site-directed variants altered in the acyl substrate pocket of MacsMa and AcsMt [11,12] indicate fundamental differences in the active site architecture of the two enzymes. Trp416 of AcsMt is replaced by Gly in MacsMa, as would be expected, and an alternate Trp residue, Trp259, occupies a position similar to that of Trp416 and was shown to be critical for substrate binding and catalysis [11,12].

ATP binding site determinants have been investigated in Acs [16,17] but not Macs. However, signature motif III (YXXGD) of the acyl-adenylate-forming enzyme superfamily [18], shown by Ingram-Smith et al. [16] to play a key role in ATP binding and catalysis in Acs, is well conserved in MacsMa as 431YHTGD435. The Asp at the last position in motif III is invariant among superfamily members and interacts with one or both hydroxyl groups of the ribose moiety of ATP in all of the structures available thus far, including that of MacsMa [7], suggesting that residues in this motif may serve similar roles in ATP binding in both Acs and Macs.

Short- and medium-chain acyl-CoA synthetases are widespread in the archaea [9] and have provided a rich background for studying the structural and biochemical diversity within this family. Here we report our investigation of the CoA binding sites of MacsMa and AcsMt. As previously shown for acyl substrate binding and catalysis of the first step of the reaction, our results indicate that key residues involved in CoA binding and catalysis of the second step of the reaction in AcsMt are dispensable in MacsMa. Instead, the CoA binding site of MacsMa more closely resembles that of 4-chlorobenzoate:CoA ligase (CBAL), which catalyzes the formation of 4-chlorobenzoyl-CoA [19][20][21][22][23].

Site-Directed Mutagenesis

Site-directed alteration of the MacsMa and AcsMt genes was accomplished with the QuickChange kit (Stratagene, cat. 200519) and the altered sequences were confirmed by sequencing. Oligonucleotides for site-directed mutagenesis were purchased from Integrated DNA Technologies (www.idtdna.com).

Purification of MacsMa and AcsMt Enzymes

The MacsMa and AcsMt enzymes were heterologously produced in Escherichia coli Rosetta Blue (DE3) placI (EMD Millipore) as described previously [11,12]. Clarified cell lysate was applied to a 5 mL His-Trap column and purified protein was eluted using a linear gradient of increasing imidazole concentration in buffer. The purified enzymes were dialyzed against 25 mM Tris, 10% glycerol [pH 7.5], aliquoted, and stored at −20 °C. Protein concentrations were determined by the Bradford method [24] using Bio-Rad Protein Assay Kit II (Bio-Rad, cat. 5000002) according to the manufacturer's instructions.
Assay for Acyl-CoA and Acyl-Adenylate Production

The hydroxamate assay [25,26] measures production of activated acyl groups, including both acyl-CoA and acyl-adenylate. Reaction mixtures (0.3 mL) contained 100 mM Tris-HCl [pH 7.5] (Fisher Scientific, cat. BP152-5), 600 mM hydroxylamine-HCl (Acros, cat. 270100010) [pH 7.0], and varied concentrations of acyl substrate, MgATP (Fisher Scientific, cat. BP413-25), and CoA (Fisher Scientific, cat. BP25101). Reactions were stopped by the addition of two volumes (0.6 mL) of stop solution [1 N HCl, 5% trichloroacetic acid (Acros, cat. 152130010), 1.25% FeCl3 (Fisher Scientific, I88-500)]. The change in absorbance at 540 nm was measured and product formation was calculated by comparison to a standard curve. Reactions were performed at the optimal temperature for each enzyme (55 °C for MacsMa and 65 °C for AcsMt). For ethanol-soluble acyl substrates, the concentrations of the stock solutions were adjusted such that the final ethanol concentration in the reaction was kept constant at 2%. All reactions were performed in triplicate.

For determination of apparent kinetic parameters, the concentration of each substrate was varied individually while the concentrations of the other substrates were held constant at a saturating level (~5-10 times the Km for that substrate). The apparent kinetic parameters with their standard errors were calculated using non-linear regression to fit the data to the Michaelis-Menten equation. All reactions were performed in triplicate. Values are the mean ± standard deviation.

PPi release was measured colorimetrically using a reagent containing 963 mM sodium meta-bisulfite (Fisher Scientific, S244-500); the absorbance at 580 nm was measured after 10 min and compared to a PPi standard curve. All reactions were performed in triplicate. Values are the mean ± standard deviation.

Assay for Acyl-CoA Thioester Bond Formation

Acyl-CoA thioester bond formation was measured as previously described [27]. Briefly, reactions (0.5 mL) were performed at 55 °C in 100 mM Tris-HCl (pH 7.5) with a range of substrate concentrations. Acyl-CoA thioester bond formation was measured spectroscopically at 233 nm. All reactions were performed in triplicate. Values are the mean ± standard deviation.
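As a concrete illustration of the non-linear Michaelis-Menten fitting described above, the following sketch uses scipy.optimize.curve_fit on synthetic placeholder data; the substrate concentrations, rates, and enzyme concentration are assumptions for illustration only, not measurements from this study.

```python
# Minimal sketch: fit apparent Km and Vmax by non-linear regression to the
# Michaelis-Menten equation, with standard errors from the covariance matrix.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])   # mM CoA (synthetic)
v = np.array([0.8, 1.5, 2.9, 4.2, 5.6, 6.8, 7.2, 7.5])      # umol/min/mg (synthetic)

popt, pcov = curve_fit(michaelis_menten, s, v, p0=[v.max(), np.median(s)])
vmax, km = popt
perr = np.sqrt(np.diag(pcov))            # standard errors of the fitted parameters
enzyme_conc = 0.5                        # hypothetical enzyme concentration (uM)
kcat = vmax / enzyme_conc                # units depend on how vmax is expressed
print(f"Km = {km:.2f} +/- {perr[1]:.2f} mM, Vmax = {vmax:.2f} +/- {perr[0]:.2f}")
```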
Conserved Arg Residues in Acs Interact with CoA

Inspection of the AcsSe structure reveals interaction between the negatively charged phosphate groups of CoA and two conserved Arg residues, Arg191 and Arg584, with Arg191 interacting with both the 5′-diphosphate and 3′-phosphate groups and Arg584 interacting with just the 3′-phosphate of CoA [5]. An additional highly conserved Arg residue, Arg526, interacts with the phosphate group of the acyl-adenylate intermediate and has been predicted to play a role in stabilizing the thioester-forming conformation [5]. These three Arg residues are conserved in AcsMt as Arg193, Arg528, and Arg586, respectively, and occupy similar positions relative to CoA (Figure 1).

Each of these Arg residues was individually altered to Ala, Lys, and Gln in AcsMt and kinetic parameters were determined for the purified enzyme variants. Overall, alterations at Arg193 had the most severe effect on the Km value for CoA. The Km values for CoA for the Arg193Lys and Arg193Gln variants increased 18.9- and 41.0-fold, respectively, and the Arg193Ala variant was unsaturable for CoA (Table 1). The Arg586 and Arg528 variants generally showed much less of an effect on the Km for CoA, with increases ranging from less than two-fold up to 8.8-fold, except for the Arg528Ala variant, which was rendered unsaturable for CoA (Table 1). The Km values for ATP and acetate were also determined for each variant. Alterations at the targeted Arg residues had only minor effects on the Km for ATP (Supplemental Table S1). The Km for acetate for most of the variants was similar to that for the wild-type enzyme, with the exception of the Arg193Ala, Arg528Ala, and Arg586Gln variants, which were unsaturable for acetate even at concentrations as high as 800 mM (Supplemental Table S1).

Interaction between Arg586 and the 3′-Phosphate Group of CoA Is Important for Substrate Binding and Catalysis

Based on the AcsSe structure, Arg586 of AcsMt is predicted to interact with the 3′-phosphate group of CoA. To examine the contribution and nature of this interaction in CoA binding and catalysis, we examined whether the unaltered enzyme and the Arg586Ala and Arg586Lys variants could discriminate between CoA and 3′-dephospho-CoA. The wild-type enzyme had an over 10-fold higher Km for 3′-dephospho-CoA than for CoA, but catalysis was not greatly reduced. The resulting 26.5-fold higher catalytic efficiency with CoA versus 3′-dephospho-CoA (Table 2) indicates that the interaction between the enzyme and the 3′-phosphate group plays an important role in CoA binding.
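The fold-discrimination quoted here is simply the ratio of catalytic efficiencies (kcat/Km) with the two substrates. A minimal sketch with placeholder values, not the measured parameters in Table 2, is:

```python
# Fold-preference for CoA over 3'-dephospho-CoA from catalytic efficiencies.
def catalytic_efficiency(kcat, km):
    return kcat / km

kcat_coa, km_coa = 20.0, 0.4     # s^-1, mM  (hypothetical values)
kcat_de,  km_de  = 15.0, 8.0     # s^-1, mM  (hypothetical values)

fold_preference = catalytic_efficiency(kcat_coa, km_coa) / catalytic_efficiency(kcat_de, km_de)
print(f"Preference for CoA over 3'-dephospho-CoA: {fold_preference:.1f}-fold")
```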
The Arg586Ala variant had a 6-fold higher Km value for CoA but a Km value for 3′-dephospho-CoA similar to that of the wild-type enzyme. Catalysis was greatly reduced with either substrate, resulting in just a 2.2-fold difference in catalytic efficiency with CoA versus 3′-dephospho-CoA (Table 2), indicating this variant can no longer discriminate well between the presence and absence of the 3′-phosphate group. Retention of a positive charge at position 586 in the Arg586Lys variant was not sufficient to restore discrimination between CoA and 3′-dephospho-CoA. The Km for CoA was increased less than 2-fold versus that of the wild-type enzyme. This variant had a lower Km for 3′-dephospho-CoA than the wild-type enzyme or the Arg586Ala variant, but kcat was still greatly reduced, resulting in only a 3.7-fold preference for CoA versus 3′-dephospho-CoA (Table 2).

Electrostatic Interaction between MacsMa and the 3′-Phosphate Group of CoA Is Important

To examine whether MacsMa also makes an electrostatic interaction with the 3′-phosphate group of CoA, the ability of the wild-type enzyme to discriminate between CoA and 3′-dephospho-CoA was determined. The enzyme displayed very low 2-methylbutyryl-CoA synthetase activity with 3′-dephospho-CoA even at a concentration of 10 mM, whereas the activity observed with 10 mM CoA was over 5-fold higher (Figure 2A). Kinetic parameters could not be determined with 3′-dephospho-CoA, so the level of discrimination could not be ascertained.

In the absence of CoA, wild-type MacsMa catalyzes synthesis and release of an acyl-adenylate when less favorable acyl substrates such as propionate are used, and the presence of CoA inhibits this activity [11]. Inhibition of the acyl-adenylate synthetase activity by CoA versus 3′-dephospho-CoA was examined as another means for determining whether interaction between the enzyme and the 3′-phosphate group of CoA is important. The acyl-adenylate synthetase activity was inhibited by both CoA and 3′-dephospho-CoA to a similar extent (Figure 2B), suggesting that interaction with the 3′-phosphate group is important for CoA binding for the second step of the reaction but does not play a role in the interaction between CoA and the enzyme for the first step of the reaction or when the second step cannot occur.
The CoA Binding Pocket in MacsMa Resembles That in CBAL

The CoA nucleotide binding pocket differs between the AcsSe and CBAL structures, but the pantetheine tunnel is similar [5,20]. In CBAL, the aromatic residues Phe473 and Trp440 play key roles in CoA binding and catalysis by accommodating the adenine moiety of CoA. Alterations of these residues greatly reduced catalytic efficiency for the second step of the reaction while having little effect on the first step [22]. Arg475 interacts with the CoA 3′-phosphate [5,20] and its alteration reduced catalytic efficiency [22].

Comparison of the MacsMa, AcsSe, and CBAL structures revealed that the CoA binding site of MacsMa more closely resembles that of CBAL [21]. In MacsMa, Tyr525 and Arg490 replace Phe473 and Trp440 of CBAL, respectively (Figure 3), although Tyr460 of MacsMa is also positioned such that it could function similarly to Trp440 of CBAL, which interacts with the adenine moiety of CoA [5,20]. Tyr527 of MacsMa occupies a similar location to Arg475 of CBAL, but the side chain is positioned away from the 3′-phosphate and may instead interact with the ribose group of CoA via its phenol group [21]. Gly459, located in close proximity to the putative CoA binding site of MacsMa, is highly conserved among all members of the adenylate-forming enzyme superfamily and has been proposed to be necessary to open the pantetheine tunnel in the thioester-forming conformation [21].
Based on these structural comparisons, we investigated the role of MacsMa residues Gly459, Tyr525, Tyr460, Arg490, and Tyr527 in CoA binding and catalysis. Alterations were made at each of these residues and the recombinant enzyme variants were produced and purified. The Tyr460Ala and Arg490Ala variants were insoluble and were not characterized. Kinetic parameters were determined for the purified enzyme variants to examine the impact of the alterations on acyl-CoA synthetase activity. Alteration of Gly459 to Ala had little effect on enzymatic activity: the Km and kcat values for the 2-methylbutyryl-CoA synthetase activity showed only slight changes from those for the wild-type enzyme (Table 3 and Supplemental Table S2).
Alterations at Tyr460, Tyr525, Tyr527, and Arg490 proved to be very deleterious to the acyl-CoA synthetase activity of MacsMa, with little 2-methylbutyryl-CoA synthetase activity observed even at high CoA concentrations. These variants displayed 15- to 80-fold reduced specific activity relative to the wild-type enzyme (Figure 4), and kinetic parameters could not be determined due to the low activity. These variants also had reduced propionyl-adenylate synthetase activity, with kcat values reduced 4.5- to 21-fold (Supplemental Table S3). The Km values for propionate and ATP were not substantially affected in these variants except for the Tyr525Ala variant, for which the Km value for propionate increased 6.0-fold and that for ATP decreased 14.7-fold (Supplemental Table S3).

To examine whether these alterations affected just the second step of the reaction, in which CoA binding occurs, or affected catalysis of the first step of the reaction as well, kinetic parameters were determined for the CoA-independent propionyl-adenylate synthetase activity of the enzyme. Except for the Tyr525Ala variant, the Km values for propionate and ATP showed ~2-fold or less change from the values observed for the unaltered enzyme, although the kcat values were decreased ~2-10 fold (Supplementary Table S3). These results suggest that although the first step of the reaction is affected, the impact is not enough to account for the near lack of acyl-CoA synthetase activity, and that CoA binding and/or catalysis of the second step of the reaction are specifically affected.
Because CoA inhibits the propionyl-adenylate synthetase activity of the wild-type enzyme [11], we examined the effect of a high concentration of CoA on this activity in the variants as a means for determining whether CoA can still bind even though the variants cannot catalyze the second step of the reaction. The presence of 15 mM CoA reduced activity of the wild-type enzyme by nearly 40% but had little to no inhibitory effect on activity of the variants (Figure 5). The presence of 15 mM CoA stimulated the propionyl-adenylate synthetase activity of the Arg490Lys variant by nearly 25%. The reason for this is unknown and was not investigated further.

The Corresponding Lys Residues in MacsMa Do Not Play a Major Role in CoA Binding

To provide further confirmation that CoA binding in MacsMa more closely resembles that in CBAL than in Acs, we also examined the roles of Lys461 and Lys519, which are positioned similarly to Arg528 and Arg586 of AcsMt (Figure 6). These Lys residues were individually altered to Arg and Ala and the purified variants were characterized. Kinetic parameters were determined using 2-methylbutyrate, the preferred substrate for the acyl-CoA synthetase activity of these enzyme variants. The Lys461Ala and Lys461Arg variants showed just a 1.9-fold and 3.0-fold decrease, respectively, in the Km for CoA and a modest (less than 10-fold) decrease in the Km value for 2-methylbutyrate (Table 3). The Km values for 2-methylbutyrate and ATP were not substantially affected (Supplementary Table S2). These results suggest that Lys461 in MacsMa does not play a role similar to that of the corresponding Arg in Acs, as there was little impact on CoA binding and catalysis.

The Lys519Arg alteration resulted in less than a 2-fold change in the Km for any substrate or in the turnover rate for the 2-methylbutyryl-CoA synthetase (Table 3 and Supplementary Table S2) or the propionyl-adenylate synthetase (Supplementary Table S3) activities. In contrast, the Lys519Ala variant had too little activity to determine kinetic parameters for either activity (Table 3 and Supplementary Tables S2 and S3).
Discussion

We have previously investigated substrate binding and catalysis in the short- and medium-chain acyl-CoA synthetases and identified residues important for acyl substrate binding in Acs and Macs and for ATP binding in Acs. Here we examined CoA binding in AcsMt and MacsMa.

Inspection of the AcsSc and AcsSe structures [4,5] revealed two conformations for the enzyme. In the first step of the reaction, the C-terminal domain is positioned out and away from the active site, but it then swings in toward the N-terminal domain for catalysis of the second step of the reaction. Three Arg residues, Arg191, Arg526, and Arg584 (Arg193, Arg528, and Arg586 of AcsMt, respectively), were proposed to play an important role in CoA binding and catalysis of the second step. Arg191 interacts with both the 5′-diphosphate and the 3′-phosphate groups of CoA. As this residue is on the N-terminal domain and already present in the active site before domain alternation, it may play an important role in initial binding of CoA. Arg584 enters the active site after domain alternation to interact with the 3′-phosphate group, and Arg526, also on the C-terminal domain, was proposed to stabilize the thioester-forming conformation through interaction with the phosphate group of the acyl-adenylate intermediate [5]. Although this residue was not proposed to directly interact with CoA, it may influence CoA binding and catalysis by locking the enzyme in the thioester-forming conformation and thus encasing CoA in the active site.
We altered each of these Arg residues individually in AcsMt and assessed each variant's kinetic properties. All the variants were impaired in catalysis, with kcat values reduced by 34- to 680-fold. The effect of these alterations on the Km for CoA varied, with alterations at Arg193 being the most detrimental and replacements at Arg528 and Arg586 having more variable effects. As might be expected, substitution with Ala at each position was the most deleterious, likely due to loss of both side chain charge and size. In fact, the Arg193Ala and Arg528Ala variants were not saturable for CoA or, surprisingly, for acetate.

Replacement of Arg586 had a lesser effect than replacement of Arg193, most likely because Arg586 contacts CoA only at the 3′-phosphate rather than at both the 5′-diphosphate and the 3′-phosphate as for Arg193. However, this single point of contact between Arg586 and CoA is important in CoA recognition and/or binding, as shown by the fact that the Arg586 variants were unable to distinguish between CoA and 3′-dephospho-CoA.

Alteration of Arg528 increased the Km for CoA and decreased kcat even though Arg528 appears to contact the phosphate group of the acyl-adenylate intermediate, thereby stabilizing the thioester-forming conformation of the enzyme, rather than contacting CoA directly. Substitution at this position would then be expected to reduce the ability of the enzyme to maintain proper positioning of the acyl-adenylate intermediate in the active site, thus affecting catalysis and influencing the ability to bind CoA as well. Our results suggest these three Arg residues are essential for CoA binding and catalysis, directly or indirectly. These residues may also influence acetate binding in the first step of the reaction, perhaps through an inability to fully control domain alternation.

In contrast to our results, Reger et al. [6] reported that alteration of Arg526 and Arg584 of AcsSe resulted in just a 2-fold decrease in catalysis. The Km for CoA increased for each of the variants, ranging from 4-fold for the Arg526Ala variant to 7- to 8-fold for the Arg584Ala and Arg584Glu variants [6]. However, no alterations were made at Arg191, the equivalent of Arg193 of AcsMt. Reger et al. [6] determined the kinetic parameters for CoA and ATP using 20 mM acetate in the reaction mixture for all enzyme variants. Given that the wild-type enzyme has a reported Km for acetate of 6.05 mM, the kinetic constants for ATP and CoA may have been determined at subsaturating acetate concentrations. These inconsistencies between our results and those of Reger et al. [6] may thus reflect differences between the two Acs enzymes, which share only 49% sequence identity at the amino acid level, or the experimental conditions. Such differences among Acs enzymes were already noted for acyl substrate selection [9,10].
In the MacsMa structure, the enzyme was found to be in a conformation similar to that observed for AcsSe, as if poised for the second step of the reaction even in the absence of substrates [7]. Comparison of the CBAL [21] and AcsSe structures [5] in the conformation for the second step of the reaction revealed that the binding pockets for the CoA nucleotide moiety in these enzymes are significantly different [21], with more interactions with the N-terminal domain in Acs but with the C-terminal domain in CBAL. Superposition of the MacsMa structure with the CBAL and AcsSe structures indicates that the CoA binding site more closely resembles that of CBAL [7]. The recent structure of a 2-hydroxyisobutyric acid CoA ligase shows CoA bound in a substantially bent conformation in the thioester-forming state [28]. This contrasts with the more stretched conformations of CoA observed in the thioester-forming conformations of CBAL, Macs, and Acs.

We investigated five residues in MacsMa predicted to interact with CoA based on comparison with the CBAL structure. The Tyr460, Arg490, Tyr525, and Tyr527 variants displayed greatly reduced 2-methylbutyryl-CoA synthetase activity, and their propionyl-adenylate synthetase activity was also reduced, but to a much lesser extent. Alteration at Gly459, which is strictly conserved in the acyl-adenylate-forming superfamily [7], reduced the turnover rate for both enzymatic activities but did not substantially affect the Km values for substrates.

In order to examine whether the alterations in the putative CoA binding pocket residues affected just catalysis or also affected CoA binding, we took advantage of the fact that CoA inhibits the propionyl-adenylate synthetase activity of MacsMa [11]. In each case, the variant showed less inhibition of the propionyl-adenylate synthetase activity by CoA than observed for the wild-type enzyme, suggesting that CoA cannot bind as well and supporting the conclusion that these four residues play a key role in CoA binding as well as catalysis by MacsMa.

MacsMa lacks each of the three Arg residues investigated in AcsMt. However, Arg528 and Arg586 of AcsMt are replaced by Lys residues at the corresponding positions (residues 461 and 519) in MacsMa. Structurally, although these residues are in the vicinity of the predicted CoA binding pocket of MacsMa, they are more remote from CoA than the Arg residues of Acs. Our kinetic results for MacsMa variants altered at these Lys residues suggest that Lys461 does not play a role in CoA binding or catalysis. Although Lys519 may play some role, maintenance of the positive charge at this position is sufficient. Alterations at these positions (with the exception of the Lys519Ala alteration) resulted in only mild reductions in kcat (5-fold or less) for either the acyl-CoA synthetase activity or the propionyl-adenylate synthetase activity. Per-residue binding free energy decomposition had previously identified Lys461 as a residue important in 2-methylbutyrate binding and catalysis [29]. Our alteration of Lys461 to alanine and arginine resulted in only 7.5-fold and 9-fold reductions in Km, respectively.
Overall, although Acs and Macs have similarities in active site architecture for substrate binding and catalysis of the first step of the reaction, our results strongly suggest that the active site architecture for CoA binding and catalysis of the second step has diverged greatly. Although structural comparison between Acs Se and Macs Ma revealed distinct differences in the CoA binding pocket [7], it appears that electrostatic interaction with the 3′-phosphate group of CoA is important for both enzymes; however, this interaction occurs with disparate residues in each enzyme. Differences in acyl substrate binding sites among acyl-CoA synthetase family members are not surprising, as the enzymes must accommodate substrates of different lengths that may be branched or unbranched. However, the diversity in CoA binding sites among family members was unexpected.

Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/life13081643/s1. Table S1: K m values for acetate and ATP for Acs Mt wild-type and variant enzymes; Table S2: K m values for 2-methylbutyrate and ATP for wild-type Macs Ma and the Lys 461, Lys 519, and Gly 459 variants; Table S3: Kinetic parameters for the propionyl-adenylate synthetase activity of wild-type Macs Ma and the Lys 461, Lys 519, Gly 459, Tyr 460, Tyr 525, Tyr 527, and Arg 490 variants.

Residues That Interact with CoA
Inspection of the Acs Se structure reveals interaction between the negatively charged phosphate groups of CoA and two conserved Arg residues, Arg 191 and Arg 584, with Arg 191 interacting with both the 5′-diphosphate and 3′-phosphate groups and Arg 584 interacting with just the 3′-phosphate of CoA [5]. An additional highly conserved Arg residue, Arg 526, interacts with the phosphate group of the acyl-adenylate intermediate and has been predicted to play a role in stabilizing the thioester-forming conformation [5]. These three Arg residues are conserved in Acs Mt as Arg 193, Arg 528, and Arg 586, respectively, and occupy similar positions relative to CoA (Figure 1).

Figure 1. CoA binding region of Acs Se and Acs Mt. The Acs Mt structure (right) was modeled on Acs Se (left; PDB ID 2P2F). CoA is shown in magenta, with the 3′-phosphate group in orange. Corresponding Arg residues in each structure (Acs Se/Acs Mt) are displayed as follows: Arg 526/528 in red, Arg 191/193 in blue, and Arg 584/586 in aqua.
Figure 2. Effect of CoA and 3′-dephospho-CoA on the acyl-CoA synthetase and propionyl-adenylate synthetase activities of Macs Ma. (A) Acyl-CoA synthetase activity of Macs Ma with CoA or 3′-dephospho-CoA. Activity was measured at increasing concentrations of either CoA (red) or 3′-dephospho-CoA (blue). Specific activities shown are the mean ± standard deviation of three replicates. (B) Inhibition of the propionyl-CoA synthetase activity of Macs Ma by CoA (red) versus 3′-dephospho-CoA (blue). Activities shown are the percent activity measured in the absence of CoA (100%) versus presence of CoA and are the mean ± standard deviation of three replicates.

Figure 3. The CoA binding region of CBAL and Macs Ma. The CBAL structure (left; PDB ID 3CW9) has CoA bound (in magenta with the 3′-phosphate group in orange). Residues shown to play an important role in CoA binding and catalysis are indicated. Residues predicted to be important for CoA binding in Macs Ma (right; PDB ID 3ETC) are shown in the same color as the corresponding residues in CBAL.

Figure 4. 2-Methylbutyryl-CoA synthetase specific activity of wild-type Macs Ma and the Tyr 460, Arg 490, Tyr 525, and Tyr 527 variants determined in the presence of 15 mM CoA.

Figure 5. Effect of CoA on propionyl-adenylate synthetase activity of Macs Ma wild-type and variants. Activities in the presence of 15 mM CoA were normalized as percentages relative to the specific activity observed for each enzyme in the absence of CoA. Reactions were performed in triplicate and values are the mean ± SD.

Table 1. Kinetic parameters for Acs Mt wild-type and variant enzymes. a Values are taken from [9]. b The enzyme was not saturable for CoA at concentrations up to 25 mM and kinetic parameters could not be determined.

The turnover rates for all the Arg variants were significantly impaired (Table 1), with 34- to 38-fold reductions in k cat observed for the Arg 586 Lys and Arg 586 Ala variants, 160- to 291-fold reductions for the Arg 193 Lys and Arg 193 Gln variants, and 130- to 326-fold reductions for the Arg 528 Gln and Arg 528 Lys variants. The most severe reduction in catalysis was observed for the Arg 586 Gln variant, which displayed a 680-fold reduced k cat. The effects on the overall catalytic efficiency with CoA ranged from a 58-fold reduction for the Arg 586 Lys variant to a nearly 12,000-fold reduction for the Arg 193 Gln variant. Even the more conservative Arg 193 Lys alteration resulted in ~3000-fold reduced catalytic efficiency, suggesting Arg 193 plays a critical role in catalysis as well as CoA binding.

Table 2. Discrimination between CoA and 3′-dephospho-CoA (deCoA) for wild-type Acs Mt and the Arg 586 variants.

Table 3. Kinetic parameters for the 2-methylbutyryl-CoA synthetase activity for wild-type Macs Ma and the Gly 459, Lys 461, and Lys 519 variants.
* Activity was too low for determination of kinetic parameters.
10,260.6
2023-07-28T00:00:00.000
[ "Biology", "Chemistry" ]
Quantum Monte Carlo simulation of a particular class of non-stoquastic Hamiltonians in quantum annealing
Quantum annealing is a generic solver of optimization problems that uses fictitious quantum fluctuations. Its simulation on a classical computer is often performed using the quantum Monte Carlo method via the Suzuki–Trotter decomposition. However, the negative sign problem sometimes emerges in the simulation of quantum annealing with an elaborate driver Hamiltonian, since it belongs to a class of non-stoquastic Hamiltonians. In the present study, we propose an alternative way to avoid the negative sign problem involved in a particular class of the non-stoquastic Hamiltonians. To check the validity of the method, we demonstrate our method by applying it to a simple problem that includes the anti-ferromagnetic XX interaction, which is a typical instance of the non-stoquastic Hamiltonians.

One standard remedy is to seek a representation in which negative signs do not appear in the non-diagonal elements 29,30. However, this task is highly nontrivial and there are no generic solutions. The negative sign problem belongs to the class of NP-hard problems 31.

In the present study, we show a method to simulate a particular class of the non-stoquastic Hamiltonians including the anti-ferromagnetic XX interaction in the quantum Monte Carlo simulation. The numerical scheme will be demonstrated below. The present study paves the way to simulate quantum annealing with an elaborate driver Hamiltonian. The future development of hardware devices to perform quantum annealing aims at implementing non-stoquastic Hamiltonians such as the anti-ferromagnetic XX interaction beyond the classically simulatable world. Our contribution in the present study is to establish a test bed to simulate a particular class of non-stoquastic Hamiltonians in order to verify the performance of future hardware. Moreover, our study will enable the invention of many types of algorithms inspired by quantum annealing on a classical computer but with the non-stoquastic Hamiltonian.

In standard QA, we employ a system with the transverse field, H = H 0 (σ) − Γ Σ i σ i x, where Γ represents the strength of the transverse field and σ i x is the x component of the Pauli matrix. The classical Ising model to be solved is H 0 (σ), where σ = (σ 1 , σ 2 , …, σ N ). The standard QA procedure decreases the strength of the quantum driver Hamiltonian to find the ground state of the cost function to be optimized. The adiabatic theorem ensures that we obtain the ground state after a sufficiently slow sweep of the quantum driver Hamiltonian. The computational time of QA can be evaluated by the energy gap between the ground state and the first excited state during the decrease in the transverse field 8,9. As has been extensively studied, the quantum phase transition hampers the efficient computation of QA. In particular, the first-order phase transition, characterized by the exponential closure of the energy gap with the number of spins, signals long-time computation for optimization via QA. Seki and Nishimori successfully avoided the first-order phase transition by utilizing additional fluctuations, namely anti-ferromagnetic XX interactions.
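For orientation, a hedged sketch of the anti-ferromagnetic XX driver attributed to Seki and Nishimori above; the prefactor 1/2N is an assumption chosen for consistency with the quadratic f(m x ) discussed later in this text, and the annealing schedule is omitted.

```latex
% Hedged sketch (not the original papers' verbatim equation): transverse-field
% driver plus an anti-ferromagnetic XX term built from the total x-magnetization.
\begin{equation}
  H_{\mathrm{driver}}
  = -\,\Gamma \sum_{i=1}^{N} \sigma_i^{x}
    + \frac{\gamma}{2N}\Bigl(\sum_{i=1}^{N}\sigma_i^{x}\Bigr)^{2}
  = -\,N\Bigl(\Gamma m_x - \tfrac{\gamma}{2}\, m_x^{2}\Bigr),
  \qquad
  m_x \equiv \frac{1}{N}\sum_{i=1}^{N}\sigma_i^{x}.
\end{equation}
```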
However, their quantum fluctuation yields the negative sign problem in a naive application of the Suzuki-Trotter decomposition in the quantum Monte Carlo method 25. Seki and Nishimori avoided the negative sign problem without rotating the basis used to represent the Hamiltonian. They utilized the mean-field analysis to compute the free energy and thermodynamic quantities such as the magnetization, energy, etc. A recent study on the performance in the optimization problem by use of the anti-ferromagnetic XX interaction was carried out by numerical diagonalization for a limited system size 32.

We then elucidate the essential part of the mean-field study to avoid the negative sign problem and apply its scheme to simulate the non-stoquastic Hamiltonian in a more generic form. The scheme involves the following form of the non-stoquastic Hamiltonian: H = H 0 (σ z ) − N f(m x ), where m x = (1/N) Σ i σ i x. The classical Hamiltonian here is not restricted to the all-to-all connections as in the previous studies carried out by Seki and Nishimori 22,23. The proposed method in the present study is applicable to all kinds of classical Hamiltonians, for instance the spin-glass Hamiltonian with all-to-all connections, H 0 = −Σ i<j J ij σ i z σ j z, and the finite-dimensional spin-glass Hamiltonian with limited connections, H 0 = −Σ ⟨ij⟩ J ij σ i z σ j z, where ⟨ij⟩ stands for the summation over the subset of the interactions located at each bond on the specific lattice. The dimensionality is not limited because our method is based on the quantum Monte Carlo simulation rather than the mean-field analyses of the previous studies 22,23. In addition, the various forms of the Hamiltonian representing optimization problems with many-body interactions can be within the scope of application of our method. On the other hand, the quantum fluctuation term is limited to the specific form in which its argument is given by the summation of the x-components of the Pauli operators over all sites. This is a kind of extension of the quantum fluctuation used in QA. The case of the standard QA is described by f(m x ) = Γ m x. The case of QA with the anti-ferromagnetic XX interaction and transverse field is included in this form through f(m x ) = Γ m x − (γ/2) m x 2. In our scheme, we utilize the quantum Monte Carlo simulation while avoiding the negative sign problem for a particular class of the non-stoquastic Hamiltonians through simple calculations, by replacement of the quantum fluctuation with an adaptively changing transverse field. As discussed above, there is no generic solution for the negative sign problem. To avoid the obstacle in the numerical computation, one needs to find an adequate basis in which the negative sign problem disappears. In contrast to the ordinary approach of overcoming the difficulty involved in the negative sign problem, our method avoids the obstacle via a simple transformation and calculation rather than by finding a proper basis in which to represent the Hamiltonian. We propose two approaches to simulate a particular class of the non-stoquastic Hamiltonians. The first one is called the adaptive quantum Monte Carlo simulation and the other is the data-analysis approach. These methods are detailed below.
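As a brief consistency check, and under the assumption that the quadratic form above is the intended normalization, the derivative of f gives the effective transverse field used in the saddle-point replacement, and its inverse is the linear function quoted later in the Methods:

```latex
% Assumed quadratic fluctuation; its derivative plays the role of the
% effective transverse field, and the inverse matches the linear function
% quoted in the Methods section of this text.
\begin{align}
  f(m_x) &= \Gamma m_x - \tfrac{\gamma}{2} m_x^{2}, &
  f'(m_x) &= \Gamma - \gamma m_x, &
  (f')^{-1}(\tilde m_x) &= \frac{\Gamma - \tilde m_x}{\gamma}.
\end{align}
```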
The following section demonstrates the validity of our proposed method for a simple model: the (ferromagnetic) infinite-range model with the transverse field and anti-ferromagnetic XX interaction defined in Eq. (6). The infinite-range model has a second-order phase transition at the critical point Γ c = 1 in the absence of any (longitudinal) magnetic field and anti-ferromagnetic XX interactions. The model we deal with here is solvable by mean-field analysis and thus adequate for validation of our method. We emphasize, however, that the application range of our method is beyond that of the mean-field analysis. When we perform a naive application of the quantum Monte Carlo simulation through the standard approach via the Suzuki-Trotter decomposition, the numerical simulation suffers from the negative sign problem, preventing precise estimation of the physical quantities for cases away from f(m x ) = Γ m x. However, we do not encounter the negative sign problem when the non-stoquastic terms in Eq. (3) are replaced via the saddle-point method and the integral representation of the delta function, as detailed below.

Results
In this section, we propose our first method: the adaptive quantum Monte Carlo simulation. We set the spin systems with N = 4, 8, 16 and 32. In order to perform the quantum Monte Carlo simulation, we consider the replicated system at extremely low temperatures via the Suzuki-Trotter decomposition. We set the Trotter number as τ = 128 and the inverse temperature (temperature) as β = 50 (T = 0.02). To check the precision of the computation, we estimate various thermodynamic quantities: the internal energy, (longitudinal) magnetization, and the transverse magnetization, including the order parameters of the system. In Fig. 1, we show estimations of three thermodynamic quantities computed from an 80,000-MCS time average after 20,000 MCS of equilibration. For comparison, we show the exact solution obtained using the mean-field analysis, because this model is tractable. The case with γ = 1 is for the anti-ferromagnetic XX interaction, while that with γ = 0 is for the absence of the anti-ferromagnetic XX interaction. The results obtained by the adaptive quantum Monte Carlo simulation are close to the exact result for γ = 1 rather than to that for γ = 0. This is evidence that the adaptive quantum Monte Carlo simulation can correctly estimate the thermodynamic quantities even for the non-stoquastic Hamiltonian. Aside from the finite-size effect, the obtained results asymptotically coincide with the exact solutions. We remark that the convergence of the estimations is relatively slow compared to the ordinary quantum Monte Carlo simulation, because we have to estimate the expectation of the thermodynamic quantities while the effective transverse field f′(m x ) depends on the tentative value of the transverse magnetization. Although substantial slowing down of the equilibration is not observed in this simple model, the computational time to correctly estimate the thermodynamic quantities might become longer depending on the complexity of the model.

Another method to estimate the thermodynamic quantities of the non-stoquastic Hamiltonian is the data-analysis approach. Similar to the previous check, we test our method in the simple infinite-range model defined by Eq.
(6). We again set the spin systems with N = 4, 8, 16 and 32, the Trotter number as τ = 128, and the inverse temperature (temperature) as β = 50 (T = 0.02). We here perform the "standard" quantum Monte Carlo simulation for the model with the transverse field only, while changing the value of the transverse field from Γ = 0 to Γ = 4 in steps of ΔΓ = 0.05. For validation of our method, we estimate the internal energy, (longitudinal) magnetization, and the transverse magnetization as in Fig. 2. The computational time is short because we perform the "standard" quantum Monte Carlo simulation. We take an 80,000-MCS time average after 10,000 MCS of equilibration.

Similar to the results of the adaptive quantum Monte Carlo simulation, we confirm the validity of our method within the finite-size effects of the simulation. The obtained results are closer to the exact solution with γ = 1, which is the non-stoquastic Hamiltonian, than to that with γ = 0. In the results obtained using the data-analysis approach, we find several adjacent points at different values of γ. This means that the same results are obtained in several cases after estimation of the thermodynamic quantities for the non-stoquastic Hamiltonian. We can remove this lack of numerical precision by decreasing the step size (i.e., increasing the resolution) of the transverse field in the "standard" quantum Monte Carlo simulation. We confirm that both of the proposed methods based on the quantum Monte Carlo simulation can estimate the thermodynamic quantities.

Discussion
We propose two methods to simulate the non-stoquastic Hamiltonian on the basis of the quantum Monte Carlo simulation, which is often affected by the negative sign problem due to the computational basis. Although one usually considers a rotation of the basis to remove the negative sign problem, we avoid it by introducing the auxiliary variable m x and its conjugate ∼m x. The model which we deal with is non-stoquastic, but the negative sign problem is not insurmountable. Thus, we avoid the negative sign problem even though the model falls within the definition of a non-stoquastic Hamiltonian.

We list possible questions and answers on our proposed method below:
• Our standpoint. In the present study, we simply check the validity of our proposed methods by comparing the numerical results with the exact solution obtained via the mean-field analysis. Since the proposed method is demonstrated on a toy model, we do not obtain any nontrivial result. However, this does not readily mean that our work is trivial. This is the first step toward establishing a systematic approach for the simulation of the non-stoquastic Hamiltonian. In the numerical simulation of the non-stoquastic Hamiltonian with a large number of components, efficient methods to compute the thermodynamic quantities are exact diagonalization of the Hamiltonian and renormalization-group analysis. Both methods often suffer from size limitations. We propose an alternative choice to approach the non-stoquastic Hamiltonian.
• Limitation of our method.
Our method is not a generic solution to the negative sign problem. The applicable scope of our method is limited to the case where the quantum fluctuation is determined by the collection of the x-components of the Pauli operators. The form of the function f(m) is very flexible; the condition on the function is that P(m x ) → 0 as m x → ±∞. Furthermore, when the spin operators characterizing the quantum fluctuation can be reduced to a few quantities, our proposed method can be applied. For instance, our method can be generalized to the case in which f depends on several partial x-magnetizations, each defined as a sum of σ i x over a subset Λ p of the indices (locations) of the spin operators. This means that inhomogeneous XX interactions can be dealt with. The combination of the inhomogeneous XX interactions is directly related to the computational cost. We would expect that the boundary between the classical and the quantum computational capacity lies in the inhomogeneity of the XX interactions. This problem is at present unresolved.

We employ the saddle-point method for determining the value of ∼m x, as detailed below. Thus, in order to enhance the numerical precision, we take the system size to be relatively large. However, we may also simulate the value of m x using a Langevin stochastic process. Then, we need to choose the direction of m x for stabilization of its behavior depending on the form of f(m x ).

• Capability of acceleration. Most of the ordinary simulations by the Langevin stochastic process and the Markov-chain Monte Carlo method require the detailed balance condition. Various studies have shown that violation of the detailed balance condition can accelerate the convergence to the predetermined steady state [33-36]. We may also utilize the exchange Monte Carlo simulation to accelerate the convergence to the steady state 3. In the present study, we employ this method to obtain our results.

• Avoiding the first-order phase transition. Seki and Nishimori demonstrated the possibility of removing the first-order phase transition by using the anti-ferromagnetic XX interaction. The effect of this kind of quantum fluctuation can be recast as a modified transverse field that depends on the tentative value of the transverse magnetization. This alternative viewpoint provides a deeper understanding of how elaborate quantum fluctuations change the phase transition. As shown above, the resulting thermodynamic quantity can be characterized by the cross points between the transverse magnetization under the effective transverse field and the inverse function determined by the form of the quantum fluctuation f(m x ). A first-order phase transition is associated with a discontinuous jump of the thermodynamic quantity. The discontinuous jump emerges from coexisting phases at the same parameter values; in other words, the thermodynamic quantity is described by a multivalued function. The thermodynamically stable point is selected by comparing the values of the free energy at the multiple solutions for the same parameters. When we employ a nontrivial function f(m x ) that avoids multiple cross points, we convert the first-order phase transition into a second-order (continuous) phase transition. The relationship between the computational complexity of QA with the non-stoquastic Hamiltonian and the limitation of our approach would be a very exciting realm and will be reported elsewhere as a future study.
A recent study on the non-stoquastic Hamiltonian was carried out by numerical diagonalization for a kind of spin-glass model with long-range interactions and anti-ferromagnetic XX interactions. However, if one implements our method, one can investigate the properties of such a model. In this sense, our methods open an alternative way to approach nontrivial aspects of the non-stoquastic Hamiltonian by performing the quantum Monte Carlo simulation for systems with many more components than the sizes considered in numerical diagonalization. We will report such nontrivial results obtained with our method in the near future.

Methods
In the standard approach of the quantum Monte Carlo simulation, we construct the replicated partition function via the Suzuki-Trotter decomposition to reduce the transverse field to interactions between the replicated systems. For the class of non-stoquastic Hamiltonians written as in Eq. (3), we replace the quantum fluctuation with an adaptively changing transverse field via the integral representation of the delta function and the saddle-point method. Thus, the method we detail below becomes more precise when the number of degrees of freedom is large.

The partition function of the present system is written in the standard form. We employ the Suzuki-Trotter decomposition to divide the exponentiated Hamiltonian into its diagonal and non-diagonal parts. Here, we utilize an identity through the delta function, introducing an auxiliary variable m xt for the transverse magnetization of each Trotter slice t. The partition function is then written in a kind of micro-canonical form of the present system. We further employ the Fourier (exponential) integral representation of the delta function, which introduces the conjugate variable ∼m xt. The resulting partition function is the same as that of the Ising model with a transverse field. In the thermodynamic limit, we may take the saddle point of the integral. The saddle point is evaluated by ∼m xt = f′(m xt ), where f′(m) is the derivative of the function f(m). In other words, the transverse field is determined by the auxiliary variable m xt, which corresponds to the transverse magnetization. The transverse field can be rewritten as the interaction between the different Trotter slices t and t + 1, according to the prescription of the quantum Monte Carlo simulation as below. We define the joint probability distribution P({σ t }, m x ) for the spin configurations and the auxiliary variable. Simultaneously, we define the conditional probability P(σ|m x ) as in Eq. (14), where Z(m x ) is the normalization constant, and the corresponding marginal distribution P(m x ).

In the standard procedure of the Markov-chain Monte Carlo simulation, we generate a series of realizations of the Ising variables following the probability distribution conditioned on some fixed parameters such as the temperature and transverse field. However, in our case, the effective transverse field changes with the tentative value of the auxiliary variable m x. The argument of the function f′(m x ) also fluctuates following the marginal distribution. We may then simulate the stochastic process governed by a Langevin equation to generate the auxiliary variable m x following the marginal distribution, where dW is the Wiener process. Here, we define m x as the expectation of the transverse magnetization, where the bracket denotes the expectation with respect to the weight of the conditional probability P(σ|m x ). In the case of the number of spin variables N → ∞, the effect of the Wiener process in the Langevin equation is negligible. In other words, the fluctuation around the saddle point, at which the average of σ i x equals m x, can be ignored. We can then follow two strategies to employ our approach. The first strategy is to directly simulate the non-stoquastic Hamiltonian by using the adaptive transverse field:
1. Perform the quantum Monte Carlo simulation following Eq. (14) and estimate the expectation value of the transverse magnetization. To estimate the expectation, we generally require long-time equilibration. In actual applications, we compute the approximate value of the expectation by the empirical average over the whole simulation after the first relaxation.
2. Change the transverse field ∼m x following the saddle-point solution ∼m x = f′(m x ).
3. Repeat until the physical quantities converge.

The second strategy is performed by data analysis of the results of the "standard" quantum Monte Carlo simulation of the Ising model with the transverse field only. We generate results over a wide range of values of the transverse field ∼m x. Then, we plot the transverse magnetization vs. the transverse field and read out its cross point with the curve determined by the inverse function of f′(m). The cross point identifies the realization of the transverse magnetization of the non-stoquastic Hamiltonian. In the case of the transverse field and the anti-ferromagnetic XX interaction, the inverse function is simply given by the linear function f′ −1 (m x ) = (Γ − m x )/γ. In this example, for given Γ and γ, we find the cross point and re-plot the realized transverse magnetization and other corresponding thermodynamic quantities vs. the values of Γ and γ, as shown in Fig. 3.

Figure 1. Results of the adaptive quantum Monte Carlo simulation. The dashed curves denote the exact solution for the case without antiferromagnetic interactions and the solid ones represent those with antiferromagnetic interactions. The graded red dots represent the results of the adaptive quantum Monte Carlo simulation (a denser red point stands for a larger system size).

Figure 2. Results of the data analysis. The same symbols are used as in Fig. 1.

Figure 3. Data analysis for the non-stoquastic Hamiltonian. The red dots are the results of the Ising model with the transverse field only, and the blue dots represent the inverse function ∼m x = f′ −1 (m x ).
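A minimal sketch of the second (data-analysis) strategy just described, assuming the linear inverse function (Γ − m x )/γ quoted above; the transverse-magnetization curve below is an illustrative placeholder standing in for the output of a standard transverse-field quantum Monte Carlo scan, not actual simulation data.

```python
import numpy as np

# Minimal sketch of the data-analysis strategy: find the crossing between the
# measured transverse magnetization m_x(Gamma_eff) of the transverse-field-only
# model and the self-consistency line Gamma_eff = Gamma - gamma*m_x.
# The arrays below are illustrative placeholders, not QMC output.

gamma_eff = np.linspace(0.0, 4.0, 81)   # scanned effective transverse fields
m_x = np.tanh(gamma_eff)                # placeholder m_x(Gamma_eff) curve

def cross_point(Gamma, gamma_coupling):
    """Return (Gamma_eff*, m_x*) where Gamma_eff = Gamma - gamma*m_x holds."""
    residual = gamma_eff - (Gamma - gamma_coupling * m_x)
    idx = np.argmin(np.abs(residual))   # coarse root; refine by interpolation
    return gamma_eff[idx], m_x[idx]

print(cross_point(Gamma=1.5, gamma_coupling=1.0))
```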
4,692.8
2016-12-14T00:00:00.000
[ "Physics" ]
Estimation of the Critical Temperatures of Some More Deep Eutectic Solvents from Their Surface Tensions
Deep eutectic solvents are binary mixtures of a hydrogen bond accepting component (HBA), typically a quaternary ammonium or phosphonium salt, and a hydrogen bond donating component (HBD), typically a polyol, at a definite molar ratio. These mixtures are liquid at room temperature and freeze at a temperature considerably below the freezing points of the components, and hence, they are eutectics. Mirza et al. [1] reported a group additivity method for the estimation of the critical temperatures T_c (also the boiling points and densities) of deep eutectic solvents. An alternative path for the estimation of the critical temperatures is described here for deep eutectic solvents, for most of which no previous estimates have been reported. The surface tensions σ of liquids over a temperature range are related to their critical temperatures T_c according to either of two relationships: one according to Eötvös [2] and the other according to Guggenheim [3].

Introduction
Deep eutectic solvents are binary mixtures of a hydrogen bond accepting component (HBA), typically a quaternary ammonium or phosphonium salt, and a hydrogen bond donating component (HBD), typically a polyol, at a definite molar ratio. These mixtures are liquid at room temperature and freeze at a temperature considerably below the freezing points of the components, and hence, they are eutectics. Mirza et al. [1] reported a group additivity method for the estimation of the critical temperatures T_c (also the boiling points and densities) of deep eutectic solvents. An alternative path for the estimation of the critical temperatures is described here for deep eutectic solvents, for most of which no previous estimates have been reported.

The surface tensions σ of liquids over a temperature range are related to their critical temperatures T_c according to either of two relationships. One relationship, according to Eötvös [2], is σV^{2/3} = A(T_c − T) (1), where V = M/ρ is the molar volume of the liquid, M is its molar mass, and ρ is its density. The other relationship, according to Guggenheim [3], is σ = σ_0(1 − T/T_c)^{11/9} (2). These relationships may be inverted in order to deduce the critical temperatures from σ(T) and ρ(T) data that are available in the literature. In order to apply these expressions, it is necessary to determine the parameters A of (1) and σ_0 of (2). The experimental functions σ(T) and ρ(T) are linear over a wide temperature range, the density being described by ρ = a − b(t/°C) (4). The molar volume V = M/ρ is therefore also linear with the temperature (because b ≪ a). Therefore, extrapolation to the nominal temperature T = 0 yields, according to (1) and (2) respectively, A_0 = σ(0)V(0)^{2/3} and σ_0 = σ(0). Thus, the critical temperature T_c^E follows from inverting (1), and T_c^G from inverting (2).

The Data Employed and the Results
Table 1 presents the surface tension data σ(T = 298.15 K) and their temperature coefficients (∂σ/∂T)_p as well as the molar masses M and the density coefficients of (4), a and b, for obtaining the molar volumes. The derived critical temperature values T_c^E and T_c^G are also included in Table 1 for those deep eutectic solvents for which the required data have been reported. Table 1 also shows the values of T_c^M according to the group contribution estimates; the first entries are from Mirza et al. [1] and the second ones are from Mjalli et al.
[4]. The following abbreviations are used for the HBA components of the solvents: ChCl = choline chloride; DEANCl = diethylethanolammonium chloride; Pr4NBr = tetrapropylammonium bromide; Bu4NCl = tetrabutylammonium chloride; MePh3PBr = methyltriphenylphosphonium bromide; BzPh3PBr = benzyltriphenylphosphonium bromide; and AllPh3PBr = allyltriphenylphosphonium bromide. The HBD components are EG = 1,2-ethanediol, Gly = glycerol, Fru = fructose, Glu = glucose, TEG = triethylene glycol, Mea = monoethanolamine, Asa = aspartic acid, Gla = glutamic acid, and Arg = arginine, and the molar ratios for the eutectics are also shown. The resulting T_c^E estimates of the critical temperatures according to the Eötvös relationship are on average about 50% larger than the T_c^G estimates according to the Guggenheim relationship. However, the T_c^G estimates are nearer the T_c^M values from the group contributions according to Mirza et al. [1] than are the T_c^E ones. On the whole, the T_c^G values appear to be the more trustworthy.

Discussion
The normal boiling points T_b of deep eutectic solvents are generally not relevant for their applications but represent the upper limit of their usage, provided they do not decompose below these T_b. Therefore, the critical temperatures T_c, which are on average about (4/3)T_b [1], are not quantities that are directly relevant to their applications, but they have found use for the estimation of other properties that have not been measured as functions of the temperature [1,4]. Still, the critical temperatures are physical properties that ought to be known; hence, the present estimates for two dozen deep eutectic solvents, of which only eight had their T_c estimated previously, make sense. The values are based on the nominal extrapolation of the experimental surface tension data to T = 0 for the T_c^G estimates (to obtain σ_0) and of both these and the densities for the T_c^E estimates (to obtain A_0), but these parameters do not have any real significance.

Previous estimates of the critical temperatures of deep eutectic solvents were conducted according to two paths. One was the use of the modified Lydersen–Joback–Reid group contribution method, which yields first the values of T_b and from them the values of T_c; it is applicable to organic liquids for which at least T_b is known and was then extended [1,4,17] to the deep eutectic solvents. It was applied to twenty different deep eutectic solvents (shown as T_c^M [1,4] and T_c^H [17] in Table 1 for those also studied here) as well as to noneutectic compositions of some of them. Some disagreements between the results of the application of this method are noted in Table 1. The other path was the application of the Eötvös [4,15] and the Guggenheim [15] expressions, but in a different manner than done here. The former expression was recast in the linear form σV^{2/3} = A′ + BT, and from the intercept and slope of its plots T_c = −A′/B resulted [4]. The agreement with the T_c^M values from the group contributions is poor. Better agreement was obtained in [15] between values derived from the Eötvös and Guggenheim expressions.

Table 1: The surface tension σ at 298.15 K and its temperature coefficient (∂σ/∂T)_p; the molar masses M and the density coefficients of ρ = a − b(t/°C) of deep eutectic solvents; and the derived critical temperatures T_c^E, T_c^G, T_c^M [1,4], and T_c^H [17].
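A minimal sketch of the extrapolation step described above, i.e., obtaining σ_0 = σ(0) and A_0 = σ(0)V(0)^{2/3} from the linear σ(T) and ρ(t) fits; the coefficient values are hypothetical placeholders (not entries from Table 1), the kelvin reference for T = 0 is an assumption about the convention, and the final inversion of Eqs. (1) and (2) to the critical temperatures is not reproduced here.

```python
# Sketch: extrapolate sigma(T) and V(T) = M/rho(T) linearly to T = 0 to get
# sigma_0 = sigma(0) and A_0 = sigma(0) * V(0)**(2/3), as described above.
# All numbers are illustrative placeholders; temperature is taken in kelvin,
# and the T = 0 reference point is an assumption about the paper's convention.

M = 107.9            # g/mol, hypothetical molar mass of the eutectic mixture
sigma_298 = 55.0     # mN/m at 298.15 K (placeholder)
dsigma_dT = -0.15    # mN/(m K) (placeholder)
a, b = 1.20, 5.0e-4  # rho = a - b*t(degC), g/cm^3 (placeholders)

def sigma(T):                       # linear surface tension, T in kelvin
    return sigma_298 + dsigma_dT * (T - 298.15)

def molar_volume(T):                # cm^3/mol, rho evaluated at t = T - 273.15 degC
    rho = a - b * (T - 273.15)
    return M / rho

sigma_0 = sigma(0.0)
A_0 = sigma_0 * molar_volume(0.0) ** (2.0 / 3.0)
print(sigma_0, A_0)
```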
1,562.4
2018-05-02T00:00:00.000
[ "Materials Science" ]
A statistical analysis of the effect of confining pressure on deformation characteristics of HMA mixtures in the modified wheel track testing
Permanent deformation in the form of rutting is a critical mode of failure observed in flexible pavements. While several studies have been conducted to develop multiple tests for the characterisation of permanent deformation, little information is obtainable from the existing literature on how the factors, or the interaction of the factors, from these tests affect the permanent deformation behaviour in a simulative test, such as the wheel tracker. This research focuses on statistical analysis and ruggedness testing of the factors affecting the permanent deformation behaviour of asphalt mixes tested in the Modified Wheel Tracker (MWT). The analysis involved five factors in total, each at two levels. These factors are binder type, voids in total mix (VTM %), nominal maximum aggregate size, temperature, and confining pressure. The study utilised a half-fractional factorial design in accordance with ASTM E1169-20. Significant parameters were determined through statistical analysis and regression models were proposed. The contour plots provided various combinations of the most significant factors for the corresponding responses. Based on the statistical analysis of the experiments conducted without any confinement, temperature, amongst all other factors, was found to impose the greatest effect on the permanent deformation behaviour based on vertical and horizontal FN Indices. Experiments with controlled confinement show that "confining pressure" is the most significant factor for the rutting parameters. The sensitivity analysis points out that, to keep the vertical deformation at 2000 cycles within ±25% of the model prediction, the confining pressure and temperature should be controlled within ±25% and ±5%, respectively. The use of the MWT shows that long-term rutting development can well be predicted from rutting at 2000 cycles using linear and exponential models.

Permanent deformation in hot mix asphalt (HMA)
Permanent deformation in the form of rutting presents itself as longitudinal depressions along the wheel paths. According to Tarefder et al. [1], this critical failure in flexible pavements is mainly caused by repeated loading, which induces progressive movement of materials. Saleh [2] and Roy-Chowdhury et al. [3] indicated that the composition of the asphalt mixture, along with the degree of compaction, mix stiffness, temperature and loading rate, are the contributory factors for the rutting resistance of asphalt concrete pavements [2,3]. Saleh [2] concluded that the combination of densification and shear deformation causes rutting, and that the latter causes the more severe form of the distress. In other words, rutting is caused by both vertical and horizontal permanent deformations. Therefore, the characterisation of rutting must consider the susceptibility of the asphalt mixtures to shear deformation. As pointed out by Tarefder et al. [1], the individual effects of aggregate and asphalt binder, along with their interaction with each other in a mixture, have a significant contribution to the rutting characteristics of Hot Mix Asphalt (HMA). There can be occasions when asphalt pavements with very stiff binder and adequate aggregates may still fail to exhibit low rutting, which is mainly due to other properties, such as incorrect volumetric properties.
Moreover, it can also be said that the mixture properties alone may not be sufficient for the purpose of ensuring low rutting; external factors such as temperature should also be considered [1]. Temperature is one of the main factors that affects the rutting resistance of asphalt pavements. The change of temperature in the pavement changes the viscoelastic properties of the asphalt mix, which in turn induces permanent deformation (rutting) in the pavement [4]. Souza and Castro [5] concluded that the deformation in asphalt concrete is the result of the individual and combined effect of temperature and the susceptibility of the asphalt mix to temperature. Roy-Chowdhury et al. [6] statistically analysed the factors that have a significant effect on rutting behaviour. It was shown that the individual effects of air voids and test temperature, and the combined effect of binder type and air voids, have the greatest influence on the rutting resistance of the asphalt mixtures. It is generally well established that stiffer binder is preferred over softer binder for increasing the rutting resistance of asphalt pavements [6]. As pointed out by Roy et al. [7], the densification that occurs as a result of the reduction of air voids after construction is a primary cause of rutting during initial traffic loading. The mix later undergoes shear flow when the material reaches the densest state, in which the materials flow with volume change. The study by Rahmani et al. [8] indicated that the level of confinement has a significant effect on the nonlinear viscoelastic characteristics of asphaltic materials. Roy et al. [7] reported that the effect of confinement on the variation in rutting is greater for mixes with high air voids content than for mixes with lower air voids content. It can be noted that NCHRP Report 465 [9] involves the option of altering confinement only for the Simple Performance Tests (SPTs), such as the dynamic modulus test. However, no study until now has addressed or analysed the effect of confinement in a simulative test such as the wheel tracker. The current research addresses this point, for which a modified setup of the wheel tracker was utilised. The details of this modified wheel tracker (MWT) are presented in the following sections. A laboratory and statistical investigation of the effect of the aforementioned factors on the rutting properties of HMA is the focus of this research. Furthermore, although there are a number of studies, such as those by Garba [10], Muraya [11], and Souza and Castro [5], which investigated and analysed the effect of different factors on permanent deformation characteristics of asphalt mixtures using viscoelastic and viscoplastic models, it should be noted that these models are undeniably complex and cannot be readily implemented for routine analysis in industry. Therefore, this research focused on a statistical analysis, the results of which are expected to be readily understood and implemented by practitioners as to the control of the factors influencing permanent deformation in the laboratory and, thereby, in situ.

The modified wheel tracker (MWT)
The Wheel Tracking Test is the preferred test method for laboratory characterisation of rutting in asphalt mixtures because of its simplicity. However, Saleh [2] concluded that the fully confined assembly used in the conventional system of the wheel tracker creates unrealistic boundary conditions around the asphalt specimen, and thereby immobilises the lateral deformation.
The research showed that the fully confined assembly of the device revealed no significant difference in rut depth in the samples, despite the samples being considerably different in both volumetric properties and mix composition. This was also discussed by Shami et al. [12] and Yildirim et al. [13]. Additionally, Saleh [2], Azari [14], and Roy-Chowdhury et al. [15] indicated that the primary phase and only a part of the secondary phase of the permanent deformation curve could be observed in the data produced by the fully confined setup of the wheel tracker. In other words, the fully confined setup of the wheel tracker is less likely to capture the shear deformation in asphalt concrete mixes, thereby making it difficult to evaluate the true rutting behaviour of the mixes. Hence, a modified setup of the wheel tracker was proposed for use and was standardised under ASTM D8292-20 [16]. In the new setup, the lateral sides along the wheel tracking direction are unconfined or can be under full lateral pressure control, while the remaining two sides are fixed. Hence, in this way, both the rut depth or vertical deformation and the horizontal deformation can be recorded with the loading cycles, which can be utilised to fully analyse the permanent deformation behaviour of the asphalt mixes. Additionally, the option of controlled confinement on the lateral sides was incorporated in this new setup, to simulate the confining stresses that flexible pavements experience in the field. Roy-Chowdhury et al. [3,8] experimented with this new setup of the wheel tracker, and successfully concluded that it is capable of capturing the tertiary zone of the permanent deformation curve where shear deformation takes place. Moreover, the authors also concluded that the measurement of the horizontal deformation is crucial to fully analyse the permanent deformation behaviour of the asphalt mixes. Figure 1a shows the newly modified setup of the wheel tracker, while Fig. 1b and c depict the vertical deformation of 15 mm and the development of horizontal deformation of 10 mm in the lateral sides of a sample tested in the modified setup (unconfined) of the wheel tracker.

Ruggedness testing and factorial design approach
Ruggedness testing is a critical part of the development of a test method, relying on robust and effective experimental designs. These designs are very efficient for evaluating the effect of changes in the factors on the chosen responses. For this type of statistical design, it is inherently assumed that each factor has an independent effect on the test results. Therefore, the observed effect resulting from simultaneous variation of several factors is essentially the sum of the individual effects. Since ruggedness testing is concerned with the evaluation of the effect of changes in testing conditions and not necessarily the form of the effect, each testing condition is usually evaluated at only two levels [17].

Problem statement
The accurate and precise measurement of asphalt mixture properties is important both for selecting and designing appropriate mixtures for pavement projects and for Quality Assurance (QA) and Quality Control (QC) purposes. It has been discussed earlier how the MWT could be helpful in true characterisation of the asphalt mixtures by considering the lateral flow or shear deformation, thereby producing the tertiary zone in the permanent deformation curve.
While several studies have been conducted to develop multiple tests for the characterisation of permanent deformation, limited information is obtainable from the existing literature on how the factors, or the interaction of the factors, from these tests affect the permanent deformation behaviour in the field. This study deals with the ruggedness testing of the experimental factors affecting the permanent deformation behaviour of dense-graded asphalt mixtures tested in the MWT. It should be noted that the Simple Performance Test (SPT) candidates utilise a sophisticated test setup and are unlikely to be adopted for quality control or quality assurance purposes by practitioners. In contrast, wheel trackers are a simple and common tool for routine testing of asphalt mixtures. Therefore, the current study utilised the Modified Wheel Tracker (MWT) as the asphalt mixture testing device to statistically investigate the effect of different factors on the permanent deformation characteristics of different asphalt mixtures. Finally, the objective of this research is not only to study the effect of the factors on rutting behaviour, but also to find the ideal setting for these factors in the MWT.

Methodology
The current study focuses on the ruggedness test of the factors that affect permanent deformation, to investigate the significance of each factor and their interactions when studied under the MWT. Moreover, it is also important to rank and distinguish these factors in terms of their significance for the permanent deformation behaviour of the asphalt mixes. This statistical analysis is expected to help in controlling and adjusting the factors in the test method to evaluate the actual field response. The factors and their levels included in this research for each phase of the 2^(k−p) fractional factorial design ruggedness testing are presented in Table 1. Roy-Chowdhury et al. [3,15] reported the precision and repeatability of the MWT, and these were found to be comparable to the Hamburg Wheel Tracking Test (HWTT) repeatability reported by Azari [14] and Cox et al. [18], and within the tolerable limits for the Dynamic Creep or unconfined Flow Number test covered in AASHTO 378 [19]. Also, it is common practice to use two replicates for wheel tracker tests because of the large-sized specimens weighing in excess of 17,000 g, compared to other tests that utilise small-sized specimens. Therefore, two replicates per mix were considered in this study. The responses considered for each phase of experiments were fitted to a regression model and the factors were ranked. The statistical analysis was conducted according to ASTM E1169-20 [20], and the statistical software Minitab 19.2020.1 [21] was used to design the experiments and run the analysis. For the selection of confining pressures, initially, unconfined conditions (0 kPa lateral pressure) and lateral pressures of 0.9 kPa and 1.31 kPa were applied to AC 14 mixes with PG 70-16 binder, two air voids contents (4 and 7%) and one temperature, 60 °C. The vertical and horizontal permanent deformation results for these mixes were obtained and are presented in Fig. 2. As can be observed from the deformation curves, the AC 14 mix with 4% air voids and a lateral pressure of 1.31 kPa behaved almost similarly to the AC 14 mix with 7% air voids and a lateral pressure of 0.9 kPa, while the mixes with zero confinement or lateral pressure behaved quite distinctly from each other and from the ones with confinement.
It can be observed that the absence of confinement in the specimens resulted in higher vertical and horizontal deformations compared to the others. Hence, the confining pressures for the subsequent work of ruggedness testing included 0 kPa (unconfined, the lower end of the said pressures) and 1.31 kPa (the upper end of the said pressures) to statistically investigate the effect of lateral pressure, along with other factors, on the permanent deformation characteristics of a series of asphalt mixtures with different combinations of mix and test conditions. Phase I will focus on the effect of test parameters on FN Index values, while Phase II will focus on different measures of rutting, i.e., permanent deformation at 2000 cycles and creep slope.

Experimental setup and preparation of asphalt mixtures
The study utilised the modified wheel tracker, standardised under ASTM D8292, for the testing of 305 mm × 305 mm × 75 mm compacted HMA slab specimens. A vertical load of 0.7 kN is applied on the specimens with a wheel tracking rate of 26 cycles/min (52 passes per min). The test was conducted in the dry condition, at specified temperatures. The test can be run in both unconfined and confined setups. The test is stopped at 50,000 cycles or when the total cumulative vertical permanent deformation (rut depth) reaches 15 mm, whichever occurs first [22,23]. The gradation curves for AC 14 and AC 20 are shown in Fig. 3. As can be observed, the gradations differ only in the coarse fractions while the middle and fine fractions are very similar to each other. Both AC 14 and AC 20 are used as heavy-duty mixes in New Zealand, which is why both were considered in this study.

Determination of vertical and horizontal indices
As discussed earlier, the immediate result of using the modified setup of the wheel tracker (unconfined) is that, in most cases, the permanent deformation curve tends to show three distinct phases or zones similar to the results from other fundamental tests, such as the SPTs. These curves can be analysed using the Francken model, and as a result, a definite FN (vertical and horizontal) can be obtained, in contrast to only the 'rut depth' produced by conventional wheel trackers. Therefore, in this study, the flow number based on vertical deformation (FN V ) and the flow number based on horizontal deformation (FN H ) for each specimen were determined by fitting the experimental vertical and horizontal deformation data to the Francken model, the details of which can be found in the study by Roy-Chowdhury et al. [3]. Figure 4 illustrates the three zones of a typical permanent deformation curve resulting from the MWT test with zero confinement, and the Francken model fitted to the experimental data. As pointed out by Zhang et al. [24] and Ali et al. [25], although the Flow Number (FN) has been widely used as a rutting parameter, the FN Index is a better indicator of rutting susceptibility than the FN approach. This is because the FN Index considers both the FN and the deformation at the FN. Mathematically, the FN Index can be expressed as the ratio of the accumulated permanent deformation to the FN, presented in Eq. 1: FN Index = δ(FN)/FN, where δ(FN) represents the accumulated permanent deformation (in mm) at the flow number and FN is the flow number. It can be seen that a higher FN produces a lower FN Index and vice versa (Fig. 5a and b). Additionally, prior to the main analysis, the correlation of the vertical and horizontal FN Indices was obtained to investigate if there exists any linearity between the said parameters. As depicted in Fig.
5c, an excellent correlation was found between the two, with an R 2 value of 0.98. Moreover, these parameters showed promising correlation with each other, indicating that they are well connected and can serve as surrogate or alternative parameters for assessing the true permanent deformation characteristics of HMA. The null hypothesis for this study is that the response will not be affected by the change in test factors. The level of significance for this statistical analysis, α, is equal to 0.05. Therefore, the acceptance or rejection of the null hypothesis will depend on the P value, which is calculated and compared with α. The P value indicates the probability of getting a mean difference between the groups as high as what is observed by chance. A lower P value indicates higher significance between the groups. In this study, a P value lower than 0.05 indicates that a factor significantly affects the response. The Analysis of Variance (ANOVA) method is used in order to evaluate the statistical significance of the responses [Log (Vertical FN Index) and Log (Horizontal FN Index)]. The results are presented in Table 3. As shown in Fig. 6, all the main factors (binder type, %VTM, test temperature, and mix NMAS) were found to be significant. Among the two-way interactions or joint effects, "Binder*VTM" and "Binder*Temperature" were found to be significant. The results demonstrate that a change in these factors will significantly affect the permanent deformation behaviour of the asphalt mixtures. The reason why combinations such as "VTM*Test Temperature" and "VTM*Mix NMAS" are absent is essentially that a fractional factorial design, such as the one utilised in this research, is in fact a subset of a full factorial design, which confounds some of the two-way interactions and main effects. Hence, these combinations or interactions cannot be distinguished from the effects of other higher-order interactions [21]. Hence, this research should serve as a screening study to obtain the factors which are most significant, so that a full factorial design and analysis can be conducted in future to obtain a complete set of factors and their combinations affecting the response. As pointed out by Montgomery [26], half-normal probability plots are another approach to find the significant factors. The probability plot shows the effect of the factors against the percent probability of that effect. In this technique, the significance of the factors is proportional to their distance from the probability line (red dotted line in Fig. 6). The factor deviating the most is the one with the highest significance (i.e., temperature, in this phase of the study). Based on the analysis, temperature was found to have the greatest influence on the permanent deformation behaviour. This is primarily attributed to the fact that the increase in test temperature reduces the binder viscosity and softens the binder, decreasing the mix stiffness and leading to higher permanent deformation. The second most influential factor was found to be %VTM, the reason being that an increase in air voids in the asphalt mixture reduces the mix stiffness and weakens the structure, thereby making it more susceptible to permanent deformation. For the effect of the bitumen binder, it is a well-established fact that softer binder generally induces more permanent deformation. The NMAS, amongst the individual factors, was found to have the least significance for both vertical and horizontal indices.
This is primarily attributed to the fact that the mix gradations of AC 14 and AC 20 in this study are quite similar to each other, as discussed earlier. The regression models for vertical and horizontal indices can be constructed from these significant terms (Table 4).

Determination of creep slope
Yildirim et al. [13], and Izzo and Tahmoressi [27], defined the creep slope as the linear region of the curve after post-compaction, which represents the rutting susceptibility due to plastic flow. The MWT used in this study employed the dry test condition, and the resulting creep slope was determined for the mixes. Figure 7a shows the correlations of the vertical and horizontal indices with the vertical and horizontal deformations at 2000 cycles for mixes/runs 1-4. As can be clearly observed, the high correlations validate the use of vertical and horizontal deformation at 2000 cycles for the statistical analysis presented in this section. Moreover, the correlations of vertical and horizontal deformations at 2000 cycles with higher cycle numbers, such as 10,000, 25,000, and 50,000, were investigated for mixes/runs 5-8, and are presented in Fig. 7b and c. The correlations show that long-term rutting development can well be predicted from rutting at 2000 cycles using linear and exponential models. The slightly lower linear and exponential correlations for the relationship between vertical and horizontal deformation at 2000 cycles and that at 50,000 cycles could have been due to the combined effect of stress and temperature at higher loading cycles. The advantage of such models is that they can potentially minimise the time and cost of testing significantly, as also discussed by Javilla et al. [28].

Statistical analysis
As can be seen from Table 5 and Fig. 8, the statistical analysis of this phase of experiments showed that all the main factors except "binder" were found to be significant for the vertical and horizontal deformations at 2000 cycles. This can be further supported by the fact pointed out by Montgomery [26] that a significant interaction can often mask the significance of main effects. It indicates that the ultimate effect of binder type is evaluated as a result of its interaction with VTM and test temperature. Another important observation is that "confining pressure" was found to be significant for all three parameters studied in this phase, with its distinction being the greatest for the creep slope, followed by horizontal deformation and vertical deformation. This indicates that the creep slope, followed by horizontal deformation and vertical deformation, is most sensitive to any change in the confining pressure. The findings clearly demonstrate that the increase of the lateral confining pressure increasingly immobilises the lateral permanent deformation, producing less vertical (shear) and horizontal deformation. Because temperature has a tremendous effect on viscoelastic materials such as asphalt, it is the next factor that imposed a significant effect on the test results: the increase in temperature increased the permanent deformation in the mixes. The results also indicate that VTM is more important than "binder" to be maintained at specified limits with little deviation, because even a little change in the design VTM can result in unwarranted results. The residual values and their trends, which were calculated and examined during the ANOVA, were assessed to validate the accuracy of the models.
Statistical analysis As can be seen from Table 5 and Fig. 8, the statistical analysis of this phase of experiments showed that all the main factors except ''binder'' were significant for the vertical and horizontal deformations at 2000 cycles. This can be explained by the point made by Montgomery [26] that a significant interaction can often mask the significance of main effects; it indicates that the ultimate effect of binder type is expressed through its interaction with VTM and test temperature. Another important observation is that ''confining pressure'' was found to be significant for all three parameters studied in this phase, with its influence being the greatest for the creep slope, followed by the horizontal deformation and the vertical deformation. This indicates that the creep slope is most sensitive to any change in the confining pressure, followed by the horizontal and vertical deformations. The findings clearly demonstrate that increasing the lateral confining pressure increasingly immobilises the lateral permanent deformation, producing less vertical (shear) and horizontal deformation. Because temperature has a tremendous effect on viscoelastic materials such as asphalt, it is the next most influential factor on the test results; the increase in temperature increased the permanent deformation in the mixes. The results also indicate that, compared with ''binder'', it is more important to maintain the VTM within specified limits with little deviation, because even a small change in the design VTM can produce unwarranted results. The residual values and their trends, which were calculated and examined during the ANOVA, were assessed to validate the accuracy of the models. As pointed out by Montgomery [26], an adequate model does not show any obvious pattern in its residuals. Model adequacy can be verified by studying the normal probability plot of the residuals: the residuals should fall along the equality line, indicating that the normal distribution assumption of the errors is satisfied. Moderate departures from normality are, however, commonly observed and are generally accepted.

Effects and interaction contour plots The effects and interaction contour plots, which show the effects of two-factor interactions on the response, are presented in this section. The idea is to predict how much deviation in the factors from their design/target values produces a desired range of the corresponding response, which can help control and adjust the design values in the laboratory when conducting the test procedure. The analysis in this section covers the important findings of the experiments discussed and presented earlier, and the combinations of the most significant factors for the corresponding responses were considered in the effects and interaction contour plots. Factor values taken from the contour plots for the vertical deformation at 2000 cycles (Fig. 10a and b) were substituted into the model, and the output was compared with the initial model output for the responses. Figure 11a and b indicate that, for the vertical deformation at 2000 cycles to remain within ± 25% of the model prediction, the confining pressure and temperature should be controlled within ± 25% and ± 5%, respectively. This indicates that even a minimal change in temperature and confinement could increase the variability among the mix replicates and make the coefficient of variation (CV%) of the target permanent deformation parameters exceedingly high, which is not ideal.

Conclusion A total of 16 combinations were considered for the MWT test and the analysis was broken down into two phases. Permanent deformation parameters, including the vertical and horizontal FN Indices, the vertical and horizontal deformations at a specific cycle number, and the creep slope, were determined. Two levels each of mix gradation (AC 14 and AC 20), %VTM (4.0% and 7.0%), binder type (PG 64-16 and PG 70-16), test temperature, and lateral confinement were included in the two-level half-factorial design. Based on the findings, the following can be concluded: 1. ANOVA was used to determine the factors with a significant effect on the permanent deformation behaviour of the HMA studied in the MWT. For both Log (Vertical FN Index) and Log (Horizontal FN Index), all the main factors (binder type, %VTM, test temperature, and mix NMAS) were found to be significant. As for the joint effects, the two-way interactions ''Binder*VTM'' and ''Binder*Temperature'' were found to be significant. The results indicate that a change in any of these factors would potentially alter the rutting or permanent deformation behaviour of the asphalt mixtures studied under the MWT. 2. The use of the MWT shows that long-term rutting development can be predicted well from rutting at 2000 cycles using linear and exponential models. The advantage of such models is that they can potentially minimise the time and cost of testing significantly. 3.
Initial results show that the AC 14 mix with 4% air voids and a lateral pressure of 1.31 kPa behaved almost identically to the AC 14 mix with 7% air voids and a lateral pressure of 0.9 kPa, while the mixes with zero confinement or lateral pressure behaved quite distinctly from each other and from the confined ones. It was observed that the absence of confinement resulted in higher vertical and horizontal deformations compared to the other specimens. The ruggedness matrix-based experiments with confined and unconfined mixes show that ''confining pressure'' is the most significant factor for the rutting parameters, with its influence being the greatest for the creep slope, followed by the horizontal deformation and the vertical deformation. 4. The contour plots provided various combinations of the most significant factors for the corresponding responses. This can help control, adjust, and optimise the design values in the field. 5. The sensitivity analysis points out that even a minimal change in temperature and confining pressure can increase the variability among the mix replicates. For the vertical deformation at 2000 cycles to remain within ± 25% of the model, the confining pressure and temperature should be controlled within ± 25% and ± 5%, respectively.
6,378.4
2023-01-20T00:00:00.000
[ "Engineering", "Materials Science" ]
A Novel Four-Step Algorithm for Detecting a Single Circle in Complex Images Single-circle detection is vital in industrial automation, intelligent navigation, and structural health monitoring. In these fields, the circle is usually present in images with complex textures, multiple contours, and mass noise. However, commonly used circle-detection methods, including random sample consensus, the randomized Hough transform, and the least squares method, suffer from low detection accuracy, low efficiency, and poor stability in circle detection. To improve the accuracy, efficiency, and stability of circle detection, this paper proposes a single-circle detection algorithm that combines Canny edge detection, a clustering algorithm, and an improved least squares method. To verify the superiority of the algorithm, its performance is compared on self-captured image samples and the GH dataset. The proposed algorithm detects the circle with an average error of two pixels and has higher detection accuracy, efficiency, and stability than random sample consensus and the randomized Hough transform.

Introduction Background Single-circle detection has many application scenarios, namely automated inspection and assembly, the identification of weld joints and weld seams, PCB hole detection, and non-destructive testing [1][2][3][4][5]. For example, only the single circle in the image needs to be detected when welding the inner diameter edges of tube heat exchanger bores using machine vision. In the abovementioned fields, the circles to be detected are usually present in complex images. Complex images are images with multiple contours, intricate textures, mass noise, and various levels of brightness, generally containing a great deal of edge and structure information. Therefore, detecting a circle in a complex image and obtaining its localization and shape parameters is more challenging. Widely used circle parameter detection methods include random sample consensus, the randomized Hough transform, and least squares, which offer good robustness and accuracy [6,7].

Literature Review The Hough transform (HT) has received attention from a wide range of scholars due to its insensitivity to noise and ease of implementation in parallel computing. However, the HT algorithm has a long computation time and requires a large storage space, making circle detection inefficient. To solve this problem, Xu et al.
[8] proposed the randomized Hough transform (RHT). The RHT algorithm maps multiple pixels on the edge to a single point in the parameter space. It determines the circle's parameters by randomly selecting three points in the parameter space, which can significantly reduce the computation time and storage space. Consequently, many scholars have conducted studies based on the RHT algorithm [9,10]. Nonetheless, the algorithm uses three points instead of all points along the circle edge to determine the circle parameters, which may reduce the detection accuracy. To enhance the probability that the three points belong to the same circle, Wang [11] proposed an improved RHT method integrated with a subpixel circle-fitting detection algorithm, in which isolated points are removed after edge extraction. The method effectively eliminates noise and improves circle detection accuracy. To better remove noise and determine the fitted sample points, Jiang [12] proposed an efficient randomized Hough transform circle detection algorithm based on probabilistic sampling and feature points, which optimizes the methods of determining sample points and finding candidate circles. Experimental results show that the algorithm improves the effectiveness of sampling the fitted sample points and prevents fake circles from being regarded as candidate circles. Both the RHT algorithm and the improved algorithms based on RHT obtain the circle parameters via random sampling, which is difficult to apply to circle detection in complex images because the number of noise points in complex images exceeds the number of feature points on the circle edge.

Random sample consensus (RANSAC) was proposed by Fischler and Bolles in 1981 as an iterative, non-deterministic algorithm for estimating parameters from noisy datasets; it has been used in various image processing and computer vision applications [13]. When fitting a circle, a subset of the sample points is randomly selected as the set of fitted sample points, and a circle is fitted. Kiddee et al. [14] used the RANSAC algorithm to determine the location of the edge feature points in circular weld tracking. Although the algorithm can estimate the parameters of the circle edges, it is only suitable for cases where there are few noise points outside of the circle. To solve the problem of excessive noise points outside the circle, Ma et al. [15] proposed a spatial circle center fitting method based on the RANSAC algorithm, which reduces the noise outside the circle and improves the robustness of circle detection. However, circle detection in complex images still cannot obtain a good fitting result.

Least squares fitting of circles (LSC) [16][17][18] fits a circle by minimizing the sum of the squares of the distances between the sample points and the corresponding points on the fitted circle; it has high fitting accuracy and faster detection speed compared with the RHT and RANSAC algorithms. However, the results obtained with the LSC algorithm are easily affected by noise. Therefore, scholars have improved and compared the LSC algorithm. Zhou et al.
[19] proposed the MFLS algorithm, which removes the noise points by establishing a mathematical model in polar coordinates and then uses the LSC algorithm for circle fitting. The method has high positioning accuracy. However, the detection results may be seriously affected when the noise points are not entirely removed. To detect the parameters of a punched circle quickly and accurately, Cao et al. [20] proposed a circle fitting method based on LSC and the mean shift algorithm. This algorithm concentrates the center of the fitted circle around the true circle's center in order to obtain the best actual circle. Experiments show that this algorithm detects circles faster than the RHT algorithm.

In addition to the methods mentioned above, AI-based approaches, e.g., deep learning, have also been used in the literature to detect circle contours. AI-based circular contour detection methods usually have high accuracy and robustness. However, their performance depends on the algorithms used and the quality of the training data. The recognition results are usually good if the algorithms and models are adequately trained and have good generalization capabilities, but false or missed detections may occur in complex or noisy image scenes.

In summary, the increased complexity of the images in which the target circles are located leads to some limitations of the detection algorithms. The RHT and RANSAC algorithms are designed around random sampling to obtain the fitted circle. They fit the circle by selecting some of the sample points (pixels at the edges) instead of the sample points of the whole circle, which may lead to the selection of sample points that are not representative enough, especially when the sample points contain noise or outliers. The LSC algorithm has high accuracy but is extremely sensitive to noise. In addition, excessive sample points increase the complexity of the nonlinear least-squares optimization. Thus, a new single-circle detection algorithm is desired for application in complex images.

Organization The rest of the paper is organized as follows: Section 2 states the single-circle detection problem, Section 3 describes the proposed single-circle detection principle, Section 4 presents comparative experiments and results analysis, and Section 5 concludes the paper and discusses future work.

Problem Statement Although single-circle detection against a simple background is a typical computer vision problem that has been well solved in the literature, single-circle detection in a complex environment requires a more efficient and accurate method. The expected detection method addresses the following four issues: • The removal of mass noise in the image edge preprocessing stage. Interfering points are an adverse factor affecting the accuracy and efficiency of single-circle detection. Mass noise increases the difficulty of de-noising and main contour detection; therefore, the noise needs to be removed accurately when detecting a single circle against a complex background.
• The selection of sample points for fitting circles. After image edge processing, interfering points affect the fitting results. These interfering points are scattered in low-density regions. In contrast, the sample points of the main contour are connected in an arc and are more tightly connected in a high-density area. Considering the characteristics of the interfering points and sample points, establishing a sample point selection method for fitting candidate circles is another challenge. • The iteration of candidate circles and determination of ideal circles. Overfitting and underfitting must be prevented via suitable methods during the exact fitting of circles. We need to find an effective and fast iterative solution for the candidate circle, which in turn ensures the quality of the ideal circle. • The improvement of output circle detection accuracy. Despite reducing the frequency of overfitting and underfitting, there may still be an error between the ideal circle and the real-world circle due to the influence of various interfering points. To improve the accuracy of output circle detection, the effect of interfering points on the output circle parameters needs to be further reduced.

Definitions of terms used in the methods are given below. Definition 1. Candidate circle denotes a circle constructed by fitting during the least squares circle-fitting iteration process. It cannot be directly output as the final result and needs further analysis and screening. Definition 2. Ideal circle denotes the last circle fitted by the least squares method in Section 3.3. Definition 3. Output circle denotes the final circle output, which is expected to have high accuracy and stability. Definition 4. Edge detection denotes a method to extract the edges of an image with a large gradient, which include the circle edges to be detected and the interference points. Definition 5. Main contour denotes the target edge to be detected in the image. Definition 6. Sample points denote pixels in an image. In this paper, the corresponding pixels are placed into a coordinate system to explain the principle of each algorithm; the pixels are therefore called sample points. Definition 7. Data points in the K-means algorithm represent the coordinate values of the center and radius of all candidate circles.

Methods The algorithm is proposed for single-circle detection and combines the DBSCAN clustering algorithm, the least squares method (LS), and the K-means clustering algorithm. For brevity, the proposed algorithm is named the DBLSKCF algorithm. The four steps in the DBLSKCF algorithm respond to the abovementioned four challenges. The complete single-circle detection process is schematically shown in Figure 1. • Step 1. Image edge preprocessing. • Step 2.
Selection of sample points for fitting curves. • Step 3. Iteration of candidate circles and determination of the ideal circles. • Step 4. Accuracy improvement of the output circle detection.

Image Edge Preprocessing Image edge preprocessing is the foundation for extracting target edges and aims to highlight real and valuable information. However, the images often contain noise due to the impact of uncertainties such as acquisition equipment and lighting conditions. Therefore, image edge preprocessing, which consists of two key steps, Canny edge detection and main contour screening, can significantly improve single-circle detection performance.

Canny Edge Detection Edge detection is used in many object edge detection applications to observe image features based on significant changes in the gray level. In addition, it can reduce the amount of data in an image while preserving its structural properties [21]. Therefore, the classical Canny edge detection algorithm is used for the extraction of edge features in images [22]. The edge detection accuracy depends on the thresholds, and a series of pre-experiments are conducted to determine appropriate values. By analyzing the pre-experimental results, applicable high and low thresholds are selected to extract information about the target edges. Differentiation is the basis of gradient computation, which is very sensitive to abrupt changes in the image (generally noise). To improve the accuracy of the detection results, the image needs to be filtered before edge detection to remove interfering points and reduce pseudo edges. Gaussian filtering is effective in smoothing the image and reducing distracting points.
The Gaussian kernel size and the standard deviation affect the filtering effect. The standard deviation in this algorithm uses the default parameter. To determine the optimal Gaussian kernel size, this section filters three complex images with Gaussian kernel sizes of 5 × 5, 7 × 7, 9 × 9, and 11 × 11, respectively. The filtered images are then subjected to edge detection using the Canny edge detection algorithm. As shown in Figure 2, with Gaussian kernel sizes of 5 × 5 and 7 × 7, the corresponding edge detection results contain many interfering points and pseudo edges around the detected edges, which may reduce the single-circle detection accuracy. The filtering effect is better when the Gaussian kernel size is 9 × 9 or 11 × 11. Nevertheless, an excessive Gaussian kernel size may cause some target edges to be filtered out, which leads to significant deviations in the circle detection results. Thus, the Gaussian kernel size chosen in the DBLSKCF algorithm is 9 × 9.
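A minimal sketch of this preprocessing step with OpenCV is given below; the file name and the Canny thresholds are illustrative assumptions, since the paper selects its thresholds from pre-experiments.

```python
# Hedged sketch of the edge-preprocessing step: 9 x 9 Gaussian smoothing followed
# by Canny edge detection. The input file name and the Canny thresholds are
# illustrative only; the paper picks its thresholds from pre-experiments.
import cv2

img = cv2.imread("complex_scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
blurred = cv2.GaussianBlur(img, (9, 9), 0)   # sigma derived from the 9 x 9 kernel
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("edges.png", edges)
```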
Main Contour Screening Canny edge detection results show that the edges of the main contour are more tightly connected, while the interfering points are mostly irregularly distributed, making it difficult for them to form a complete edge. Even if these interfering points are connected to form an edge, the length of that edge will be much smaller than the length of the main contour. Based on this feature, the edge lengths are utilized to achieve the main contour screening. Precisely, we calculate the length of each edge and arrange these lengths in descending order to obtain a sequence of edge lengths. We select a few longer edges to narrow the main contour range and determine the number of retained edges by setting a threshold. In the corresponding formulas, C_m = {e_1, e_2, ⋯, e_m} denotes the set of all edges before sorting, C_s^0 = {e_1^0, e_2^0, ⋯, e_s^0} denotes the set of the first s long edges after sorting, and C_p^s = {p_1, p_2, ⋯, p_n} denotes the set of pixels of the edges in C_s^0; C_p^s is the pixel set output by the image edge preprocessing step.

The threshold directly affects the accuracy and efficiency of single-circle detection. In the Canny edge detection algorithm, some irrelevant small edges may be detected due to noise, influencing the circle detection process. The DBLSKCF algorithm keeps a few edges with longer lengths to improve the accuracy and efficiency of circle detection. The length of an edge can be used to assess its continuity; usually, longer edges are more representative of a part of the real-world circle. To determine the optimal number of retained edges, 4-8 long edges are kept for each of the images in Figure 2d, and the results are shown in Figure 3. To better express the significance of retaining different numbers of edges, the denoising ratio is introduced in this paper. As shown in Formula (4), the denoising rate indicates the ratio of the number of removed interfering points to the number of detected edge pixels, revealing the denoising ability for the image: β = (N_m − N_p^s)/N_m (4), where N_m is the number of pixels after edge detection, N_p^s is the number of pixels in the set C_p^s, and β is the denoising rate. The algorithm's performance under different interfering-point levels can be evaluated by comparison experiments that retain different numbers of edges. Main contour screening aims to select the fitted sample points better. If the denoising rate is excessively low, many interfering points remain in the fitted samples, which may reduce the accuracy and efficiency of the fitting. Conversely, although raising the denoising rate reduces the number of interfering points, it may result in a lack of representative contour points in the fitted sample, adversely affecting the accuracy of the detection results. The DBLSKCF algorithm weighs the final detection results while increasing denoising to ensure detection efficiency, and it retains six edges to obtain accurate fitting results. This strategy enables the DBLSKCF algorithm to obtain better results in real-world circle detection.
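The screening step could be sketched as follows; using cv2.findContours and counting contour pixels as the edge length are assumptions of this sketch, not necessarily the paper's exact implementation.

```python
# Hedged sketch of main-contour screening: rank detected edges by length, keep
# the s longest (s = 6 in the paper), and report the denoising rate beta.
import cv2
import numpy as np

def screen_main_contours(edges: np.ndarray, s: int = 6):
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=int), 0.0
    longest = sorted(contours, key=len, reverse=True)[:s]        # first s long edges
    kept = np.unique(np.vstack([c.reshape(-1, 2) for c in longest]), axis=0)
    n_m = int(np.count_nonzero(edges))                           # pixels after Canny
    beta = (n_m - len(kept)) / n_m                               # denoising rate
    return kept, beta                                            # kept ~ C_p^s
```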
Image edge preprocessing improves the accuracy and efficiency of edge detection and provides reliable input data for the subsequent circle detection stage. The steps involved in image edge preprocessing are given in Algorithm 1.

Selection of Fitting Sample Points The DBSCAN clustering algorithm separates the main contour sample points from the interfering points. The algorithm clusters edges of arbitrary shapes and splits complex and irregularly shaped edges well using two parameters: the neighborhood radius and the minimum number of sample points within the circle determined by this neighborhood radius. The DBSCAN algorithm clusters sample points into different classes based on their neighborhood density. The clustering principle is shown in Figure 4. Formula (5) is used to classify the sample points. If the number of sample points in the neighborhood of sample point A is greater than or equal to the minimum number of sample points, point A is classified as a core point. If the number of sample points in B's neighborhood is less than the minimum number of sample points, point B is classified as a boundary point. If the number of sample points in N's neighborhood is 0, point N is classified as an outlier point. Here (x_c, y_c) represent the coordinates of a sample point, C_A denotes the set formed by the core point A, C_B represents the set created by the boundary point B, C_N indicates the set created by the outlier point N, and n_ε indicates the number of sample points in a circle centered at (x_c, y_c) with a radius of ε. Please note that after the screening of the fitted samples, the output is the set C_A.
If a sample point is marked as a core point, the above clustering process is repeated for the sample points in its neighborhood until all sample points are marked. Two results can occur after image preprocessing: the first is that the longest edge in the image includes only the main contour, as shown in Figure 5a; the other is that it contains both the main contour and outer interfering points, as shown in Figure 5b. If clustering results with fewer sample points are selected for fitting, they may suffer from image interfering points or broken edges. Therefore, the algorithm proposed in this paper retains the class with the most sample points. However, interfering points that are close to the main contour may be incorrectly clustered into the fitted sample points due to unreasonable values of ε and n_minpts. The algorithm specifically related to the selection of the fitted sample points is shown in Algorithm 2.
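A minimal sketch of this selection step using scikit-learn's DBSCAN is given below; the eps and min_samples values are illustrative, not the paper's settings.

```python
# Hedged sketch of the fitting-sample selection: cluster the retained edge pixels
# with DBSCAN and keep the class with the most points (taken as the main contour).
import numpy as np
from sklearn.cluster import DBSCAN

def select_fitting_points(pixels: np.ndarray, eps: float = 3.0, min_samples: int = 5):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pixels)
    clustered = labels[labels >= 0]                 # label -1 marks outlier points
    if clustered.size == 0:
        return pixels                               # nothing clustered; fall back
    main_label = np.bincount(clustered).argmax()    # class with the most points
    return pixels[labels == main_label]
```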
Candidate Circle Iteration and Ideal Circle Determination A set of sample points containing the main contour is obtained in Section 3.2, and the sample points show certain distributional features. (1) Sample points on the main contour are connected into superior or inferior arcs. An arc with a central angle of less than or equal to 180° is called an inferior arc, as shown in Figure 6a; an arc with a central angle larger than 180° is called a superior arc, as shown in Figure 6b. (2) Besides the sample points of the main contour, a few interfering points are distributed on the outer side of the main contour. Fitting candidate and ideal circles in such cases is studied in this section. Based on the comparison of the RHT, RANSAC, and LSC algorithms in the literature review, the LSC algorithm produces the best circle-fitting results. To improve the accuracy and efficiency of single-circle detection, the algorithm uses the residual sum of squares to fit the circles.
As can be seen from Figure 7a, when only main contour sample points exist in the fitting sample, the fitting results are good. However, the traditional least squares method is more sensitive to interfering points, leading to errors in the circle-fitting results, as shown in Figure 7b. To obtain the desired circle-fitting results, this section proposes a method to remove the fitting failure points one by one based on the least squares method. This method introduces two parameters, the maximum number of iterations allowed and the critical residual sum of squares, and is based on the following principle (depicted in Figure 8). As shown in Figure 8 (taking three iterations as an example), the upper-left interfering points are biased relative to the other interfering points. The first candidate circle is therefore biased to the top left, sensitive to the interfering points, and has a large residual sum of squares. Interfering points outside the candidate circle are removed by comparing the distance from each sample point to the circle's center with the radius: the sample point is kept if the distance is less than the radius; otherwise, it is deleted, as depicted in Figure 8a. Compared with Figure 8a, the distribution of sample points in Figure 8b is relatively uniform. As shown in Figure 8c, by removing the interfering points outside the candidate circle and fitting a third candidate circle, the fitted candidate circle gradually converges to the real-world circle. The specific iterative process is as follows:
For each iteration k, k represents the current number of iterations and K represents the maximum number of iterations allowed; Q_k indicates the residual sum of squares of the kth iteration; a*_k, b*_k, r*_k indicate the center coordinates and radius of the kth iteration, respectively; and n_ip denotes the number of sample points used for the iteration. The DBLSKCF algorithm obtains the optimal single-circle parameters by minimizing the residual sum of squares. Formula (6) sets the range of values for the number of iterations, k ∈ [1, K]. The residual sum of squares for each fit is calculated by Formula (7), Q_k = Σ_{i=1}^{n_ip} (r_ki − r*_k)², and is compared with the critical residual sum of squares Q*. Here (x_i, y_i) represent the coordinates of the sample points, and r_ki denotes the distance from the ith sample point to the center of the circle at the kth iteration. Please note that after the candidate-circle iteration and ideal-circle determination, the center coordinates and radius of all candidate circles are obtained.
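The candidate-circle iteration could be sketched as follows; the algebraic (Kasa-style) least-squares fit and the exact form of the residual are stand-ins for the paper's formulation, and the default K and Q* values anticipate the choices made later in this section.

```python
# Hedged sketch of the candidate-circle iteration: fit a circle by least squares,
# drop sample points outside the current candidate circle, and refit until the
# residual falls below Q* or K iterations are reached.
import numpy as np

def fit_circle_ls(pts: np.ndarray):
    # Solve x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) in the LS sense.
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def iterate_candidate_circles(pts: np.ndarray, K: int = 6, q_star: float = 0.003):
    candidates = []
    for _ in range(K):
        cx, cy, r = fit_circle_ls(pts)
        candidates.append((cx, cy, r))
        d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)  # r_ki for every sample point
        q = np.mean((d - r) ** 2)                     # residual measure (assumed form)
        if q < q_star:
            break                                     # ideal circle reached
        pts = pts[d < r]                              # keep points inside the circle
        if len(pts) < 3:
            break
    return candidates                                 # all candidate-circle parameters
```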
If the number of iterations is less than the maximum number allowed, or the residual sum of squares is greater than the critical residual sum of squares, the distance from each sample point to the center of the fitted circle is calculated using Formula (9). If the distance is less than the radius of the fitted candidate circle, the sample point is retained for the next fitting of the candidate circle; otherwise, the sample point outside the fitted candidate circle is deleted. If the conditions on K and Q* are no longer satisfied, the iteration is stopped, and the center coordinates and radius of the candidate circle are output. The corresponding algorithm for obtaining candidate and ideal circles is shown in Algorithm 3.

K and Q* are introduced to reduce the frequency with which underfitting or overfitting occurs. However, the numerical settings of the two parameters may themselves lead to underfitting or overfitting. Under the joint action of the two parameters, the ideal circle fitting results are shown in Table 1. From Table 1, it can be seen that different values affect the ideal circle to various degrees; therefore, the optimal values of the parameters need to be determined. In this paper, twenty-six complex images of different types are randomly selected, including dials, wheels, traffic signs, etc., and the plot of the residual sum of squares versus the number of iterations determines the optimal combination. Theoretically, the result of circle fitting is best when Q_k is close to 0. To prevent overfitting in these pre-experiments, the value of Q* is set to 0.0005 and the value of K to 100. According to Figure 9f, the residual sum of squares in different types of complex images follows an identical trend. Due to the irregular distribution of the interfering points, the residual sum of squares for the first iteration is large, and the second iteration shows a significant decrease, with large changes in the center coordinates and radius of the circle. After four iterations, the residual sum of squares decreases to a relatively stable value. As shown in Figure 9g, only a few images reach a seventh iteration, and the slopes on both sides change little before and after the sixth iteration. As the number of iterations increases further, the residual sum of squares ceases to change or declines only within a small range. Therefore, we set the maximum number of iterations allowed in the DBLSKCF algorithm to six. The minimum value of the residual sum of squares at the sixth iteration is 0.003, so it is most reasonable to set Q* to 0.003, allowing more images to be detected with only six iterations in single-circle detection. Together, K and Q* balance ideal-circle detection accuracy and efficiency.
Improvement of Output Circle Detection Accuracy Section 3.3 determines the ideal circle by Q* and K, reducing the frequency of occurrence of overfitting and underfitting. Error still exists between the ideal circle and the real-world circle due to various interfering points. To improve the accuracy of output circle detection, this section adopts the K-means clustering algorithm from machine learning to cluster the center coordinates and radii of all candidate circles, achieving error compensation for the output circle parameters. The clustering process of the K-means clustering algorithm is illustrated in Figure 10. According to Figure 10, the data points are clustered into two clusters, and the clustering center is updated by calculating the distance from the data points to the clustering center. The distance is calculated using Formula (10), and the data points are assigned to the cluster with the closest distance, where S_j denotes the jth cluster and B_j denotes its clustering center; N_j indicates the number of data points contained in the jth cluster; n_k represents the number of clusters; X_i denotes a data point in S_j; and D_j denotes the distance from the data points to the corresponding clustering center. From the clustering principle, it is necessary to minimize the distance from the data points in each cluster to the corresponding cluster center.
B_j is determined by taking the partial derivative of D_j with respect to B_j, as shown in Formulas (11) and (12). The final clustering result is obtained by a continuous iteration of Formulas (11) and (12). From Section 3.3, K is 6 and Q* is 0.003; however, the final number of iterations may be less than 6. Therefore, the algorithm is discussed by categorising the clustering results according to the number of data points they contain: (1) Different numbers of data points in the two clustering results. When the numbers of data points in the two clustering results differ, the proposed algorithm chooses the cluster with more data points as the target cluster. The reason is as follows: the K-means clustering algorithm clusters data points based on their distance from the clustering center. The fitting results of the first few iterations vary widely, and the clustering algorithm groups these data points into one cluster. In the later iterations, the fitting results change stably, so the clustering algorithm groups them into the other cluster, whose center lies close to these data points. Therefore, the target cluster can be obtained by filtering on the number of data points. The clustering result with more data points is obtained through Formula (13), and the mean of the circle parameters of that cluster, calculated using Formula (14), is the error-compensated result for the output circle. Here C_1 and C_2 indicate the sets of data points in the two clustering results, respectively; f(·) is a function retaining the set with the most elements; C denotes the set of the clustering result with more data points; n_c represents the number of data points in set C; and (x, y) and r are the center coordinates and radius of the output circle, respectively.
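A minimal sketch of this error-compensation step for the unequal-cluster case, using scikit-learn's KMeans, is given below; the radius-based tie-break for the equal-cluster case is not shown.

```python
# Hedged sketch of the error-compensation step for the case where the two clusters
# contain different numbers of candidate circles: cluster the (x, y, r) parameters
# of all candidate circles with K-means (k = 2) and average the larger cluster.
import numpy as np
from sklearn.cluster import KMeans

def compensate_output_circle(candidates):
    data = np.asarray(candidates, dtype=float)     # rows of (center_x, center_y, r)
    if len(data) < 2:
        return tuple(data[0])                      # only one candidate circle
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    target = np.bincount(labels, minlength=2).argmax()   # cluster with more data points
    return tuple(data[labels == target].mean(axis=0))    # output (x, y, r)
```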
(2) Same number of data points in the two clustering results. In this case, the algorithm selects the clustering result based on the mean of the candidate circle radii. The reason is as follows: the fitting results of the first few iterations vary widely and are grouped into one cluster, while the fitting results of the later iterations change stably, and the mean radius of all candidate circles lies between the radii corresponding to the centers of these two clusters. Formulas (15) and (16) calculate the mean radius of the candidate circles in each of the two clustering results, and Formula (17) calculates the mean radius of all candidate circles. The target cluster is the cluster whose mean radius is smaller than the overall mean, as expressed by Formulas (18) and (19), where r_1 denotes the mean radius of the candidate circles in set C_1, r_2 denotes the mean radius of the candidate circles in set C_2, and r_mean denotes the mean radius of all candidate circles. R represents the minimum of r_1 and r_2; n_1 and n_2 denote the numbers of data points in the two clustering results, respectively; and n_R denotes the number of data points in the target cluster. Please note that the center coordinates and radius of the output circle are obtained after this improvement of output circle detection accuracy. The corresponding algorithm to obtain a high-precision output circle is given in Algorithm 4. In this paper, the algorithm that does not use K-means clustering is called the DBLSCF algorithm; the improvement in output circle accuracy will be verified via specific experiments in Section 4.
Algorithm 4. Output: Center coordinates (x, y) and radius r of the output circle.
1: Initialize n_k = 2.
2: According to the method in Section 3.4, the candidate circle parameters are clustered into C_1 and C_2.
3: if num(C_1) is not equal to num(C_2) then
4: Calculate C with (13).
5: Calculate (x, y) and r with (14).
6: else
7: Calculate r_1, r_2 and r_mean with (15), (16) and (17), respectively.
8: if r_1 < r_mean then
9: Take (x, y) and r as the mean of the parameters in C_1.
10: else
11: Take (x, y) and r as the mean of the parameters in C_2.
12: end if
13: end if

Experiments and Results This section compares the DBLSKCF algorithm with the RANSAC, RHT, and DBLSCF algorithms regarding detection accuracy, efficiency, and stability. Two groups of experiments are conducted. The first, in Section 4.1, is conducted under laboratory conditions using images captured at various lighting intensities and is designed to evaluate the stability of the four algorithms under different lighting conditions. Section 4.2 aims to verify the accuracy and efficiency of the DBLSKCF algorithm on the GH dataset [23]. The comparative experimental setup is as follows: (1) All experiments are carried out on the same computer; the computer parameters are shown in Table 2. (2) To comparatively validate the detection speed, the four algorithms are terminated as soon as a circle is detected in the image.
Comparison of Stability of Circle Detection We use a mean light intensity of 650 lx and a standard deviation of 50 lx to simulate variations in light intensity and randomly select twenty-four datasets. The stability of the four algorithms in practical applications is evaluated by comparing the detection results under various light intensities. The edges detected in the experiment are the inner diameter edges of a steel pipe, accompanied by rust, scratches, and strong reflectivity on the end face. The experimental platform consists of an industrial camera, a white ring light source, and a tube sheet, and is shown in Figure 11. In the experiment, the changes in the center coordinates and the radius are used as stability measures. Under various light intensities, if the center coordinates and radius change only within a small range, the algorithm has high stability and can effectively resist external interference; otherwise, the algorithm is less stable and less resistant to external interference. From Figure 12, the circle detection results of the RHT, RANSAC, and DBLSCF algorithms under different light intensities are highly differentiated, while the detection results of the DBLSKCF algorithm remain almost the same. All results are shown in Figure 13. With the increase in light intensity, the circle detection results of the RHT algorithm change unstably; in contrast, the circle detection results of the RANSAC algorithm tend to become stable, and the circle detection results of the DBLSKCF and DBLSCF algorithms vary within a small range. We use the standard deviation from Table 3 to better measure the algorithms' stability: a large standard deviation indicates greater variability in the circle detection results, greater dispersion, and lower stability. In terms of both the x and y coordinates and the radius, the detection results of the RHT algorithm have the largest standard deviation and the worst stability. The detection results of the RANSAC algorithm show substantial variations in the relatively weak phase of light intensity and gradually stabilize as the light intensity increases. The detection results of the DBLSCF and DBLSKCF algorithms change almost synchronously, but by calculating the standard deviation it is found that the DBLSKCF algorithm has better stability. Therefore, the DBLSKCF algorithm has better stability and resistance to external interference.
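The stability measure could be computed as in the sketch below; the detection results are placeholder values, not the experimental data.

```python
# Hedged sketch of the stability measure: the standard deviation of the detected
# center coordinates and radius over repeated detections at different light
# intensities. The detection results below are placeholders, not measured data.
import numpy as np

detections = np.array([          # one row of (x, y, r) per light-intensity run
    [412.1, 305.8, 96.2],
    [412.4, 305.5, 96.0],
    [411.9, 306.1, 96.4],
])
std_x, std_y, std_r = detections.std(axis=0, ddof=1)
print(f"std x = {std_x:.2f} px, std y = {std_y:.2f} px, std r = {std_r:.2f} px")
```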
Validation of Algorithm Detection Accuracy and Efficiency To verify the accuracy and efficiency of the algorithm in this paper, we validate it with the GH dataset. The images in the GH dataset cover a variety of scenes and backgrounds, including indoor and outdoor environments, different lighting conditions, and different levels of object and background clutter. The GH dataset is therefore well suited for testing and training the robustness and generalization of circle detection algorithms. Forty-eight images with a single circle are selected from the GH dataset, each labeled with the corresponding circle parameters. As seen from Figure 14, the single-circle detection results of the RANSAC, RHT, and DBLSCF algorithms show varying degrees of fitting error, and only the results of the DBLSKCF algorithm are closest to the real-world circle parameters. The four algorithms' circle detection results and running times are specified below.
After analyzing Figure 15 and Table 4, the single-circle detection results of the RHT algorithm have high error and low efficiency. Please find Tables A1-A4 in Appendix A for the data results of the experiments. The RANSAC algorithm performs better in efficiency but has a higher error than the DBLSCF and DBLSKCF algorithms because, in the RHT and RANSAC algorithms, the sample points for fitting the circle are chosen randomly, so the selected sample points are not necessarily points on the main contour. Since the DBLSCF algorithm outputs an ideal circle as the output circle, the ideal circle may be affected by interfering points, resulting in a significant deviation of the circle parameters. Thus, the circle detection accuracy of the DBLSKCF algorithm is 3-5 times higher than that of the DBLSCF algorithm. The experimental results also justify using the K-means clustering algorithm to improve the accuracy of circle detection. Considering both circle detection accuracy and efficiency, the DBLSKCF algorithm is significantly better than the other compared algorithms.
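The accuracy and efficiency comparison above reduces to per-image x/y/radius errors against the labeled circles and per-image running times, averaged over the 48 GH images. The sketch below shows one way that evaluation loop could be written; detect and the data layout are assumptions rather than the authors' code, and a None return stands in for a detection failure (the breakpoints noted for RHT).

import time
import numpy as np

def evaluate(detect, images, labels):
    """Mean absolute x/y/radius error (pixels) and mean running time (s)."""
    errs, times = [], []
    for img, (x_gt, y_gt, r_gt) in zip(images, labels):
        t0 = time.perf_counter()
        result = detect(img)            # expected to return (x, y, r), or None on failure
        times.append(time.perf_counter() - t0)
        if result is None:
            continue
        x, y, r = result
        errs.append((abs(x - x_gt), abs(y - y_gt), abs(r - r_gt)))
    errs = np.array(errs, dtype=float)
    return errs.mean(axis=0), float(np.mean(times))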
Conclusions
This paper proposes a single-circle detection algorithm, i.e., the DBLSKCF algorithm, that combines Canny edge detection, two clustering algorithms, and an improved least squares method. The proposed algorithm has proven to be an excellent solution to single-circle detection in complex images. Compared with RHT, RANSAC, and DBLSCF, DBLSKCF demonstrates clear advantages in detection accuracy and stability. The highlights (and also the core steps) of the detection method are summarized below:
1. Image edge preprocessing removes as many interfering points as possible while retaining the main contour edge information.
2. The DBSCAN algorithm is utilized to cluster the main contours and interfering points into different clusters, from which the cluster with more sample points is extracted as the fitting samples of the candidate circles.
3. An improved least squares fitting of the circle with the residual sum of squares is proposed. Removing the fitting failure points one by one makes the circle fitting result gradually closer to the real-world circle (a code sketch of this step is given after Algorithm 1 below).
4. The K-means clustering algorithm is implemented to cluster the center coordinates and radii of all candidate circles to improve the accuracy of output circle detection.
Performance of the DBLSKCF algorithm:
(1) Stability: the standard deviations of the X-coordinate, Y-coordinate, and radius detection results are 2.7 pixels, 2.3 pixels, and 3.27 pixels, respectively.
(2) Detection accuracy: the average errors of the X-coordinate, Y-coordinate, and radius detection are 1.8 pixels, 1.4 pixels, and 1.9 pixels, respectively.
(3) Running time: the average running time is 0.1 s.
By comparing the detection performance with other algorithms, the proposed DBLSKCF algorithm outperforms in detection accuracy and stability. Future work will be carried out in two main directions: (1) adaptively determining the neighborhood radius and the minimum number of sample points within the neighborhood radius in the DBSCAN clustering algorithm; (2) improving the proposed algorithm to enable multi-circle detection.

The image is subjected to edge detection using the Canny edge detection algorithm. The results are as follows.
Algorithm 1: Image Edge Preprocessing
Input: the image with a circle outline, the Gaussian kernel size k_s, the number of retained edges s, and the threshold values th_1 and th_2 of the Canny edge detection algorithm.
Output: edge pixels under retention.
1: Initialize k_s = 9, s = 6, th_1 = 200, th_2 = 255.
2: Calculate C_m by the Canny edge detection algorithm and Formula (1).
3: Calculate C_0^s with (2).
4: Calculate C_p^s with (3).
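The sketch below illustrates step 3 above: an algebraic least-squares circle fit that repeatedly discards the worst-fitting point until the residual sum of squares falls below a critical value Q* or the iteration limit K is reached (cf. Algorithm 3 and Figures 8 and 9). It is an illustrative reading under stated assumptions, not the authors' implementation; in particular the normalization used for Q is a guess.

import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit: solves x^2 + y^2 + D*x + E*y + F = 0."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

def fit_with_removal(points, Q_star=0.003, K=6):
    """Refit after removing the worst-residual point, up to K iterations or
    until the (normalized) residual sum of squares drops below Q_star."""
    pts = np.asarray(points, dtype=float)
    for _ in range(K):
        cx, cy, r = fit_circle(pts)
        res = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
        Q = float(np.sum(res**2)) / (len(pts) * r**2)   # normalization is an assumption
        if Q < Q_star or len(pts) <= 3:
            break
        pts = np.delete(pts, np.argmax(res), axis=0)     # drop the worst-fitting point
    return cx, cy, r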
Formula (5) is used to classify the sample points. If the number of sample points in the neighborhood of sample point A is greater than or equal to the minimum number of sample points, sample point A is classified as a core point. If the number of sample points in B's neighborhood is less than the minimum number of sample points (but greater than one), point B is classified as a boundary point. If the number of sample points in N's neighborhood is 0 (i.e. the circle contains only the point itself), point N is classified as an outlier point. A reconstruction of the piecewise classification consistent with this description is

(x, y) ∈ C_A  if n ≥ MinPts,
(x, y) ∈ C_B  if 1 < n < MinPts,
(x, y) ∈ C_N  if n = 1,

where (x, y) represents the coordinates of the sample point, C_A denotes the set created by the core point A, C_B represents the set created by the boundary point B, and C_N indicates the set created by the outlier point N; n indicates the number of sample points in the circle centered at (x, y) with a radius equal to the neighborhood radius. Please note that after the screening of the fitted samples, the output is the set C_A.

Figure 5. DBSCAN clustering results: (a) main contour; (b) main contour and outer interfering points. Note: different-colored sample points in the figure represent different clustering results. (a) Main contour sample points; (b) main contour and interfering sample points.

Figure 8. Principle of the method for removing the fitted failure points one by one based on the least squares method: (a) fitting of the first candidate circle; (b) fitting of the second candidate circle; (c) fitting of the third candidate circle.

Algorithm 3: Fit Candidate Circles and Determine Ideal Circles
Input: the edge set C_A from Algorithm 2, the maximum number of iterations allowed K, the critical residual sum of squares Q*, the iteration number k.
Output: center (a*_k, b*_k) and radius r*_k of the candidate circle.
1: Initialize K = 6, Q* = 0.003, k = 1.
2: Calculate Q_k, (a*_k, b*_k) and r*_k.

Figure 9. Determination of the optimal critical residual sum of squares and the maximum number of iterations allowed; (a-e) denote the detection results of circles in different complex scenarios; (f) shows the plot of the residual sum of squares versus the number of iterations; (g) represents the localized zoomed-in view after four iterations in (f).

Figure 10. Schematic diagram of the clustering process of the K-means algorithm: (a) original data points; (b) start of clustering; (c) clustering result.

Algorithm 4: Improve the Output Circle's Detection Accuracy
Input: the K-means clustering number n_k, center (a*_k, b*_k) and radius r*_k of the candidate circle from Algorithm 3.
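Algorithm 4 improves the output circle by clustering the candidate-circle parameters with K-means. A hedged sketch of that step is given below; treating the mean of the most populated cluster of (a*, b*, r*) triples as the final circle, and the use of scikit-learn's KMeans, are assumptions rather than the paper's exact procedure.

import numpy as np
from sklearn.cluster import KMeans

def refine_output_circle(candidates, n_k=2):
    """candidates: array of shape (m, 3) with (a*, b*, r*) of the candidate circles.
    Cluster them and return the mean parameters of the most populated cluster."""
    candidates = np.asarray(candidates, dtype=float)
    if len(candidates) <= n_k:           # too few candidates to cluster meaningfully
        return candidates.mean(axis=0)
    labels = KMeans(n_clusters=n_k, n_init=10, random_state=0).fit_predict(candidates)
    best = np.bincount(labels).argmax()  # keep the dominant cluster of candidates
    return candidates[labels == best].mean(axis=0)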
Figure 12. Circle detection results under various light intensities. From top to bottom, the light intensity is 598 lx, 645 lx, and 686 lx, respectively. (a) Original image; (b-e) denote the single-circle detection results of the RANSAC, RHT, DBLSCF, and DBLSKCF algorithms, respectively.

Figure 13. Detection results of four algorithms under various light intensities. (a) X-coordinate change. (b) Y-coordinate change. (c) Radius change.

Figure 15. Detection results of different images for four algorithms. (a) X-coordinate error. (b) Y-coordinate error. (c) Radius error. (d) Running time. Note: where the RHT algorithm has circle detection failures for a few images in the dataset, these are represented by breakpoints.

Table 1. Effect of K and Q* on the ideal circle fitting result.
Table 2. Specific parameters of the operating computer.
Table 3. Standard deviation of circle detection results for four algorithms.
Table 4. Comparison of circle detection mean values. Note: the "-" in the table means that the algorithm did not detect the circle in the corresponding image.
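Returning to the point-classification rule of Formula (5) above, the rule translates almost directly into code. The sketch below counts, for each sample point, the neighbors within the neighborhood radius and labels it core, boundary, or outlier; it is a simplified illustration of the classification step only (no cluster-expansion step), and the parameter names eps and min_pts are assumptions.

import numpy as np

def classify_points(points, eps, min_pts):
    """Label each edge point as 'core', 'boundary', or 'outlier'."""
    pts = np.asarray(points, dtype=float)
    labels = []
    for p in pts:
        # n counts every point inside the circle of radius eps centered at p,
        # including p itself, matching the n = 1 outlier case.
        n = int(np.sum(np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1]) <= eps))
        if n >= min_pts:
            labels.append("core")
        elif n > 1:
            labels.append("boundary")
        else:
            labels.append("outlier")
    return labels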
14,961.8
2023-11-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Blank Language Models We propose Blank Language Model (BLM), a model that generates sequences by dynamically creating and filling in blanks. Unlike previous masked language models or the Insertion Transformer, BLM uses blanks to control which part of the sequence to expand. This fine-grained control of generation is ideal for a variety of text editing and rewriting tasks. The model can start from a single blank or partially completed text with blanks at specified locations. It iteratively determines which word to place in a blank and whether to insert new blanks, and stops generating when no blanks are left to fill. BLM can be efficiently trained using a lower bound of the marginal data likelihood, and achieves perplexity comparable to traditional left-to-right language models on the Penn Treebank and WikiText datasets. On the task of filling missing text snippets, BLM significantly outperforms all other baselines in terms of both accuracy and fluency. Experiments on style transfer and damaged ancient text restoration demonstrate the potential of this framework for a wide range of applications.

Introduction Neural language models have been successfully applied to many sequence generation tasks, including machine translation (Bahdanau et al., 2014), summarization (Rush et al., 2015), and image captioning (Xu et al., 2015). Typically, sequences are modeled autoregressively from left to right, making the log-likelihood tractable and allowing efficient training and inference. While left-to-right models are effective, they are not well-suited for text completion or editing. In these tasks, we are given a partial draft of the text and the goal is to add new text to complete it. Models such as the Masked Language Model (Devlin et al., 2018, MLM) and the Insertion Transformer are able to fill in words to complete partially written text. However, neither of them is tailored to rewriting/editing. MLM assumes that the length of the text to be inserted is known in advance. The Insertion Transformer, on the other hand, does not explicitly control where insertions can take place.

Figure 1. Example: the partial input "They also have ____ which ____ ." is completed to "They also have ice cream which is really good ."
1 Our code will be released soon.

In this paper, we introduce Blank Language Model (BLM). The model exploits a special " " (blank) symbol to control where tokens can be placed. In each stage of generation, a blank can be replaced by any word, and potentially accompanied by a new blank on the left, right or both sides of the word to continue writing. As shown in Fig. 1, such models can be used to fill in missing words in incomplete sentences, generate a new sentence in between two given sentences, and so on. BLM can start with a single blank or partial text with blanks in specified locations. The model iterates through generation steps, replacing blanks with words and possibly adjoining blanks, until no blanks remain. Our BLM is based on a Transformer encoder that maps the input text containing blanks into a sequence of vector representations. The representations at blank locations are further processed to select a blank, the word to fill it with, and whether to generate adjoining blanks. Since there are multiple trajectories through the actions in the BLM that all result in the same final text, we train the model by maximizing the marginal likelihood. To make training more efficient, and to introduce an inductive bias towards order independence, we maximize instead a lower bound on the marginal likelihood.
At test time, BLM can in principle fill in any amount of text in any of the given blank positions. We test BLM on language modeling, and obtain perplexity comparable to left-to-right language models on Penn Treebank and WikiText datasets. We further evaluate our model on three text rewriting tasks: text infilling (Zhu et al., 2019), ancient text restoration (Assael et al., 2019) and style transfer (Shen et al., 2017). BLM achieves superior performance on all three tasks, demonstrating its flexibility to generate text in diverse conditions. Notably, on ancient text restoration, we reduce the previous state-of-the-art error rate from 44.9% to 41.6% when half of the characters are missing. customer service is awesome -End- Figure 2. An example trajectory that generates the sentence "customer service is awesome". Each action is a tuple (b, w, l, r), indicating the blank location b selected for expansion, the word w to fill in, whether to create a left blank l, and whether to create a right blank r. Related Work Alternatives to conventional left-to-right generation have previously been explored from multiple approaches. Part of these efforts was focused on finding an optimal generation order, including syntax-based approaches and methods for learning adaptive generation order (Emami & Jelinek, 2005;Zhang et al., 2015;Dyer et al., 2016;Ford et al., 2018;Zhou et al., 2019;Welleck et al., 2019;Gu et al., 2019a). These approaches are tailored to generation from scratch in a specific order. Our model instead is attuned for text rewriting, where the missing parts can be located anywhere in the input text, and the algorithm must flexibly complete them. Another stream of work focuses on generating sequences in a non-autoregressive fashion for fast decoding in machine translation (Gu et al., 2017;Lee et al., 2018;Stern et al., 2019;Gu et al., 2019b). The closest approach is the Insertion Transformer , which also supports a dynamic canvas growing with word insertions. However, none of these models provide explicit control over which part of the sequence to expand. Additional insertion control is provided by the masked language model where each mask corresponds to a single word (Fedus et al., 2018). MLMs are commonly used in representation learning (Devlin et al., 2018). To utilize them in rewriting tasks would require one to specify the insertion length in advance and heuristically determine a generation order among masks (Ghazvininejad et al., 2019;Wu et al., 2019). In contrast, a blank in our model can correspond to any number of words, thereby avoiding the problem of predicting length. BLMs provide a natural formulation for generative modeling that can dynamically accommodate insertions of various length. Finally, several works combine left-to-right language models with control codes or customized inference algorithms for more flexible generation (Keskar et al., 2019;Sun et al., 2017;Liu et al., 2019). Our model allows for straightforward decoding strategies and enables direct edits to the sentence to control generation. Blank Language Models A blank language model (BLM) generates sequences by creating and filling in blanks. Generation starts with a single blank and ends when there is no blank. In each step, the model selects a blank " ", predicts a word w, and fills the blank with "w", " w", "w ", or " w ". In this way, a blank can be expanded to any number of words. We define a canvas as a sequence of words interspersed with special " " tokens. 
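To make the action space concrete, the following sketch applies a single BLM action (b, w, l, r) to a canvas represented as a Python list, mirroring the fill rules just described ("w", "blank w", "w blank", "blank w blank"). It is an illustrative reading of the generation step, not the authors' code; the "__" string used for the blank token is an assumption.

BLANK = "__"  # stand-in for the special blank token

def apply_action(canvas, b, w, l, r):
    """Replace the blank at index b with word w, optionally surrounded by new blanks."""
    assert canvas[b] == BLANK, "actions may only rewrite blank positions"
    piece = ([BLANK] if l else []) + [w] + ([BLANK] if r else [])
    return canvas[:b] + piece + canvas[b + 1:]

# One possible trajectory for the Figure 2 sentence:
c = [BLANK]
c = apply_action(c, 0, "is", l=True, r=True)         # __ is __
c = apply_action(c, 0, "service", l=True, r=False)   # __ service is __
c = apply_action(c, 3, "awesome", l=False, r=False)  # __ service is awesome
c = apply_action(c, 0, "customer", l=False, r=False)
print(" ".join(c))                                    # customer service is awesome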
The subsequent action is conditioned on this intermediate stage of generation. Different from the Insertion Transformer that can insert words anywhere in between existing tokens , the BLM will only place words on the specified blanks. Suppose the current canvas is c = (c 1 , · · · , c n ) with blanks located at indices b 1 , · · · , b k (i.e. c b l = " ", for l = 1, . . . , k). BLM maps this canvas to a distribution over actions specifying how the canvas is to be revised: where b ∈ {b 1 , · · · , b k } is a blank location; w is a word in the vocabulary V ; l, r ∈ {0, 1} denote whether or not to create a blank to the left and right of w; and θ are the model parameters. The action, defined as the tuple (b, w, l, r) uniquely specifies the next state of canvas (see Fig. 2). We can view the actions in BLM alternatively as production rules in a grammar. Each blank represents a nonterminal symbol (or the start symbol), and the terminal symbols come from the vocabulary V . The production rules are restricted to be of the form " " → " ?w ?" for w ∈ V , where "?" indicates that the preceding symbol is optional. In contrast to context free grammars, the probability distribution over production rules is conditioned on the entire canvas generated so far. Model Architecture To implement the model, we first encode (c 1 , · · · , c n ) into a sequence of representations (z 1 , · · · , z n ), and then take corresponding representations z = (z b1 , · · · , z b k ) where the blanks are located. Let d represent the dimension of z. We factorize the joint distribution into three parts (see Fig. 3 for an overview): Figure 3. Architecture of the Blank Language Model. In the first stage, an index is chosen among all current blank positions. For that location, a word is selected in the second stage. In the final stage, the blank representation is concatenated with the chosen word's embedding and fed into a multilayer perceptron (MLP) to determine the creation of the following blanks. 1. Choose a blank: where u ∈ R d is a parameter vector to project z's into one-dimensional logits. 2. Predict a word for the selected blank: where W ∈ R |V |×d is a parameter matrix to project z bi into the vocabulary. 3. Decide whether or not to create blanks to the left and right of the predicted word: where v w is the word vector of w, and MLP is a multilayer perceptron network with 4 output classes: Likelihood Now let us consider the probability p(x; θ) of generating a sentence/paragraph x under the BLM. We call the generating process from an initial blank to complete text a trajectory. The same final text x may be realized by multiple trajectories. However, if we specify the order in which the words in x are generated, the trajectory is also uniquely determined. This follows from the fact that BLM never results in a canvas with two (or more) consecutive blanks. Concretely, consider the example trajectory of a 4-word sentence in Fig. 2. Given the order (3, 1, 4, 2), at step 0 when we generate x 3 , we must create both left and right blanks for future generations of x 1 and x 2 , x 4 . In step 1 of generating x 1 , we create a right blank but no left blank because there are no more words on x 1 's left. Subsequent steps can be deduced by analogy. The correspondence between trajectories and generation orders allows us to write the marginal likelihood as: Text infilling Input: They also have which . Target: They also have ice cream which is really good . 
Style transfer Positive: The employees behind the deli counter were super nice and efficient ! Negative: The employees behind the deli counter were rude and unprofessional ! Figure 4. Examples of inputs and outputs for the three rewriting tasks. We contrast text infilling, where blanks can cover an arbitrary number of words, with ancient text restoration, where the number of characters to recover is indicated by the number of '?' symbols in the input. learning to realize x equally well, independent of the order. This is desirable to ensure that the model is able to complete any partial input text regardless of the position of the blanks. From Equation (6), we can derive our first (naive) training algorithm. First, sample a permutation σ from S n and a step t from 0 to n − 1, then compute the estimated loss However, this procedure has a large variance and can only compute the loss of a single action in one pass (in contrast to left-to-right language models that compute n word losses per pass). To train more efficiently, we note that the canvas c x,σ t depends only on the first t elements of σ. Hence we can combine loss calculations of trajectories that are the same in the first t steps but different at the t + 1 step. Switching the summation order of σ and t, we have: This leads to our efficient training algorithm: first sample t and σ 1:t , then construct the canvas c x,σ t , and compute loss − log(n!) − n n−t σt+1 log p(a x,σ t |c x,σ t ; θ) . In this way, we can compute in expectation n/2 action losses per pass. Experiments We start by measuring the performance of BLM on language modeling benchmarks and comparing it with traditional left-to-right language models as a sanity check. We then demonstrate the BLM's ability to rewrite specified portions of text in a document by evaluating it on three text editing tasks: text infilling (Zhu et al., 2019), ancient text restoration (Assael et al., 2019) and style transfer (Shen et al., 2017). Figure 4 displays example inputs and outputs for these tasks. Experimental Details In all experiments, the sequence representations in BLM are obtained using the encoder module of a transformer base architecture (Vaswani et The MLP network used for blank prediction has one hidden layer of size 1024. Weight decay, learning rate and dropout are tuned based on the perplexity achieved on the validation set. For tasks that require decoding, we use beam size in {1, 5, 10, 20} and choose the best value as observed on the validation set. We note that beam search in BLM does not search for the sentence with the maximum marginal likelihood p(x; θ), but instead for a sentence and a trajectory that have the maximum joint likelihood p(x, σ; θ). Language Modeling To compute the perplexity of the BLM and the Insertion Transformer, we use the Monte-Carlo method to estimate the likelihood in Eq. (5) with m = 1000 samples. Results Table 1 The finding is particularly noteworthy, since the language modeling task is more challenging for free-order models like ours. Text Infilling The task of text infilling is motivated by many practical applications where the goal is to augment partially completed documents with missing information (Zhu et al., 2019). Following the protocol of Zhu et al. (2019), we automatically compile test data by deleting portions of documents, and ask systems to fill them in. The first row in Fig. 4 showcases an example input-output pair. 
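The canvas construction used for this task, randomly deleting a ratio r of tokens and collapsing each contiguous deleted run into a single blank, can be sketched as follows. This is an illustration of the protocol described in the Dataset paragraph below, with assumed function and variable names and a "__" stand-in for the blank token.

import random

BLANK = "__"

def make_infilling_canvas(tokens, r=0.3, seed=None):
    """Randomly mask a ratio r of tokens and collapse contiguous masked
    tokens into a single blank, as in the text infilling setup."""
    rng = random.Random(seed)
    n = len(tokens)
    masked = set(rng.sample(range(n), int(r * n)))
    canvas, prev_blank = [], False
    for i, tok in enumerate(tokens):
        if i in masked:
            if not prev_blank:
                canvas.append(BLANK)
            prev_blank = True
        else:
            canvas.append(tok)
            prev_blank = False
    return canvas

# Contiguous deletions such as "ice cream" collapse into a single "__",
# matching the first row of Fig. 4.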
The infilling task evaluates model's ability to complete blanks in a document while maintaining semantic consistency with the imposed context. Dataset We experiment on the Yahoo Answers dataset (Yang et al., 2017), which has 100k training documents and 10k documents for validation and testing respectively. Each document has 78 words on average. For a document x, we randomly mask a given ratio r of its tokens. Contiguous masked tokens are collapsed into a single blank token " ", resulting in a canvas c with k such blanks. The systems are required to complete the blanks in c. Baselines We compare our approach against the following three baselines: • The seq2seq-full baseline is a Transformer model trained to output the full document x from input c. Note that it may have invalid outputs that do not match the input format, such as missing existing tokens in c or generating tokens in incorrect locations. • The seq2seq-fill baseline is a Transformer model that only generates tokens to be placed in the blanks, with a special '|' token to indicate separation. For the example in Fig. 4, its target output will be "ice cream |is really good". Unlike seq2seq-full, seq2seq-fill does not have the problem of losing existing tokens in c. However, it may still fail to generate the correct number of '|' tokens that matches the input. • The Insertion Transformer does not explicitly support controlling the position of insertion. We force it to generate words only in the designated blanks by normalizing the predictions over valid locations. Note that the model still may not fill all of the required blanks. Metrics Following prior work (Zhu et al., 2019;Liu et al., 2019), we measure the accuracy of generation by computing its BLEU score against the original document x, and the fluency of generation as its perplexity evaluated by a pretrained (left-to-right) language model. In addition, we report the failure rate of baselines, defined as the percentage of invalid generations, i.e. generations that do not respect the constraints of the task. Results In Figure 5, we plot the failure rate, BLEU score, and perplexity of models at different mask ratios. Our BLM is the only method that is able to consistently generate valid outputs. Seq2seq baselines have a failure rate ranging from 15% to 56% as the mask ratio increases. Insertion Transformer has the highest failure rate: in more than 88% of cases, it does not fill all the blanks. This indicates that the Insertion Transformer is not suitable for generation with location constraints. According to the BLEU score, BLM and seq2seq-full have the highest infilling accuracy, on average 5.8 points higher than that of the Insertion Transformer and seq2seq-fill. For reference, we also plot the BLEU score of the input canvas when time was created , where did it come from ? it was the first part of the universe to be recycled and made into space . Insertion when time flies , where does it go ? the center of the earth has to be recycled and made into new time . when time was created , where ? the name of the universe to be recycled and made into space . For the seq2seq-fill baseline, we represent the outputs of the model along with the merged document. In this example, the insertion transformer produces invalid completions by failing to generate tokens in the "? the" blank. At mask ratio 0.5, the seq2seq-fill baseline also generates an invalid document by producing too many '|' tokens, i.e. filling to many blanks. with respect to the original document. 
When the mask ratio is 0.5, the input BLEU score is 13.0, and BLM brings it up to 34.8 after infilling. In terms of fluency, with the exception of seq2seq-fill, the outputs of all other methods have perplexity lower than the original data perplexity. This is because with greedy decoding or beam search, the models tend to generate the most typical output with the highest likelihood. The inspection of typical generations validates the superiority of BLM. In Fig. 6, we present an illustrative output for each model at different mask ratios. In the low mask ratio setting, models only need to use a single word to fill in blanks and produce a grammatically correct completion. Most models successfully accomplish this task. With the higher mask ratio of r = 0.5 where half of the words are deleted and the main ideas of the document are concealed, the infilling task is much more challenging and requires models to creatively generate sentences that fit the imposed canvas. Although the original meaning of the sentence is not recovered, BLM is the only model able to produce a coherent document with consistency between the question and the answer. Overall, BLM displays the best performance both quantitatively and qualitatively. For seq2seq approaches, generating the full document is superior to generating only the infilled content. Probably because that in the former case the decoder can better model the full text, whereas in the latter case the decoder must model segmented text and meanwhile count for blanks. Ancient Text Restoration Ancient text restoration is a form of text infilling where there exist fragments in ancient documents that are illegible due to time-related damages and need to be recovered (Assael et al., 2019). The second row in Figure 4 illustrates an example of input and output for the task. Restoration is performed at the character-level, and the number of characters to recover is assumed to be known, denoted by a '?' symbol in the input. In reality, when epigraphists restore a deteriorated document, the length of the lost fragment is unknown and needs to be guessed as a first step. While previous work relies on these expert conjectures (Assael et al., 2019), we note that our formulation is able to bypass this limitation and can flexibly generate completions without this additional knowledge. For purposes of comparison, however, we evaluate our method on the length-aware setting. Length-aware Blank Language Model (L-BLM) We present a variant of the BLM that is well-suited to the specific features of this task. The vocabulary V is an alphabet of characters from the ancient Greek language. We extend the vocabulary V with special " [t] " tokens that denote the length of the fragment to recover. Specifically, as a preprocessing step, consecutive '?' characters are collapsed into a single " [t] " token, where t is the number of '?' symbols. For each such blank token, L-BLM is trained to predict a character and the lengths of the new blanks to its left and right. In all experiments, we use special blank tokens for lengths up to 1000 and follow our usual canvas creation procedure. Table 2. Character error rate for the ancient text restoration task in both single-slot and multi-slot settings. Dataset The PHI-ML dataset (Assael et al., 2019) is made of fragments of ancient Greek inscriptions containing more than 3 million words and 18 millions characters. We evaluate models in two settings: single-slot and multi-slot. The test set is generated following Assael et al. 
2019's procedure: a context of length L = 1000 is sampled from an inscription, then a slot of length C ∈ [1, 10] is sampled from that context. The characters from that slot are replaced with the '?' prediction symbol and constitute the target. For the single-slot experiment, we use the testing script from prior work (Assael et al., 2019) and sample 12,800 testing samples, for a total of 63,234 characters to predict, with mask ratio of 1.2%. For the multi-slot setting, we progressively increase the number of slots, yielding larger mask ratios. In total, we generate a total of 1000 samples for each mask ratio of 25%, 40% and 50% with respectively 150,235, 400,827 and 406,231 characters to restore. Baselines Previous work has proposed PYTHIA (Assael et al., 2019), a sequence-to-sequence based approach specialized in ancient text restoration. A variant of PYTHIA, PYTHIA-WORD, uses both character and word representation as input. During training, the model learns to recover masked characters using examples where a single slot has been sampled, with a slot length limited to 10. For the multislot setting, PYTHIA is applied iteratively as described in Assael et al. 2019. Beam search of size 20 is applied to each independent prediction. Metrics We measure the character error rate (CER) of all models in both settings. Results Table 2 summarizes the experimental results. L-BLM achieves similar character error rate as PYTHIA in the single-slot setting, significantly outperforming human experts. When PYTHIA is augmented with word representations, the model is able to further decrease the error rate compared to character-only methods. In reality, restoring damaged inscriptions requires the reconstruction of multiple lost fragments. As a larger proportion of the text is removed, PYTHIA-WORD's performance is degraded. In contrast, L-BLM is robust to this setting change and significantly outperforms prior work. We posit that L-BLM's advantage lies in its ability to efficiently maximize the joint likelihood of the completions over all slots. In contrast, PYTHIA-WORD's is only aware of one slot at a time. Moreover, L-BLM can handle slots of arbitrary long length while PYTHIA-WORD is limited to slots of up to 10 characters, which is a limiting factor for real-world usage. Sentiment Transfer The goal of sentiment transfer is to modify the sentiment of a sentence while maintaining its topic (Shen et al., 2017). An example is described on the third row of Figure 4. Inspired by the way humans perform rewriting, we follow a recent line of work in style transfer Xu et al., 2018;Wu et al., 2019) that adopts a two-step approach: 1. Remove words and expressions of high polarity from the source sentence; 2. Complete the partial sentence with words and expressions of the target sentiment. Step 1 has been performed in previous work by masking tokens either based on their frequency-ratio Wu et al., 2019) or their attention scores (Xu et al., 2018;Wu et al., 2019). Step 2 is performed by various sequence models conditioning on the masked sentence and the target sentiment. We evaluate the contribution of our model in Step 2 as a substitute for infilling models used in prior pipelines Wu et al., 2019). To this end, we train two instances of BLM on the dataset, one for each sentiment. At test time, the corresponding BLM is used to produce completions of the target sentiment. 
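Step 1 of this pipeline (removing high-polarity words before infilling) is often implemented with a frequency-ratio test over the two sentiment corpora. The sketch below is one such hedged variant, not the exact criterion used in the cited works: a token is masked when its relative frequency in the source-sentiment corpus exceeds a threshold times its frequency in the other corpus, and contiguous masked tokens are collapsed into a single blank for the BLM.

from collections import Counter

BLANK = "__"

def build_counts(corpus):
    return Counter(tok for sent in corpus for tok in sent)

def mask_polarity_words(sentence, src_counts, tgt_counts, threshold=3.0):
    """Replace source-style words with blanks; collapse adjacent blanks."""
    out = []
    for tok in sentence:
        src = src_counts[tok] + 1          # add-one smoothing
        tgt = tgt_counts[tok] + 1
        is_style = (src / tgt) >= threshold
        if is_style and out and out[-1] == BLANK:
            continue                        # merge contiguous masked tokens
        out.append(BLANK if is_style else tok)
    return out

# e.g. mask_polarity_words("the service is awesome".split(), pos_counts, neg_counts)
# might yield ['the', 'service', 'is', '__'], ready for the negative-sentiment BLM.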
Dataset We run experiments on the benchmark Yelp review dataset (Shen et al., 2017), using the standard split of 450K non-parallel training sentences, 4K validation sentences and 1K testing sentences. Each sentence is labeled as either positive or negative. Baselines We compare the performance of our model against two infilling methods. The DELETE-AND-RETRIEVE method ) is a seq2seq-based approach where hidden representations of the masked sentence is concatenated with a learned attribute embedding before decoding. Additionally, a retrieval module is used to collect relevant expressions of the target sentiment to guide generation. The MASK-AND-INFILL model (Wu et al., 2019) is based on a pretrained BERT base model and then finetuned by conditioning on the sentiment of the sentence to reconstruct. Metrics We use evaluation methods introduced by prior work (Shen et al., 2017;Li et al., 2018;Wu et al., 2019;Yang et al., 2018). To assess the accuracy of the generated sentences with respect to the target sentiment, we use a pretrained CNN classifier that achieves 97.7% accuracy on the validation set. We also measure the BLEU score between the transferred sentences and human references . Table 3 demonstrate the ability of different models to perform text infilling for style transfer. The DELETE-AND-RETRIEVE method with the frequency-ratio based masking strategy achieves high sentiment accuracy, but can only do so at the expense of content fidelity. By constraining BLM to fill in blanks in between content words, we ensure that the predictions will yield high content preservation, improving both BLEU score and sentiment accuracy over the original masked sentence. Results in The MLM formulation in MASK-AND-INFILL is problematic on this task for two reasons. By design, MLM is forced to generate the same number of tokens as there were originally in the source sentence, making it more difficult to produce coherent sentences that are consistent with the target sentiment. Furthermore, MLM is trained to predict the masked tokens independently rather than jointly, which further hurts performance. Our formulation of BLM does not suffer any of these weaknesses. With both masking strategies, our model outperforms the MASK-AND-INFILL baseline on all metrics, proving its superiority as the bettersuited formulation for this setup 2 . In Fig 7, we present examples generated by the blank language model. BLM is able to dynamically adapt to the imposed canvas and can fill in blanks with expressions of varied lengths, such as "very helpful" → "rude" or "nowhere to be found" → "the best i found". We note that failure cases arise when negative polarity items are left unmasked; Source the food 's ok , the service is among the worst i have encountered . BLM the food 's ok , the service is probably the best i have encountered . Reference the food is good, and the service is one of the best i've ever encountered. Source the beans were in the burro in the rice was nowhere to be found . BLM the beans were in the burro in the rice was the best i found . Reference the beans were in the burro and the rice was plentiful Source everyone that i spoke with was very helpful and kind . BLM everyone that i spoke with was rude and unprofessional . Reference everyone that i spoke with wasn't helpful or kind. Source everything is fresh and so delicious ! BLM everything is horrible and so expensive ! Reference everything was so stale Source there is definitely not enough room in that part of the venue . 
BLM there is always enough parking in that part of the venue . Reference there is so much room in that part of the venue Source it is n't terrible , but it is n't very good either . BLM it is n't fancy , but it is still very good either . Reference it is n't perfect , but it is very good . Source executive chefs would walk by not even saying good morning . BLM executive chefs would come by without even saying good morning . Reference the excecutive chef was nice and said good morning to us very often BLM is then unable to produce satisfactory outputs from the canvas. Conclusion In this paper, we proposed the blank language model for flexible text generation. BLMs can generate sequences in different orders by dynamically creating and filling in blanks. We demonstrate the effectiveness of our method on various text rewriting tasks, including text infilling, ancient text restoration and style transfer. Future work may explore sequence modeling tasks beyond text rewriting that also benefit from flexible generation order. An example is music modeling: harmonic constraints naturally impose a canvas that composers fill in with the melody.
6,858.6
2020-02-08T00:00:00.000
[ "Computer Science" ]
Planarised optical fiber composite using flame hydrolysis deposition demonstrating an integrated FBG anemometer This paper reports for the first time a planarised optical fiber composite formed using Flame Hydrolysis Deposition (FHD). As a way of format demonstration a Micro-Opto-Electro-Mechanical (MOEMS) hot wire anemometer is formed using micro-fabrication processing. The planarised device is rigidly secured to a silicon wafer using optical quality doped silica that has been deposited using flame hydrolysis and consolidated at high temperature. The resulting structure can withstand temperatures exceeding 580K and is sensitive enough to resolve free and forced convection interactions at low fluid velocity. ©2014 Optical Society of America OCIS codes: (120.0120) Instrumentation, measurement, and metrology; (060.2370) Fiber optics sensors; (060.3735) Fiber Bragg gratings; (130.3990) Micro-optical devices. References and links 1. G. Roelkens, D. Vermeulen, S. Selvaraja, R. Halir, W. Bogaerts, and D. Van Thourhout, “Grating-Based Optical Fiber Interfaces for Silicon-on-Insulator Photonic Integrated Circuits,” IEEE J. Quantum Electron. 17(3), 571– 580 (2011). 2. C. Kopp, B. Ben Bakir, J. Fedeli, R. Orobtchouk, F. Schrank, H. Porte, L. Zimmermann, and T. Tekin, “Silicon Photonic Circuits : On-CMOS Integration, Fiber Optical Coupling, and Packaging,” IEEE J. Sel. Top. Quantum Electron. 17(3), 498–509 (2011). 3. J. P. Koplow, S. W. Moore, and D. A. V. Kliner, “A new method for side pumping of double-clad fiber sources,” IEEE J. Quantum Electron. 39(4), 529–540 (2003). 4. D. J. Ripin and L. Goldberg, “High efficiency side-coupling of light into optical fibres using imbedded vgrooves,” Electron. Lett. 31(25), 2204–2205 (1995). 5. Y. Tian, W. Wang, N. Wu, X. Zou, C. Guthy, and X. Wang, “A miniature fiber optic refractive index sensor built in a MEMS-based microchannel,” Sensors (Basel) 11(12), 1078–1087 (2011). 6. C. Pang, H. Bae, A. Gupta, K. Bryden, and M. Yu, “MEMS Fabry-Perot sensor interrogated by optical systemon-a-chip for simultaneous pressure and temperature sensing,” Opt. Express 21(19), 21829–21839 (2013). 7. V. P. Wnuk, A. Méndez, C. Ave, S. Ferguson, and T. Graver, “Process for Mounting and Packaging of Fiber Bragg Grating Strain Sensors for use in Harsh Environment Applications,” Smart Struct. Conf. 46, (2005). 8. A. Saran, D. C. Abeysinghe, R. Flenniken, and J. T. Boyd, “Anodic bonding of optical fibers-to-silicon for integrating MEMS devices and optical fibers,” J. Micromech. Microeng. 13(2), 346–351 (2003). 9. R. Knechtel, “Glass frit bonding: an universal technology for wafer level encapsulation and packaging,” Microsyst. Technol. 12(1-2), 63–68 (2005). 10. A. D. Yablon, Optical Fiber Fusion Splicing, Springer Series in Optical Sciences (Springer, 2005). 11. M. Kawachi, “Silica waveguides on silicon and their application to integrated-optic components,” Opt. Quantum Electron. 22(5), 391–416 (1990). 12. H. L. Rogers, S. Ambran, C. Holmes, P. G. R. Smith, and J. C. Gates, “In situ loss measurement of direct UVwritten waveguides using integrated Bragg gratings,” Opt. Lett. 35(17), 2849–2851 (2010). 13. A. Kilian, J. Kirchhof, B. Kuhlow, G. Przyrembel, and W. Wischmann, “Birefringence Free Planar Optical Waveguide Made by Flame Hydrolysis Deposition (FHD) Through Tailoring of the Overcladding,” J. Lightwave Technol. 18(2), 193–198 (2000). 14. P. Dumais, “Thermal Stress Birefringence in Buried-Core Waveguides with Over-Etch,” IEEE J. Quantum Electron. 47(7), 989–996 (2011). 15. J. 
Salort, A. Monfardini, and P.-E. Roche, “Cantilever anemometer based on a superconducting micro-resonator: application to superfluid turbulence,” Rev. Sci. Instrum. 83(12), 125002 (2012). 16. P. Zyłka, P. Modrzynski, and P. Janus, “Vortex Anemometer Using MEMS Cantilever Sensor,” J. Micromechanical Syst. 19(6), 1485–1489 (2010). #224955 $15.00 USD Received 15 Oct 2014; revised 27 Nov 2014; accepted 9 Dec 2014; published 19 Dec 2014 (C) 2014 OSA 29 Dec 2014 | Vol. 22, No. 26 | DOI:10.1364/OE.22.032150 | OPTICS EXPRESS 32150 17. M. Schwerter, T. Beutel, M. Leester-Schädel, S. Büttgenbach, and A. Dietzel, “Flexible hot-film anemometer arrays on curved structures for active flow control on airplane wings,” Microsyst. Technol. 20(4-5), 821–829 (2014). 18. P. Caldas, P. A. S. Jorge, G. Rego, O. Frazão, J. L. Santos, L. A. Ferreira, and F. Araújo, “Fibre Optic Hot-Wire Flowmeter Based on a Metallic Coated Hybrid LPG-FBG Structure,” in Fourth European Workshop on Optical Fibre Sensors (2010), 7653, p. 76530B. 19. S. Gao, A. P. Zhang, H.-Y. Tam, L. H. Cho, and C. Lu, “All-optical fiber anemometer based on laser heated fiber Bragg gratings,” Opt. Express 19(11), 10124–10130 (2011). 20. X. Wang, X. Dong, Y. Zhou, Y. Li, J. Cheng, and Z. Chen, “Optical fiber anemometer using silver-coated fiber Bragg grating and bitaper,” Sens. Actuators A Phys. 214, 230–233 (2014). 21. X. Wang, X. Dong, Y. Zhou, K. Ni, J. Cheng, and Z. Chen, “Hot-Wire Anemometer Based on Silver-Coated Fiber Bragg Grating Assisted by No-Core Fiber,” IEEE Photon. Technol. Lett. 25(24), 2458–2461 (2013). 22. Y.-J. Rao, “In-fibre Bragg grating sensors,” Meas. Sci. Technol. 8(4), 355–375 (1997). 23. C. G. Lomas, Fundamentals of Hot Wire Anemometry (Cambridge University, 2011). 24. B. D. C. Collis and M. J. Williams, “Two-dimensional convection from heated wires at low Reynolds numbers,” J. Fluid Mech. 6(03), 357–384 (1959). 25. C. Holmes, D. O. Kundys, J. C. Gates, C. B. E. Gawith, and P. G. R. Smith, “150 GHz of thermo-optic tuning in direct UV written silica-on-silicon planar Bragg grating,” Electron. Lett. 45(18), 954 (2009). 26. C. Sima, J. C. Gates, H. L. Rogers, P. L. Mennea, C. Holmes, M. N. Zervas, and P. G. R. Smith, “Ultra-wide detuning planar Bragg grating fabrication technique based on direct UV grating writing with electro-optic phase modulation,” Opt. Express 21(13), 15747–15754 (2013). 27. F. Rafiq, M. Adikan, S. R. Sandoghchi, C. W. Yi, R. E. Simpson, M. A. Mahdi, A. S. Webb, J. C. Gates, and C. Holmes, “Direct UV Written Optical Waveguides in Flexible Glass Flat Fiber Chips,” IEEE J. Sel. Top. Quantum Phys. 18, 1534–1539 (2012). 28. C. Holmes, L. G. Carpenter, J. C. Gates, and P. G. R. Smith, “Miniaturization of Bragg-multiplexed membrane transducers,” J. Micromech. Microeng. 22(2), 025017 (2012). 29. C. Holmes, L. G. Carpenter, H. L. Rogers, J. C. Gates, and P. G. R. Smith, “Quantifying the optical sensitivity of planar Bragg gratings in glass micro-cantilevers to physical deflection,” J. Micromech. Microeng. 21(3), 035014 (2011). 30. T. H. Laby and G. W. C. Kaye, Tables of Physical and Chemical Constants, 16th ed. (Longman, 2005). 
Introduction Integrating optical fiber with a planar substrate is used throughout photonics.Examples of which include waveguide coupling [1,2], optical pumping [3,4] and fabrication of micromechanical sensor systems [5,6].Adhesion is usually achieved through use of glues such as epoxy [7], a fusion splice [8] or glass frit [9], which do exhibit limitations.These include poor optical quality, thermal degradation at only a few hundred degrees Celsius, degradation in the presence of common solvents and the potential of introducing microdamage to the fiber [10].In this paper we introduce a new method for adhesion that involves the consolidation of glass soot formed through Flame Hydrolysis Deposition (FHD) [11] about a fiber as illustrated in Fig. 1.The resulting glass-fiber composite (SEM imaged in Fig. 2) overcomes the limitations associated with other adhesion methods.Furthermore, it has the added advantage of utilizing a commercial silica deposition technique that has a low propagation loss [12] and can be doped to manipulate refractive index and stress optic properties [13,14].The platform is also conducive to planar microfabrication.This paper reports the microfabrication of a hot-wire anemometer as a way of demonstrating the capability of the fiber-planar composite. Micromechanical anemometers have most recently been employed to monitor superfluidic turbulence [15], Karmen vortices [16] and for active flow control in aeronautics [17].Hotwire anemometers use an electrically heated wire to monitor fluid velocity.The principle works as fluid flowing past the wire effectively cools the system through forced convection, which typically results in a measureable change in electrical resistance.Similar concepts using hot-wire fiber Bragg gratings (FBGs) have recently been demonstrated [18][19][20][21], where the FBGs are directly used to optically monitor thermal fluctuations.However these have not been monolithically integrated.In this work a MOEMS hot-wire anemometer is constructed that uses an exposed section of optical fiber as the 'hot-wire'.The fiber-FHD platform used to realize this is conducive to monolithic microfabrication, enabling micro-heaters to be directly integrated over a selected FBG section. Concept and theory The hot-wire anemometer presented in this work consists of a silicon chip on which an optical fiber is adhered using FHD silica.After fabricating the fiber-FHD composite selective etching of silicon and FHD is made to form a bridge of optical fiber 1 mm in length, illustrated in Fig. 3. Selective deposition of gold both onto the exposed fiber and chip is made to form a conducting wire and supply tracks.To monitor thermal fluctuations two FBGs are positioned on the bridge structure (B) and the main body of the chip (A), as illustrated in Fig. 3. where λ B is the Bragg wavelength, ρ α is the photoelastic coefficient, ε axial is the strain, η is the thermo-optic constant and T is temperature.For SMF28 the thermo-optic coefficient is approximately 13pm.K −1 at 1550nm [22]. 
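In the strain-free limit used here, the measured Bragg wavelength shift maps directly onto a temperature change through the thermo-optic response quoted above. The short sketch below performs that conversion and reproduces the order of magnitude reported later in the results; the calibrated coefficient of 14.1 pm/K for the bridge grating is taken from the results section, an ambient temperature of 293 K is assumed, and the function name is an assumption.

def bragg_shift_to_temperature(delta_lambda_nm, eta_pm_per_K=14.1, T_ambient_K=293.0):
    """Infer the hot-wire temperature from the Bragg wavelength shift (strain neglected)."""
    delta_T = (delta_lambda_nm * 1000.0) / eta_pm_per_K   # nm -> pm, then divide by pm/K
    return T_ambient_K + delta_T

# Example: the 3.23 nm shift at 1 V drive gives ~522 K, and the 4.1 nm shift at
# 1.6 V gives ~584 K, consistent with the operational temperatures quoted below.
print(round(bragg_shift_to_temperature(3.23)))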
The heat transfer of a hot-wire system can be expressed as, where E is thermal energy stored, W is power generated by joule heating and H is heat transferred to surroundings.The heat transfer is the summation of convection, conduction and radiation parts.Considering the steady state condition (dE/dt = 0) and making the assumptions of small radiation and conduction contributions and an equilibrated temperature over sensor length then, ( ) where I and R are the electrical current and resistance, T w and T a are the temperatures of the wire and air respectively, A is the surface area of the wire and h is the heat transfer.Heat transfer is a function of fluid velocity v, which according to King's Law approximates, where a, b and c are constants, with the Siddall and Davies corrected empirical value for c being accepted as 0.45 [23].Combining Eqs.(1-4) the spectral Bragg shift of grating B (see Fig. 3) can be approximated by This equation is valid only when King's Law sufficiently approximates heat transfer. Considering Reynolds number R e as a descriptor for characteristic fluidic phenomena.For a hotwire anemometer the Reynolds number is, where v is the perpendicular fluid velocity, d is the diameter of the cylinder and γ is the kinematic viscosity.Rigorous description of hotwire anemometers at very low Reynolds numbers are understood to exhibit a so called 'buoyancy effect', where free convection dominates over forced convection.One such phenomena described by Collis and Williams [24] highlighted that free convection is significant when the Reynolds number is less than the cube root of the Grashof number G r shown in following equation, The following results shall show the fabricated device approximating King's Law and at low fluid velocity deviating from this trend and exhibiting molecular free convection phenomena. Results The fabricated device consisted of bridge section that was 1 mm in length and contained a Bragg grating of equal length.Additionally a Bragg grating was defined away from bridge structure to monitor thermal fluctuations for the bulk chip.Both Bragg gratings were fabricated using direct UV writing (DUW) after the fiber was bound to the wafer [25][26][27] using FHD.FHD silica soot was formed through burning precursors of SiCl 4 , PCl 3 and BCl 3 at flow rates of 139, 31 and 70 sccm respectively, using the torch configuration illustrated in Fig. 1.The oxyhydrogen flame produced by the torch had O 2 , H 2 and Ar flow rates set to 6.5, 1.9 and 8.0 l.min −1 respectively.Consolidation of the soot was subsequently made at 1260°C, after which thickness and refractive index were measured using a Metricon Prism Coupler, giving values of 2µm and 1.4452 respectively. The micro-mechanical bridge structure was defined through the selective removal of silicon and FHD silica substrate beneath the adhered fiber, using standard planar processing techniques [28,29].The bridge was subsequently patterned with gold through forming a mask and depositing gold through sputtering.Two gold contacts at the roots of the bridge were also defined to assist electrical connection.The resistance of the sampled element was 0.16 kΩ. Figure 4 illustrates the spectral tuning of grating B (see Fig. 
3) with supplied electrical power, demonstrating an efficiency of 0.46 nm/mW, which is more efficient than comparable optically heated FBG anemometers [18][19][20][21].The spectral response is linear with increasing power except for higher operational powers, where the spectrum becomes characteristically chirped shown in Fig. 5.It is suggested that the cause of chirp is a consequence of heat dissipation through the roots of the bridge resulting in a temperature variation over the grating. The thermo-optic coefficients of the two gratings were calibrated by placing the chip in an oven over a 20-60°C temperature range.The measured values were 14.0 ± 0.4 pm.K −1 in the body of the chip (position A in Fig. 3) and 14.1 ± 0.4 pm.K −1 in the hot-wire section (position B).This is comparable to the accepted FBG response of 13pm.K −1 at 1550nm, with the small increase being a likely result of the silicon substrate having a greater thermal expansion coefficient than the silica fiber.It is observed in Fig. 5 that a 1.6 V (16mW) operational voltage corresponds to ~4.1 nm of Bragg grating shift.Inferring spectral shift as thermo-optic response, this is equivalent to an operational temperature of 584 ± 8 K, notably a temperature that exceeds the operational specification of most glues. The following data considers a maximum 1V applied operational voltage (corresponding to 6.25mW).The spectral shift of grating B at this drive voltage is 3.23nm, from which an inferred operational temperature of 522 ± 6 K is assumed.This is approximately 100 K greater than that achieved with optically heated FBG approaches [18][19][20][21] and more typical of traditional hot-wire anemometers [23]. The following experimental data was taken using dry nitrogen gas, which shall be termed as 'fluid' in the following discussion.Fluid velocity was calibrated using a commercial hotwire TSI Airflow anemometer (TA460 with the Probe 484).The specified error of the commercial anemometer is ± 15 mm.s −1 or ± 3% whichever is greatest.The gauge and chip were located within a cylindrical tube such that they were level and perpendicular to the flow. Figure 6 depicts data taken at a 6.25 mW operational power.It is noted that two distinct regimes are observed from the data.There is a high fluid velocity regime >10 mm.s −1 , where the data follows King's Law as understood in Eq. ( 5) and a lower fluid velocity regime <10 mm.s −1 , where the data deviates from King's Law. The deviation from King's Law occurs at ~11 mm.s −1 , which from Eq. ( 6) corresponds to a Reynolds number of 0.13 (considering a fiber diameter of 125µm and a kinematic viscosity of 0.014 [30]).Through extrapolating the work by Collis and Williams [24] this is the expected value at which the interaction between forced and free convection becomes significant, termed as buoyancy effects.Greatest sensitivity occurs at lower air velocities that still approximate King's Law.Taking a fluid velocity of 15mm.s−1 a corresponding 12.3 pm/mm.s−1 sensitivity can be inferred.To quantify resolution a constant fluid flow was established at this fluid velocity and a 5-sample standard error of 2 pm was measured.This suggests a maximum resolvable air velocity of ~0.2 mm.s −1 for this operational voltage.This measurement removed the error most notable in Fig. 
6 that is associated with the commercial hot-wire anemometer used for calibration ( ± 15 mm.s −1 ).The result shows greater sensitivity than optically heated FBGs [18][19][20][21], which typically show 10's mm.s −1 resolution.This difference is believed to be a result of the greater operational temperatures that are achievable with the reported device.7 illustrates the sensitivity and associated error at mid-range test velocity (i.e.sensitivity at 30 mm.s −1 ) for several operating powers.As expected from Eq. ( 5) sensitivity follows linearly with operational power (I 2 R), with larger operational powers having greater sensitivity. It must be noted that there was a relatively small spectral increase in Bragg grating 'A', during measurements.This corresponded to 3.6 K at maximum operational power and is a result of bulk heating.One additional use for this could be monitoring of the drive current. Conclusions We have demonstrated for the first time a fiber-FHD composite.The application chosen for demonstration has been a hot wire anemometer.It was shown that the composite has the ability to withstand planar processing and inferred operational temperatures exceeding 580K.Furthermore, Bragg gratings can be directly written into the structure to measure physical actuation. The fabricated device had a maximum resolvable air velocity of 0.2 mm.s −1 and was sufficiently sensitive at low velocities to measure fluidic buoyancy effects, when free convection becomes significant over forced convection. Fig. 1 . Fig. 1.The three stage process to fabricate a fibre-FHD planar composite. Fig. 3 . Fig. 3.The geometry of the hotwire anemometer consisting of an optical fibre surrounded by a conducting gold layer.Fibre Bragg gratings are located at A and B. Gold was sputtered resulting in a non-uniform coating around the fibre.The spectral response of an FBG to physical changes is understood to be[22], Fig. 6 . Fig.6.The spectral response of a Bragg grating located in a hot-wire element (1V operating potential) subject to increasing air-velocity. Fig. 7 . Fig. 7. Device sensitivity for a Bragg grating anemometer, with respect to operational voltage. Figure Figure7illustrates the sensitivity and associated error at mid-range test velocity (i.e.sensitivity at 30 mm.s −1 ) for several operating powers.As expected from Eq. (5) sensitivity follows linearly with operational power (I 2 R), with larger operational powers having greater sensitivity.It must be noted that there was a relatively small spectral increase in Bragg grating 'A', during measurements.This corresponded to 3.6 K at maximum operational power and is a result of bulk heating.One additional use for this could be monitoring of the drive current.
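To make the preceding heat-transfer and calibration discussion concrete, a brief numerical sketch follows. This is not the authors' code: all data points, the ambient temperature and the kinematic viscosity are illustrative placeholders chosen only to resemble the magnitudes quoted in the text, and the fit is performed with standard NumPy/SciPy routines.

```python
# Minimal sketch (not the authors' code) of two calculations described above:
# (i) inferring wire temperature from a Bragg shift via the calibrated
# thermo-optic response, and (ii) fitting King's Law to hypothetical
# calibration data and checking the Reynolds number at which buoyancy effects
# are expected. All numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

THERMO_OPTIC_NM_PER_K = 14.1e-3   # calibrated response of grating B (nm/K)
AMBIENT_K = 293.0                 # assumed laboratory ambient temperature (K)

def inferred_temperature(bragg_shift_nm):
    """Infer wire temperature (K) from a spectral Bragg shift (nm)."""
    return AMBIENT_K + bragg_shift_nm / THERMO_OPTIC_NM_PER_K

print(inferred_temperature(3.23))   # ~522 K at the 1 V (6.25 mW) drive point
print(inferred_temperature(4.1))    # ~584 K at the 1.6 V (16 mW) drive point

def kings_law(v, a, b, c):
    """Bragg shift modelled through King's Law heat transfer, h = a + b*v**c."""
    return a + b * v**c

# Hypothetical calibration points: fluid velocity (mm/s) vs. Bragg shift (pm).
velocity = np.array([15.0, 30.0, 60.0, 120.0, 240.0])
bragg_shift_pm = np.array([3232.0, 3208.0, 3174.0, 3128.0, 3064.0])

# Start the fit from the Siddall-Davies exponent c = 0.45.
(a, b, c), _ = curve_fit(kings_law, velocity, bragg_shift_pm,
                         p0=[3300.0, -20.0, 0.45])
print(f"fitted exponent c = {c:.2f}")

# Reynolds number Re = v*d/nu for a 125 um fibre; free convection is expected
# to matter once Re drops below the cube root of the Grashof number.
d_mm, nu_mm2_per_s = 0.125, 14.0          # nu is an assumed value
print("Re:", velocity * d_mm / nu_mm2_per_s)
```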
Digital Twin-Driven Collaborative Scheduling for Heterogeneous Task and Edge-End Resource via Multi-Agent Deep Reinforcement Learning With the interdisciplinary advances of mobile communication and edge computing, massive heterogeneous tasks are accessing wireless networks and competing for the edge-end computing and communication resources. Digital twin (DT), which establishes the digital models of physical objects for simulation, analysis and optimization, provides a promising method for network scheduling and management. This paper proposes a DT-driven edge-end collaborative scheduling algorithm for heterogeneous tasks and heterogeneous computing/communication resources. Specifically, multiple end devices (EDs) cooperate with each other to accomplish a complex job, where each ED can offload individual task to multiple edge servers (ESs) for parallel computing. By fully considering deadline requirements of heterogeneous tasks, maximum computing capabilities of ESs and EDs, computing resource estimation deviations of DT, maximum transmit powers of EDs and tolerable peak interference powers to coexisting EDs, we formulate a job completion time minimization problem to jointly optimize the edge-end task division, transmit power control, computing resource type matching and allocation. To solve this non-convex problem, we first reformulate it by multi-agent Markov decision process, where a compound reward leveraging latency reward and deadline reward according to the task criticality is designed. Then, we propose a multi-agent deep reinforcement learning-based scheduling algorithm, where Actor-Critic framework with estimation and target networks is designed for policy and value iterations. Meanwhile, a step-by-step $\epsilon $ -greedy algorithm is proposed to balance exploration and exploitation, avoiding local optimal trap. Through offline centralized training by DT and online distributed execution by EDs, we realize edge-end collaborative computing for heterogeneous tasks. Experimental results demonstrate that, comparing with typical benchmark algorithms, the proposed algorithm converges with the highest reward and achieves the smallest job completion time, where the deadlines of heterogeneous tasks can be well satisfied respectively. estimation and target networks is designed for policy and value iterations.Meanwhile, a step-by-step ϵ-greedy algorithm is proposed to balance exploration and exploitation, avoiding local optimal trap.Through offline centralized training by DT and online distributed execution by EDs, we realize edge-end collaborative computing for heterogeneous tasks.Experimental results demonstrate that, comparing with typical benchmark algorithms, the proposed algorithm converges with the highest reward and achieves the smallest job completion time, where the deadlines of heterogeneous tasks can be well satisfied respectively.Index Terms-Digital twin, collaborative scheduling, edge computing, task offloading, multi-agent deep reinforcement learning. I. 
INTRODUCTION W ITH the rapid development of 5G, more and more devices are accessing the Internet and interconnecting humans, machines, and things with each other, towards Internet of Everything [1].Thus, there is an explosion that massive heterogeneous tasks are delivering over the 5G network.The heterogeneous tasks can be video/media tasks that require broadband communications, sensing/measuring tasks that require low-power communications, and industrial control tasks that require realtime computing and deterministic communications.To accomplish a complex job, we need to coordinate these heterogeneous tasks.For example, when an engineer teleoperates a robot for high precision machining, the heterogeneous tasks include holographic media and force-feedback control data for human's visual-haptic-auditory perception, and multi-sensor multi-controller data for robot's positioning, teaching and learning [2].When these heterogeneous tasks implement high-concurrent access [3], they must compete for the limited communication resources distributed in temporal, spatial and frequency domains, such as timeslot, power, antenna, channel, and subcarrier.This will cause communication conflicts, which certainly decrease the quality of experience (QoE). To enhance the QoE, multi-access edge computing (MEC) is proposed to process tasks nearby the end devices (EDs) and reduce the task processing latency.For example, by deploying edge server (ES) at the base station (BS), BS can implement some network management functions and provide computing resources for task processing.Thus, MEC-enhanced 5G is currently regarded as a key enabler for vertical industries.However, employing MEC will also introduce new problems.First, task offloading to ESs will consume the heterogeneous communication resources, which certainly exacerbates the communication resource competition problem.Second, the computing resources distributed at EDs and ESs are also heterogeneous, wherein the computing resources can be supplied by CPU, GPU, or others.In this way, the edge-end computing capabilities for different tasks are also different.Thus, there remains a challenge that how to schedule the heterogeneous computing and communication resources for the massive heterogeneous tasks to realize edge-end collaboration. Digital twin (DT), which establishes the digital models of physical objects for simulation, analysis and optimization, provides a novel way to address the above challenges.DT is initially proposed for cyber-physical production systems to achieve smart manufacturing in Industry 4.0 [4].Since the proposal of DT, it arises great interests from both academia and industries.With the interdisciplinary advances in 5G, cloud/edge computing, big data, and artificial intelligence, the capability of DT is continuously enhanced, empowering not only one-way information mirroring and simulations, but also round-trip interaction and operations.Thus, DT is quickly diffusing in numerous different industries, such as smart city, Internet of vehicles, and 5G.Currently, DT is regarded as a key technology enabling 6G [5].In particular, cybertwin [6] and networked twin [7] are proposed and investigated for network management and automation.Furthermore, DT network [8], and DT edge network [9] are formulated. 
With DT, the heterogeneous computing and communication resources can be virtualized and modelled for flexible scheduling.Motivated by this, we employ DT to collaboratively schedule the edge-end heterogeneous computing and communication resources for the heterogeneous tasks with different deadline requirements.Specifically, we consider a multi-ES multi-ED scenario with a synchronous DT deployed at the cloud server (CS).To accomplish a complex job, EDs cooperate with each other and offload individual tasks to multiple ESs for parallel computing via the scheduling of DT, where different tasks require different types of computing resources and have different task deadlines.To minimize the job completion time (JCT), we employ multi-agent deep reinforcement learning (MADRL) and propose MADRL-based heterogeneous task and resource collaborative scheduling (MADRL-HTRCS) algorithm. The main contributions of this paper are summarized as follows. 1) We study a general scenario with single-CS, multi-ES and multi-ED, where the computing resources of ESs and EDs are heterogeneous and can support different kinds of tasks.We utilize DT to virtualize and model the heterogeneous computing resources, during which DT's estimation deviations between actual and estimated computing resources for ESs and EDs are considered.Meanwhile, the tasks are also heterogeneous and can be totally/partially/none offloaded to multiple ESs for parallel computing.That is to say, each task is processed by the cooperation between ED and multiple ESs. 2) By fully considering the deadline requirements of heterogeneous tasks, the maximum computing capabilities of both EDs and ESs, the computing resource estima-tion deviations of DT, the maximum transmit powers of EDs and the peak interference powers to coexisting EDs, we establish a job completion time minimization (JCTM) problem to optimize the edge-end task division, transmit power control, computing resource type matching and allocation.Due to the non-convexity of the JCTM problem, we reformulate it by multi-agent Markov decision process (MDP), where each ED is modelled as an agent interacting with environment and other agents independently.Furthermore, we design a compound reward leveraging latency reward and deadline reward according to the task criticality.3) To approximate an optimal solution, we propose the MADRL-HTRCS algorithm that supports offline centralized training by DT and online distributed execution of EDs.Specifically, we employ the Actor-Critic (AC) framework and design estimation and target AC networks for policy and value iterations.Moreover, a step-by-step ϵ-greedy algorithm is applied to balance exploration and exploitation.Extensive experiments validate the effectiveness and superiority of the proposed algorithm by comparing with some benchmark algorithms.The rest of this paper is organized as follows.In Section II, the recent works on task-resource scheduling are reviewed.In Section III, the DT-based system model is presented.After that, we establish the JCTM problem and reformulate it by multi-agent MDP in Section IV.Then, we propose the MADRL-HTRCS algorithm in Section V, and validate its effectiveness by extensive experiments in Section VI.Finally, the whole work is concluded in Section VII. II. 
RELATED WORK Task-resource collaborative scheduling is the basis of cloud/edge computing and has attracted broad interests during the past decade.Previous works have comprehensively investigated different MEC scenarios, employed different theories or algorithms, and optimized different objectives to achieve different goals.Specifically, the MEC scenario is related to the numbers of ESs and EDs, including single-ES multi-ED, multi-ES single ED, multi-ES multi-ED and so on.Herein, a task can be divided or not, and offloaded to one ES or multiple ESs.Typical task offloading schemes include binary offloading and partial offloading [10].Meanwhile, the scheduling objectives can be tasks (e.g., offloading ratio), computing resources (e.g., CPU cycle) and communication resources (e.g., transmit power, bandwidth, channel, subcarrier).On this basis, existing works formulate different optimization problems aiming at minimizing energy, latency or cost, and propose different algorithms based on convex optimization, machine learning, etc.In the following, we summarize the most related works from the perspective of optimization objectives. A. Energy Consumption Minimization Energy consumption is always a key indicator for wireless networks, especially for energy-constrained EDs.The energy consumption mainly includes the communication energy consumption for task offloading and the computing energy consumption for task processing. For single-CS single-ES multi-ED scenario, [11] employs deep Q-network (DQN) to study the long-term energy consumption minimization problem under the constraint of computing resources and latency.For multi-ES single-ED scenario, to minimize the overall energy consumption while satisfying the latency limit, [12] investigates the joint optimization of multi-task offloading, non-orthogonal multiple access (NOMA) transmission, and computing resource allocation.Similarly, [13] establishes a three-layer offloading framework and investigates the overall energy consumption minimization problem subject to latency constraints from EDs.Moreover, [14] investigates stochastic computation offloading and resource allocation problem to optimize long-term energy efficiency using Lyapunov optimization and asynchronous AC algorithm in DT network.Reference [15] proposes a deep learning-based user association and resource allocation algorithm which is trained by DT to minimize the maximum normalized energy consumption. B. Latency Minimization Although MEC can help process complex tasks for computation-intensive EDs, it may also introduce traffic conflicts and increase communication latency.Thus, minimizing the latency, which consists of communication latency and computing latency, is also very important, especially for timesensitive tasks. 
For single-ES multi-ED scenario, [16] investigates the long-term caching placement and resource allocation problem, and adopts deep reinforcement learning (DRL) to minimize the content delivery latency.In contrast, [17] studies multi-ES single-ED scenario and optimizes the NOMA-based transmission duration and task division to multiple ESs.Furthermore, [18] employs convex optimization and MADRL to jointly optimize sub-channel assignment, offloading decision, and computing resource allocation in multi-ES multi-ED scenario.Reference [19] proposes a risk-sensitive DRL algorithm to minimize the offloading and computing latency of all tasks constrained by given energy capacity.By modelling the user mobility and environment dynamics in DT, [20] proposes an AC-based DRL algorithm to minimize the offloading latency under the constraint of service migration cost for user mobility.Moreover, with DT and blockchain, [21] minimizes the latency for edge association by federated MADRL. C. Cost Minimization In addition to optimizing energy consumption and latency respectively, more recent works focus on minimizing the system cost, which is usually defined as a weighted sum of energy consumption and latency.In this way, the system performance can be optimized according to the requirements of specific tasks. For single-ES multi-ED scenario, [22] combines AC and DQN algorithms to jointly optimize the task offloading policy and channel allocation for time-varying channels.Reference [23] considers the case that multiple EDs offload their tasks via NOMA to multiple ESs, and employs reinforcement learning and matching game theory to solve the joint task scheduling and resource allocation problem with respect to task, power, subcarrier, and computing frequency.Furthermore, [24] exploits MADRL to optimize offloading decisions and transmit powers for edge-end orchestrated resource allocation of industrial wireless networks.Based on asynchronous advantage AC and DQN algorithms, [25] optimizes offloading decisions, node selection, bandwidth and computing resource allocations for single-CS multi-ES multi-ED scenario, wherein DT is utilized in the cloud.More recently, [26] considers multi-ES single-ED scenario based on DT and blockchain, and proposes a decision tree and double DQN (DDQN) solution for intelligent task offloading.Moreover, [27] proposes adaptive DT for vehicular edge network and employs MADRL to minimize the offloading cost. Besides the above works, some works also define special optimization objectives for task-resource scheduling.For example, [28] formulates a multi-objective problem to minimize latency and energy consumption simultaneously, and employs MADRL to make an optimal offloading decision for cloud-edge-end computing.Reference [29] proposes an endto-end DRL algorithm to maximize the number of tasks before their respective deadlines and minimize energy consumption simultaneously.Reference [10] formulates a computing rate maximization problem subject to the long-term data queue stability and average power constraints, and employs Lyapunov and DRL to achieve the optimal computing performance.In addition, by proposing a D3PG-based task offloading algorithm, [30] tries to maximize QoE with respect to service latency, energy consumption and task success rate.Reference [31] employs DQN to maximize average QoE for DT-empowered Internet of vehicles. 
From the aforementioned works, we can observe that existing studies on task-resource collaborative scheduling by DT are still on the early stage.More importantly, few existing works consider the heterogeneous computing resources problem, where different tasks require different types of computing resources.This motivates us to investigate the DT-driven edge-end heterogeneous computing and communication collaborative scheduling for heterogeneous tasks. III. SYSTEM MODEL In this section, we present the system model, including the network model, communication model, edge and local computing models.For ease of reading, we list the key notations in Table I. A. Digital Twin-Based Network Model In this paper, we consider a general single-CS, multi-ES and multi-ED scenario.As shown in Fig. 1, there are one DT-embedded CS, N ES-enhanced BSs and M resourceconstrained EDs in the physical space.Specifically, with full consideration of the strong computing capability and multi-type computing resources of CS, DT is deployed in CS Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.to mirror and model all physical network elements into the cyber space.In this way, DT senses the heterogeneous tasks, measures the heterogeneous computing and communication resources, trains the following proposed scheduling algorithm and schedules the heterogeneous tasks and resources. TABLE I SUMMARY OF KEY NOTATIONS To accomplish a complex job, multiple EDs cooperate with each other, where each ED implements individual task, respectively.The tasks are heterogeneous that have different data sizes, require different types of computing resources and should be completed before different deadlines.For m-ED, the data size and task deadline are denoted as D m and T max,m (m = 1, . . ., M ).Due to the limited computing resource, an ED can divide its task into multiple subtasks and offload them to different ESs for parallel computing.The task may be totally offloaded for full edge computing, none offloaded for full local computing or partially offloaded.Specifically, On the contrary, m-ED offloads the total task to n-ES when v m,n = 1. To guarantee all subtasks are processed for the task of m-ED, we have the task division constraint as where n = 0 indicates the subtask for local processing. B. Communication Model According to the scheduling by DT, m-ED employs the transmit power p m for task offloading.The transmit power of m-ED must be constrained by its hardware capability, i.e., where the maximum transmit powers of all EDs are assumed the same as P max .Meanwhile, the task offloading of EDs on the same wireless channel may cause co-channel interference with each other.Each ED has an interference temperature which is the peak interference power that an ED can tolerate.For simplicity, we assume the tolerable peak interference powers of EDs are the same as I p .When m-ED performs task offloading, its transmit power is constrained by all coexisting EDs, where m * -ED with the maximum channel power gain from m-ED imposes the strongest constraint.That is m * = arg max g m,m ′ , (m ′ = 1, . . 
., M, m ′ ̸ = m).Then, by fully considering all possible offloading interferences to m * -ED, the transmit power of m-ED is constrained by where g m,m * and g m ′ ,m * denote the channel power gains from m-ED and m ′ -ED to m * -ED, respectively.Herein, the channel between any pair of EDs and/or ESs is assumed to be Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply. symmetric and the channel state information can be accurately evaluated by DT for modelling and scheduling.Note that this assumption can be easily extended to the asymmetric channels with/without evaluation error.By transmit power control for wireless communication, we can calculate the task offloading rate between m-ED and n-ES as where W m,n denotes the bandwidth between m-ED and n-ES for task offloading, σ 2 n denotes the noise at n-ES, g m,n and g m ′ ,n denote the channel power gains from m-ED and m ′ -ED to n-ES, respectively. According to equation ( 4), we can further calculate the communication latency for task offloading. It is observed that equations ( 3) and ( 4) take into account the interferences of all possible EDs that offload tasks to ESs.When the number of EDs is large, the co-channel interferences become large, which limits the transmit powers of EDs and certainly reduces their offloading rates.That is to say, when EDs perform high-concurrent task offloading, there will be significant computing and communication resources competitions among EDs.In this way, an efficient task-resource collaborative scheduling algorithm is critical for job completion. C. Edge Computing Model The computing resources of ESs or EDs are single-type and heterogeneous, e.g., CPU, GPU.For example, an ED processing image or media task only equips with GPU, while an ED processing sensing or control data only equips with CPU.Obviously, a GPU-type task offloaded to a CPU-equipped ES will not be processed with high efficiency.Thus, we assume that an ES with given type of computing resource, can only process the task offloaded by the ED with the same type of computing resource. By mapping the physical computing resources of EDs and ESs at DT, DT can match the types of heterogeneous computing resources distributed at ESs and EDs.The heterogeneous computing resource type matching decision is expressed as where where C m is the required cycles for computing 1 Byte task.However, DT may have a computing resource estimation deviation ∆f m,n which may be either positive or negative.In this way, the actual computing resource of n-ES allocated to m-ED is calculated as f m,n + ∆f m,n which should satisfy and where F max,n denotes the maximum computing rate of n-ES. Then, we can calculate the computing latency deviation between the actual value and the estimated value, i.e., For the subtasks offloaded from m-ED, the actual computing latency by n-ES is calculated as Furthermore, the edge computing latency for the subtasks of m-ED by n-ES is calculated as wherein the computing results' feedback latency from n-ES to m-ED is ignored since the data size of feedback is generally very small and can be carried back by the acknowledged information during communication. As the task of m-ED is divided into multiple subtasks and offloaded to multiple ESs for parallel processing, the edge computing latency for the total task of m-ED is calculated as D. 
Local Computing Model Similar to the edge computing model, the local computing resource of m-ED estimated by DT is denoted as f m .Then, the local computing latency estimated by DT is calculated as There is also estimation deviation ∆f m which can be obtained by DT in advance [20], [26].Thus, we have where F max,m is the maximum computing rate of m-ED decided by the physical hardware.This is because each ED should utilize the full computing resource to process the Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply. local task for latency reduction.The local computing latency deviation is calculated as In this way, the actual local computing latency by m-ED is calculated as IV. PROBLEM FORMULATION AND TRANSFORMATION A. Job Completion Time Minimization Problem As a task is completed by DT-driven edge-end collaborative computing, the task processing latency of m-ED is calculated as the maximum latency for edge computing and local computing, i.e., which includes the cases of none, partial and total offloading.Then, we can calculate JCT as M m=1 T m , where a job is completed by the sequel completion of all tasks.Furthermore, with full consideration of heterogeneous tasks' requirements, heterogeneous computing and communication resources constraints, the JCTM problem is formulated as (1), ( 2), ( 3), ( 6), ( 8), ( 9), where In the JCTM problem, we consider the task division constraint as (1), the transmit power constraints as (2) and (3), the computing resource type matching decision as (6), the computing capability constraints as ( 8) and ( 9), and the task deadline constraint as (20).Obviously, there are both integer and real variables, which are coupled with each other in the JCTM problem.Thus, it is a mixed integer non-linear programming problem, which is NP-hard and cannot be solved within a polynomial time by common methods such as convex optimization [32].Thus, we employ multi-agent MDP to transform the problem for MADRL solution. B. Problem Transformation by Multi-Agent MDP By the estimation and scheduling of DT in cloud, EDs and ESs cooperate with each other to complete a complex job.In this way, any action of an ED may influence the total system state such as co-channel interference, task division, and resource scheduling by DT.Meanwhile, the state transformation is also related with previous state and action.Thus, we employ multi-agent MDP to reformulate the JCTM problem.The multi-agent MDP is described by five tuples ⟨M, S, A, Z, R⟩, where M, S, A, Z, and R denote the agent set, state space, action space, state transition probability, and reward function, respectively. 1) Agent Set M: Aiming at minimizing JCT, each ED acts as an agent to learn its computing resource type matching decision, task division ratio, transmit power, and computing resource allocation.Thus, M EDs form an agent set M = {1, . . ., M }. 2) State Space S: The state space describes the running status of tasks as well as edge-end computing and communication resources, which can be observed by agent and evaluated by DT.At each decision epoch t, the state s m (t) of m-agent is characterized by data size, computing resource requirement, task deadline, computing resource estimation deviation, bandwidth and channel power gain, i.e., where Furthermore, we define the total state space of all agents at the decision epoch t as s(t) = {s m (t)} M . 
3) Action Space A: The action space presents the policies of all agents.At each decision epoch t, m-agent performs action a m (t) according to the whole state s(t) subject to the constraints in the JCTM problem.The action describing the computing resource type matching decision, task division ratio, transmit power of ED, and computing resource allocation of ES, is given by where u m (t) = {u m,n (t)} 1×N , v m (t) = {v m,n (t)} 1×(N +1) , and f m (t) = {f m,n (t)} 1×N .Furthermore, we define the total action space of all agents at the decision epoch t as a(t) = {a m (t)} M .4) State Transition Probability Z: At each decision epoch t, the state transition probability z m (t) describes the probability that s m (t) transfers to s m (t + 1) when m-agent performs action a m (t), namely z m (s m (t + 1); s m (t), a m (t)). 5) Reward Function R: The reward presents the award or penalty for agent when it takes action at a given state.For multi-agent MDP, M agents interact with environment and cooperate with each other according to the state and the policy to obtain individual reward r m (t).Specifically, at each decision epoch t, m-agent performs action a m (t) at state s m (t), obtains the reward r m (t) and moves to the next state s m (t + 1). Fully considering the objective and constraints in the JCTM problem, we design the reward as the sum of latency reward and deadline reward.The latency reward is defined as r Latency m (t) = −T m (t), while the deadline reward is defined as r DDL m (t) = T max,m (t) − T m (t).In this way, when the latency exceeds the deadline, there is a negative reward, namely penalty. As there are diverse requirements of heterogeneous tasks, we design the compound reward of m-agent as where ρ m is the weight parameter set according to the deadline requirements of heterogeneous tasks.That is, the larger value of the weight parameter, the stricter deadline of this task.On the above basis, we further define the long-term accumulative reward of m-agent as where t 0 denotes the previous time, and γ m ∈ [0, 1] denotes the discounted factor indicating how the past reward impacts the current reward for m-agent. By maximizing the long-term accumulative reward of each agent, DT can obtain an optimal task-resource collaborative scheduling policy that minimizes JCT. V. MADRL-BASED HETEROGENEOUS TASK AND RESOURCE COLLABORATIVE SCHEDULING ALGORITHM Generally, the reformulated MDP problem can be solved by dynamic programming when the state transition probability is known.However, it is quite difficult to obtain the state transition probability since the environment is dynamic and the agent cannot predict the next state before taking action.Moreover, there exists the state space explosion problem due to the complex coupling of optimization values of multiple agents.Thus, we employ the model-free MADRL and propose the MADRL-HTRCS algorithm to learn an optimal solution. A. Algorithm Design The structure of MADRL-HTRCS algorithm is depicted in Fig. 2. 
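Before turning to the network structure, it is useful to restate the main quantities of Sections III and IV, whose displayed equations did not survive extraction. The forms below are a plausible reconstruction inferred from the surrounding definitions (offloading rate with co-channel interference, task latency as the slower of edge and local computing, JCT as the completion time of the last task, and the compound and accumulative rewards); they are not guaranteed to match the authors' exact expressions.

```latex
% Plausible reconstruction only; symbols as defined in Sections III-IV.
\begin{align*}
  R_{m,n} &= W_{m,n}\,\log_{2}\!\left(1+\frac{p_{m}\,g_{m,n}}
             {\sum_{m'\neq m} p_{m'}\,g_{m',n}+\sigma_{n}^{2}}\right), \\
  T_{m}   &= \max\!\left(T_{m}^{\mathrm{edge}},\;T_{m}^{\mathrm{local}}\right),
             \qquad \mathrm{JCT}=\max_{m=1,\dots,M} T_{m}, \\
  r_{m}(t) &= r_{m}^{\mathrm{Latency}}(t)+\rho_{m}\,r_{m}^{\mathrm{DDL}}(t),
             \qquad
  R_{m} = \sum_{t\geq t_{0}} \gamma_{m}^{\,t-t_{0}}\, r_{m}(t).
\end{align*}
```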
We employ the Actor-Critic structure as basis, where the actor is used to generate action for the agent while the critic is used to guide the actor for generating a better action.The actor further includes estimation actor network which is used for training, and target actor network which is used for action execution of agent.Similarly, the critic also includes estimation critic network and target critic network which are used to evaluate the action of actor.Herein, the actor network employs policy-based deep neural network (DNN) while the critic network employs value-based DNN. Fully considering the dynamics of environment, we adopt the centralized training and distributed execution strategy.That is, the estimation critic network and target critic network are trained by DT in a centralized way while estimation actor network and the target actor network are executed by EDs in a distributed way. 1) Actor Network: As shown in Fig. 3a, the actor network consists of an input layer, a fully connected layer and an output layer, where the fully connected layer includes three hidden layers and a softmax layer.For the first two hidden layers, we use the rectified linear unit (ReLU) as the activation function for nonlinear approximation.For the final hidden layer, we use tangent (Tanh) as the activation function to bound actions.In this way, the input state is transformed into all possible actions with respect to computing resource type matching decision, task division ratio, transmit power, and computing resource allocation. For the estimation actor network of m-agent, the input is its current state s m (t), indicating data size, computing resource requirement, computing resource type, estimation deviation, task deadline, bandwidth and channel power gain.After the processing of three hidden layers, the outputs are the probabilities of different actions.With the softmax layer, the sum of the output probability of each action is 1.Then, an action is selected as the final output action a m (t). Similarly, for the target actor network of m-agent, the input is the next state s m (t + 1), while the output is the next action a m (t + 1) after the processing of fully connected layer.Note that although the estimation actor network and the target actor network employ the same DNN structure, their parameters are different, which are denoted as θ πm and θ ′ πm , respectively.2) Critic Network: As shown in Fig. 3b, the critic network consists of an input layer, a fully connected layer, and an output layer, where the first two hidden layers of the fully connected layer are also associated with ReLU. For the estimation critic network of m-agent, the inputs are the states and actions of all agents, namely S and A. After the processing of fully connected layer, the output is the Q-value.At the decision epoch t, the Q-value of m-agent is defined as Similarly, for the target critic network of m-agent, the inputs are the next states and actions of all agents at the decision epoch t+1, denoted as S ′ and A ′ .Correspondingly, the output Qm after the processing of fully connected layer.The structures of estimation critic network and target critic network are also the same, but with parameters θ Qm and θ ′ Qm , respectively. B. Algorithm Training The centralized training of the MADRL-HTRCS algorithm is implemented by DT, as summarized in Algorithm 1. 
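As a compact illustration of the layer layout just described, a minimal actor network can be sketched as follows. The paper reports a TensorFlow 1.14 implementation; PyTorch is used here purely for brevity, and the state and action dimensions are assumed values rather than figures taken from the paper.

```python
# Illustrative sketch of the actor-network layout described above: two ReLU
# hidden layers of 300 and 100 neurons, a Tanh-bounded layer sized to the
# action space, then a softmax so the outputs form action probabilities.
# Dimensions are assumptions; this is not the authors' implementation.
import torch
import torch.nn as nn

class ActorNetwork(nn.Module):
    def __init__(self, state_dim=32, action_dim=16):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(state_dim, 300), nn.ReLU(),    # hidden layer 1
            nn.Linear(300, 100), nn.ReLU(),          # hidden layer 2
            nn.Linear(100, action_dim), nn.Tanh(),   # bound the raw action scores
            nn.Softmax(dim=-1),                      # probabilities sum to 1
        )

    def forward(self, state):
        return self.layers(state)

# A critic would instead take the concatenated states and actions of all
# agents and output a single Q-value; only the actor is sketched here.
action_probs = ActorNetwork()(torch.randn(1, 32))
```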
Specifically, the critic network of each agent is managed by DT which can obtain the states and actions of all agents and make them fully observable to each agent.In this way, from the perspective of one agent, the environment is static no matter what action is taken by other agents. During the centralized training, DT first gets a global view on the states and actions of all agents, and then utilizes the information to train the estimation critic network for each agent, with the objective of maximizing the Q-value.For m-agent, its Q-value Q m (S, A; θ Qm ) is updated according to the Bellman criterion [33] as In this way, the temporal difference error is calculated as and the loss function is given by L(θ Qm ) = E δ 2 .Then, we update the parameter θ Qm by minimizing the loss function, wherein the stochastic gradient descent algorithm is adopted as follows. In contrast, the actor network of each agent is deployed in ED since the actor is locally executable and can take actions according to its locally observed state.Note that it is also necessary to train the local observed states by DT which can periodically synchronize the trained DNN parameters to Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.all agents.Herein, the parameter θ πm is updated by gradient descent as where π m (a m (t); θ πm ) indicates the policy by taking action a m (t). To ensure the stability of training process, we softly update the parameters of target AC network by the historical parameters of the estimation AC network as follows. C. Algorithm Execution After the centralized training by DT, EDs perform distributed execution as summarized in Algorithm 2. Specifically, m-agent first downloads the training results by DT and inputs them into its own target actor network.Then, m-agent observes the environment and state s m (t), and generates its action a m (t) for the reward r m (t) according to the trained policy π m . Initially, the agent takes actions randomly for exploration since there is not enough knowledge.When the knowledge is enough, the agent takes actions to maximize its reward.Thus, there always exists a tradeoff between exploration and exploitation, where too many explorations will affect the stability of long-term Q value calculation while too many exploitations will cause the insufficient exploration of the action space.The conventional greedy algorithm simply selects the optimal action for reward maximization, resulting in the loss of some efficient actions and the corresponding knowledge.Thus, we propose a step-by-step ϵ-greedy algorithm to balance exploration and exploitation as follows. with where a Exploration m (t) is the exploration action randomly selected while a Exploitation m (t) is the exploitation action selected from the explored action space; ϵ 0 is a positive value for initial exploration, β is the decreasing rate of exploration, and K is the training iterations.Obviously, with the iteration of training, ϵ decreases and the agent gradually transfers from exploration to exploitation.In this way, we can balance exploration and exploitation, and avoid the oscillation caused by setting a large ϵ for long-time. D. 
Algorithm Complexity Analysis We further analyze the computational complexity of the proposed MADRL-HTRCS algorithm.The computational complexity mainly depends on the structure of neuron network and its number of parameters.As both actor network and critic network employ DNN, the computational complexity is calculated based on that of DNN.Given a DNN employing L layers with O l neurons in l-th layer, the computational complexity is calculated as [20].Thus, the computational complexities of actor network and critic network are calculated as O(J a ) and O(J c ), respectively. At the centralized training stage, M agents with E experiences are trained for K iterations, and the computational complexities of actors and critics are O a (J a KE M ) and O c (J c KE M ), respectively.The training process is offline completed by DT in the CS, which can provide sufficient computing resources. At the distributed execution stage, each agent executes action according to the actor network, and the computational complexity of actor is calculated as O a (J a ).The execution process is online completed by ED independently, and can guarantee the timeliness. VI. PERFORMANCE EVALUATION To evaluate the performance of the proposed MADRL-HTRCS algorithm, we implement numerical experiments, and analyze the effectiveness and superiority by comparing with some benchmark algorithms in this section. A. Experiment Environment and Setting 1) Learning and Training Environment: The hardware setup includes Intel i7-13700k CPU and NVIDIA RTX4090-24G GPU, while the software environment includes TensorFlow-GPU-1.14.0 and Python-3.7. The parameters of DNN are set as follows.For Actor, the numbers of neurons for the first and second hidden layers are respectively set to 300 and 100, while that for the third hidden layer is set according to the dimensionality of possible actions.For Critic, the numbers of neurons for the three hidden layers Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.are set to 300, 100 and 1, respectively.During training, the learning rates of Actor and Critic networks are respectively set to γ a = 10 −4 and γ c = 10 −3 , the discount factor is set to γ m =0.9, the initial value and decreasing rate for exploration are set to ϵ 0 = 0.9 and β = 10 −4 , respectively [24]. B. Performance Evaluation In Fig. 5, we first evaluate the convergence of MADRL-HTRCS by the step-by-step ϵ-greedy algorithm.Obviously, JCT by different exploration parameters converges around different values.When the exploration parameter is set to 0 or 1, JCT oscillates in a certain interval.This is because the ϵ-greedy algorithm keeps on randomly choosing actions or exploring state space, respectively.In contrast, when there are both exploration and exploitation (i.e., ϵ 0 ̸ = {0, 1} ), JCT first decreases and then converges.For the same iteration, the larger exploration value, the smaller JCT.This is because a good action space can be established by sufficient exploration.To be specific, when the exploration parameters are 0.1 and 0.9, the total iterations required for convergence to approximate the minimum JCT are around 2000 and 4000, respectively.Thus, without loss of generality, we set ϵ 0 = 0.9 in the following experiments as there is a good balance between exploration and exploitation.Furthermore, Fig. 
6 compares the normalized reward of MADRL-HTRCS with those of MADRL-PSES, MADRL-BSES and DDQN-HTRCS.With the increase of training iterations, the normalized rewards of all scheduling algorithms increase from small values to large values, and maintain at stable intervals, respectively.That is to say, all scheduling algorithms can converge, which validates their effectiveness by DRL.When these scheduling algorithms converge, the three MADRL-based algorithms obtain higher normalized rewards than the single-agent DDQN-HTRCS algorithm.This phenomenon validates the superiority of MADRL in distributed execution, while its complexity for centralized learning is not higher than that of single-agent DRL.In particular, MADRL-HTRCS obtains the highest normalized reward.This is because MADRL-HTRCS can offload partial task to multiple ESs for parallel computing according to the computing resource utilization of each ES and the channel states among EDs and ESs.In contrast, MADRL-PSES can only offload partial task to a single ES, while MADRL-BSES can only offload the whole task to a single ES or process the task locally.In this way, the utilization of computing and communication resources by MADRL-PSES and MADRL-BSES are not sufficient as those by MADRL-HTRCS.Fig. 7 compares JCT by different scheduling algorithms for different numbers of EDs.When the number of EDs is small (e.g., M =5), indicating that the job is not complex, JCT by any number of ESs and/or by any scheduling algorithms is almost the same with a small value.This is because the computing and communication resources are sufficient for EDs' tasks.With the number of EDs increasing, JCT of all algorithms increases.The reason is explained as follows.When more EDs participate in the job, namely the job becomes more complex, EDs must compete for the given computing and communication resources, resulting in the increase of both computing latency and communication latency.When the number of EDs becomes large (e.g., M =35), the performance gaps among different scheduling algorithms become large.In particular, JCT of DDQN-HTRCS is the largest while that of MADRL-HTRCS is the smallest.This phenomenon validates that MADRL-HTRCS is more suitable for massive heterogeneous tasks collaborations. More specifically, Fig. 8 evaluates how the estimation deviation of DT influences JCT by MADRL-HTRCS, where ∆f ≜ ∆f n = ∆f m,n = {−0.5,−0.2, 0, 0.2, 0.5} are selected to make the figure clear.With the increase of estimation deviation, JCT decreases correspondingly.Specifically, when the estimation deviation is positive, JCT is smaller than that without estimation deviation (i.e., ∆f = 0).This is mainly because the required computing resources are over estimated, and more computing resources are allocated for actual computing, which certainly reduces the computing latency and the corresponding JCT.On the contrary, when the estimation deviation is negative, the computing resources actually allocated are less than the required computing resources, which increases JCT. Fig. 9 further presents how the number of ESs influences JCT.Obviously, for given number of EDs, JCT decreases with the increase of ESs.This is because the computing resources are enhanced with more ESs deployed, and the computing Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.latency is reduced accordingly.Meanwhile, MADRL-HTRCS always obtains a smaller JCT than DDQN-HTRCS.Fig. 
10 demonstrates how the maximum transmit power of EDs influences JCT for different numbers of EDs.When the maximum transmit power of EDs is 0 mw, EDs do not offload any tasks to ESs for edge computing, and process tasks totally based on local computing resources.Thus, for the same number of EDs, JCT by MADRL-HTRCS and DDQN-HTRCS is the same but very large.With the maximum transmit power of EDs increasing, JCT decreases.This is because EDs can offload tasks to ESs under the peak interference power constraints.However, when the maximum transmit power of EDs achieves certain values, JCT no longer decreases.This is due to the fact that the transmit powers of EDs achieve the tolerable peak interference power, and cannot be further enhanced in order to protect other EDs.In addition, for given maximum transmit power of EDs, JCT increases with the numbers of EDs increasing due to the same reason as Fig. 7. Also, MADRL-HTRCS obtains better performance than DDQN-HTRCS. Fig. 11 comprehensively investigates the impacts of the maximum transmit power and the peak interference power of EDs on JCT.When the peak interference power of EDs is very small (e.g., I p = 10 −6 mw), JCT is almost equally large, regardless of the maximum transmit power of EDs.This is because the transmit powers of EDs are too small to approach 0 mw since they are strictly constrained by the peak interference power of coexisting EDs.With the increase of peak interference power of EDs, namely relaxing the interference constraint, JCT decreases since EDs can employ suitable transmit powers to offload tasks to ESs.When the peak interference power achieves certain values, JCT doesn't decrease accordingly since the transmit powers of EDs are no longer constrained by the peak interference power but constrained by the maximum transmit power.Specifically, when the peak interference power is around I p = 10 −2 mw, JCT for P max = 50 mW first doesn't decrease since the transmit powers of EDs achieve their maximum values.In contrast, JCT for P max = 100 mW and P max = 200 mW can further decrease but successively converge later. In detail, Fig. 12 further presents the average processing latencies of heterogeneous tasks by MADRL-HTRCS.We can observe that the task processing latency of each type also increases with the number of EDs, and can well satisfy the differentiated deadline requirements even for M = 30.Herein, the task processing latency of control task is the smallest, while that of multimedia task is the largest.These observations validate that DT employing MADRL-HTRCS can well schedule the heterogeneous computing and communication resources according to the requirements of heterogeneous tasks.When the number of EDs further increases (e.g., M = 35), the latencies of sensing and control tasks no longer satisfy their required deadlines.This is mainly because their resources requirements cannot be well satisfied when massive EDs compete for the given computing resources of ESs.For this case, if we want to guarantee the requirements of heterogeneous tasks, more resources should be deployed, such as deploying more ESs with more computing resources. Correspondingly, Fig. 13 depicts the status of tasks division ratios of EDs and computing resource allocations of ESs by MADRL-HTRCS.We can observe that the computing resources allocation of each ES in Fig. 13(b) is generally proportional to the task division radio of each ED in Fig. 
13(a).This is due to the task-oriented and on-demand resource scheduling by MADRL-HTRCS, which can divide task, control transmit power, match computing resource type allocate computing resources according to the deadline requirements, offloading interferences and channel states. VII. CONCLUSION In this paper, we proposed a DT-driven edge-end collaborative scheduling algorithm for heterogeneous tasks and resources based on MADRL.With full consideration of deadline requirements of heterogeneous tasks, heterogeneous computing resource types and capabilities of EDs and ESs, computing resource estimation deviation of DT, maximum transmit power and tolerable peak interference power of EDs, we formulated the JCTM problem to divide tasks for parallel computing, match the type of edge-end computing resource, allocate computing resources of ESs, and control transmit powers of EDs.Due to the non-convexity of the JCTM problem, we transferred it into a multi-agent MDP problem, where a compound reward consisting of latency reward and deadline reward was designed.Then, we employed MADRL to deal with the explosive state space and proposed the MADRL-HTRCS algorithm to approximate the optimal solution.With extensive experiments, we minimized JCT through offline centralized training by DT and online distributed execution by EDs.The results showed that, MADRL-HTRCS can satisfy the deadlines of heterogeneous tasks and achieve the smallest JCT comparing with typical benchmark algorithms. ) , P = {p m } M , and F = {f m,n } M ×N are the computing resource type matching decisions, task division ratios, transmit powers of EDs, and computing resources of ESs. Fig. 3 . Fig. 3. Structure of actor network and critic network. 2 ) Network Environment and Resource Setup: We consider a dynamic wireless network environment where the numbers of ESs and EDs are set to N = 3 ∼ 4 and M = 5 ∼ 35 to cover a given area.As depicted in Fig. 4, ES-enhanced BSs are fixed in given positions while EDs are randomly deployed within the coverage of BSs.By calculating the distances among EDs and ESs, we can obtain the channel power gains, where the path loss exponent is set to 3. For simplicity, the bandwidths for task offloading are equally set to W m,n = 20 MHz, the maximum transmit powers of EDs are equally set to P max =200 mW, while the noise powers are equally set to σ 2 n = 10 −11 mW [20].To evaluate the influence of interference constraints, the tolerable peak interference powers of EDs are equally set to I p = 10 −6 ∼ 10 6 mW.Without loss of generality, we assume there are only CPU and GPU computing resources randomly configured to ESs during experiments.The maximum computing resources of ESs are equally set to F max,n =100 GHz/s, while those of EDs are equally set to F max,m = 5 GHz/s.The computing resource estimation deviations of DT for ESs and EDs are randomly set to [−0.5, 0.5] GHz/s.3) Heterogeneous Task Setup: We consider three kinds of tasks, namely control task, sensing task, and multimedia task.The control tasks have small data size with D m ∈ [10, 300) Bytes and strict deadline T max,m = 10 ms, the sensing tasks have medium data size with D m ∈ [300, 1000) Bytes and medium deadline T max,m = 50 ms, while the multimedia tasks have big data size with D m ∈ [1000, 1500) Bytes and slack deadline T max,m = 100 ms.Correspondingly, we set ρ m = 300, 200, 100 for control, sensing and multimedia tasks, respectively.During experiments, control, sensing and multimedia tasks are randomly generated and their ratios to Fig. 10 . Fig. 
10.JCT versus the maximum transmit power of EDs for different numbers of EDs: N = 3, Ip =1 mW. Fig. 11 . Fig. 11.JCT versus the tolerable peak interference power of EDs for different maximum transmit powers of EDs: N = 3, M = 10. By matching the heterogeneous computing resources among ESs and EDs, DT further evaluates the edge-end resources and schedules tasks for parallel computing.The computing resource is measured by computing rate f m,n , namely the number of computing cycles per second.When m-ED offloads v m,n D m task to n-ES, the edge computing latency estimated by DT is calculated as o n and o m indicate the computing resource types of n-ES and m-ED, respectively.⊗ is the exclusive OR operation.u m,n = 1 indicates that the computing resource types of m-ED and n-ES are the same.Otherwise, the computing resource types are different and n-ES cannot support the task processing for m-ED.
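As a closing illustration of the training ingredients described in Section V, the step-by-step ϵ-greedy schedule and the soft update of the target Actor-Critic parameters can be sketched as follows. The values ϵ0 = 0.9 and β = 10^-4 follow the experiment setup; the linear decay, the update rate tau and the action-selection hooks are assumptions made for illustration only, since the exact expressions are not reproduced in the extracted text.

```python
# Hedged sketch of the step-by-step epsilon-greedy schedule and the soft
# target-network update. The linear decay is one plausible reading of the
# description; tau, explore() and exploit() are placeholders, not the
# authors' interfaces.
import random

def epsilon(k, epsilon_0=0.9, beta=1e-4):
    """Exploration probability after k training iterations, decaying step by step."""
    return max(epsilon_0 - beta * k, 0.0)

def select_action(k, explore, exploit):
    """Explore with probability epsilon(k); otherwise exploit the learned policy."""
    return explore() if random.random() < epsilon(k) else exploit()

def soft_update(target_params, estimation_params, tau=0.01):
    """theta_target <- tau * theta_estimation + (1 - tau) * theta_target."""
    return [tau * e + (1.0 - tau) * t
            for e, t in zip(estimation_params, target_params)]
```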
The crosstalk between autophagy and apoptosis was mediated by phosphorylation of Bcl-2 and beclin1 in benzene-induced hematotoxicity Increasing evidence suggested that benzene exposure resulted in different types of hematological cancer. Both autophagy and apoptosis were reported to play vital roles in benzene toxicity, but the relationship between autophagy and apoptosis remain unclear in benzene-induced hematotoxicity. In this study, the toxic effect of benzene on autophagy and apoptosis in benzene-exposed workers and in vitro were verified. Results showed that benzene metabolite (1, 4-benzoquinone, 1, 4-BQ) dose-dependently induced autophagy and apoptosis via enhancing phosphorylation of Bcl-2 and beclin1. Finally, we also found that the elevated ROS was in line with enhancing the phosphorylation of Bcl-2 and beclin1 which contributed to 1, 4-BQ-induced autophagy and apoptosis. Taken together, this study for the first time found that the effect of 1, 4-BQ on the crosstalk between autophagy and apoptosis were modulated by the ROS generation via enhancing phosphorylation of Bcl-2(Ser70) and phosphorylation of beclin1(Thr119), which offered a novel insight into underlying molecular mechanisms of benzene-induced hematotoxicity, and specifically how the crosstalk between autophagy and apoptosis was involved in benzene toxicity. This work provided novel evidence for the toxic effects and risk assessment of benzene. Introduction Benzene is widely used as an essential industrial material, and is also commonly presented in oil, gasoline vapors, wood-burning and cigarette smoke 1,2 . With the widespread prevalence of benzene, benzene is also known as a common air pollutant in the environment 3 . During industrial production, occupational exposure to benzene occurs in rubber production plants, shoe manufacturing and printing factories mainly via inhalation 4,5 . Exposure to benzene can cause various health hazards, including hematotoxicity 6,7 , aplastic anemia 8 , and human leukemogen 9 . At present, attention on toxic effect of low-dose benzene exposure has been gain. Therefore, it is necessary to explore the effects and mechanisms which evaluates the health effect of benzene exposure. Various biological processes could induce the benzene toxicity, such as oxidative stress, apoptosis and autophagy. Although recent studies have shown that oxidative stress is widely recognized as a major factor of benzene-induced hematotoxicity, the cause of benzene toxicity has not been fully elucidated [10][11][12] . The studies have reported benzene activated reactive oxygen species (ROS) in the workers of benzene exposure [11][12][13] . However, the exact molecular mechanisms underlying the effect of oxidative stress on benzene-induced hematotoxicity are still not clear. A recent reported revealed that the activation of oxidative stress resulted in apoptosis 14 . Besides, apoptosis was reported to be involved in benzene-induced toxicity 11,12,15 . Several studies revealed that autophagy was induced under benzene exposure and suggested that autophagy played a key role in benzene toxicity 15,16 . Recently, there has been growing interest in the relationship between autophagy and apoptosis. However, it is still not clear whether benzene regulated the crosstalk between autophagy and apoptosis. 
Apoptosis and autophagy are different types of cell death, and there are various molecular mechanisms involved in the regulation of the crosstalk between autophagy and apoptosis, such as the expression of beclin1 and Bcl-2, beclin1-Bcl2 complex and the modification of beclin1 and Bcl-2 17,18 . Among them, the interaction and modification of beclin1 and Bcl-2 played a key role in modulating the crosstalk between autophagy and apoptosis. Moreover, previous findings showed that Bcl-2 regulated autophagy via beclin1 19,20 , and Bcl-2 phosphorylated resulted in dissociating from beclin1 and induction of autophagy 21,22 . However, whether the expression, interaction and modification of beclin1 and Bcl-2 are involved in the crosstalk between benzene-induced autophagy and apoptosis remained unclear. Hence, we proposed that benzene stimulated ROS generation and then induced the oxidative stress that led to the phosphorylation of beclin1 and Bcl-2, which accelerated the activation of autophagy and apoptosis causing benzene-induced hematotoxicity. This study firstly measured the effects of benzene on the crosstalk between autophagy and apoptosis. Then, the relationship between autophagy and apoptosis was detected. Further, the effects of beclin1-Bcl2 complex on benzene-induced autophagy and apoptosis was evaluated. Eventually, the autophagy inhibitor 3-Methyladenine (3-MA) and the apoptotic inhibitor (Z-VAD-FMK) were employed to deeply investigate the molecular mechanisms by which benzene effected the crosstalk between autophagy and apoptosis. This study for the first time revealed that benzene-induced hematotoxicity via enhancing phosphorylation of Bcl-2 and phosphorylation of beclin1, which contributed to the crosstalk between autophagy and apoptosis. This study aimed to explore exposure to benzene could be a potential hazardous effect on hematotoxicity. Study population We selected 140 workers randomly, and the benzene concentration of 70 workers was negligible, while 70 workers were known to occupationally exposed to benzene. Participants were required to fill a consent form and answer a questionnaire, including life-style, demographic and occupational information including gender, age, drinking, smoke, medications, and family history of health status. This study was approved by the Committees for Ethical Review of Research involving Human Subjects of Capital Medical University. Exposure assessment Our study monitored individuals' exposure airborne benzene for 5 h of a working day. Trans, trans-muconic acid (t, t-MA) and S-phenylmercapturic acid (S-PMA) are the urinary metabolites, which were also measured by LCP-MS (Agilent 7700x, USA) in urine samples collected from study participants. ELISA assays ELISA assays were performed to determine the level of oxidative, autophagy and apoptosis according to the manufacturer's instructions. Then culture supernatants and the serum were harvested, centrifuged, and placed. The oxidative stress-related protein (MDA, 8-OHdG, 8-iso-PGF2a, and NQO1) and autophagy-associated and apoptosis-associated protein (Bcl-2, beclin1, p62) were used to determine oxidative, autophagy, and apoptosis present in the culture supernatants and serum. All assays were performed in duplicate and repeated three times. The TEM observation The normal human lymphocyte line (AHH-1) was provided by the National Institute for Radiological Protection, China CDC (Chinese Center for Medical Response to Radiation Emergency). 
After incubation for 24 h with 20 μM 1,4-BQ, cells were washed 3 times with PBS and harvested by centrifugation for 5 min at 1200 rpm. The pellet was then washed 3 times with PB and fixed with 1% citric acid for 4 h. The cell samples were dehydrated in a graded ethanol series and finally embedded in epoxy resin. Ultrathin sections were obtained using an ultramicrotome and then stained with aqueous uranyl acetate and aqueous lead citrate. After that, cell samples were imaged by transmission electron microscopy (TEM).

mRFP-GFP-tagged LC3

Cells were transfected with a fluorescent mRFP-GFP-tagged LC3-expressing virus (Genechem, GPL2001A) according to the manufacturer's instructions. Cells were transfected for 72 h and then examined after treatment with 1,4-BQ for 24 h. GFP and mRFP expression was visualized with a confocal microscope (Leica Microsystems, Germany). Autophagic flux was assessed by analyzing the punctate patterns of GFP and mRFP.

Immunofluorescence and confocal microscopy

Cells were exposed to 1,4-BQ for 24 h, then fixed with 4% paraformaldehyde for 30 min, washed 3 times with PBS, and permeabilized with 0.5% Triton X-100 in PBS for 10 min at room temperature. The cells were then blocked with 10% normal goat serum for 1 h. After that, cells were incubated with rabbit anti-LC3B antibody, or simultaneously with rabbit anti-Bcl-2 antibody and mouse anti-beclin1 antibody, overnight at 4°C, followed by secondary antibody for 1 h at room temperature. Finally, cells were counterstained with DAPI and imaged with a confocal microscope.

RNA isolation and quantitative real-time PCR (qRT-PCR)

Total RNA was extracted with Column Blood RNA-OUT (TIANDZ, China) according to the manufacturer's protocol. To determine mRNA levels, first-strand cDNA was synthesized with RevertAid (Thermo Fisher Scientific, USA) using 1 µg of total RNA in a 20 µL reverse transcription reaction according to the manufacturer's protocol. Quantitative real-time PCR (qRT-PCR) was then performed on a Bio-Rad instrument (CFX96 Optics Module) using SYBR Green (Thermo Fisher Scientific, USA).

Co-immunoprecipitation

Cell extract (100 mg) was precleared with Protein-G agarose, then incubated at 4°C overnight with beclin1 and Bcl-2 antibodies under constant rotation. Protein G-sepharose beads were prewashed three times in immunoprecipitation buffer with 0.5% Triton X-100 for 15 min and then incubated at 4°C for 6 h with the protein/antibody mixture under constant rotation. The precipitate was collected by centrifugation at 10,000 × g for 1 min and washed three times with immunoprecipitation buffer to remove nonspecifically bound proteins. The washed beads were suspended in sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) loading buffer (30 µl/tube). Beads were removed by centrifugation at 10,000 × g for 1 min and the supernatant was analyzed by SDS-PAGE and western blotting.

Western blotting

Total cellular protein lysates were prepared by lysing cells in the presence of a protease inhibitor cocktail and a phosphatase inhibitor cocktail. Equal amounts of total protein were separated for detection of phospho-beclin1 (p-Thr119), phospho-Bcl-2 (p-Ser70), SQSTM1, beclin1, Bcl-2, LC3B, and caspase-3. Actin was used as the protein loading control. Experiments were performed at least three times and a representative result is shown. Grayscale analysis of protein bands was quantified with Image J.
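The text does not state how the qRT-PCR data were quantified; a common choice for SYBR Green assays of this kind is relative quantification by the 2^(-ΔΔCt) method, normalized to a reference gene such as the actin loading control used here. A minimal sketch with hypothetical Ct values (not from the study):

```python
# Relative mRNA quantification by the 2^(-delta-delta-Ct) method.
# Illustrative only: the Ct values below are made up, not from the study.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in a treated sample vs. an untreated
    control, normalized to a reference gene (e.g., actin)."""
    delta_ct_treated = ct_target - ct_ref            # normalize treated sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: a hypothetical LC3 measurement in 20 uM 1,4-BQ-treated vs.
# untreated AHH-1 cells.
fold = relative_expression(ct_target=24.1, ct_ref=17.0,
                           ct_target_ctrl=25.6, ct_ref_ctrl=17.2)
print(f"LC3 fold change: {fold:.2f}")  # ~2.5-fold induction in this toy example
```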
Statistical analysis

Statistical analysis was performed with the Statistical Package for the Social Sciences (SPSS) software, version 17.0. Kolmogorov-Smirnov tests were used to check the normality of the distributions of all variables. Differences between the two groups were analyzed by independent-sample t-tests, and the data are presented as mean ± SD. Results were considered statistically significant at two-sided p-values < 0.05.

Results

The concentration of airborne benzene and urinary benzene metabolite levels

In this population-based study, 140 workers were recruited, comprising 70 workers with no known benzene exposure and 70 benzene-exposed workers. Smoking, life style and alcohol consumption were matched between the two groups. The median air benzene concentration was 0.050 mg/m3 in the control group and 2.639 mg/m3 in the benzene exposure group. As shown in Supplemental material, Supporting Table 1s, there was a statistically significant difference in airborne benzene concentration between the control and benzene exposure groups. Supplemental material, Supporting Table 2s shows that the levels of urinary metabolites (S-PMA and t,t-MA) in the benzene exposure group were higher than those in controls; however, the difference in urinary metabolites (S-PMA and t,t-MA) between the control and benzene exposure groups did not reach statistical significance across all subjects.

Oxidative stress injury, autophagy, and apoptosis were correlated with benzene exposure

Oxidative stress has been reported to be a main mechanism of benzene-induced toxicity 7,10,23. MDA, 8-OHdG, NQO1 and 8-iso-PGF2a, which reflect the level of oxidative stress triggered by low-dose benzene exposure, were measured by ELISA. Figure 1a, b shows that MDA and 8-OHdG exhibited a rising trend, with a statistically significant difference between the two groups. 8-iso-PGF2a and NQO1 were higher in the benzene exposure group than in the control group (Fig. 1c, d). These results indicated that benzene exposure led to oxidative stress injury. To investigate the involvement of autophagy and apoptosis in benzene-induced hematotoxicity, Bcl-2, beclin1 and p62 were measured by ELISA. Bcl-2 and p62 were higher in the benzene exposure group than in the control group, while beclin1 was lower in the benzene exposure group than in the control group (Fig. 1e-g). To establish the relationships among oxidative stress injury, autophagy and apoptosis triggered by benzene exposure, correlation analysis was performed. As shown in Fig. 2a-f, j, NQO1 and 8-iso-PGF2a were highly correlated with the autophagy-associated proteins (beclin1 and p62) and the apoptosis-associated protein (Bcl-2). The results in Supplemental material, Supporting Fig. 1s showed that MDA and 8-OHdG were not associated with autophagy or apoptosis, suggesting that the decrease of NQO1 triggered by oxidative stress was closely correlated with benzene-induced autophagy and apoptosis. In addition, Fig. 2g-j demonstrates that Bcl-2 expression was closely related to beclin1 and p62 expression. These results illustrate that benzene-induced abnormal autophagy and apoptosis were closely related to the activation of oxidative stress.

The induction of autophagy and apoptosis in the benzene-exposed group was correlated with benzene-induced hematotoxicity

Among the blood clinical parameters reflecting hematotoxicity, PLT and LYM showed a significant reduction in the benzene exposure group compared with the control group (Supplemental material, Supporting Fig. 1s).
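As a concrete illustration of the analysis pipeline described above (normality check, two-sided independent-sample t-test at alpha = 0.05, and the Pearson correlations used throughout the Results), here is a minimal sketch in Python with SciPy; all arrays are hypothetical placeholders, not the study's data:

```python
# Sketch of the reported statistics: Kolmogorov-Smirnov normality check,
# independent-sample t-test (two-sided), and Pearson correlation.
# All values below are made-up placeholders, not measurements from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=1.0, scale=0.20, size=70)  # e.g., serum MDA, controls
exposed = rng.normal(loc=1.4, scale=0.25, size=70)  # e.g., serum MDA, exposed

# Normality: compare each sample against a normal with its own mean/SD
for name, x in [("control", control), ("exposed", exposed)]:
    ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{name}: mean+/-SD = {x.mean():.2f}+/-{x.std(ddof=1):.2f}, "
          f"KS p = {ks_p:.3f}")

# Two-sided independent-sample t-test between the groups
t_stat, p_val = stats.ttest_ind(exposed, control)
print(f"t = {t_stat:.2f}, two-sided p = {p_val:.4f}, "
      f"significant: {p_val < 0.05}")

# Pearson correlation between two markers (e.g., NQO1 vs. p62)
nqo1 = rng.normal(2.0, 0.3, size=70)
p62 = 0.8 * nqo1 + rng.normal(0, 0.2, size=70)
r, r_p = stats.pearsonr(nqo1, p62)
print(f"Pearson r = {r:.2f}, p = {r_p:.4f}")
```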
To further investigate the exact mechanism of benzene-induced hematotoxicity, the relationships between cell death-associated (autophagy and apoptosis) proteins and blood clinical parameters were analyzed by correlation analysis in the population-based study (Fig. 3e). Interestingly, we found that beclin1, Bcl-2 and p62 were all closely related to the LYM count. These population-based results were consistent with the in vitro results, and LYMs were therefore used to explore the mechanisms in vitro (Fig. 3a-c). The correlation analysis also showed that beclin1 was closely related to PLT (Fig. 3d), but no other correlations were found in this study (Fig. 2sa-h). Therefore, we concluded that benzene might induce hematotoxicity by modulating the crosstalk between autophagy and apoptosis.

1,4-BQ induced autophagy and apoptosis via the activation of oxidative stress

To confirm the activation of autophagy and apoptosis, TEM, western blot and ELISA assays were performed to examine the effect of 1,4-BQ on autophagy and apoptosis in AHH-1 cells. TEM is a common method for detecting the activation of autophagy and apoptosis. Untreated cells were morphologically normal, whereas many autophagic vacuoles and double-membrane autophagosomes were present in 1,4-BQ-treated cells (Fig. 4a). We also found that cells showed apoptotic-like ultrastructural changes, such as chromatin margination, cytoplasmic vacuolization, nuclear fragmentation and apoptotic body formation, after treatment with 1,4-BQ (Fig. 4a). These representative TEM images provided strong evidence that 1,4-BQ activated autophagy and apoptosis. To gain deeper insight into the mechanism underlying the autophagy and apoptosis triggered by the benzene metabolite 1,4-BQ, the level of intracellular oxidative stress was measured using the DCFH-DA probe. As shown in Fig. 4b, the level of oxidative stress gradually rose in AHH-1 cells with increasing concentrations of 1,4-BQ, and the oxidative stress inhibitor NAC reversed the 1,4-BQ-activated oxidative stress (Fig. 4b). To investigate the effect of ROS on 1,4-BQ-induced autophagy and apoptosis, the changes in 1,4-BQ-induced autophagy and apoptosis after treatment with NAC were assessed by qRT-PCR, western blotting and ELISA. Administration of NAC along with 1,4-BQ reversed the increase of autophagy-associated and apoptosis-associated genes (p62, LC3 and caspase-3) (Fig. 4c-e).

Fig. 1 Oxidative stress, autophagy and apoptosis were correlated with benzene exposure. a-d The level of oxidative stress was measured by ELISA. The indicators of oxidative stress, including MDA, 8-OHdG, NQO1 and 8-iso-PGF2a, were measured in the control (n = 70) and benzene exposure (n = 70) groups. *p < 0.05, compared to controls. e-g The expression of Bcl-2, beclin1 and p62 was measured by ELISA. Data are presented as mean ± SD. *p < 0.05, compared to the control group.

Fig. 2 Oxidative stress was significantly correlated with benzene-induced autophagy and apoptosis. a-j The correlations between oxidative stress, autophagy and apoptosis were analyzed by correlation analysis; Pearson's correlations among oxidative stress, autophagy-associated and apoptosis-associated proteins were calculated. Data are presented as mean ± SD. *p < 0.05 compared to the control group.
Next, the effect of NAC on autophagy was examined using mRFP-GFP-tagged LC3, which distinguishes autophagosomes from autolysosomes, to monitor changes in autophagic flux (Fig. 4f). We found that the numbers of puncta corresponding to autophagosomes (GFP+ RFP+) and autolysosomes (GFP- RFP+) increased in 1,4-BQ-treated cells, and NAC treatment reversed this 1,4-BQ-induced increase of autophagosomes (GFP+ RFP+) and autolysosomes (GFP- RFP+). As shown in Fig. 5a, the expression of NQO1 was measured by ELISA; NQO1 decreased dose-dependently, and NAC treatment effectively reversed the reduction of NQO1 induced by 1,4-BQ. Figure 5b-d shows that the changes in protein expression of Bcl-2, p62 and beclin1 were reversed by NAC treatment, indicating that NAC reversed 1,4-BQ-induced autophagy and apoptosis. Moreover, in the NAC + 20 μM 1,4-BQ group, NAC treatment further inhibited p62 and LC3-I/II conversion, which are markers of autophagy activation (Fig. 5e, h-j). The effect of NAC on apoptosis was assessed by the level of cleaved caspase-3 (Fig. 5k). In addition, immunofluorescence analysis further revealed that LC3B puncta increased in 1,4-BQ-treated cells, and the 1,4-BQ-induced increase of punctate LC3B was reversed by NAC treatment (Fig. 5l). Therefore, we concluded that 1,4-BQ activated autophagy and induced apoptosis by activating oxidative stress.

1,4-BQ activated autophagy and induced abnormal apoptosis via ROS-triggered enhancement of Bcl-2 Ser70 and beclin1 Thr119 phosphorylation

To evaluate the role of oxidative stress in benzene-induced autophagy and apoptosis, we investigated whether phosphorylation of Bcl-2 and of beclin1 caused 1,4-BQ-induced autophagy and apoptosis. The results showed that Bcl-2 Ser70 phosphorylation and beclin1 Thr119 phosphorylation increased markedly after 1,4-BQ treatment. The results in Fig. 5e-g show that the levels of phospho-Bcl-2 (Ser70) and phospho-beclin1 (Thr119) rose gradually with increasing concentrations of 1,4-BQ. After treatment with NAC, phospho-Bcl-2 (Ser70) and phospho-beclin1 (Thr119) were inhibited. In addition, immunofluorescence analysis of phospho-beclin1 (Thr119) also showed that the 1,4-BQ-induced increase of phospho-beclin1 (Thr119) was reversed by NAC treatment (Fig. 4s). These results demonstrated that ROS activated autophagy and apoptosis via enhanced phosphorylation of Bcl-2 at Ser70 and of beclin1 at Thr119. It has been reported that beclin1 binds to Bcl-2 to form the beclin1-Bcl2 complex, which modulates the crosstalk between autophagy and apoptosis 24. The relationship between beclin1 and Bcl-2 was analyzed by bioinformatics analysis (Fig. 3s). We then investigated whether 1,4-BQ activated autophagy and apoptosis by regulating the dissociation of the beclin1-Bcl2 complex, and found that a smaller amount of beclin1 co-immunoprecipitated with Bcl-2 after 1,4-BQ treatment (Fig. 8b). Interestingly, the oxidative stress inhibitor NAC, the autophagy inhibitor 3-MA and the apoptosis inhibitor Z-VAD-FMK all reduced the dissociation of beclin1-Bcl2 triggered by 1,4-BQ (Fig. 8a-c).

Fig. 3 The induction of autophagy and apoptosis in the benzene-exposed group was correlated with benzene-induced hematotoxicity. a-e The correlations between autophagy-associated and apoptosis-associated proteins and the blood clinical parameters reflecting benzene-induced hematotoxicity.
These findings demonstrated that 1,4-BQ promoted the dissociation of the beclin1-Bcl2 complex and thereby induced autophagy and apoptosis. An overview of the mechanisms involved in benzene-induced hematotoxicity is presented in Fig. 9.

Discussion

Autophagy and apoptosis are considered potential toxic effects of benzene that result in cytotoxicity, altered proliferation, and even disease 12,16,25. Recent studies have reported a variety of mechanisms involved in benzene-induced hematotoxicity, including autophagy and apoptosis. Our previous studies reported benzene-induced apoptosis 11,12, but the underlying relationship between autophagy and apoptosis remained unclear. In this study, we focused on investigating the specific mechanisms through which the crosstalk between autophagy and apoptosis contributes to benzene-induced hematotoxicity. Building on previous studies, we verified the effect of benzene on autophagy and apoptosis using TEM, western blotting and mRFP-GFP-tagged LC3. The results showed that benzene activated autophagy and also induced apoptosis. In addition, we further investigated benzene-induced ROS generation, which damaged cell ultrastructure and caused cell death processes such as autophagy and apoptosis. It is well accepted that autophagy and apoptosis can be modulated by oxidative stress, since both are often enhanced by oxidative stress 12,26.

(See figure on previous page.) Fig. 5 1,4-BQ induced autophagy and apoptosis by enhancing phosphorylation of Bcl-2 at Ser70 and of beclin1 at Thr119. a-d NQO1, Bcl-2, beclin1, and p62 were measured by ELISA after treatment with the oxidative stress inhibitor NAC. e-k Western blotting was used to analyze the levels of phosphorylation of Bcl-2 at Ser70, phosphorylation of beclin1 at Thr119, and autophagy- and apoptosis-associated proteins. l The immunofluorescence images were analyzed with Image J. Data are presented as mean ± SD. *p < 0.05 compared to the control group. #p < 0.05 compared to the 20 μM 1,4-BQ-treated group.

Fig. 6 1,4-BQ-induced autophagy promoted abnormal apoptosis. a After treatment with the autophagy inhibitor 3-MA, the effect of 1,4-BQ on the expression of autophagy- and apoptosis-associated genes was investigated by qRT-PCR. b Bcl-2, beclin1 and p62 were measured by ELISA after 3-MA treatment. c Western blotting was used to analyze the levels of proteins related to autophagy and apoptosis. d After 3-MA treatment, LC3B puncta were analyzed with Image J. Data are presented as mean ± SD. *p < 0.05 compared to the control group. #p < 0.05 compared to the 20 μM 1,4-BQ-treated group.

Fig. 7 1,4-BQ-induced apoptosis in turn enhanced autophagy. a After treatment with the apoptosis inhibitor Z-VAD-FMK, the effect of 1,4-BQ on the expression of autophagy- and apoptosis-associated genes was investigated by qRT-PCR. b Bcl-2, beclin1 and p62 were measured by ELISA after Z-VAD-FMK treatment. c Western blotting was used to analyze the levels of proteins related to autophagy and apoptosis. d After Z-VAD-FMK treatment, LC3B puncta were analyzed with Image J. Data are presented as mean ± SD. *p < 0.05 compared to the control group. #p < 0.05 compared to the 20 μM 1,4-BQ-treated group.
In our previous study, the inhibition of 8-iso-PGF2a appeared to be affected by inactivation of NQO1 13. Inhibition of NQO1 has been reported to be accompanied by enhanced autophagy 14, but the effect of NQO1 inhibition on benzene-induced autophagy remained unclear. After treatment with the oxidative stress inhibitor, the activation of NQO1 was shown to suppress the autophagy process that aggravated benzene-induced cytotoxicity. These results suggested that benzene-induced oxidative stress enhanced autophagy. It has been reported that benzene can lead to autophagy and apoptosis 11,12,16,26, but it had not been determined whether benzene modulates the crosstalk between autophagy and apoptosis. Our results showed that both Bcl-2 and cleaved caspase-3 were suppressed by the autophagy inhibitor 3-MA, indicating that the inhibition of autophagy attenuated the increase of apoptosis induced by 1,4-BQ. When cells were treated with the apoptosis inhibitor Z-VAD-FMK, the autophagy markers p62 and LC3-II were completely inhibited. Further, we also found that autophagosomes (GFP+ RFP+) and autolysosomes (GFP- RFP+) were reduced after treatment with the apoptosis inhibitor Z-VAD-FMK, indicating that Z-VAD-FMK inhibited autophagy. These results support the existence of crosstalk between benzene-induced autophagy and apoptosis. Previously, beclin1 was shown to enhance benzene-induced autophagy 15, but in this study beclin1 was inhibited in 1,4-BQ-treated cells even though benzene enhanced autophagy. Interestingly, we found a difference in the mechanism of beclin1-mediated, benzene-induced hematotoxicity between normal and tumor cell lines. We first found that the benzene-induced activation of autophagy in these cells is not driven by the expression of beclin1; what mechanism, then, is responsible for the activation of autophagy? Under abnormal conditions, the balance between apoptosis and autophagy that maintains intracellular homeostasis is broken, and such perturbation of this balance has been described in neurodegenerative disorders 18. It is generally accepted that beclin1-mediated autophagy is regulated not only by beclin1 expression itself but also by the binding of beclin1 to Bcl-2 18,19,27. Therefore, we attempted to explore the effect of the beclin1-Bcl2 complex on the crosstalk between benzene-induced autophagy and apoptosis. Our data verified that benzene promoted the dissociation of beclin1 and Bcl-2, and that it was this dissociation, rather than the expression of beclin1, that enhanced benzene-triggered autophagy. Increasing evidence on the crosstalk between autophagy and apoptosis has focused on post-translational modifications (PTMs) 17-19,24,28,29. Phosphorylation of the BH3-only domain within beclin1, or of the BH3 receptor domain within Bcl-2, disrupts the beclin1-Bcl2 complex, resulting in the stimulation of autophagy 30,31. Previous studies demonstrated that phosphorylation of the BH3-domain residue Thr119 inhibits the beclin1-Bcl2 interaction 21,22,30,32. In addition, phosphorylation of Bcl-2 (Thr69/Ser70/Ser87) appears to be effective in abrogating the beclin1-Bcl2 complex 17,18. However, it was unknown which sites of beclin1 and Bcl-2 play a key role in benzene-induced autophagy and apoptosis. Herein, we found that benzene-induced oxidative stress directly promoted phosphorylation of Bcl-2 Ser70 and beclin1 Thr119, dissociating the beclin1-Bcl2 complex and promoting autophagy.

Fig. 9 Putative schematic representation of the mechanisms involved in benzene-induced hematotoxicity.
Autophagy was enhanced by benzene because phosphorylation of Bcl-2 at Ser70 and of beclin1 at Thr119 was strongly induced during benzene-induced autophagy and apoptosis, and this phosphorylation is necessary for modulating the crosstalk between the two processes. These findings provide new insight into the mechanisms of the crosstalk between autophagy and apoptosis, which is beneficial for investigating benzene-induced hematotoxicity. In conclusion, benzene exposure stimulated ROS generation, which in turn modulated the crosstalk between autophagy and apoptosis via phosphorylation of Bcl-2 at Ser70 and of beclin1 at Thr119, rather than via the expression of beclin1. Moreover, autophagy was promoted not only by this phosphorylation but also by the dissociation of the beclin1-Bcl2 complex. Therefore, benzene induced hematotoxicity by mediating the crosstalk between autophagy and apoptosis via phosphorylation of Bcl-2 at Ser70 and of beclin1 at Thr119.
The Role of Reactive Oxygen Species (ROS) in the Formation of Extracellular Traps (ETs) in Humans

Extracellular traps (ETs) are reticulate structures of extracellular DNA associated with antimicrobial molecules. Their formation by phagocytes (mainly by neutrophils: NETs) has been identified as an essential element of vertebrate innate immune defense. However, as ETs are also toxic to host cells and potent triggers of autoimmunity, their role between pathogen defense and human pathogenesis is ambiguous, and they contribute to a variety of acute and chronic inflammatory diseases. Since the discovery of ET formation (ETosis) a decade ago, evidence has accumulated that most reaction cascades leading to ET release involve ROS. An important new facet was added when it became apparent that ETosis might be directly linked to, or be a variant of, the autophagy cell death pathway. The present review analyzes the evidence to date on the interplay between ROS, autophagy and ETosis, and highlights and discusses several further aspects of the ROS-ET relationship that are incompletely understood. These aspects include the role of NADPH oxidase-derived ROS, the molecular requirements of NADPH oxidase-dependent ETosis, the roles of NADPH oxidase subtypes, extracellular ROS and of ROS from sources other than NADPH oxidase, and the present evidence for ROS-independent ETosis. We conclude that ROS interact with ETosis in a multidimensional manner, with influence on whether ETosis shows beneficial or detrimental effects.

Introduction and Background

Extracellular traps (ETs) are reticulate formations of extracellular DNA associated with antimicrobial molecules (Figure 1A-D). Their formation, mainly by neutrophils and other cells of the immune system (eosinophils, macrophages, mast cells) in a distinctive process of cell death, has been identified as an important evolutionarily conserved mechanism of vertebrate innate immune defense [1][2][3][4]. First described in 2004 in humans [5], the ability of cells to release ETs is now known to occur not only in mammals but is also found in immune cells of birds and fish (e.g., [6][7][8]). ETs constitute complex three-dimensional web-like scaffolds of DNA strands with dimensions down to 2 nm, the size of individual double helices (Figure 1D) [9]. These scaffolds are decorated with histones and other molecules, including elastase (Figure 1B,D), myeloperoxidase (MPO), bactericidal permeability-increasing protein (BPI), cathepsin G and other proteinases that are all antimicrobially effective [5,[9][10][11][12][13][14]. ETs have been shown to aid the entrapment and/or removal of bacterial, fungal, protist and even platyhelminth pathogens (e.g., [15][16][17][18][19]). They are also formed during viral infections, probably exerting a cell protective role [20,21]. However, the protein components of ETs have also been identified as toxic to host cells (e.g., [20,22]) and as potent triggers of autoimmunity [23,24]. Thus, since their discovery [5], ETs have been established in an ambiguous role between pathogen defense and host tissue damage [25]. Mechanisms of ET formation (ETosis) have been found to vary in relation to the signaling pathways involved and in the morphological execution of the process, enabling more than one mechanism per cell type. This is well documented for PMA and fMLP induced pathways, and for the polymorphisms of ET formation in neutrophils (see below).
There is also evidence that some types of ETosis leave the cells viable [30,31], a subject dealt with in more detail in Section 7 below. Despite this heterogeneity, ETosis is mainly a distinctive process of cell death involving full chromatin decondensation and break-up of the cell membrane. This holds, primarily, for ET formation by neutrophils (NETosis), which is perhaps the best-investigated form of ETosis. Morphological research has identified a standard pattern referred to as the NETotic cascade. The steps of this cascade are well defined in the recent literature (e.g., [2,32,33]) and describe the progressive change from the undisturbed globular cell, via cytoplasmic and nuclear swelling, vacuolization, membrane protrusion, enzyme binding to DNA, histone citrullination and chromatin decondensation, to terminal membrane rupture and NET release (Figure 1B). Independent of the mechanisms and cell types from which ETs derive, there is mounting evidence that they strongly contribute to severe acute illness and chronic inflammation when formed in excess or are insufficiently cleared (e.g., [25]). This has led to a surge of research activity into the details of these pathogenic effects, specifically, again, with regard to neutrophil generated ETs (NETs), which appear to be the most abundant.

Figure 1. NETs visualized by fluorescence microscopy (A, B), and by scanning and transmission electron microscopy (C and D, respectively). (A) NETs generated in vitro from human neutrophils isolated from whole venous blood using a standard gradient separation medium containing sodium metrizoate and Dextran 500 [34]. NETosis induction by stimulation with 1 μM fMLP followed the procedure described by [35]. DNA was stained with SYTOX Green. At light microscopic resolution, NETs appear as irregular cloudy structures in which dense clusters of brightly stained extracellular DNA (asterisks) merge with more faintly stained dilated areas in which the DNA is more thinly spread and forms a meshwork of threads (white arrowheads). Lobulated nuclei of non-NET-forming neutrophils (white arrows) are found within the meshwork as well as outside of it, some being slightly out of focus; (B) elongated plume of NET-DNA (arrowhead) protruding from one of two attached NET-forming neutrophils from the sputum of a patient with chronic obstructive pulmonary disease (COPD). The cells are immunostained for peptidyl arginine deiminase 4 (PAD4, red) and citrullinated histone 3 (citH3, green); DNA is stained with 4',6-diamidino-2-phenylindole (DAPI, blue). Overlapping PAD4 and citH3 staining at nuclear and cytoplasmic sites is characteristic of NET-forming neutrophils (cf. [9,36]) and conforms with the observation of [37] that histone H3 deimination by PAD4 is not entirely confined to the nucleus; (C) bacterium (arrow) entangled in NETs from the sputum of a COPD patient; (D) on-grid preparation of in vitro generated NETs (procedures as described for A above) immunogold stained for the enzyme neutrophil elastase, one of the key protein components of NETs.

ET Formation Is Linked to Reactive Oxygen Species (ROS) and Autophagy

ROS are a heterogeneous group of oxygen-containing molecules with high chemical reactivity, some being rendered unstable and extremely reactive due to an unpaired electron. This group includes peroxides, hypochlorous acid, hydroxyl radicals, singlet oxygen, and the superoxide anion, among other compounds.
Physiological generation of ROS occurs either as byproducts of (redox) reactions in various cell organelles including mitochondria, peroxisomes, and endoplasmic reticulum, or by primary enzyme function, such as with oxidases and oxygenases. Such enzymes have long been associated with the respiratory burst of phagocytes (see below), but are now known to occur in virtually every type of cell and tissue [38][39][40]. It is now known that the regulation of autophagy, and especially its association with ETosis, is closely tied to ROS in a multifactorial manner (e.g., [56,57]). Importantly, the level of intracellular ROS determines whether the autophagy reaction ends in NETosis [1,44,58,59]. Placed in a wider perspective, this association with NETosis strengthens the view that autophagy is not a "simple" mechanism of cell death alternative to apoptosis, but serves primarily to protect vertebrate cells from various types of harm including that from microbes (cf. [60]). However, the exact ways in which ROS interfere with the signaling network behind autophagy to initiate and/or promote NETosis are still incompletely understood, particularly in relation to how they contribute to the remodeling of the cell interior.

The Role of NADPH Oxidase-Derived ROS in NET Formation

NADPH (nicotinamide adenine dinucleotide phosphate) oxidases are a family of membrane-bound multiprotein enzymes that generate ROS delivered into either extracellular or intracellular compartments. Phagocytes have long been known to express large amounts of NADPH oxidase residing in the plasma membrane and in phagosome membranes (Figure 2), but intermediate or low amounts of the enzyme occur in most, if not all, mammalian cell types and tissues [40,61,62]. The phagocyte NADPH oxidase consists of a b-type cytochrome-containing transmembrane protein termed gp91phox (also known as NOX2, see below), with four further "phox" (phagocytic oxidase) elements (p22phox, p40phox, p47phox, p67phox) and a guanosine triphosphatase (GTPase), usually belonging to the Ras-related C3 botulinum toxin substrate types Rac1 or Rac2 within the Rho (Ras homologue) family and Ras (rat sarcoma) superfamily of small GTPases. The gp91phox subunit was found to have several homologues in nonphagocytic NADPH oxidases (NOX). Together with gp91phox, these homologues have been pooled in the NOX family, comprising NOX1, NOX2 (= gp91phox), NOX3, NOX4, NOX5, and also DUOX (dual oxidase) 1 and 2. Nonphagocytic NADPH oxidases differ from the phagocytic enzyme in molecular structure, subcellular location and biochemical function [40,[62][63][64]. NADPH oxidases exist in different states of activation (resting, primed, active, or inactive). Signal molecules able to induce subunit assembly and activation include proinflammatory cytokines, lipopolysaccharides, Toll-like receptor (TLR) agonists, and chemical agents such as PMA [65], and are largely the same as those known to induce NETosis. Forming the functional enzyme requires the phosphorylation of protein subunits (e.g., p40phox) by MAPK, and the translocation of cytosolic components to membranes [65][66][67][68]. The situation is complicated by the fact that the regulatory pathways of NADPH oxidase assembly vary depending upon the molecular triggers.
In neutrophils, this is exemplified by the difference between fMLP and PMA, the former acting via receptor-mediated pathways with downstream protein kinase involvement (e.g., [69,70], see also information on PI3K/AKT/mTOR pathways below) while the latter operates directly via protein kinase C (e.g., [71,72]). Once assembled and activated, NADPH oxidases transfer electrons from cytoplasmic NADPH across biological membranes and couple them to molecular oxygen, thus generating the superoxide radical anion O2•-. This enables a cascade of ROS generation that continues with the rapid conversion of O2•- to hydrogen peroxide H2O2, either spontaneously or catalysed by superoxide dismutase (SOD) [40]. The process may then proceed to the MPO-catalysed formation of hypochlorous acid (HOCl) from H2O2 [73][74][75]. HOCl is able, in turn, to re-react with H2O2 to generate singlet molecular oxygen (1O2) and peroxyl radicals [76]. Availability of transition metals (specifically iron) may enable the H2O2 to undergo the Fenton reaction, rendering the highly reactive hydroxyl radical (•OH) [39,40]. Further reactions may follow under given conditions, such as the production of nitric oxide (NO) by inducible nitric oxide synthase (iNOS), and may again entail consecutive reactions, e.g., with O2•- to form peroxynitrite [66,77]. It has been recognized for some time that neutrophils and other phagocytes produce large amounts of extracellular ROS (e.g., H2O2 and the superoxide anion O2•-) upon stimulation with a wide variety of agents. Due to a transient rise in oxygen demand, this behavior was originally named the respiratory (or oxidative) burst (e.g., [40,78]). Plasma membrane-bound phagocyte NADPH oxidase is commonly thought to be the main source of ROS delivery to the extracellular space during respiratory bursts, and into engulfed phagosomes for microbial killing [65,79,80]. In addition, NADPH oxidase-derived O2•- has been shown to promote microbial killing in the phagosome also indirectly, by enabling the activation of serine proteases (e.g., cathepsin G and neutrophil elastase, NE) in neutrophil azurophilic granules via modulation of ion influx and pH [14,81,82] (there is, however, evidence suggesting that the importance of this mechanism for the microbicidal capacity of the neutrophils is limited [83]). The circumstances of how NADPH oxidase-derived ROS influence NET formation inside the cell appear complex, and are still not fully conclusive. There is accumulating evidence that NADPH oxidase-derived ROS acting at the intracellular level are capable of initiating, and in some cases are required for, the formation of NETs. Experiments with 1O2 scavengers have confirmed singlet oxygen involvement in NADPH oxidase-dependent NET formation in human neutrophils upon stimulation with PMA [74]. Experimental inhibition [32,84] and mutation-caused failure [85] of NADPH oxidase have been shown to prevent NET formation. NADPH oxidase-deficient neutrophils of mutant mice and of humans with chronic granulomatous disease (CGD) are not able to form NETs [18,32,86]. However, although recent work has indicated that it is intracellular ROS levels that direct signaling in favor of autophagy/NETosis (see Section 2 above), the details of how this occurs are still not fully understood. In particular, it remains unclear how ROS influence the main features of NETosis, i.e., chromatin decondensation, histone citrullination, binding of enzymes to DNA, and membrane rupture.
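For reference, the core steps of the cascade just described can be written out explicitly; these are standard textbook stoichiometries, not equations given in the source:

```latex
% ROS cascade downstream of NADPH oxidase (standard stoichiometries)
\begin{align*}
\mathrm{NADPH} + 2\,\mathrm{O_2}
  &\xrightarrow{\text{NADPH oxidase}}
  \mathrm{NADP^{+}} + \mathrm{H^{+}} + 2\,\mathrm{O_2^{\bullet-}} \\
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^{+}}
  &\xrightarrow{\text{SOD (or spontaneous)}}
  \mathrm{H_2O_2} + \mathrm{O_2} \\
\mathrm{H_2O_2} + \mathrm{Cl^{-}} + \mathrm{H^{+}}
  &\xrightarrow{\text{MPO}}
  \mathrm{HOCl} + \mathrm{H_2O} \\
\mathrm{HOCl} + \mathrm{H_2O_2}
  &\longrightarrow
  {}^{1}\mathrm{O_2} + \mathrm{Cl^{-}} + \mathrm{H^{+}} + \mathrm{H_2O} \\
\mathrm{Fe^{2+}} + \mathrm{H_2O_2}
  &\xrightarrow{\text{Fenton}}
  \mathrm{Fe^{3+}} + \mathrm{OH^{-}} + {}^{\bullet}\mathrm{OH}
\end{align*}
```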
Present research has only provided pieces of a much larger puzzle. There is evidence indicating that, in addition to their role in NE activation, ROS enable the release of NE and MPO from the azurophilic granules in the neutrophil cytoplasm. This is a prerequisite for the translocation of these enzymes to the nucleus, where NE aids histone degradation and MPO chromatin relaxation [13,59,87,88]. Additionally, ROS act to facilitate the citrullination of histone proteins by peptidyl arginine deiminase type 4 (PAD4) [89]. All of these together are thought to promote nucleosome disassembly, chromatin decondensation, and the rupture of intracellular membranes. An upstream requirement of NADPH oxidase-dependent NET formation, and specifically also of PAD4-mediated histone citrullination, seems to be a rise in cytosolic calcium concentration via influx from the ER and/or the extracellular space [37,90]. The work of Parker et al. [91] indicates that the involvement of NADPH oxidase-derived ROS in the regulatory pathways of NET formation, just like the pathways themselves, varies depending upon the inducing molecular stimuli. These authors found that NETosis requires NADPH oxidase-derived ROS when induced with PMA or by bacterial stimulation, but not if the induction occurs via calcium influx mediated by the bacterial calcium ionophore ionomycin (see Section 8 below). Other work confirmed the role of NADPH oxidase-derived ROS in PMA-induced NET release from human neutrophils and demonstrated involvement of the MAPK/ERK pathway [84]. Similarly, MPO was found essential only after PMA induction, while bacteria-induced NETs were formed without it [91]. Probably in partial contrast to this, Metzler et al. [92] found that functional MPO is a strict requirement for NET formation when comparing the NET-forming ability of neutrophils from MPO-deficient subjects and healthy donors after stimulation with PMA and opsonized Candida albicans cells. Work investigating NET formation in neutrophilic granulocytes of carp suggests that a stimulus-dependent selective requirement of ROS is an evolutionarily conserved pattern of vertebrate phagocytes [8].

NADPH Oxidase-Dependent NETosis Is a Matter of Delicate Coordination, Depending on Various Co-Factors

There are several lines of evidence to indicate that the regulatory influences behind the NADPH oxidase-dependent pathway of autophagy/NETosis induction are far from straightforward, and much is still incompletely understood. Perhaps the most fundamental question in this respect is under what circumstances NADPH oxidase-derived ROS signaling promotes autophagy/NETosis rather than cell death through apoptosis. This is because ROS such as H2O2 have also been found to promote neutrophil apoptosis, e.g., by caspase activation via the sphingolipid ceramide or the lysosomal aspartyl protease cathepsin D [93][94][95]. Such a dual response is evident from recent work on the mouse lung showing that Aspergillus infection in the presence of functional NADPH oxidase enables NETosis while at the same time promoting neutrophil apoptosis [18]. Present knowledge of factors that affect whether NADPH oxidase-derived ROS direct a neutrophil into the autophagy/NETosis pathway is limited. Particular focus may be placed on the following aspects (see also Figure 2):

- Cathepsin C.
Recent work investigating NET formation in patients with Papillon-Lefèvre syndrome suggests that the cysteine protease cathepsin C interferes with the interplay between NADPH oxidase-derived ROS and NETosis (albeit without entailing a substantial deficit in general immune defense) [83].

- DOCK proteins. A co-regulatory role in NADPH oxidase-dependent NETosis has been established for "dedicator of cytokinesis" (DOCK) proteins. Via their function as activators of Rac GTPases, DOCK proteins are involved in both neutrophil chemotaxis [96] and NADPH oxidase-dependent ROS production, the latter entailing a massive reduction in NET-forming ability in DOCK2-deficient individuals and an almost complete loss of this ability in DOCK2/DOCK5 double-deficient individuals in the murine test system [97].

- Zinc. Intracellular zinc ion (Zn2+) concentration has been identified as a co-regulator in PMA-induced, protein kinase C-mediated NET formation, which depends on NADPH oxidase-derived ROS [98].

- mTOR-related pathways. Data from in vitro experimentation with human neutrophils using fMLP as inducing agent suggest that a central role in the ROS-mediated regulation toward NETosis is played by pathways involving the mTOR serine/threonine kinase [99]. Specifically, the (PI3K/AKT)/mTOR pathway has been confirmed as a ROS-sensitive negative regulator of autophagy [57,100,101], and is also gaining attention in relation to autophagy-related NETosis [44,59]. Moreover, also in this case, commitment to autophagy/NETosis was not found to be mandatory, as the ROS-mediated impairment of mTOR activity may also terminate in apoptosis [57,102,103], so that additional factors must be assumed to play a role. Relevant regulatory influence in this respect will result from whether the molecular inducers activate NETosis in an mTOR-dependent manner, or along other pathways, as shown for IL-8 [26,27,90].

- Protein kinase C. The observations of Neeli and Radic [104] suggest that the calcium-dependent regulation of histone deimination by PAD4 (see also Section 3 above) is influenced by an intricate antagonism between the alpha and zeta isoforms of protein kinase C (PKC).

- Extracellular matrix. Important co-factors seem to be located in the extracellular matrix. Recent work on human neutrophil responses to fungal (Candida albicans) infection indicates that ubiquitous matrix constituents such as fibronectin may play a significant role in deciding between respiratory burst behavior and NETosis [105]. This work demonstrates that neutrophils, when exposed solely to Candida-derived beta-glucan, activate ROS production but not NETosis. Simultaneous presence of fibronectin and beta-glucan, by contrast, leads to the suppression of ROS production and to rapid NET generation. The type of NETosis found depends on MAPK/ERK but not on ROS, and exhibits fine structural features that are strikingly similar to those shown for ROS-independent, Staphylococcus-induced NETosis by Pilsczek et al. [31] (see also Section 7 below). This illustrates that extracellular matrix components may exert a more complex influence on the co-regulation of ROS production and NETosis than presently assumed. These results are in close agreement with experimental findings prior to the detection of (N)ETosis. Testing neutrophils adhering to substrates including collagen IV, laminin, thrombospondin, heparan sulfate proteoglycan (HSP) and again fibronectin during respiratory bursts primed by PMA and TNF-alpha, Borgquist et al.
[106] showed a highly variable ROS production depending upon the extracellular matrix present. Recent work demonstrates that a major role in extracellular matrix effects on ROS-dependent NETosis mediated by NADPH oxidase and MPO is likely to be played by immobilized immune complexes that bind to neutrophil FcγRIIIb receptors [29].

- Microbial-derived substances. A further source of co-factors influencing the regulation of ROS-dependent NETosis is microbial-derived substances. This is demonstrated by recent work showing that gram-positive bacteria-derived peptide bacteriocins containing polycyclic thioether amino acids (so-called lantibiotics, such as nisin) enhance levels of NADPH oxidase-derived intracellular ROS and induce NETosis in human neutrophils in a dose-dependent manner [107].

All this strengthens the view that ROS-dependent NETosis in neutrophils is modulated in a complex manner by integration of multiple stimuli (see also Byrd et al. [105]). However, in the light of the results of a recent investigation of NET-forming ability in cathepsin C-deficient individuals (see point on cathepsin above), it may be necessary to add that failure of ROS-mediated NET formation need not entail a substantial deficit in general immune defense [83].

Subtypes of NADPH Oxidase and Extracellular ROS

In addition to the present incomplete knowledge of co-factors, understanding of the role of ROS in the regulation of subcellular events during NETosis is further complicated by a variety of other parameters, such as the different pathways of NADPH oxidase activation depending on the type of external inducers. These uncertainties are exemplified by the observation that it is still not clear whether neutrophil NADPH oxidase may be divided into two subtypes that are distinct from each other by both their location and the involvement of PI3K in their activation pathways. This divergence has been demonstrated by inhibition experiments with the PI3K inhibitor wortmannin (a fungal steroid metabolite), which reliably blocks neutrophil NADPH oxidase activation when it depends on PI3K pathways. It was first shown for activation induced by N-formylmethionyl-leucylphenylalanine (fMLP), while PMA-induced activation was found to be resistant to inhibition by wortmannin [108]. The extended experiments of Karlsson et al. [71] then demonstrated that PMA is also able to induce a wortmannin-sensitive (i.e., PI3K-dependent) NADPH oxidase activation in neutrophils, which, however, resulted in only intracellular but not extracellular release of superoxide. Neutrophils were therefore thought to harbour two NADPH oxidase subtypes, one residing in the plasma membrane, the other in the membranes of the so-called specific granules within the neutrophil cytoplasm. Activation of both subtypes was found to depend on MAPK/ERK kinase and protein phosphatase 1 and/or 2A, while diverging in dependence on PI3K, which was found essential only for the intracellular variant of the enzyme [71]. Such a role of the specific granules in NADPH oxidase-dependent intracellular ROS generation has been further corroborated by Ambruso et al. [109], while more recent work indicates that endosomes commonly assigned to the secretory vesicles are also involved [110]. In view of the emerging importance of ROS-dependent PI3K-mTOR pathways in the control of NET formation via regulation of autophagy (e.g., [99,103], see also Section 2 above), this is a relevant point of uncertainty that would require further examination.
However, despite such uncertainties, other extensive evidence summarized by Bedard and Krause [40] suggests that all phagocyte/neutrophil NADPH oxidase contains NOX2/gp91phox as its central functional component, irrespective of localization. The model of NADPH oxidase function presented by these authors does not discriminate between plasma membrane- and granule membrane-localized subtypes of the enzyme. Instead, it depicts a flexible situation in which most NOX2 is, together with p22phox, localized in the membranes of intracellular granules as long as the cells maintain a resting state. According to this model, the granules fuse with the plasma membrane only after completion of subunit assembly and activation, thus enabling ROS release to the extracellular compartment. In addition, the model implies that granule membrane-bound NOX2 may become functional as part of NADPH oxidase intracellularly, without the need for fusion with the surface membrane. This may be requisite for the proposed roles of ROS in signaling cascades for NETosis induction (see also [111]). Irrespective of whether or not neutrophil ROS release to the extracellular and intracellular compartment is caused by distinct subtypes of NADPH oxidase, it is unclear how extracellular ROS contribute to NETosis induction. As reported above, most work on the role of ROS in NETosis signaling refers, as a matter of course, to intracellular ROS. However, knowledge is incomplete as to the extent to which these ROS originate from outside the cells. While it seems clear that there is an inward passage of superoxide anions (O2•-) and H2O2 through anion (Cl-) and aquaporin channels, respectively [112], it is not known how this transmembrane flux is balanced with ROS generation by intracellular NADPH oxidase (or other sources such as mitochondria). Some progress in this context has been made by the in vitro analyses of Kirchner et al. [41]. Testing the effects of various inhibitors of ROS-generating enzymes (NADPH oxidase, SOD, MPO) and mitochondrial electron transport on NETosis in human neutrophils, these authors found that NADPH oxidase- and MPO-derived ROS, but not those from SOD and mitochondria, are important for NET release. This further underpins the potential importance of NADPH oxidase-generated superoxide, and perhaps also of spontaneously formed (not SOD-catalyzed) H2O2, in NETosis induction. However, as the NADPH oxidase inhibitor diphenyleneiodonium chloride (DPI) used in this work is likely to act similarly on plasma membrane-bound and intracellular NADPH oxidase [113,114], and, more importantly, also on mitochondrial OXPHOS flavoenzymes [115], the questions as to the true sources of NETosis-inducing ROS remain largely unresolved.
Substantial new information has come from work supplying singlet oxygen (1O2) to neutrophils from CGD patients and healthy humans via application of the photosensitizing agent Photofrin (porfimer sodium). Results indicate that singlet oxygen is likely to be able to induce NET formation independently of NADPH oxidase activation [74].

The Role of ROS from Sources other than NADPH Oxidase

An alternative important source of ROS in eukaryotic cells is the mitochondrial oxidative phosphorylation (OXPHOS) complexes. Mitochondria are generally thought to be the largest contributors to intracellular ROS production (e.g., [39]). Work on oxidative stress levels has indicated that this might also apply to neutrophils [117], despite the fact that mature neutrophils are characterized by low mitochondrial content and reduced levels of oxidative phosphorylation [118]. Cytochrome C from mitochondrial OXPHOS complexes has been shown to support caspase activation for neutrophil apoptosis, the alternative cell fate to autophagy/NETosis [95]. ROS are relevant byproducts of mitochondrial energy supply to respiratory/oxidative bursts [40]. Intracellular ROS may be influenced by the dependence of mitochondrial OXPHOS complexes on NADPH-derived NADH as electron donors, entailing competition for NADPH between mitochondria and NADPH oxidase (e.g., [117]). This competition could be even stronger as NADPH oxidase may be able to utilize NADH as a second electron source in addition to NADPH, even though NOX2-containing phagocyte NADPH oxidases have a preference for the latter [40]. Despite a result to the contrary reported by Kirchner et al. [41], mitochondrial ROS production may thus, albeit in limited form, exert influence on intracellular ROS levels of ET-forming cells, even in neutrophils. This conclusion is in strong agreement with most recent evidence of a NADPH oxidase-independent pathway of NETosis that depends on mitochondrial ROS and is mediated by the small conductance calcium-activated potassium channel 3 (SK3) (Figure 2). In contrast to NADPH oxidase-dependent NETosis, this pathway does not essentially require MAPK/ERK activation [116]. Further research will be required to determine the full scope of influence by mitochondrial ROS on the regulation of ROS-dependent NETosis, especially in the context of the close correlation with autophagy that is now apparent (see Section 2 above). Another source of ROS not directly related to NADPH oxidase that is likely to play a particular role in ETosis-mediated inflammation is MPO. ETosis is certainly a key mechanism of MPO release to the extracellular compartment. The catalytic activity of MPO is partitioned between a halogenation cycle and a peroxidase cycle [119]. It is well known that MPO released by neutrophils during respiratory bursts catalyzes the oxidation of chloride (Cl-), bromide (Br-) and thiocyanate (SCN-) by H2O2 to hypochlorous acid (HOCl), hypobromous acid (HOBr) and hypothiocyanous acid (HOSCN), respectively. Although MPO itself may become inactivated by H2O2 [120], these oxidants support the phagocytes' ability to kill pathogens. They are also, directly and via their secondary reactions, able to harm host tissues in various manners, leading to growth arrest, apoptosis or necrosis in a dose-dependent manner [73,75,121,122]. Direct molecular influences include inactivation of thiol enzymes (specifically by HOSCN), modification of lipoproteins and perturbation of phosphorylation-dependent signaling pathways such as MAPK/ERK [119-121,123].
The potential feedback effects on the role of MAPK/ERK pathways in the induction of ET formation (see Sections 2 and 3) are as yet largely undetermined. It will be a challenge to integrate into this context the finding that extracellular products of MPO are unable to rescue ET formation in MPO-deficient neutrophils [92]. HOCl and HOBr also contribute to the formation of ROS and radicals via various secondary reactions. Both react, for example, with H2O2 to form singlet oxygen 1O2 and peroxyl radicals [76,124,125] (see also Section 3 above). Both also target thiols, thioethers, disulfides, amines and amides, leading to the formation of advanced oxidation products (AOPPs) that interfere with the structure and physiology of cells [119]. AOPP formation with the involvement of extracellular MPO is also likely to include Fenton reactions. These generate hydroxyl radicals (•OH) from H2O2, catalyzed by transition metals (mainly iron) [39,40], accounting for the peroxidation of lipids with unsaturated fatty acyl residues, a long-known factor in ROS-induced tissue damage [126][127][128]. At sites of inflammation, the extracellular iron required to allow for this is likely to be abundantly available from ferritin secretion by macrophages [129][130][131][132]. Further radicals derive from the peroxidase cycle of MPO, which performs one-electron oxidation of a multitude of organic and inorganic substrates. These include amino acids (tyrosine, tryptophan), thiols, ascorbate, steroid hormones and urate, but also singlet oxygen 1O2 and nitric oxide NO [119]. Thus, MPO is not only an important constituent in the intracellular regulation of ET formation (see Section 3) but, once extruded, in all probability also a key contributor to extracellular (interstitial) ROS accumulation and phagocyte-mediated tissue injury.

The Role of ROS in Non-Cell Death ETosis

In addition to the accumulating information on the characteristics of "standard type" ETosis, research over the last decade has also rendered evidence of alternative forms of ET formation. These forms deviate from the "standard type" both mechanistically and in that they may leave the donor cells viable, and are thus referred to as non-cell death ETosis. But all these alternative forms have some relationship to ROS. The earliest example of non-cell death ETosis is provided by Yousefi et al. [133]. This work describes a fast form of ET extrusion by eosinophils, which diverges from the "standard type" in its eruptive ("catapult-like") nature and in that it utilizes mitochondrial DNA, thus avoiding instant cell death. The release of these ETs can be stimulated by IL-5 and LPS and proved sensitive to blocking with diphenyleneiodonium (DPI). Thus, it is likely that the underlying signaling is mediated by membrane-bound receptors and depends on ROS. However, the exact circumstances of how this occurs have not been elucidated and may be complex. LPS alone is thought to be unable to activate NADPH oxidase while facilitating activation by subsequent triggers [2,66]. DPI has been shown to be an effective blocker not only of NADPH oxidase but also of other NAD(P)-dependent enzymes (such as glucose 6-phosphate dehydrogenase) and mitochondrial OXPHOS flavoenzymes [115]. It is thus unclear from where the ROS involved in this process derive and how they act.
Similarly, it is not clear, from either the morphological or the chemical point of view, how the mitochondrial DNA is combined with granule proteins to form these ETs, and how the lack of histones in mitochondrial DNA influences their function. A second variant of ETosis that leaves the donor cells viable and relies, in all probability, on mitochondrial DNA has been established for human neutrophils in response to treatment with granulocyte/macrophage colony-stimulating factor (GM-CSF) and subsequent TLR4 or complement factor 5a (C5a) receptor stimulation. Also in this case, treatment with DPI leads to a complete block of (N)ET release, and neutrophils from ROS-deficient CGD patients failed to generate this type of ET [30]. This indicates that the process depends on ROS, but with the same uncertainties regarding the blocking agent DPI as noted for eosinophil ETosis above. A further type of fast ETosis/NETosis occurring without instant cell death was recently described for human neutrophils in response to stimulation with Staphylococcus aureus bacteria [31]. Similar to eosinophil ETosis [133], this type is rather fast, with NETs being observable as early as 5-10 min after onset of stimulation. In contrast to both eosinophil ETosis [133] and the type of NETosis described by Yousefi et al. [30], the process is not based on the release of mitochondrial DNA. Instead, DNA from nuclear chromatin is extruded via vesicles that bud from the nuclear membranes and rupture after release into the extracellular environment. However, this third variant of ETosis seems to represent only the initial phase of a longer-lasting cycle, terminating in "conventional" cell death NETosis. But even if so, it is remarkable that this initial phase appears to be entirely devoid of ROS regulation [31].

ROS-Independent ETosis

Evidence as to whether there are pathways of NETosis that are entirely independent of ROS appears incomplete and conflicting. Some uncertainty still remains as to the role of ROS from NADPH oxidase or other sources in the induction of NET formation by elevation of cytosolic calcium. Experiments testing the effects of bacterial ionomycin in human promyelocytic leukemia (HL-60) cells and neutrophils strongly suggested that calcium influx-mediated induction of NETosis utilizes a ROS-independent pathway [37,91]. In contrast to this, recent work also investigating calcium ionophore-mediated NETosis provides evidence for a calcium-activated pathway of NETosis that depends on mitochondrial ROS, while being independent of NADPH oxidase-derived ROS [116]. It remains to be clarified how these findings can be integrated. An ambiguous but inspiring situation has developed regarding the relationship between uric acid (UA) and NETosis depending on NADPH oxidase-derived ROS. UA, an abundant terminal product of vertebrate nitrogen metabolism, can act as an antioxidant but also as a pro-oxidant and pro-inflammatory factor, depending on the particular conditions [134]. This dual nature is clearly reflected in the present literature on the role of UA in mammalian NETosis. Some recent work suggests that UA in the form of monosodium urate (MSU) crystals is a strong inducer of ROS-dependent NETosis, which can be inhibited by anti-oxidants such as butylated hydroxytoluene (BHT), butylated hydroxyanisole (BHA) and ascorbic acid [135]. However, other work testing the effects of non-crystalline UA in solution found a dose-dependent ambivalent influence [136].
Low concentrations of UA (1 mg/100 mL) exerted an inhibitory effect on NADPH oxidase-dependent NET formation, most likely due to the antioxidant potential of UA. High concentrations (8 mg/100 mL) were, by contrast, found to be potent inducers of NETosis. Tests with ROS-inhibited control neutrophils and neutrophils from ROS-deficient CGD patients demonstrated that NETosis induction by high UA levels occurs in a NADPH oxidase/ROS-independent manner, with nuclear factor "kappa-light-chain-enhancer" of activated B-cells (NF-κB) playing a role in the signaling pathway [136]. This is consistent with findings that NF-κB protein accumulates in the nuclei of PMA- or TNFα-stimulated neutrophils [137], and that reduced phosphorylation of the NF-κB p65 subunit by different inhibitors (ASA, BAY-11-7082, and Ro 106-9920) abrogates the formation of NETs [138]. Conclusions There is rapidly growing evidence that ROS are able to interact with the formation of ETs in a multidimensional manner (Figure 2). This occurs either directly via the signaling cascades that allow for ET formation and release, or indirectly via influence on other factors that modulate the process. The interaction with ROS is likely to be an important determinant in the regulatory network that determines whether ETosis is beneficial or noxious. The ROS-ETosis interaction will need to be taken into account to understand the characteristics of virtually all kinds of inflammatory disease, and to improve their treatment.
8,120
2015-05-04T00:00:00.000
[ "Biology" ]
Continual Dialogue State Tracking via Example-Guided Question Answering Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training with data for new services results in diminishing performance on previously learnt services. Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates the user's goal as a conversation proceeds, is a simple natural language understanding task, we propose reformulating it as a bundle of granular example-guided question answering tasks to minimize the task shift between services and thus benefit continual learning. Our approach alleviates service-specific memorization and teaches a model to contextualize the given question and example to extract the necessary information from the conversation. We find that a model with just 60M parameters can achieve a significant boost by learning to learn from in-context examples retrieved by a retriever trained to identify turns with similar dialogue state changes. Combining our method with dialogue-level memory replay, our approach attains state-of-the-art performance on DST continual learning metrics without relying on any complex regularization or parameter expansion methods. Introduction As conversational digital assistants are becoming increasingly popular and versatile, it is important to continuously update them to accommodate more services. One of their key components is a dialogue state tracking (DST) model that estimates the user's goal, i.e. the dialogue state (Williams et al., 2013). The dialogue state is used for queries sent to application programming interfaces to retrieve information that grounds the dialogue model's response. Unfortunately, naively updating a model for a new service by training with new data causes catastrophic forgetting (McCloskey and Cohen, 1989; French, 1999): upon learning from new data, the model's performance for previous services regresses. To mitigate this issue while also avoiding the impracticality of training a model from scratch with data from all services each time new data becomes available, three main approaches have been established as generally effective approaches to continual learning (CL): memory replay, regularization, and parameter expansion. Variations and combinations of the three have been applied for DST in previous work (Liu et al., 2021; Madotto et al., 2021; Zhu et al., 2022). However, most previous work has focused on improving CL performance with service-specific inputs or outputs, a paradigm that limits knowledge transfer between services (left side of Figure 1). This approach introduces a large distribution shift from one service to another since the model needs to memorize service-specific slots that it needs to predict as part of the output. However, DST can become a significantly more consistent task across services by simply reformulating it as a collection of example-guided question answering tasks. Our approach, Dialogue State Tracking as Example-Guided Question Answering (DST-EGQA), trains a model to learn to answer natural language questions that correspond to dialogue state slots (right side of Figure 1) with the help of in-context examples, instead of predicting service-specific structured outputs all at once without any explicit guidance (left side of Figure 1). We hypothesize that DST-EGQA benefits continual learning because it transforms the DST task to become more granular, easier, and more consistent across services. 
We discover that this is indeed the case, as our approach leads to significant gains in CL performance without using any of the aforementioned CL approaches or data augmentation methods. Specifically, we transform DST into the TransferQA (Lin et al., 2021) format and add examples from a retriever that is trained to identify turns that result in similar dialogue state updates (Hu et al., 2022). In addition, our approach does not require complex partitioning of the full training set into training samples and retrieval samples. We find that we can use each sample in the training set as both a target sample and an example in the retrieval database without causing any label leakage. Also, we experiment with a wide array of retrievers and find that models trained to perform DST-EGQA can be effective even with lower quality retrievers by intentionally training them with subpar examples such that they can learn when to leverage good examples and ignore bad ones (code available at https://github.com/facebookresearch/DST-EGQA). Lastly, we simply tweak the sampling approach for memory replay to sample at the dialogue-level instead of the turn-level and achieve significant gains in CL performance even with a single dialogue sample, resulting in state-of-the-art performance on the Schema Guided Dialogue (SGD) dataset (Zhu et al., 2022). In summary, our main contributions are: 1. We show that simply reformulating DST as a fine-grained example-guided question answering task (DST-EGQA) significantly improves continual learning performance by enhancing task consistency across services. 2. We propose a simple but highly effective dialogue-level sampling strategy for choosing memory samples that leads to state-of-the-art performance when combined with DST-EGQA. 3. We share a thorough analysis on DST-EGQA to establish its effectiveness, robustness, and limitations as a method for continual learning. Dialogue State Tracking as Example-Guided Question Answering (DST-EGQA) The goal of continual learning for DST is to sequentially train on a stream of n services T_1, ..., T_n with minimal degradation, i.e. catastrophic forgetting, of the peak performance that was achieved when the model was trained on each service T_i. In this section, we motivate and elaborate on the methodology of DST-EGQA for attaining this goal. Figure 2 presents an illustrated overview. DST as question answering Dialogue state tracking (DST) is defined as estimating the beliefs of a user's goals at every turn in a dialogue. It was traditionally formulated as a slot-filling task (Wu et al., 2020; Heck et al., 2020), and more recently as a structured text generation task (Hosseini-Asl et al., 2020; Peng et al., 2021; Su et al., 2022), shown in (0) in Figure 2. If a user were to say "Find me a 3 star hotel.", the goal is to deduce hotel-star = 3. However, we can also indirectly achieve the same predictions by reformulating DST as a collection of per-slot questions to answer (Gao et al., 2019; Lin et al., 2021). Given the same user request, we can ask our model to answer "What is the hotel star rating the user wants?" and have it predict 3. We hypothesize that this question answering approach is more conducive to continual learning because it leverages a general skill that is understandable through natural language. We only need to ask different questions to predict the slot values we are interested in. On the other hand, directly predicting a structured dialogue state requires training the model to generate slots that it has never generated before. 
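As an illustration of this reformulation, the following is a minimal sketch of converting a single turn into per-slot question-answer pairs; the slot-to-question templates and the data layout are illustrative assumptions rather than the exact templates used in this work.

```python
# Hypothetical slot-to-question templates for the hotel domain (illustrative only).
QUESTION_TEMPLATES = {
    "hotel-star": "What is the hotel star rating the user wants?",
    "hotel-name": "What is the name of the hotel that the user wants?",
}

def build_qa_pairs(dialogue_history, dialogue_state):
    """Convert one (history, state) pair into one QA example per slot.
    Slots that are absent from the state receive the answer 'none'."""
    context = " ".join(dialogue_history)
    pairs = []
    for slot, question in QUESTION_TEMPLATES.items():
        answer = dialogue_state.get(slot, "none")
        pairs.append({"input": f"{context} {question}", "target": str(answer)})
    return pairs

# "Find me a 3 star hotel." with state {hotel-star: 3} yields two QA pairs,
# one answered with "3" and one answered with "none".
examples = build_qa_pairs(["Find me a 3 star hotel."], {"hotel-star": 3})
```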
To transform DST into question answering as shown in (1) in Figure 2, we leverage the TransferQA (Lin et al., 2021) format. Given DS_t, the dialogue state of a dialogue until turn t, expressed as (key, value) pairs {(s_t,i, v_t,i) | i ∈ I} for slots i ∈ I = {1, ..., N_T}, where N_T is the number of slots of interest for domain T, each s_t,i is transformed into a question with a manually pre-defined template Q: s_i → q_i. The overhead of creating these templates is minimal as it only has to be done once and is as simple as transforming the name slot in the hotel domain to a natural text question equivalent, e.g. "What is the name of the hotel that the user wants?". Thus, with the dialogue history until turn t as H_t = {u_1, b_1, ..., u_t−1, b_t−1, u_t}, where u_i is the user's utterance on the i-th turn and b_i is that of the bot's, the original single input-output pair (H_t ⊕ T) → DS_t (Equation (1)) becomes N_T granular question-answer pairs (H_t ⊕ q_i) → v_t,i for each i ∈ I (Equation (2)), where ⊕ denotes simple text concatenation. A difference from the original TransferQA approach is that since we will be finetuning the model, we skip the step of training with external question answering datasets and do not take any special measures to handle none, i.e., empty slots, because our models will learn to generate none as the answer for these slots. Further detail on the TransferQA format and additional examples of the fully constructed inputs are shared in Appendix A.1. Fine-tuning with in-context examples Adapting to new services can be made even more seamless by providing in-context examples (Wang et al., 2022; Min et al., 2022; Ouyang et al., 2022). Even when faced with a question it has never seen before, the examples provide guidance on how it should be answered. This kind of task reformulation enables the development of models that achieve state-of-the-art zero-shot performance and generalizability even with small models (60M parameters) by explicitly fine-tuning with instructions and in-context examples. Since most recent work that focuses on generalizability and zero-shot models leverages generation models because of their open vocabulary, we also place our focus on generation models. Motivated by the results from Tk-instruct (Wang et al., 2022) and MetaICL (Min et al., 2022) that showed even relatively small models can generalize well if explicitly trained to follow instructions with examples, we explore whether we can prevent a model from overfitting to domain-specific questions and instead continually develop example-based question answering capabilities to enhance continual learning performance. Therefore, we extend Equation (2) to include in-context examples that are retrieved from the training set: we use H_t to form a query that retrieves the top k samples to use as in-context examples, and each retrieved example together with its relevant slot value is inserted before the target input, i.e., (H'_t' ⊕ q_i ⊕ v'_t',i ⊕ H_t ⊕ q_i) → v_t,i for k = 1 (Equation (3)). Throughout this work, we use k = 1 unless otherwise specified. Retrieving relevant in-context examples The goal of the retrieval system is to find an example turn H'_t' that requires similar reasoning for answering the target sample H_t, such that fine-tuning with it as an in-context example will help enable the model to apply the same reasoning for answering the question for the target sample. Hu et al. (2022) found that instead of matching for dialogue state overlap, matching for similar dialogue state change ∆DS, i.e. 
state change similarity (SCS), yields more relevant examples. State changes are simply the subset of DS that is different from the previous turn: ∆DS_t = DS_t \ DS_t−1. We found that computing similarity with this definition of state change results in many ties that lead to less relevant examples being lumped into the same rank as more relevant ones, so we make minor modifications by including the ∆DS operations, e.g. INSERT, DELETE, and UPDATE, as part of the slot key, i.e., each changed slot is represented as (o ⊕ s, v), where o is the slot operation. To resolve ties that still remain with this modification, we use the BM25 (Robertson et al., 2009) score between the target's and the example's last bot and user utterances (b_t−1, u_t). With our changes, we were able to observe a much better top k = 1 match, which we verified manually with 100 random samples. We denote examples retrieved with this new SCS+BM25 score as the Oracle because getting ∆DS requires knowing the DS that we would like to predict ahead of time, and therefore it cannot be used at test time. However, the Oracle score is useful for training a retriever that can retrieve examples with similar ∆DS and for estimating the upper bound for DST-EGQA. Using the Oracle score, for each sample in the training set, we calculate its similarity with other training samples and select the top 200 samples. From the selected samples, we pair the top ten and bottom ten as hard positive and hard negative samples, respectively, to train a SentenceBERT-based (Reimers and Gurevych, 2019) retriever using contrastive loss. We call the resulting retriever IC-DST-retriever v2 (IDR2). This is the same configuration for creating the dataset that was used to train the original retriever used for IC-DST, but instead of using x% of the entire training data, we use the entire training set of the first domain T_1 to train separate retrievers for each of the five domain orderings. We impose this constraint such that we conduct our experiments under the practical assumption that we are only provided data for T_1 at the beginning and we do not want to extend the continual learning problem to training the retriever. More details of IDR2's training procedure can be found in Section A.3. Dialogue-level sampling for memory The approaches that we outlined thus far are not orthogonal to existing continual learning methods. Therefore, they can be combined to further boost performance. One of the simplest methods is memory replay, which samples training data from previous tasks and adds them to the current training set so that the models forget less. For memory replay to be effective, it is important to select representative and nonredundant training samples. In DST, a training sample is a single turn in a dialogue, since the dialogue state is predicted for every turn. To reduce redundant instances, we propose a simple change to selecting training samples. Instead of combining turns from all dialogues and then randomly sampling turns, we propose sampling at the dialogue-level first and then including all turns from the sampled dialogues to form the memory. The motivation is that the same type of dialogue state update rarely occurs more than once within a dialogue, but there is a high chance that frequent dialogue state updates across dialogues may be sampled multiple times when using turn-level sampling. The difference between the two sampling strategies is clearest when comparing their implementations in Python; a minimal sketch of both is given below. 
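The following is a minimal sketch of the two sampling strategies, assuming the training data is organized as a list of dialogues, each holding its list of turn-level samples; the data layout and function names are illustrative assumptions rather than the authors' exact implementation.

```python
import random

def turn_level_sampling(dialogues, budget, seed=0):
    """Pool all turns from all dialogues, then draw `budget` turns at random."""
    rng = random.Random(seed)
    all_turns = [turn for dialogue in dialogues for turn in dialogue["turns"]]
    return rng.sample(all_turns, min(budget, len(all_turns)))

def dialogue_level_sampling(dialogues, budget, seed=0):
    """Draw whole dialogues until their turns meet or exceed the budget,
    then subsample the pooled turns down to the budget."""
    rng = random.Random(seed)
    shuffled = list(dialogues)
    rng.shuffle(shuffled)
    pooled = []
    for dialogue in shuffled:
        pooled.extend(dialogue["turns"])
        if len(pooled) >= budget:
            break
    return rng.sample(pooled, min(budget, len(pooled)))
```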
Data We use the continual learning setup proposed by Zhu et al. (2022), which uses 15 single domains from the Schema Guided Dialogue dataset (Rastogi et al., 2020), and aggregate our results over the same five domain orders to make the most reliable comparisons with their results. Comparing results with the same order is crucial as we find that results can have significant variance depending on the chosen domains and their order. For multi-task training, there is only a single permutation, and therefore we aggregate results over runs with three different seed values. Our formulation described in Section 2.2 shows that we are operating under the assumption that the domain of interest will be known ahead of time. Evaluation DST performance is mainly measured by joint goal accuracy (JGA), which indicates the percentage of turns for which all slot values are correctly predicted. For CL, given the JGA for domain i after training up to the t-th domain, a_t,i, and the total number of domains T, we compare our approaches with three metrics from Zhu et al. (2022): (i) Average JGA, the average of JGA on each domain after training on all domains in the continual learning setup; (ii) Forward Transfer (FWT), how much training on the current domain boosts JGA on future unseen domains; and (iii) Backward Transfer (BWT), how much training on the current domain reduces JGA on data from previously seen domains (a short computational sketch of these metrics is given below). We place the most importance on the final Average JGA, while FWT and BWT provide additional signal on how different approaches provide more transferability, and hence task consistency, between domains. Baselines We replicate the baseline results from Zhu et al. (2022) using their implementation, which include approaches from Madotto et al. (2021): • SimpleTOD (Hosseini-Asl et al., 2020): perform DST as a structured text generation task, predicting the full state as a single sequence. As was done in Zhu et al. (2022), we modify the SimpleTOD format to append the domain name at the end of the dialogue history as described in Equation (1). • Memory: randomly select M turns from the training data for each previous domain and include them in the current domain's training data. • EWC: use the same samples selected for memory replay to regularize with the Fisher information matrix (Kirkpatrick et al., 2017). • AdapterCL (Madotto et al., 2021): freeze the base model and train parameter-efficient adapters for each domain with a number of weights equivalent to 2% of that of the pretrained model. • Continual Prompt Tuning (Zhu et al., 2022): freeze the base model and continually train soft prompts after reformulating DST as a masked-span recovery task (Raffel et al., 2020). We include their best results, which take advantage of a memory buffer for replay and for memory-guided backward transfer, a form of regularization that prevents updates if doing so would increase the current model's loss on the memory samples, by computing gradients on them. For DST-EGQA, we compare various configurations to better understand the strengths and weaknesses of our approach. We vary the retriever used during training and combine it with other memory replay strategies. We also show CPT Multi-task and DST-EGQA Multi-task to show the multi-tasking upper bound performance for average JGA. Other than random sampling and BM25, retrieval ranking is based on the similarity between sentence embeddings, which is the dot product between the query and the key. With the exception of oIDR, which was trained to identify similarity using the last turn's dialogue state together with the last bot and user utterance pair, the query and the keys of the database use only the last utterance pair: u_t−1 ⊕ u_t. We found this approach to be better as it diminishes the undesirably high similarity assigned to examples from the same dialogue that have the same previous dialogue state. 
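As a concrete reference for the metrics defined above, the following sketch computes them from a matrix a[t][i] of JGA scores (JGA on domain i after training through domain t); the indexing and averaging follow the standard continual learning definitions used by Zhu et al. (2022), so treat the details as an illustrative assumption rather than the official evaluation script.

```python
import numpy as np

def cl_metrics(a):
    """a: (T, T) array with a[t, i] = JGA on domain i after training through
    domain t, where domains are trained in order 0..T-1."""
    a = np.asarray(a, dtype=float)
    T = a.shape[0]
    avg_jga = a[T - 1].mean()  # average JGA over all domains after training on the last domain
    fwt = np.mean([a[i - 1, i] for i in range(1, T)])  # zero-shot JGA on each domain before training on it
    bwt = np.mean([a[T - 1, i] - a[i, i] for i in range(T - 1)])  # change relative to the peak right after training on domain i
    return avg_jga, fwt, bwt
```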
Technical details We conduct our experiments with the T5-small model (Raffel et al., 2020). We train with a single GPU using the AdamW optimizer, a learning rate of 1e-4, and a batch size of 16. We train on each domain for ten epochs without early stopping. We select the checkpoint with the best validation set performance when moving on to the next domain. Our experiments are run on V100, A40, and A100 GPUs, based on availability; the choice of GPU introduces minimal variability to the final result. 4 Experiments and Analysis Main results TransferQA's format is more CL-friendly. The results for only transforming the DST format from prior work (Equation (1)) to that of granular question answering using the TransferQA format (Equation (2)) are shown in the row for DST-EGQA − In-context examples in Table 1. Without the help of any in-context examples, the transformation alone yields a dramatic improvement in CL performance, increasing average JGA from 14.4 to 43.2, and also improving on both FWT and BWT. These results support our hypothesis that a question answering task that is understandable through natural language is more conducive to better continual learning than learning to generate service-specific structured output. Example-guided question answering further enhances CL performance. The subsequent rows for DST-EGQA show that fine-tuning with in-context examples can further enhance all CL metrics by a large margin. Most notable are the boosts seen in FWT, for which memory replay has an almost negligible effect. Augmenting DST-EGQA with memory replay leads to even larger boosts, even exceeding the CPT Multi-task model, with most gains coming from BWT, which is expected with memory replay methods. Using the Oracle retriever at test time leads to statistically insignificant improvements, indicating that IDR2 can retrieve examples that are on par with the Oracle examples. Lastly, we can see that the relative gains in Average JGA and BWT from memory replay become less pronounced with models trained with in-context examples, indicating that memory replay and example-guided question answering have overlapping gains. Double-dipping the training set as a retrieval database does not lead to overfitting. It is important to note that, because our retrieval methods are commutative, a target sample that is paired with an example will serve as an example when the example becomes the target sample. Therefore, the answers for all training samples are seen as part of the context during training with our setup described in Section 2.3. This raises overfitting concerns that the model could easily memorize the answers for all samples and thus not learn generalizable question answering. Interestingly, this does not seem to be the case, as training in this setup leads to improved or on-par final test set performance compared to training without any examples. This implies that our approach does not impose the additional data constraint of having to split the training set into dedicated training samples and retrieval samples for it to be effective. However, not shown in Table 1 is that we find that DST-EGQA is sensitive to the training dynamics (Section 4.2) and the quality of the retrieved examples (Section 4.3). 
Training dynamics In practical settings we do not have an oracle retriever, and our database may not contain a perfect example for each case seen at test time. Thus, we may in fact retrieve irrelevant examples. It is important for the model to be able to handle these situations. Specifically, it should be able to leverage relevant examples, yet ignore irrelevant ones. To become more robust to these realistic circumstances, it may be useful to intentionally mix in irrelevant examples during training for DST-EGQA. We vary the combination of IDR2 and Oracle used for training, validation, and test time. Results in Table 2 support our hypothesis, showing that aligning the retrieval method used at training time with the method used at test time leads to the best performance. Interestingly, the best performance is achieved by using the Oracle retriever at validation time, shown by the large gap between IDR2 → IDR2 → IDR2 and IDR2 → Oracle → IDR2 (second and third row). This is somewhat surprising given that one may expect that selecting a checkpoint that performs best in the same setting as test time would lead to better test time performance. Retrieval method sensitivity The findings from Section 4.2 raise the question of whether training with other retrievers that may provide a different mixture of good and bad examples can lead to a further boost in performance with DST-EGQA. We apply all the retrievers defined in Section 2.3 and use the same training dynamics that led to the best results previously to examine each retriever's effectiveness. As shown in Table 3, our IDR2 model seems to capture this balance the most effectively, as it is significantly better than all other retrieval methods. Memory sampling strategy and size We study the effect of the memory sampling strategy and the size of the memory budget. We do not use in-context examples for any of these configurations so as to study their effects in isolation. As hinted by the results in Table 1, dialogue-level sampling seems to be a superior sampling strategy to turn-level sampling. We take a deeper dive into the relationship between the two sampling techniques and how both approaches scale with memory budgets by varying the memory budget sizes to 10, 50, and 100. Here, size refers to the number of training samples. To make sure the comparison between turn-level and dialogue-level samples is fair, we sample dialogues until the total number of turns in the sampled dialogues exceeds the target size, and then sample the targeted number of samples from the exceeded set. Table 4 shows that dialogue-level sampling achieves significantly better performance than turn-level sampling for all equivalent memory budget sizes and is even on par with the next budget size used for turn-level sampling. This is likely due to dialogue-level sampling leading to a more comprehensive set of samples that cover a wider diversity of dialogue state updates at these smaller sizes of the memory budget, as described in Section 2.4. As the memory budget becomes larger, however, the gap between turn-level sampling and dialogue-level sampling diminishes, since both methods converge to multi-task training when the memory budget is unlimited. 
Number of in-context examples We also study the effect of having more than one in-context example and share the results in Table 5. Including only one example to learn from in-context creates a single point of failure, which is especially risky for suboptimal retrieval methods. Having additional examples to learn from can help mitigate this risk. Therefore, we repeat our experiments using multiple in-context examples. However, at least with small model sizes, the DST models are not able to effectively leverage additional examples. This is not surprising for the Oracle retriever, where in most cases the top example is the best example that can be leveraged from the training set. Related Work Continual learning Continual learning prolongs the lifetime of a model by training it further with new incoming data without incurring the cost of catastrophic forgetting (McCloskey and Cohen, 1989; French, 1999). There are three main branches of continual learning: architecture-based methods, replay-based methods, and regularization-based methods. Architecture-based methods propose dynamically adding model weights when learning new data (Fernando et al., 2017; Shen et al., 2019). Replay-based methods mitigate catastrophic forgetting by keeping a small sample of the previous data as part of a memory budget to train with the new data (Rebuffi et al., 2017; Hou et al., 2019). These methods mainly experiment with sampling strategies and memory budget efficiency. Lastly, regularization-based methods place constraints on how the model is updated during training with the new data such that its performance on previous data is maintained (Kirkpatrick et al., 2017; Li and Hoiem, 2018). Dialogue state tracking Continual learning for DST has been explored by a series of recent works that applied a combination of the methods mentioned above. Liu et al. (2021) expanded on SOM-DST (Kim et al., 2020) with prototypical sample selection for the memory buffer and multi-level knowledge distillation as a regularization mechanism. Madotto et al. (2021) applied various continual learning methods to end-to-end task-oriented dialogue models and found that adapters are most effective for intent classification and DST while memory is most effective for response generation. More recently, Zhu et al. (2022) proposed Continual Prompt Tuning (CPT), which is most related to our work. CPT improves continual learning performance by finetuning soft prompts for each domain and reformulating DST to align with T5's masked-span recovery pretraining objective (Raffel et al., 2020). Compared to CPT, we suggest a more granular reformulation to facilitate learning from examples and do not rely on any regularization or additional weights. Task reformulation and in-context learning Enhancing a model's generalizability to various tasks by reformulating inputs and/or outputs to become more uniform has become an increasingly popular method for massive multi-task learning (Aghajanyan et al., 2021), even for tasks that are considered distant from one another. T5 (Raffel et al., 2020) accelerated this movement by providing dataset- or task-specific labels or minimal instructions to the inputs and then doing multi-task training. Building on T5, Sanh et al. (2022) and Wei et al. (2021) used a more elaborate and diverse set of instruction templates and showed that this can significantly boost zero-shot performance. Cho et al. 
(2022) applied a similar idea to a more selective set of pre-finetuning tasks before training on the target DST dataset to improve DST robustness. Tk-instruct (Wang et al., 2022) takes a step further by scaling up the number of tasks included in T0 and also provides positive and negative examples in the context in addition to the instructions. Similarly, Min et al. (2022) introduced MetaICL, which explicitly trains a model with the few-shot in-context learning format used for large language models (Brown et al., 2020), and showed that it achieves better in-context learning performance than larger models. Task reformulation has also been recently explored to help the model better understand the task at hand and reduce domain-specific memorization and thus boost zero-shot DST performance (Li et al., 2021; Lin et al., 2021; Gupta et al., 2022; Zhao et al., 2022). Conclusion In this paper, we propose Dialogue State Tracking as Example-Guided Question Answering as a method for enhancing continual learning performance that factors dialogue state tracking into granular question answering tasks and fine-tunes the model to leverage relevant in-context examples to answer these questions. Our method is an effective alternative to existing continual learning approaches that does not rely on complex regularization, parameter expansion, or memory sampling techniques. Analysis of our approach reveals that even models as small as 60M parameters can be trained to perform in-context learning for continual learning and that complementing such a model with a randomly sampled memory achieves state-of-the-art results compared to strong baselines. Limitations Using the TransferQA idea and retrieved examples for in-context fine-tuning adds many configurations, which we have not exhaustively explored, prioritizing instead what we judged to be more important experiments. For example, we did not explore sensitivity to the specific wording of questions, as was done with T0 (Sanh et al., 2022). We leave as future work the testing of the hypothesis that having more diverse questions per slot can lead to even more generalizability between domains and bring even further improvements to DST-EGQA. Another limitation of DST-EGQA is that the retrieval database stores all previously seen samples from training and thus can be considered a memory with infinite size in our current formulation. Although the samples in the retrieval database are not used as training samples but only provided as in-context examples during inference after the model has been trained on subsequent services, the memory requirement for maintaining the database may be quite high. However, we believe that this memory requirement is still less restrictive than having the compute for fully retraining the model with all data whenever the model needs to learn a new service, especially when the training data set is large. Lastly, an important practical consideration is the varying technical overhead in the implementation and portability of different approaches. Compared to other approaches, training and inference are relatively simple, as we use an autoregressive text generation objective without special modifications. However, while our approach does not require any additional parameters, it does require a database and a retrieval model that is comparable in size to the DST model. Therefore, depending on the technical constraints, managing these two components may be less desirable. 
Figure 1: Left: When continually learning with the original DST format, DST models need to memorize new slot keys when learning each subsequent service. Right: Instead, reformulating DST into a bundle of granular question answering tasks with help from similar examples (symbolized by the light bulbs) makes training data uniform across all services. Learning new services effectively becomes additional training for the general task of example-guided question answering and is more conducive to continual learning. Figure 2: DST-EGQA overview. We factor (0) the original dialogue state tracking task into (1) a granular question answering task with the TransferQA format (Lin et al., 2021) and extend it to (2) pair each question with retrieved examples that are provided in-context such that the domain shift is reduced further to an example-guided question answering task. In TransferQA, the original dialogue state is mapped to templated questions that correspond to each slot key and value pair, which in aggregate request the equivalent information. DST-EGQA applies TransferQA for continual learning and uses the target dialogue as the query to retrieve similar examples from the database, which is formed from the training set excluding the target. Table 1: CL metric results with a focus on the reliance on other continual learning techniques. We compare models sequentially trained on 15 tasks from the SGD dataset and aggregate results across five different domain permutations. DST-EGQA achieves the best results without any additional parameters or regularization methods. The last two rows provide the multi-tasking results, which serve as an upper bound. In this table, results with retrievers are with a single in-context example and the indicated retriever is used for training and test time, while the Oracle retriever is used for the validation set. Memory here refers to samples that are added to the training data of subsequent services for memory replay. All rows that use memory are with a memory budget of M = 50. † indicates statistically significant at p < 0.05 with the next best comparable value. Table 2: Train-validation-test retrieval method comparison. Keeping the training and test-time retrieval methods the same while keeping the development set as the Oracle leads to the best results, except for the last row, which requires knowing the correct answer ahead of time. 
† indicates statistically significant at p < 0.05 with the next best value. Table 4: Memory size analysis for DST-EGQA. Sampling at the dialogue level is much more effective than sampling at the turn level, especially for a constrained memory budget. Table 5: Number of in-context examples analysis. Small models are unable to leverage more than one in-context example when explicitly finetuned to perform in-context learning.
7,372.4
2023-05-23T00:00:00.000
[ "Computer Science" ]
Green Synthesis of Superparamagnetic Iron Oxide Nanoparticles with Eucalyptus globulus Extract and Their Application in the Removal of Heavy Metals from Agricultural Soil The green synthesis of metal oxide nanoparticles is presented as an excellent sustainable alternative for achieving nanostructures with potential applications. This research provides important information regarding the influence of the type of solvent used in extracting organic reducing agents from E. globulus on the FeO NPs green synthesis protocol. A broad approach to characterization is presented, where UV-vis spectrophotometry suggests the presence of this type of nanoparticulate material. Likewise, the reduction mechanism was evaluated by FT-IR and the magnetic properties were evaluated by PPMS. In addition, characterizations were linked via elemental analysis (EDX), crystallographic characterization (XRD), electron microscopy (SEM/STEM), and Zeta potential to evaluate colloidal stability. The results show the influence of the type of solvent used for the extraction of organic reducing agents from E. globulus, and its effect on the synthesis of FeO NPs. In addition, the nanostructured material obtained showed excellent efficiency in the remediation of agricultural soil, eliminating metals such as Cr(VI), Cd, and, to a lesser extent, Pb. Consequently, the application of nanoparticles is proposed as an innovative solution for certain needs of society. The techniques for obtaining these NPs focus particularly on three methods: physical, chemical, and biological [15]. In the development of physical and chemical methods, limitations such as cost, low productivity, and high energy consumption are evident, as well as negative impacts on the environment and human health due to the use of solvents and surfactants, which are characterized as toxic, corrosive, and flammable chemicals [16][17][18]. Accordingly, the use of chemical methods for the synthesis of nanoparticles in biomedical applications was restricted, due to the chemicals' toxicity, instability, and lower biocompatibility [19]. In addition, several methods and techniques have been linked to the reduction process of precursor salts using inorganic chemical agents [20], including the sol-gel method [21] and techniques such as hydrothermal synthesis [22] and laser-based methods. Fresh leaves of E. globulus were washed three times with ultrapure water to remove any type of impurities. Subsequently, a UNPA-MEMMERT model UM 55 plus paraffin oven (Memmert GmbH Co. KG., Darmstadt, Germany) was used to dehydrate the prepared leaves at 70 °C for 36 h and to eliminate the moisture present. After the established time, the dried leaves were removed and shredded. Two samples were prepared separately with the solvents (alcohol 96% G.L. and absolute ethanol). For this process, 5 g of ground E. globulus were mixed with 50 mL of each solvent. Both prepared samples were placed under magnetic stirring (300 rpm) for around 30 min at room temperature (21 °C). Then, they were emptied into 15 mL Falcon tubes and subjected to centrifugation at 3000 rpm (Hettich Zentrifugen, EBA 20C) for 15 min. The supernatant was separated using a diaphragm vacuum pump (GAST DOA-P704-AA). Finally, the extract obtained was covered with aluminum foil and stored at 4 °C for later use and analysis. Figure 1 shows the scheme of the green synthesis protocol used in this investigation. 
The precursor iron nitrate nonahydrate (Fe(NO3)3·9H2O) was prepared at a concentration of 0.1 M, diluted using ultrapure water as the solvent. Two precursor samples of 50 mL each were prepared, to which 15 mL of each previously prepared extract were added dropwise. The samples were placed under magnetic stirring (400 rpm at 21 °C). Finally, the liquid was evaporated using a water bath until a black sediment was obtained, which indicated the presence of FeO NPs. The nanoparticles obtained were washed with ultrapure water, vortexed for 10 min to homogenize the sample, and then centrifuged at 7000 rpm for 15 min. This process was repeated three times (see Figure S2). Characterization of FeO NPs Once the nanoparticulate material was obtained, the first analysis was via UV-vis spectrophotometry (Hewlett Packard, 8452, Palo Alto, CA, USA). The equipment was calibrated in the range of 300 to 900 nm. This analysis aimed to find the surface plasmon resonance (SPR) peak of the material under study and to evaluate the stability over time of both samples, analyzing aliquots over periods ranging from 1 to 39 days. Likewise, the FeO NPs were analyzed by Fourier transform infrared spectrophotometry (FT-IR) (Nicolet iS50, Thermo Fisher Scientific, Maryland, USA) to evaluate the presence of some functional groups and thereby to consider the possible reduction mechanism present in the green synthesis. Morphological analysis was also performed, using a 200 kV/130 µA transmission electron microscope (TEM) (JEOL 2100F, Tokyo, Japan) equipped with a CCD camera (one view) in three modes: high resolution (HRTEM), scanning (STEM with an annular dark-field detector, ADF), and, for elemental analysis, energy-dispersive X-ray spectroscopy (EDS, STEM-DF mode, and the Oxford energy spectrometer, Xplore). The samples for TEM analysis were prepared by placing the nanoparticulate material with acetone in ultrasound for about 30 min. This solution was dropped onto the carbon-coated TEM grids. Structural analysis was performed by X-ray diffraction (XRD) (Empyrean diffractometer, Panalytical) with Cu-Kα radiation (λ = 1.54056 Å) at 45 kV and 40 mA. The information was obtained in the range of 20° < 2θ < 80° in Bragg-Brentano geometry, spinner mode, with a step size of 0.026°. The stability analysis was complemented by characterization via Zeta potential from the electrophoretic mobility (ZetaPlus Zeta potential analyzer, Brookhaven Instruments Corporation, Holtsville, NY, USA). The results were reported as the average of ten different measurements and their standard deviation. Due to the nature of the nanoparticulate material (FeO), it was essential to characterize the magnetic properties using a DynaCool Physical Properties Measurement System (PPMS) from Quantum Design at the Brazilian Center for Physics Research. Likewise, magnetization was evaluated as a function of temperature, which was carried out under zero-field-cooling (ZFC) and field-cooling (FC) conditions (200 Oe probe field), with hysteresis loops at 5 K and 300 K and field application up to 9 T. Evaluation of the Elimination of Heavy Metals Present in Agricultural Soil In the application stage, the influence of the FeO NPs colloid volume on the removal of heavy metals was evaluated. 
Samples of agricultural soil from the agricultural area of the Moche district, located in the province of Trujillo in Peru (an area currently affected by the presence of mining tailings in the water tributaries near the site) were used. Three similar samples were prepared by diluting 125 g of agricultural soil in 250 mL of ultrapure water, and maintaining magnetic stirring (600 rpm) for around 20 min/21 • C. Regarding the colloid FeO NPs, three volumes were considered duly coded, as follows: sample M1 = 5 mL, sample M2 = 10 mL, and sample M3 = 15 mL. Each of these volumes was measured to a value of 50 mL with ultrapure water so that the colloidal samples were homogeneous in volume. Subsequently, each diluted agricultural soil sample was mixed with its respective volume of colloid FeO NPs, obtaining a total volume of 300 mL of sample, which was homogenized in magnetic stirring (250 rpm) for 30 min. A control sample (without FeO NPs colloid) of agricultural soil was prepared, diluted in 300 mL of ultrapure water, and homogenized in the same conditions. For the quantification analysis of heavy metals (chromium, cadmium, and lead) by atomic absorption (Agilent Technologies, 200 series AA, Santa Clara, CA, USA), 5 mL of each of the samples were taken and 10 mL of HNO 3 + 3HCl solution were added, filling with ultrapure water to a volume of 50 mL. These samples were digested until the volume was reduced to a value of 10 mL for 50 min. Finally, all the samples obtained, volumetric to 50 mL with ultrapure water, were duly filtered with a diaphragm vacuum pump to avoid the presence of impurities. The samples were then ready for reading and analysis. Characterization by UV-vis Spectrophotometry FeO NPs usually present a characteristic to the naked eye that is linked to light brown and black colors, due to their excitation against the interaction with electromagnetic radiation and the effect on the surface plasmon resonance (SPR); likewise, they are directly linked in size and morphology. Organic extracts usually contain metabolites that act as potential reducers of metal salts. This reduction involves a process of formation of nanostructures. In this sense, as an initial characterization in a nanoparticle synthesis process, UV-vis spectrophotometric analysis was considered. Figure 2 provides the results of both colloids under evaluation, showing the presence of the SPR peak in typical ranges for this type of nanostructure (FeO NPs), specifically at 391.2 nm for the FeO NPs obtained using the E. globulus extract in the 96% alcohol solvent as the reducing agent, and at 393.4 nm using the extract in the absolute ethanol solvent. The difference between both spectra is related to the intensity of their absorbance. FeO NPs synthesized using the ethanol extract had more intensity, which is associated with the effect of absolute ethanol in extracting greater amounts of phenolic compounds. These compounds promote the reduction process of the precursor salt, generating a greater production of nanoparticles. Thus, it is understood that a higher intensity of the absorbance peak is synonymous with better production of nanoparticles. However, the final stabilization process and the plugging effect of the phenolic compounds are so excessive that a metal-phenolate complex with free charges is possibly generated, leading to aggregation processes and, therefore, instability over time. 
This effect does not occur with FeO NPs obtained using the 96% alcohol extract, where there is a balanced reduction process between the metal ions and the hydroxide ions of the extract. A successful plugging process may be considered, avoiding agglomerations and therefore generating stability over time. This is why the bandwidth is better than in the previous case. These procedures are demonstrated and corroborated by the other results described in this paper. The colloidal samples of the FeO NPs obtained were subjected to evaluation by spectrophotometric characterization to evaluate their stability over time. Figure 3a,b shows the behavior of the FeO NPs colloids obtained by green synthesis in both the 96% alcohol solvent and the absolute ethanol solvent during an interval from 1 to 39 days. In both cases, the dynamic behavior of the absorbance decreased as the days passed, indicating a possible agglomeration of the nanostructures. The bandwidth observed from inception for the sample with the ethanol solvent extract tended to decrease, which is linked to the formation of clusters of similar sizes; i.e., clusters with a tendency to be slightly monodisperse. Better stability and monodispersity behavior occurred when the 96% alcohol solvent is used. Green synthesis procedures are being increasingly studied in recent years. However, it is important to consider that the concentration of polyphenols, or some type of functional group, defines the correct formation and, especially, the stability of the nanostructures. Chemically, there are reaction procedures of complexes of the metal salt (in this case, iron salt) and the process of Fe (III) reduction by oxidized polyphenols, which is suggested as a mechanism [61]. For this reason, the solvent used for the extraction of organic compounds is important, and this research contributes to that consideration for the first time. Fourier Transform Infrared Spectroscopy (FT-IR) The FT-IR spectra of the leaves of E. globulus extracts in the 96% alcohol and the absolute ethanol solvents, and the respective FeO NPs, are shown in Figure 4. The peak is located at 1668 cm −1 , corresponding to the aromatic ring C=N, 1382 cm −1 and the vibration -CN-of the amides, or the -CO-stretching of alcohols, carboxylic acids, and 1082 cm −1 , corresponding to the C-O or C=O vibration. In addition, for the 96% alcohol solvent extract, peaks are located at 3400 cm −1 related to the H bond. Consequently, it could be concluded that the metabolites responsible for the reduction process are those that are related to the family of aromatic compounds. It has been shown that the eucalyptus extract in both types of solvents presents simple phenols and derivatives of phenolic compounds, which could be responsible for the process of reduction of metal ions and, therefore, the formation of FeO NPs. The reaction of the metal ion, iron nitrate nonahydrate, occurs with the presence of phenolic groups, as shown by the presence of aromatic groups (CH) from the extract of E. globulus. These include the monosubstituted benzene ring, the 1,4 disubstituted benzene ring, and the 1,2,3 trisubstituted benzene ring, attributable to the extract of E. globulus, which in turn contains phenolic components (3Ar-(OH)n) such as borneol, carvacrol, citronellal, etc. These components react with the metal ions (precursors), generating oxidation reduction and finally achieving stabilization, due to the presence of a greater amount of other hydroxide ions (OH) Equation (1). 
The result of the reductive reaction of the precursor is an Fe(0)-phenolate complex formed by a chelating effect (a substance forming complexes with heavy metal ions), causing the nucleation and growth of the nanoparticles. X-Ray Diffraction (XRD) Characterization by X-ray diffraction (XRD) was carried out. For FeO NPs synthesized using an extract in the absolute ethanol solvent, the results show a 2θ peak at 25°, which can be attributed to the organic materials present in the extract, which act as a stabilizing agent [61]. This peak was not observed for the other case under study. The results obtained are similar to those of other investigations, where a methodology for the green synthesis of FeO NPs was developed using other types of organic extracts. This similarity strengthens and sustains the conclusion that the synthesis was successful [17,62,63]. The information from the diffractograms obtained from the FeO NPs was also used to determine the crystallite size using the Debye-Scherrer equation (a short numerical sketch of this estimate is given below). The results show measurements of 1.715 nm (FeO NPs using the absolute ethanol solvent extract) and 2.863 nm (using the 96% alcohol extract). The STEM results reinforce the sizes obtained, falling within a similar range. Elemental Composition-Energy Dispersive Spectroscopy (EDS) To determine the elemental composition of the FeO NPs, an EDS characterization was performed. The results (Figure 6) show the presence of the elements iron and oxygen, with no other type of element, confirming the quality and purity of the synthesis. The results show that the FeO NPs obtained with an extract in the absolute ethanol solvent have an elemental composition of 78.3% oxygen and 21.7% iron. In the sample obtained with an extract in the 96% alcohol solvent, the values of O and Fe were very close to the previous case, with values of 80% and 20%, respectively. An important detail to highlight is that both samples did not show the presence of other types of elements, which is attributable to an efficient process of reduction and formation of iron in its oxide form. Iron tends to react with air, generating the formation of FeO layers [64]. In addition, an analysis of the electronic structure of the FeO NPs was carried out by means of EELS characterization. Figure S1 shows the peaks for the oxygen K edge and the Fe L2,3 edges of both colloids under study. A difference in the intensity of peaks A and B is evident. Generally, a lower intensity is related to more oxygen vacancies in the nanomaterial, which allows us to deduce that the FeO NPs obtained with the extract in the absolute ethanol medium have more oxygen vacancies than the other study sample. Characterization of FeO NPs by Transmission Electron Microscopy (TEM/STEM) The STEM characterization confirmed the presence of nanoparticles with spherical morphology, and with sizes that are a function of the type of E. globulus extract used in this research. These observations confirmed the influence of the type of solvent. The FeO NPs obtained using an extract in the ethanol solvent (Figure 7a) demonstrated an average size of 2.34 ± 0.53 nm. Likewise, the presence of small agglomerates is very possibly linked to traces of the organic extract, which coincidentally relates to the peak at 2θ = 25° found in the XRD characterization, attributable to organic stabilizing agents, in addition to the organic radicals shown in the FT-IR. 
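For reference, the crystallite-size estimate mentioned above follows the Debye-Scherrer relation D = Kλ/(β cos θ); the sketch below shows the arithmetic, where the shape factor K ≈ 0.9 and the example peak values are illustrative assumptions, while λ = 1.54056 Å is the Cu-Kα wavelength used in the XRD measurements.

```python
import math

def scherrer_crystallite_size(two_theta_deg, fwhm_deg, wavelength_nm=0.154056, k=0.9):
    """Estimate crystallite size D (nm) from an XRD peak via D = K*lambda/(beta*cos(theta)).
    two_theta_deg: peak position (2-theta) in degrees; fwhm_deg: peak width (FWHM) in degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)  # FWHM converted to radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative call with assumed peak values (not measured data from this work):
print(round(scherrer_crystallite_size(two_theta_deg=35.5, fwhm_deg=3.0), 2), "nm")
```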
For FeO NPs using the 96% alcohol solvent extract as a reducer (Figure 7e), the same type of geometry was evidenced, with an average size of 4.17 ± 1.22 nm, without showing evidence of the presence of any type of organic trace. The results corroborate those already obtained by calculating the crystallite size using the Debye-Scherrer equation by XRD. Characterization of Magnetic Properties of SPIONs The characterization of the magnetic properties of the nanostructures under study ( Figure 8) showed a response attributable to superparamagnetic materials (SPIONs), which in turn had a very similar response when they were at 5 K with an Ms magnetization value of around 7.94 emu/g. However, a decrease in this value was evidenced when the temperature increased to 300 K (1.501 emu/g for FeO NPs using extract in the absolute ethanol solvent, and 2.059 emu/g in the 96% alcohol solvent). This tendency of magnetization to decrease was related to the decrease in the size of the nanoparticle, as corroborated by the results obtained by TEM/STEM (Figure 7). Several previous authors reported lower values of magnetization compared to the protocol presented in this research, using methods such as the thermal decomposition method (76 emu/g) [65] and coprecipitation (60 emu/g) [66]. Research reported that the reduction in magnetization was related to a crystalline disorder, i.e., spin inclination, as a consequence of the reduction of the coordination of surface cations or, in some cases, was linked to negative surface effects promoted by a broken exchange between spins in NP with tiny crystallite size [17,67]. Regarding the method of green synthesis of FeO NPs, previous researchers also reported various values for magnetization in SPIONs, with values lower than those obtained in this investigation, such as 23 emu/g [68], 5.35 emu/g [69], 7.78 emu/g [70], 11 emu/g [71], 0.015 emu/g [72], and 1.57 emu/g [73]. Figure 9 shows the results of the magnetization response of both of the nanoparticulate colloids under study as a function of the temperature variation. The increase in temperature generated the reduction of the magnetization values for both the measurements of zero field cool (ZFC, orange) and WFC (blue). The response of the WFC measure also described a similar behavior. In both cases, there was no evidence of thermal instability. Zeta Potential The stability of FeO NPs colloids is closely related to their surface charge [74]. Thus, it was important to characterize the aforementioned characteristic from electrophoretic mobility. For this characterization, KCl was used at a concentration of 1 mM at pH 6.5. The results of the characterization by Zeta potential showed, for FeO NPs synthesized using E. globulus extract in the absolute ethanol solvent, an average of 26.69 ± 2.83 mV and electrophoretic mobility 2.09 ± 0.22 (µ/S/V/cm). For the colloid FeO NPs obtained with the 96% alcohol solvent, an average value of 23.58 ± 2.51 mV and electrophoretic mobility of 1.84 ± 0.2 (µ/S/V/cm) were obtained, showing good colloidal stability. This result reinforced the result evaluated by spectrophotometry (Figure 3). It is important to highlight that colloids with high Z potential (−/+) values are electrically stable, whereas colloids with low Zeta potential (−/+) tend to coagulate [75]. The values obtained indicated minimal or no presence of functional groups and deprotonated biomolecules of the E. globulus extract in the colloid, which implied nanoparticles without subsequent reactions. 
The result obtained in this investigation was comparable to those of other investigations related to green synthesis using other types of organic extracts [76,77]. Discussion The green synthesis protocols for metallic nanoparticles are mediated by the use of organic extracts, and these in turn are mediated by the presence of metabolites that act as reducing agents for the metallic salt. However, to achieve a complete process of metal precursor reduction, it is necessary to have a high concentration of metabolites. For this, it is important to consider the influence of the type of solvent used for extraction. This research provided information, for the first time, on the influence of the use of two types of solvents: the 96% alcohol solvent and the absolute ethanol solvent (99.9%). Extracts in these types of solvents, having a higher alcohol content, were used in the subsequent processes. By contrast, aqueous extracts or extracts with a low percentage of alcohol tended to oxidize quickly, which complicated the process. The diversity of characterizations that were made with respect to the samples under study allowed the consolidation of important information. Thus, the FeO NPs obtained using the extract in the 96% alcohol solvent showed different, but important, characteristics with respect to the colloid obtained using the extract in the absolute ethanol solvent. This difference is specifically linked to colloidal stability, which was initially monitored by UV-vis spectrophotometry and reinforced by evaluation of the Zeta potential, where the nanoparticle size showed an invariance with respect to the SPR peak, with minimal variation in absorbance. Both colloidal samples presented spherical morphology. However, the nanoparticles obtained with the absolute ethanol extract showed the presence of organic traces from E. globulus, very possibly related to the fact that high alcohol contents allow the extraction of other types of functional groups that do not contribute to reducing activity, remaining in the colloid as traces and introducing variability in the other optical and magnetic properties of the sample. The results of the magnetization measurement showed the presence of SPIONs (superparamagnetic FeO NPs), and in turn it was observed that their properties are defined by the relationships between sizes, surfaces, and crystalline structures, and by an important relationship regarding the magnetic moment, where a lower value is produced due to a possible modification of the three-dimensional frame, evidenced as different magnetization values (emu/g) [78]. The nanoparticulate material obtained was characterized by reaching a state of saturation due to its superparamagnetic nature and its consequent response to magnetic fields without delay, which makes it very applicable to environmental remediation issues. In addition, it has excellent Langevin behavior, related to the ability to respond to external magnetic fields without retaining residual magnetism when the field is removed, which implies broadening the applicability to both magnetic resonance imaging and cell separation [79][80][81]. Regarding the mechanism of metal salt reduction by E. globulus extracts, the FT-IR characterization suggested that the C=C and C=O groups act as reducing agents, due to the presence of terpenoids and flavonoids. This research also evaluated the application of the nanostructured material in the removal of heavy metals present in agricultural soil. 
Only the FeO NPs sample obtained by synthesis using the extract in the 96% alcohol solvent was applied, due to its high monodispersity, smaller size, and colloidal stability, in addition to its better magnetic properties. The nanoparticles that are most useful for soil remediation are zero-valent iron, titanium dioxide (TiO2), zinc oxide (ZnO), and multi-walled carbon nanotubes, owing to their excellent ability to immobilize or adsorb metal ions [82]. Zero-valent iron nanoparticles are the most studied for soil remediation, due to their size, large surface area, high reactivity, and reduction capacity. On the other hand, one of the factors that affects the speed of the reaction is the size of the particle [83]. The results obtained from the characterization of the FeO NPs were fundamental in identifying the schematic model linked to the removal of metals. According to the XRD analysis, two phases of FeO were identified: magnetite (Fe3O4) and slight contributions of maghemite (γ-Fe2O3). The presence of oxide in both phases (magnetite and maghemite) provided the active sites for the adsorption of metals. Additionally, maghemite (γ-Fe2O3) contributed reducing power to the schematic model, confirming the capacity of the NPs obtained by the green route. Previous research indicated that the amount of active sites on the surface is due to the organic functional groups derived from the extract of eucalyptus leaves [52]. This conclusion was reaffirmed by the FT-IR analysis carried out in characterizing the NPs obtained by the green route. Previous research showed the relationship between the available active sites of the adsorbent and the total amount of adsorbates [84]. The results obtained after 30 min of contact by flame atomic absorption spectroscopy (Table 1) indicated a variation in the adsorption capacity of the colloids applied at different doses (M1 = 5 mL, M2 = 10 mL, and M3 = 15 mL) for the removal of Cr, since the concentration of the metal (adsorbate) decreased in direct proportion to the volume of NPs (adsorbent) applied. Table 1 shows the results obtained by atomic absorption. The initial concentration of hexavalent chromium (Cr(VI)) was 204.43 ppm, a value that, when compared to the Environmental Quality Standards [85] provided by the Peruvian government for agricultural land, is very high (the maximum allowed concentration is 0.4 ppm). Chromium removal of 100% was achieved within 30 min of applying the FeO NPs colloids. The removal of this metal through the application of NPs resulted from the reduction of Cr(VI) to Cr(III): electron transfer takes place whereby Cr(VI) is reduced to Cr(III) while the Fe(II) in the FeO NPs is oxidized to Fe(III), as summarized in the general chemical Equation (2) [86]. On the other hand, the adsorption capacity for Cr(VI) depends on the pH, with maximum adsorption occurring at pH 2-6. This is due to the dissociation of surface functional groups of the FeO NPs and the speciation of the Cr(VI) ions in the aqueous phase, both of which are directly influenced by the pH of the solution. In this research, the E. globulus nanoparticles were obtained at pH 3. This finding was consistent with the literature consulted, which indicated that when the pH increases, the adsorption of Cr(VI) decreases, since it implies a higher concentration of OH− ions in the presence of the dominant chromium species, HCrO4−. This increases the electrostatic repulsion between the adsorbent and the dominant form of the Cr(VI) anion.
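As a simple illustration of how the removal figures from the flame atomic absorption measurements can be expressed, the minimal Python sketch below computes percentage removal and checks the residual concentration against the ECA limit for Cr(VI). The initial concentration and limit are taken from the text; the residual values per dose are placeholders consistent with the reported 100% removal, since Table 1 itself is not reproduced here.

def removal_summary(c0_ppm, cf_ppm, limit_ppm):
    """Percentage removal and regulatory check for one metal/dose combination."""
    removal_pct = 100.0 * (c0_ppm - cf_ppm) / c0_ppm
    return removal_pct, cf_ppm <= limit_ppm

# Cr(VI) in the agricultural soil sample: initial 204.43 ppm, ECA limit 0.4 ppm.
# The residual concentrations below are placeholders, not values from Table 1.
for dose_ml, residual_ppm in {5: 0.0, 10: 0.0, 15: 0.0}.items():
    pct, compliant = removal_summary(204.43, residual_ppm, 0.4)
    print(f"{dose_ml} mL of NPs colloid: {pct:.1f}% Cr(VI) removed, "
          f"within ECA limit: {compliant}")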
In addition, adsorption tended to be higher when the NPs had a longer contact time with the sample [57,84,86]. On the other hand, the initial concentration of cadmium (Cd) was 0.251 ppm. When compared with the Environmental Quality Standards (ECA), this figure is within the allowed value (1.4 ppm). However, with the application of the FeO NPs colloids at different doses, with a contact time of 30 min, the removal of 100% of the total Cd was achieved. Previous research indicated that redox reactions dominated the transformation of cadmium from unstable fractions to more stable fractions; in addition, cadmium is immobilized by a combination of adsorption and/or precipitation by the iron oxides formed [56], which confirms the adsorption capacity of the phases found in the FeO NPs obtained (γ-Fe2O3 and Fe3O4). Recent studies indicated desorption from soil particles and adsorption onto the surface of nanoparticles obtained by the green route, as they present active binding sites on their surfaces (Equation (3)) [87]. This reaffirms the potential for 100% Cd removal by the FeO NPs in their two phases. A study revealed that adsorption reached equilibrium within 30 min [88], a finding consistent with the results obtained in the present investigation, since 100% cadmium adsorption was obtained in the same time. This indicated that Cd adsorption by the NPs is primarily chemical adsorption. Another metal present in the soil sample was lead (Pb), a very common element in the mining industry, with high content in water tributaries that is finally reflected in the soil. Table 1 shows an initial concentration of 497.26 ppm of lead, a value well above what is allowed (70 ppm). As with the previous samples, the same volumes of FeO NPs colloids were used. Previous research tested how varying the amount of reducing agent influences the recovery capacity of Pb ions, since an increase in metal concentration occurs when the amount of biomass is increased. This is attributed to the formation of aggregates, due to electrostatic interactions of the biosorbent that decrease the effective surface area available for biosorption [89]. This process explains the stability, and even increase, of the Pb concentrations found in the present investigation, since the same concentration of biomaterial was used throughout and there were no variations. Contact time is also an important factor for the Pb sorption capacity. In this investigation, the contact time was only 30 min, which explains the lack of removal. According to previous studies, the ability to remove Pb gradually increases with increasing contact time, from 30 to 60 min [90]. Conclusions In this research work, FeO nanoparticles were synthesized using Eucalyptus globulus extract as an organic reducing agent to evaluate the influence of the type of solvent used (the 96% alcohol or the absolute ethanol solvent). The FT-IR results revealed that aromatic compounds are directly involved in the precursor reduction process. The nanostructures obtained showed spherical geometry in both cases, with sizes of 2.34 and 4.17 nm for the FeO NPs in the absolute ethanol and 96% alcohol solvents, respectively. In addition, the presence of the magnetite and maghemite magnetic phases was confirmed by XRD characterization. Elemental analysis by EDS showed better iron purity when the lower-concentration (96%) alcohol solvent was used. The magnetic response exhibited, in both cases, a behavior attributable to superparamagnetic materials.
The application of the FeO NPs obtained with the best synthesis (using the 96% alcohol solvent extract) in soil remediation was successful for metals such as chromium and cadmium. This mechanism was linked to the presence of oxygen in the nanostructure, which provides active sites for metal adsorption. Supplementary Materials: The following supporting information can be downloaded online. Figure S1: EELS spectra of O-K edges (a) and Fe-L2,3 edges (b) acquired from the FeO NPs obtained with the 96% alcoholic extract (red curves) and with the absolute ethanolic extract (blue curves), respectively; Figure S2: Green FeO NP synthesis protocol, using 96% alcoholic and absolute ethanolic extracts of E. globulus. Conflicts of Interest: The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper.
7,077
2022-02-01T00:00:00.000
[ "Materials Science" ]
Product proliferation, complexity, and deterrence to imitation in differentiated-product oligopolies Research Summary: Game theory suggests that, in oligopolistic markets characterized by nonprice competition, dominant incumbents can use product proliferation to occupy a region of the product space (i.e., a subspace) and deter rivals from imitating their products. In part, this is because product proliferation makes the introduction of close substitutes comparatively less profitable; in part, it is because the strategy conveys a threat of retaliation to potential imitators. Yet this threat is only credible if the proliferator has high costs of exit from the occupied region of space. We hypothesize that complexity, as a property of product (sub)spaces, generates exit costs for the proliferator and increases the deterrent power of its strategy. We test this hypothesis by studying sequential product introductions in the U.S. recording industry, 2004-2014. Managerial Summary: Differentiated-product markets are often concentrated in the hands of a few dominant organizations, which strive to stay on an equal footing by offering similar products. In these markets, a product proliferation strategy can help one of the dominant incumbents claim a particular submarket as its territory. Investing heavily in that submarket communicates a threat that the proliferator will retaliate against invaders to protect these investments. However, this threat is not credible enough to deter rivals unless the occupied submarket is sufficiently complex in terms of product attributes, as precisely this kind of complexity makes it harder for proliferators to back down if challenged. We find evidence of this mechanism in an analysis of product competition among major record companies and discuss implications for strategic decision-making. We adopt the authors' approach to measuring complexity by way of heterogeneity, while interdependence is held constant throughout the study period. We argue that greater product-attribute heterogeneity generates coordination costs that proliferators tend to offset by developing more intricate organizational structures (Zhou & Wan, 2017). These intricacies, in turn, give rise to exit costs by impeding later structural modifications (Hannan, Pólos, & Carroll, 2003a, 2003b), and because of costly exit, a proliferator's threat of retaliation appears more credible to rivals (cf. Eaton & Lipsey, 1980). Hence, we posit that proliferating products in a more complex subspace has a stronger deterrent effect. Our findings contribute to strategic management theory by extending Barroso and Giarratana's (2013) arguments on the moderating role of product space complexity and examining its influence on rivals' product launch decisions. In addition, they point to promising but unexplored connections between studies on deterrence in industrial organization and the strategy literature on imitation (Ethiraj, Levinthal, & Roy, 2008).
| THEORY AND HYPOTHESES Before explaining how product space complexity affects the relationship between product proliferation and deterrence, it is useful to provide an informal account of why deterrence is expected in the first place (Footnote 2). We begin by characterizing the market as a multidimensional space where each axis represents a product attribute (Lancaster, 1990). In this abstract variant of Hotelling's (1929) locational model, demand is represented by a set of points that correspond to consumers' ideal product specifications. Firms, instead, are represented by sets of points that correspond to the products they currently offer on the market. Competition takes the form of a sequential game that revolves around location choice (Bonanno, 1987). The proportion of demand captured by a firm at any given time (i.e., its payoff) depends on the distance between its products and consumers. Over time, both consumers and firms can alter their locations in the product space: in the one case, this is because consumer preferences are inherently dynamic (Barroso et al., 2016); in the other, it is because a firm can introduce new products to keep up with shifting preferences or respond to rival firms' behavior (Sorenson, 2000). It is generally advantageous for firms to occupy new locations in the product space if this enables them to steal demand from rivals, or if it prevents rivals from capturing demand that they intend to meet in the future. Deterrence involves altering the payoffs of other firms in such a way that hostile decisions appear less attractive (Schelling, 1956). There are two reasons why this can be achieved by a product proliferation strategy (Footnote 3). One reason is that this strategy allows a firm to tighten extant gaps within its product offer in a particular submarket, leaving smaller demand available to rivals' products (Lanzillotti, 1954) (Footnote 4). The other reason, arguably more fascinating to game theorists, involves considerations about threats and commitments (Eaton & Lipsey, 1980). By sinking its investments in a product subspace, a proliferator threatens its potential imitators with an escalation of competitive intensity, which can be achieved via price competition, if this is allowed, or more simply via relentless advertising (Roberts & Samuelson, 1988). Footnote 2: For a formal account, we refer the reader to game-theoretic literature (Gilbert & Matutes, 1993). Footnote 3: Barroso and Giarratana (2013) distinguish at least two kinds of product proliferation, depending on whether the proliferator's products are brand new or they are merely new versions of existing products. Our analysis applies to the case where products are brand new. In case of versioning (see Giarratana & Fosfuri, 2007), additional factors can have a bearing on deterrence that we do not consider here, such as brand reputation (Choi & Scarpa, 1992). In our empirical study, we strictly consider original products. Footnote 4: For completeness, we should note that smaller demand remains available for the proliferator's own products as well, an effect that Lanzillotti (1954) termed "self-competition." Barroso and Giarratana (2013) refer to this as a form of cannibalization, and list it as a possible adverse effect of product proliferation when this strategy takes the guise of versioning. We assume that cannibalization is of little concern to firms in our analysis: we find this reasonable theoretically because we do not consider versioning (see Footnote 3), and empirically because we analyze an industry where product life cycle is notoriously short.
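To make the locational model sketched above concrete, the following minimal Python example assigns each consumer's ideal point to the nearest product and computes each firm's captured share of demand; all coordinates and firm labels are invented for illustration, not data from the study.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-attribute product space: consumer ideal points and firms' product locations.
consumers = rng.uniform(0, 1, size=(200, 2))
firms = {
    "A": np.array([[0.2, 0.3], [0.25, 0.35], [0.3, 0.3]]),  # firm A proliferates in one region
    "B": np.array([[0.7, 0.7]]),
}

# Each consumer buys from the firm offering the closest product (Euclidean distance).
labels, products = [], []
for name, locs in firms.items():
    labels += [name] * len(locs)
    products.append(locs)
products = np.vstack(products)

dists = np.linalg.norm(consumers[:, None, :] - products[None, :, :], axis=2)
winner = np.array(labels)[dists.argmin(axis=1)]

for name in firms:
    print(f"Firm {name} captures {np.mean(winner == name):.0%} of demand")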
If the proliferator's costs of exit from the occupied subspace are substantial enough to make the realization of this threat not only possible but rational, that is, if the proliferator is committed, then its threat is credible and effectively keeps competitors in check (Judd, 1985). As a result, a product proliferation strategy can lead to a subgame perfect Nash equilibrium (Gilbert & Matutes, 1993). We are interested in evaluating this game-theoretic argument in the context of nonprice competition among rival oligopolists who keep a foothold in multiple submarkets. Natividad and Sorenson (2015) highlighted some of the destructive implications a local increase in competitive intensity can have in this situation: studying U.S. film distributors during the period 1985-2009, the authors found that companies reacted to tighter competition in the submarket for theatrical releases by diverting resources away from the submarket for home videos, leading to a decrease in their home video sales. Thus, a threat that was originally localized to one submarket spiraled into market-wide disruption. Karnani and Wernerfelt's (1985) theory of multiple-point competition provides a general rationale for this dynamic: a local conflict can turn into a war that leaves participants vulnerable in other regions of the product space where they are active. Because firms are aware that imitating a proliferator's products can trigger such an escalation, their optimal strategy can be not to imitate, that is, to differentiate their products from the proliferator's. A literal application of Karnani and Wernerfelt's (1985) theory would require both the proliferator and its would-be imitator to be active in the same regions or subspaces, but in fact, it suffices for the latter to be active in at least one subspace other than the one targeted by the former. This is enough for the imitator to incur vulnerabilities that competitors can exploit, as in the case detailed by Natividad and Sorenson (2015). Hence, we hypothesize: Hypothesis 1 (H1) (Baseline) If an oligopolist introduces a greater number of new products in a subspace, then the probability of new product introductions in this subspace by rival oligopolists who are active in at least one other subspace decreases. Through the deterrence implied by Hypothesis 1, product proliferation can generate quasi-rents that improve the proliferator's competitive performance. However, much like the other effects of product proliferation on performance (see Barroso & Giarratana, 2013), deterrence can be moderated by product space complexity. Strategy scholars generally argue that interdependence, one of the two components of complexity, prevents imitation by inducing causal ambiguity (Reed & DeFillippi, 1990). As a result, complex strategies are hard to copy (Rivkin, 2000), complex information is resistant to transfer (Sorenson, Rivkin, & Fleming, 2006) and complex products are difficult to retroengineer (Pil & Cohen, 2006). But product space complexity can also deter imitation through its other component, that is, heterogeneity. This is because sinking one's investments in a region of the market where product attributes are more heterogeneous can create coordination problems in manufacturing (MacDuffie, Sethuraman, & Fisher, 1996), distribution (Zhou & Wan, 2017) and relations with external partners (Mol & Wijnberg, 2007). 
Firms tend to address these problems by setting up ad hoc routines (Gupta & Srinivasan, 1998) and decentralizing their decisions (Siggelkow & Rivkin, 2005): in this sense, the complexity of a product subspace engenders complexity in organizational structures. Once in place, a complex structure makes the firm averse to change, to the point that even small modifications can have disastrous repercussions (Hannan et al., 2003a). This creates exit costs from the occupied subspace, especially in terms of opportunity (Hannan et al., 2003b). As noted by Judd (1985), it is exactly in the presence of high exit costs that product proliferation makes a credible deterrent (cf. Thomas, 1996). If a firm deploys this strategy in a less complex subspace, it encounters fewer coordination problems and can maintain a more versatile structure, so that in the presence of an imitator it will be less motivated to put up a fight. Its rational course of action in this case could be to pivot to another region of the market. Rivals would be aware of this because they can also observe complexity or the lack thereof, experiencing its effects on their internal operations. If complexity is low, they can conclude that the proliferator is not committed to its threat. If complexity is high, however, the proliferator normally has to adapt its structure, and because it faces greater exit costs, it can no longer shy away from fighting-this option comes to dominate retreat (Eaton & Lipsey, 1980). Insofar as rivals are aware of this, the threat of retaliation appears credible to them. For this reason, we expect product proliferation to more strongly deter the introduction of imitative products in more complex subspaces. Hence: Hypothesis 2 (H2) (Moderation) If an oligopolist introduces a greater number of new products in a more complex subspace, then the probability of new product introductions in this subspace by rival oligopolists who are active in at least one other subspace decreases more. | METHODOLOGY We test our hypotheses by analyzing patterns of sequential product introductions in the U.S. recording industry, 2004-2014. This is an ideal setting because products' prices are conventionally fixed for particular formats (such as singles), there is a short product life cycle, and the costs of new product introductions for established incumbents are relatively low (Benner & Waldfogel, 2016). Much like in related industries (e.g., Berry & Waldfogel, 2001), these conditions motivate the firms to introduce marginally different products in order to deter their rivals. The U.S. recording industry is also a good setting because there is a well-defined and close-knit group of oligopolists, that is, the major record companies, who carefully monitor each other's strategies and use them as a basis for their own decisions (Huygens, van den Bosch, Volberda, & Baden-Fuller, 2001). As these firms imitate each other by default (cf. Kennedy, 2002), changes in their pattern of behavior are easier to observe empirically. Finally, this setting suits our purpose because the various regions or subspaces whereby the market for recorded music is partitioned-that is, genres and subgenres-can differ greatly in complexity (Percino, Klimek, & Thurner, 2014). We collect product and firm-level data from several online sources, including Billboard, Discogs, and MusicBrainz. Data on product attributes are obtained from AcousticBrainz, a platform that creates acoustic fingerprints of songs using machine-learning algorithms (Porter, Bogdanov, & Serra, 2016). 
Our dataset consists of 8,263 original singles released during our study period, of which 416 were released directly or indirectly (i.e., through subsidiaries or imprints) by one of four majors: Sony, Universal, Warner, and Electric and Musical Industries (EMI). Original release dates in the U.S. are obtained from MusicBrainz. The submarkets to which singles belong are determined by genre and subgenre (or "style") tags retrieved from Discogs. To minimize the chance of error, our dataset includes only singles for which MusicBrainz provides an exact cross-reference to Discogs. The singles are distributed across 14 genres and 221 styles: both levels of classification serve to divide the product space into submarkets in the eyes of record companies (Montauti & Wezel, 2016; Phillips & Kim, 2009), but we focus on styles because Discogs genres can be very broad (e.g., "pop," "rock," "jazz"), and at this level of analysis deterrent effects could be very difficult to identify. In addition, partitioning the market into styles affords a greater number of firm-submarket spells, thereby increasing the statistical power of our test. Because EMI exited the industry at the end of 2011, we tally 9,061 major-style-year combinations, which constitute observations in our preliminary sample. Our choice of yearly spells agrees with previous research on product strategies in differentiated-product markets (Giachetti & Dagnino, 2014). Our dependent variable is the number of singles released by a major in a given style-year (OwnSingles). Because this is a count variable that violates the assumption of equidispersion (Cameron & Trivedi, 1990), we estimate it using a quasi-Poisson generalized linear model. To capture the effect of product proliferation, we compute the maximum number of singles released by a rival major in the focal style-year (Proliferation). A higher value of this variable indicates that one of the rivals is pursuing a product proliferation strategy. The degree of heterogeneity in product attributes within a subspace, which corresponds to the subspace's level of complexity when interdependence is fixed (Barroso & Giarratana, 2013), is computed on the basis of nine audio properties that AcousticBrainz showcases to summarize the fingerprint of songs, including track length, primary key, scale and frequency of the primary key, most frequent key of the chord progression, scale of the most frequent chord progression key, "danceability" (see Streich & Herrera, 2005), average number of beats per minute, and total count of beats. With these dimensions in hand, we define a product space based on a Mahalanobis metric (see also Liu, Montauti, & Piazzai, 2018), which accounts for interdependence by learning it from the covariance of spatial coordinates (Xiang, Nie, & Zhang, 2008). We then calculate the centroid of each style during each year of observation and compute the Mahalanobis distance of each product in the style-year from this centroid. The mean distance increases with the degree of heterogeneity in product attributes and constitutes our measure of complexity (Complexity). We control for the total number of products introduced within the style-year by all the major (MajorSingles) and nonmajor record companies (IndieSingles) (Footnote 6). We also control for the total number of products released by the focal major in any style during the current year (PortfolioSize). Footnote 5: See Askin and Mauskapf (2017) for a study of product differentiation based on similar acoustic data.
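The following minimal Python sketch illustrates how a complexity score of this kind can be computed for a single style-year. The feature matrix is a random placeholder, and estimating the covariance from the same style-year (rather than pooling it across the whole product space) is an implementation assumption the note does not spell out.

import numpy as np

def complexity(features: np.ndarray) -> float:
    """Mean Mahalanobis distance of products from their style-year centroid.

    features: (n_products, n_attributes) array of audio attributes for one style-year."""
    centroid = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    cov_inv = np.linalg.pinv(cov)      # pseudo-inverse guards against singular covariance
    diffs = features - centroid
    d = np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))
    return float(d.mean())

# Hypothetical style-year with 30 singles described by 9 audio attributes.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 9))
print(round(complexity(X), 3))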
To account for variance in majors' performance, we compute a score based on the ranking of the major's singles on the Billboard Hot 100. This chart is universally considered an indicator of competitive success in the recording business (Anand & Peterson, 2000): it ranks singles on a weekly basis using Nielsen SoundScan data on physical and digital sales, downloads, and streaming. We use this information to compute a weighted score (Performance) for each major-style-year according to the formula $\sum_{s \in S} \sum_{i=1}^{W_s} (101 - r_{si})/100$, where S is the set of singles released by the major in the style-year, W_s is the number of weeks that single s was on the chart, and r_{si} is the rank achieved by single s in week i, from highest (r_{si} = 1) to lowest (r_{si} = 100) (Footnote 7). In addition, we control for majors' level of diversification by calculating the yearly concentration of their products across styles through a Herfindahl-Hirschman index (Diversification). Finally, we account for variance in demand through a weighted score (Demand) analogous to the one used for Performance, except that S in this case represents the set of products released by any major or nonmajor within the style-year. Therefore, a greater value of this variable indicates that products in a given style-year sold more copies on the U.S. market. Every predictor is lagged by 1 year so that the value of OwnSingles at year t is estimated as a function of independent variables at t − 1. This causes the loss of 884 observations relative to 2004. (Footnote 6: We subtract the value of Proliferation when computing MajorSingles to prevent collinearity. Footnote 7: See Piazzai and Wijnberg (2017) for another use of the same score in a study of firm performance.) We specify fixed effects for majors, years, and genres (Footnote 8). To control for possible feedback effects, we include a lagged value of the dependent variable in the list of predictors. We note that many styles on Discogs are peripheral to the recording industry and witness no releases during some years of observation, which impedes the computation of Complexity. Wherever possible, we fill in the missing values by looking to the previous style-years and allowing the last known value of Complexity to carry over. As a result, our final sample includes 6,069 major-style-years with complete data. Postestimation diagnostics suggest that none of them has excessive leverage (Cook's D > 1). Table 1 reports the descriptive statistics and pairwise correlations of the variables involved in our regression models. Most predictors are significantly correlated with the dependent variable, and some of them are strongly correlated with each other, but collinearity is not of concern as the condition number of the data matrix (6.12, further reduced to 3.77 after mean-centering) is well below the threshold of 30 recommended by Belsley, Kuh, and Welsch (1980). We standardize all the independent variables before regression to facilitate comparison of effect sizes in our table of estimates (Table 2) (Footnote 9). We begin our analysis with a specification that includes only control variables (Model 1), then we add Proliferation (Model 2), Complexity (Model 3), and finally their interaction (Model 4). The coefficients in Table 2 represent additive effects on the logarithm of the expected count of new releases by the focal major. To compute effects on the probability of a release, the coefficients should be exponentiated to obtain incidence risk ratios (IRRs).
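A minimal Python sketch of the two scores just described follows; the chart ranks and style counts are invented for illustration, while the scoring rule and the Herfindahl-Hirschman index mirror the definitions in the text.

# Performance: sum over a major's singles in a style-year of (101 - rank)/100
# for every week the single spent on the Billboard Hot 100.
def performance_score(weekly_ranks_by_single):
    """weekly_ranks_by_single: list of rank lists, one list per single (ranks 1..100)."""
    return sum((101 - r) / 100 for ranks in weekly_ranks_by_single for r in ranks)

# Diversification: Herfindahl-Hirschman index of a major's yearly product shares across styles.
def hhi(counts_by_style):
    total = sum(counts_by_style.values())
    return sum((n / total) ** 2 for n in counts_by_style.values())

# Hypothetical example: two singles, one charting for three weeks and one for two.
print(performance_score([[5, 12, 40], [77, 98]]))    # 0.96 + 0.89 + 0.61 + 0.24 + 0.03 = 2.73
print(hhi({"synth-pop": 6, "trap": 3, "house": 1}))  # 0.36 + 0.09 + 0.01 = 0.46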
For example, a coefficient of 0.50 corresponds to an IRR of e^0.50 ≈ 1.65, which implies that a 1-unit increase in the predictor (or a one-SD increase, if the predictor is standardized) leads to a 65% greater probability of new product introduction. In presenting marginal effects below, we rescale the coefficients from Table 2 so that they can be interpreted as effects of 1-unit increases on the original (i.e., nonstandardized) scale of the independent variables. We automatically convert these into IRRs and report 95% confidence intervals (CIs). | RESULTS Our estimates are stable throughout hierarchical models, and tests of deviance show that each additional predictor significantly adds to model fit. For brevity, we only describe the results from Model 4. We find that all predictors except Diversification (IRR = 1.26, CI = [0.63, 2.53]) and MajorSingles (IRR = 0.98, CI = [0.89, 1.08]) lead to significant changes in the value of the dependent variable: more specifically, we find positive effects for lagged OwnSingles (IRR = 1.32, CI = [1.24, 1.40]), which suggests that majors tend to replicate their previous launch decisions; for Demand (IRR = 1.01, CI = [1.01, 1.01]), which indicates that majors tend to target subspaces where consumer preferences are concentrated; and for IndieSingles (IRR = 1.06, CI = [1.04, 1.08]), which suggests that greater activity by independent record companies prompts majors to release products of their own. We find a negative effect for PortfolioSize (IRR = 0.95, CI = [0.92, 0.97]), which is likely to be a consequence of resource constraints (cf. Natividad & Sorenson, 2015), and for Performance (IRR = 0.99, CI = [0.98, 1.00]), which indicates that greater sales for products in a particular subspace induce majors to abstain from further product introductions. This could be because successful singles continue to sell into the following year, and the majors do not need to release additional products to defend their current position in the subspace. With regard to our predictors of theoretical interest, we find Proliferation to be associated with a greater probability of new product introduction by the focal firm (IRR = 1.65, CI = [1.27, 2.14]). If the maximum number of products introduced by a rival is 1 unit higher, ceteris paribus, then the focal firm's probability of releasing a similar product increases by 65%. Consistent with previous research (Huygens et al., 2001), this is indicative of strong imitative tendencies among the major record companies, which, just like oligopolists in other creative industries (Kennedy, 2002), tend to replicate each other's product launch decisions. All else being equal, product proliferation seems to trigger a reaction that prevents one of the majors from taking over a subspace. It remains to be assessed whether this relationship is moderated by complexity. In and of itself, Complexity is associated with a greater probability of new product introduction (IRR = 1.38, CI = [1.29, 1.47]), which makes sense because our measure of complexity is driven by heterogeneity, and more heterogeneous subspaces allow for greater differentiation. However, Complexity also changes the effect of Proliferation. The interaction between these variables is negative (p = 0.005), and the size of the coefficients points to a complete reversal of Proliferation's effect as Complexity increases.
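As an illustration of the coefficient-to-IRR conversion described above, the following minimal Python sketch exponentiates a coefficient and its Wald confidence bounds; the coefficient and standard error are hypothetical values, not estimates taken from Table 2.

import math

def irr_with_ci(coef, se, z=1.96):
    """Convert a (quasi-)Poisson coefficient and standard error into an IRR with a 95% CI."""
    return math.exp(coef), math.exp(coef - z * se), math.exp(coef + z * se)

# Hypothetical standardized coefficient of 0.50 with a standard error of 0.13:
irr, lo, hi = irr_with_ci(0.50, 0.13)
print(f"IRR = {irr:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
# IRR = 1.65: a one-SD increase in the predictor multiplies the expected release count by ~1.65.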
Indeed, if Complexity is 1 unit above its mean, then a 1-unit increase in Proliferation leads to a 44% increase in the focal major's probability of introducing a similar product (IRR = 1.44, CI = [1.11, 1.87]). This is still a positive effect, but smaller than what we have at mean Complexity. If Complexity is 2 units above the mean, instead, then the positive effect disappears (IRR = 0.96, CI = [0.74, 1.25]). But there is no evidence of deterrence yet: this emerges only if the value of the moderator increases further. If Complexity is set to the observed maximum, then a 1-unit increase in the value of Proliferation leads to a 63% decrease in the probability of imitation (IRR = 0.37, CI = [0.28, 0.48]). This effect is just as strong as the one we find at mean Complexity but goes in the opposite direction. In such an extremely complex space, it takes as little as four releases by a rival to annihilate the focal major's tendency to follow suit (IRR = 0.02, CI = [0.01, 0.05]). Figure 1 visualizes this reversal: in this plot, the x-axis corresponds to the nonstandardized value of Proliferation, the y-axis corresponds to the multiplicative effect on the probability of an imitative release by the focal major, and the color gradient represents Complexity. To check the robustness of our estimates, we replicate our analysis after excluding from our sample major-style-years where the value of Complexity is more than two SD away from the mean. The results are qualitatively identical, suggesting that our findings are not driven by styles with an extreme level of complexity. Our estimates are also robust to the use of alternative regression models for overdispersed count data, such as the negative binomial. Based on these results, we reject Hypothesis 1: the negative effect of product proliferation we expected at mean complexity is not supported. On the contrary, we find that product proliferation provokes a reaction whereby rivals get back on an equal footing. Nevertheless, a negative effect occurs if the proliferator targets a sufficiently complex subspace. In this case the strategy averts rival product introductions, and for this reason we accept Hypothesis 2. We conclude that spatial complexity does not simply weaken or strengthen the effect of product proliferation on the probability of imitation but determines the effect's direction. This is negative only if complexity is sufficiently high, which is consistent with our argument that complexity makes the threat of retaliation more credible to rivals. FIGURE 1 The effect of product proliferation on the probability of imitation by oligopolistic rivals depends on the product subspace's level of complexity. At mean complexity (lighter edge) or lower, the effect is positive, meaning that product proliferation triggers imitation. As complexity increases, however, the effect becomes weaker, and at sufficiently high complexity it turns negative, which suggests that proliferation works as a deterrent. At maximum complexity (darker edge), the curve is so steep that a handful of products is already sufficient for the proliferator to wipe out the probability of imitation by its rivals. | DISCUSSION Our results show that product proliferation strategies can prevent imitation under the conditions stipulated by our theoretical model; however, this effect may not be strong enough to stop rival oligopolists.
If there are compelling reasons to imitate, such as fundamental uncertainty about market conditions (Lieberman & Asaba, 2006), rivals are still likely to introduce similar products. The question a strategist should ask is then the following: Conditional on product proliferation occurring in a subspace, will this be enough to convince rivals that they should keep their distance? The answer to this question depends on the subspace's level of complexity, particularly on its degree of product-attribute heterogeneity. The higher its degree of heterogeneity (and thus its level of complexity), the lower the probability that rivals will encroach on the proliferator's territory. The opportunity then presents itself to the strategist of manipulating a subspace's level of complexity precisely through product proliferation. If a firm designs new products so as to increase the heterogeneity of product attributes within the targeted submarket, then product proliferation will not only lead to a more defensible positioning in product space, but it will also automatically generate commitment. Naturally, the strategist should take into account that manufacturing and distributing products with more heterogeneous attributes can require more complex organizational structures, which come with their own sets of benefits and problems (Zhou & Wan, 2017). Firms should be aware that, by espousing complexity, they are most likely sacrificing some of their mobility (Hannan et al., 2003a, 2003b). Of course, no definitive answer to the strategist's question can be given without considering the firm's standing in the market. Our test considered oligopolistic competition, and specifically the situation where both the proliferator and its potential imitator are among the oligopolists. In this case, product proliferation works as a deterrent as long as rivals believe the proliferator is committed. Based on existing literature in industrial organization and management science, we may also expect the strategy to work against new entrants (see Mainkar et al., 2006) and smaller incumbents (see Caves & Porter, 1977), but the extent to which it has deterrent power when enacted by nonoligopolists remains in need of testing. Smaller firms are generally more vulnerable to change (Barron, West, & Hannan, 1994), and this makes commitment that much easier to achieve, but they are also better capable of repositioning in product space (Liu et al., 2018), and most importantly, they are unable to sustain vigorous competition against much larger firms. One could expect product proliferation to have deterrent power only vis-à-vis new entrants and small-sized incumbents in their case: it may not be sufficient to deter larger competitors. This note contributes to the strategy literature by (a) extending previous results by Barroso and Giarratana (2013) on the moderating role of product space complexity, (b) clarifying that this can also be a property of subspaces or submarkets, (c) proposing an empirical approach to measure product space complexity based on heterogeneity rather than interdependence, and (d) using this approach to study effects of complexity on imitation that cannot be ascribed to interdependence.
On a more general level, our note points to fruitful connections between game-theoretic research on deterrence in industrial organization, which does not consider complexity, and research on imitation in strategic management, which considers both complexity and deterrence (Ethiraj et al., 2008) but does not account for game-theoretic dynamics predicated on threats and commitments. Further efforts in this direction seem to be germane, especially in the study of imitation within highly concentrated product markets. Nonetheless, future research should be mindful of our note's limitations. We argued that proliferating products in a complex subspace causes an increase in organizational complexity, and that by affecting exit costs, this makes the proliferator's threat of retaliation appear more credible to rivals. Still, we did not measure organizational complexity: we only measured spatial complexity (as moderator), product proliferation (as main independent variable), and rival product introductions (as dependent variable). We felt justified in this approach because previous research already established that greater heterogeneity in the attributes of products offered by a firm generates complexity in organizational structures (Zhou & Wan, 2017), but a more explicit test of our mechanism would need to treat organizational complexity as a mediating variable.
6,945.4
2019-02-20T00:00:00.000
[ "Economics" ]