Hepatocellular Carcinoma: The Role of MicroRNAs
Hepatocellular carcinoma (HCC) is the second leading cause of cancer-related deaths worldwide. HCC is diagnosed in its advanced stage when limited treatment options are available. Substantial morphologic, genetic and epigenetic heterogeneity has been reported in HCC, which poses a challenge for the development of a targeted therapy. In this review, we discuss the role and involvement of several microRNAs (miRs) in the heterogeneity and metastasis of hepatocellular carcinoma with a special emphasis on their possible role as a diagnostic and prognostic tool in the risk prediction, early detection, and treatment of hepatocellular carcinoma.
Introduction
Hepatocellular carcinoma (HCC) is the most common and deadly liver cancer and the second leading cause of cancer-related deaths worldwide [1,2]. HCC, aggressive in nature, accounts for 90% of primary liver cancers [3]. HCC normally develops in the setting of cirrhosis, and the process of tumorigenesis is further promoted by chronic viral hepatitis related to the hepatitis B (HBV) and hepatitis C (HCV) viruses, alcohol-induced injury, nonalcoholic fatty liver disease, exposure to aflatoxins or genetic predisposition [4]. Diet-induced HCC is an emerging problem in developed as well as in developing countries [5,6]. Over the past 15 years, reported cases of HCC have more than doubled [7]; because of late diagnosis, treatment options are limited and clinical benefits to patients are marginal.
Among the limited potential curative strategies are liver resection and liver transplantation as well as loco-regional therapies, such as radiofrequency ablation and transarterial chemoembolization [8]. Because of the shortage of donor livers, transplants are prioritized for patients with the best chance of long-term survival. Cytotoxic systemic therapy is limited by tumor chemoresistance and patient intolerance [9].
Multiple layers of molecular regulation underlie the high rate of recurrence and may lead to tumor heterogeneity in HCC. In this regard, substantial heterogeneity has been reported between the multiple tumor foci in a single patient. Therefore, more reliable biomarkers are needed for the diagnosis, treatment and surveillance of HCC [8,10].
Heterogeneity in HCC
Cancer heterogeneity has been recognized as an important clinical determinant of patient outcomes, such as response or resistance to anti-cancer therapies [11,12]. Heterogeneity is prevalent in cancer, both between and within individuals. Multiple morphologic (differentiation status and cytologic features), genetic (mutational background), epigenetic (DNA methylation) and microenvironment (hypoxia gradients and local oxidative stress) variations create heterogeneity amongst different tumors [10,13]. HCC is a highly heterogeneous cancer with significant intratumor heterogeneity. The pathologic classification of HCC is based on the degree of cellular differentiation. In a single tumor, cancerous tissue of two different histological grades may be present. In addition, tissue obtained from HCC may exhibit different immunohistochemical characteristics in the same tumor. Furthermore, genetic heterogeneity has been described in HCC. Thus, HCC is less likely to be caused by a single driver mutation. The intratumor heterogeneity of HCC plays an important role in the prognosis of the disease. Hence, the detection of HCC intratumor heterogeneity is important for the development of effective targeted therapies. Though liver transplantation, surgical resection and radiofrequency ablation (RFA) offer curative treatment for HCC, they are not an option for patients with intermediate/advanced stage HCC. Sorafenib, a multikinase inhibitor of several tyrosine protein kinases, is indicated for the treatment of patients with intermediate/advanced HCC. It has shown a modest increase in median survival in clinical trials [14]. However, adequate intratumoral concentrations of this drug are often not achieved because of vascular heterogeneity within the tumor, resulting in a reduced response to therapy. Thus, intratumor heterogeneity also plays a role in drug resistance. Therefore, a better understanding of the intratumor heterogeneity of HCC should provide critical knowledge about the prognosis of the disease and the response to potential future therapy.
Morphologic Heterogeneity in HCC
HCC can be classified pathologically on the basis of the degree of cellular differentiation, ranging from well differentiated to moderately differentiated, poorly differentiated and undifferentiated tumors. Two histological grades can be present in one tumor, and the same tumor may thereby exhibit different immunohistochemical characteristics. Apart from differentiation, intratumor heterogeneity also influences tumor size and lymphovascular spread. Some of the histochemical markers used for HCC diagnosis include pCEA, CD10, alpha fetoprotein (AFP), hepatocyte paraffin 1 (HepPar1), cytoplasmic thyroid transcription factor-1 (TTF1), glutamine synthetase, GPC3, CK8 and CK18, but unfortunately none of them is specific for early-stage HCC. AFP is the least sensitive of all because it is not expressed by all HCC cells [15]. Zhang Q et al. recently proposed an immunophenotypic classification of HCC using two markers (CD45 and Foxp3) that will facilitate prognostic prediction and decision making for the choice of therapies [16].
HCC is well known for morphologic intratumor heterogeneity, but very few systematic analyses of this phenomenon have been performed. Intratumor heterogeneity is detectable in the majority of HCC cases (87%), with 26% of cases at the level of morphology. Further, 39% of cases are classified at the combined morphologic and immunohistochemical level and 22% at the combined morphologic and immunohistochemical treatment target level with known mutational status, for example, TP53 and β-catenin [4].
Intratumor heterogeneity poses a challenge for the development of a robust HCC classification as well as a targeted therapy and may contribute to treatment failure and drug resistance in many cases of HCC. HCC is known to frequently display heterogeneous growth patterns and/or cytologic features within the same tumor. This kind of phenotypic plasticity or intratumor heterogeneity has already been described in several solid tumors, including those of the skin, breast and kidney [17][18][19][20][21]. In small HCC, measuring 3 to 5 cm in diameter, up to 64% of cases display intratumor heterogeneity at the level of histologic differentiation grade and proliferative activity, whereas in HCC smaller than 2 cm, intratumor heterogeneity is 25-47% [22,23]. In larger HCC, on the other hand, the true extent of intratumor heterogeneity with respect to morphologic, immunohistochemical and molecular features has not been systematically assessed [4].
Genetic Heterogeneity in HCC
Nault and Villanueva identified TERT promoter mutations, with an overall frequency of 60%, as the most frequent somatic genetic defect in HCC [24]. The TERT promoter is also recurrently mutated in precancerous nodules, representing the earliest genetic alteration involved in the malignant transformation of HCC, and is possibly considered a tumor "gatekeeper". They speculated that TERT promoter mutations are present in the common ancestor cell and transmitted to its progeny and, therefore, are present in most tumor cells. Further studies are still needed to decipher HCC intratumor genetic heterogeneity using unbiased approaches, such as whole-exome or whole-genome sequencing, preferably with ultra-deep sequencing [13,25].
Large and independent studies have validated several molecular prognostic signatures derived from the tumor, despite tumor heterogeneity [26,27]. None of the biomarker-based molecular therapies approved in solid malignancies accounts for genetic heterogeneity either; they are mainly based on traditional genetic analysis, for example, vemurafenib for BRAF V600E-mutated melanoma, cetuximab for wild-type RAS colorectal cancer and crizotinib for ALK-translocated non-small cell lung cancer [28].
Within-patient heterogeneity in HCC is well studied because the disease often presents with multiple tumor foci. In patients with multifocal HCC, the individual lesions usually arise either from the local dissemination of the primary tumor or from the oncogenic predisposition of the diseased liver. In the latter case, a patient with multifocal HCC may harbor multiple tumors with presumably distinct genomic profiles that are clonally unrelated, a situation that poses a significant challenge to genomic analyses. A microRNA biomarker of HCC recurrence following liver transplantation that accounts for within-patient heterogeneity has been described [8].
On the basis of a miRNome study in HCC tissues, several important deregulated miRs have been suggested, such as miR-361 [78], miR-122 [74] and miR-199 [75]. However, as more than one miR is deregulated in cancer cells and a single miR can target multiple mRNAs, identifying the deregulated miRs and uncovering their roles in HCC development is an ongoing process. MiR-500a, which is upregulated in HCC tissues, targets the BH3-interacting death agonist (BID) protein and can also serve as a possible prognostic predictor and therapeutic target in HCC patients [91]. Wu H. et al. demonstrated that miR-206 is a robust tumor suppressor and strongly prevented the development of HCC in AKT/Ras and cMyc HCC mouse models [76]. Studies by Sun J.J. et al. demonstrated that miR-361-5p inhibits cancer cell growth by targeting CXCR6 in HCC. The knockdown of CXCR6 and the forced expression of miR-361-5p inhibited tumor growth both in vitro and in vivo. MiR-361-5p, therefore, acts as a tumor suppressor and might serve as a novel therapeutic target for the treatment of HCC patients [78].
MicroRNAs in Metastasis of HCC
Increasing data suggest a role of miRs in liver development, regeneration and metabolism, and in various liver diseases, including hepatitis, steatosis, cirrhosis and HCC [120]. The role of several miRs in human cancer onset and progression, including invasion and metastasis, has been demonstrated [71]. For the initiation and maintenance of a mobile phenotype, the responsible cellular mechanisms include the cessation of cell polarity, cytoskeletal reorganization, re-connection with the microenvironment and the activation of pro-migratory intracellular molecules. However, the upstream regulation leading to a highly invasive cellular phenotype is not completely understood. Recently, Chuang et al. illustrated that the dysregulation of a single miR, miR-494, supports HCC invasiveness through the epigenetic regulation of a miR network [121]. Interestingly, miR-494 acts as both a tumor suppressor and an oncogene across different tumor types, including HCC, which reflects its pleiotropic character: it targets distinct mRNA sets according to the genomic background and microenvironment. Its targets in HCC include the tumor-suppressor genes PTEN and MCC [1].
In recent decades, several miRs have been associated with HCC progression and metastasis, for example, miR-148a [122], miR-124 and miR-203 [69], miR-138 [123], miR-122 [124] and miR-30a-5p [125]. However, many more miRs play a role in the progression and metastasis of HCC. For example, miR-141-3p inhibits the progression and metastasis of HCC by inhibiting EMT through the targeting of Golgi protein 73 (GP73). It induces the expression of E-cadherin (an epithelial cell marker), occludin (a marker of tight junctions) and cytokeratin 18 (CK18) (a noninvasive cell marker), but reduces the expression of the two mesenchymal markers N-cadherin and vimentin [72]. GP73 restores the inhibitory effects of miR-141-3p on the invasion and metastasis of HCC cells. MiR-487a, on the other hand, promotes the proliferation and metastasis of HCC by binding to phosphoinositide-3-kinase regulatory subunit 1 (PIK3R1) and Sprouty-related EVH1 domain containing 2 (SPRED2) [90]. MiR-874 negatively regulates the δ opioid receptor (DOR) and can suppress proliferation and metastasis in HCC tumors by targeting the DOR/EGFR/ERK pathway [80], whereas miR-501-3p controls the metastatic process of HCC by targeting Lin-7 homolog A (LIN7A) [79]. Additionally, miR-219-5p promotes HCC cell proliferation, invasion and metastasis in nude mouse models bearing human HCC tumors by targeting the cadherin 1 (CDH1) gene [87]. MiR-197, which is dysregulated in several cancers, including lung, breast, ovarian, colorectal, thyroid, prostate, head and neck carcinoma and HCC, as well as in non-alcoholic fatty liver disease, plays an important role in EMT. It promotes the invasion and metastasis of HCC cells by activating Wnt/β-catenin signaling through the targeting of Axin-2, Naked cuticle 1 (NKD1) and Dickkopf-related protein 2 (DKK2) [86]. However, miR-197-3p, which is downregulated in HCC tissues, inhibits the metastasis of HCC cells both in vitro and in vivo; its novel target in HCC cells is the zinc finger protein interacting with K protein 1 (ZIK1) [73]. MiR-424-5p, which is involved in the progression, invasion and intrahepatic metastasis of HCC, regulates Tripartite motif-containing 29 (TRIM29), a member of the TRIM protein family that participates in the formation of nucleic-acid-bound homodimers or heterodimers, acting as transcriptional regulators of carcinogenesis and differentiation. MiR-221 and miR-222 also promote metastasis in HCC by targeting Plant homeodomain finger 2 (PHF2), the AKT pathway, PTEN, the CDK inhibitor p27 and DDIT4 [88,89]. Figure 1 depicts the reported miRs associated with HCC heterogeneity as well as metastasis. Interestingly, four reported miRs (miR-221, miR-21, miR-203 and miR-214) are common to both heterogeneity and metastasis in HCC (Figure 1). All these miRs can serve as possible diagnostic and prognostic markers for HCC.
Figure 1. Reported microRNAs associated with HCC heterogeneity (19) and metastasis (33). Overlapping region shows microRNAs (4) that are common to both.
Diagnostic and Prognostic MicroRNAs in HCC
The five-year survival rate of HCC is still very low, partly because of the unsatisfactory results of conventional biomarkers (e.g., DCP, AFP and AFP-L3) that are often unable to distinguish between cancer and inflammatory diseases, such as chronic hepatitis or liver cirrhosis [126]. On the other hand, miRs have a high specificity in cancer detection and classification. They are highly stable and can be accurately detected under extreme conditions in a wide variety of body fluids [127,128]. The dysregulation of miRs is considered an early event in tumorigenesis, so miRs are promising biomarkers for the early diagnosis of cancer [129][130][131]. However, variations in the isolation protocols, cohort specifications, detection platforms and tumor heterogeneity often result in poor consensus regarding circulating miR profiles in patients with HCC [132].
In cancer, miRs have shown promise as both diagnostic and prognostic biomarkers [133]. A recent study reported that miR-718 from serum exosome samples serves as a biomarker of HCC recurrence after liver transplantation [134]. Exosomes are a class of extracellular vesicles derived from most cell types and are present in biological fluids, such as serum, plasma, urine, saliva, ascites and cerebrospinal and amniotic fluids. Studies have reported their role in mediating cell-to-cell communication. Several functions of exosomes have been characterized, including cellular proliferation, differentiation, apoptosis, angiogenesis and immune regulation. Exosomes exhibit these functions by interacting with the surface receptors of recipient cells, thus transmitting biomolecules such as miRNAs. Exosomal miRs have the potential to be used as biomarkers for HCC diagnosis and prognosis. Sohn et al. used fluorescent quantitative PCR to detect the expression levels of serum exosomal miRs in patients with chronic hepatitis B, liver cirrhosis and HCC. They discovered that the serum levels of exo-miR-18a, exo-miR-221, exo-miR-222 and exo-miR-224 in patients with HCC were significantly higher than those in patients with chronic hepatitis B or liver cirrhosis, leading to the conclusion that serum exosomal miRs can be employed as novel biomarkers for HCC screening and diagnosis [135]. Patients with serum exo-miR-215-5p overexpression had a significantly lower disease-free survival than patients with low serum exo-miR-215-5p expression, according to a Kaplan-Meier analysis. Simultaneously, the expression level of exo-miR-215-5p rises with the progression of the tumor stage and can be employed as a predictive biomarker in HCC [136]. Another study indicated that, as compared to patients with liver cirrhosis, exo-miR-21 and exo-miR-96 expression levels in HCC patients' exosomes and plasma were significantly higher, while exo-miR-122 expression was significantly lower. Exo-miR-122, exo-miR-21 and exo-miR-96 are substantially more accurate in the diagnosis of HCC in diverse populations than plasma microRNA and AFP levels, and are prospective biomarkers for the early identification of HCC [137]. Some exosomal miRs have recently been identified as recurrence-specific indicators, particularly in HCC patients. Exo-miR-92b was more highly expressed in patients with recurrence after surgery than in patients without recurrence, and it can be employed as a useful biomarker for predicting the probability of HCC recurrence [138,139]. Other studies have proposed miRs as biomarkers of HCC recurrence from solid tumor biopsies based on their miR expression profiles [67,70,140,141]. Yang et al. performed a meta-analysis of miR expression in HCC and identified a meta-signature of five upregulated (miR-221, miR-222, miR-93, miR-21 and miR-224) and four downregulated (miR-130a, miR-195, miR-199a and miR-375) miRs. These nine miRs are associated with cell signaling and cancer pathogenesis and could serve as potential diagnostic and therapeutic targets of this malignancy [142]. Table 2 lists diagnostic and prognostic miRs in HCC. In HCV-induced HCC, miR-1269, miR-224, miR-224-3p and miR-452 are upregulated, whereas miR-199a-5p, miR-199a-3p and miR-199b are downregulated as compared to healthy controls, HCV-induced cirrhosis and HBV-induced liver failure [143]. Furthermore, miR-122, miR-199a and miR-16 have been established as potential biomarkers of HCV-induced HCC in Egyptian patients [144]. Li et al.
identified a 13-miR panel (miR-375, miR-92a, miR-10a, miR-223, miR-423, miR-23b, miR-23a, miR-342-3p, miR-99a, miR-122a, miR-125b, miR-150 and let-7c) as a novel noninvasive biomarker in HBV-mediated HCC; this panel has made possible the diagnosis and differentiation of HBV-induced HCC cases from healthy controls, HCV-infected subjects and subjects with HBV infection without HCC [146]. Recently, a panel of seven miRs (miR-29a, miR-29c, miR-133a, miR-143, miR-145, miR-192 and miR-505) was shown to differentiate HCC patients from healthy volunteers, patients with cirrhosis and patients with chronic HBV infection [128,155]. Similarly, other studies present the combination of AFP and a panel of three miRs (miR-92-3p, miR-107 and miR-3126-5p) as an effective diagnostic aid for early-stage and low-AFP-level HCC patients [147]. The overexpression of an eight-miR panel (miR-20a-5p, miR-25-3p, miR-30a-5p, miR-92a-3p, miR-132-3p, miR-185-5p, miR-320a and miR-324-3p) can be used to differentiate between HBV-positive cancer-free controls and HBV-positive HCC patients [148].
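In practice, evaluating such multi-miR panels reduces to training a classifier on serum miR levels (with or without AFP) and reporting a cross-validated ROC AUC. The following is a minimal, hypothetical Python sketch using scikit-learn; the panel members are borrowed from the text above for illustration, but the column names and the synthetic data are assumptions, not values from the cited studies.

# Hypothetical sketch: evaluating a serum miR panel (plus AFP) as an HCC classifier.
# Panel members, column names, and the synthetic data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200  # 100 HCC cases and 100 controls (synthetic)
labels = np.repeat([1, 0], n // 2)
panel = ["miR-92-3p", "miR-107", "miR-3126-5p", "AFP"]
# Simulate log2 serum levels; cases get a small mean shift purely for illustration.
data = pd.DataFrame(
    rng.normal(0, 1, size=(n, len(panel))) + 0.8 * labels[:, None],
    columns=panel,
)

# Cross-validated probabilities avoid training and testing on the same samples.
clf = LogisticRegression(max_iter=1000)
probs = cross_val_predict(clf, data, labels, cv=5, method="predict_proba")[:, 1]
print(f"Panel AUC (5-fold CV): {roc_auc_score(labels, probs):.2f}")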
The use of biomarkers for epigenetic changes involving miRs for the early detection and risk prediction of HCC [156], and as prognostic or diagnostic markers in the clinical management of patients with HCC, is a very promising area [157]. For example, miR-21 and miR-199a are potential biomarkers for HCC [149] and a panel of miRs (miR-192-5p, miR-21-5p and miR-375, alone or combined with AFP) may serve as a blood-based early detection biomarker for HCC screening. Circulating miR-21 is characterized as a potential diagnostic biomarker for HCC because of some of its unique advantages over others. MiRs offer the advantages of being minimally invasive, having serum levels that are stable and reproducible, and having levels that are not influenced by either cirrhosis or viral status, with significant overexpression even in early-stage HCC patients. MiRs can serve as novel co-biomarkers to AFP to improve the diagnostic accuracy of early-stage HCC [158]. Sorafenib administration has been reported to modulate the expression of miRs. Fourteen miRs are upregulated by Sorafenib treatment in HCC cell lines [159]. The overexpression of miR-122 in HCC cell lines makes them sensitive to Sorafenib treatment [160] and the overexpression of miR-122 in HCC cells makes them sensitive to doxorubicin treatment [161]. However, the decreased expression of miR-34a indicates the resistance of HCC cells to Sorafenib [162]. Recently, an artificial lncRNA was generated that overcomes the Sorafenib resistance of HCC cells by targeting multiple miRs [163].
MetastamiRs are miRs that promote or suppress the migration and metastasis of cancer cells, thereby exhibiting a significant functional correlation with the prognosis of HCC. Unlike targeted therapy, metastamiRs have been shown to target multiple mRNAs and signaling pathways with considerable suppression of cancer metastasis, which might in the future enable anti-HCC miR drug development [164]. In addition to miRs, their upstream regulators and downstream target genes can also be used as alternative biomarkers and therapeutic targets for the diagnosis and therapy of HCC. Various miR target prediction tools, such as MiRanda 3.0, TargetScan 5.1 and miRecords, can be used to study miR targets, and these targets can be analyzed further by gene ontology hierarchy (http://pantherdb.org or https://david.ncifcrf.gov/, accessed on 30 September 2021) [165]. RNA-seq and miR array analysis can identify more miRs involved in HCC development, and some of the specifically regulated miRs can be used in targeted therapy. Instead of targeting specific miRs, signaling pathways involved in cancer development can also be used for precision treatment by miRs, and possible methods for miR delivery include the Sleeping Beauty transposon via hydrodynamic tail vein injection [166]. The CRISPR/Cas9 genome editing method and liver-specific gene knockout or knock-in mice are some of the approaches used to study miRs and their target gene functions in HCC [167].
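As a concrete illustration of the target-prediction step mentioned above, the short Python sketch below intersects the predicted targets of a single miR obtained from two tools. It assumes that TargetScan and miRanda predictions have been exported locally as tab-separated files; the file names, column names and the example miR are assumptions made for illustration, not the tools' actual output formats.

# Hypothetical sketch: intersecting predicted targets of one miR from two prediction
# tools. The file names and column names are assumptions, not real export formats.
import pandas as pd

MIR = "hsa-miR-21-5p"  # example miR; any deregulated miR discussed above could be used

targetscan = pd.read_csv("targetscan_predictions.tsv", sep="\t")  # assumed columns: miRNA, gene_symbol, context_score
miranda = pd.read_csv("miranda_predictions.tsv", sep="\t")        # assumed columns: miRNA, gene_symbol, score

ts_genes = set(targetscan.loc[targetscan["miRNA"] == MIR, "gene_symbol"])
mr_genes = set(miranda.loc[miranda["miRNA"] == MIR, "gene_symbol"])

# Genes predicted by both tools are higher-confidence candidates for follow-up,
# e.g. gene ontology analysis with PANTHER or DAVID.
consensus = sorted(ts_genes & mr_genes)
print(f"{len(consensus)} consensus targets for {MIR}")
pd.Series(consensus).to_csv("consensus_targets.txt", index=False, header=False)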
Although treatment options for patients with advanced HCC have improved in recent years, it is critical to develop prognostic markers that anticipate tumor growth and worsening liver function in order to move patients to more successful treatment lines [168,169]. The most important finding from Frundt et al. suggests that exosomal miR-192 levels in the plasma have a diagnostic and predictive value in an HCC patient cohort, and that exosomal miR-192 presence was linked to lower overall survival (OS). Exosomal miR-192 levels were found to be higher in the blood of HCC patients by Xue et al. [150]. Furthermore, high serum levels of exosome- and cell-free circulating miR-192 were linked to poor OS, according to Zhu et al. [170]. Because these patients were treated with surgical resection, microwave ablation (MWA) or transarterial chemoembolization (TACE), the enrichment of miR-192 in exosomes can offer predictive value, especially for patients at an early or intermediate tumor stage. Suheiro et al. recently found that changes in exosomal miR-122 expression are linked to survival in HCC patients treated with TACE, demonstrating miRNAs' ability to act as biomarkers for therapeutic monitoring [171]. miR-16 is known to be downregulated in HCC cells, and its overexpression suppresses HCC cell proliferation, invasion and metastasis [172], implying that miR-16 functions as a tumor suppressor. miR-221 is an oncogenic miRNA that regulates the PTEN/PI3K/AKT and JAK-STAT3 signaling pathways, which are important in the development of HCC [173,174]. Exosomal miR-221 levels were found to be greater in HCC patients than in liver cirrhosis patients by Sohn et al. [135]. These findings point to miR-221's potential utility as a tumor marker for HCC screening in individuals with hepatic cirrhosis. Zhang et al. reported that miRs such as hsa-miR-139-3p, hsa-miR-760 and hsa-miR-7-5p have independent prognostic relevance and were found to be strongly linked with HCC patients' overall survival [175]. The above studies show that miRNAs are also predictive markers in patients with liver cirrhosis, which could aid assessment in these individuals.
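Survival comparisons of the kind cited above (for example, overall survival stratified by exosomal miR-192 or miR-215-5p levels) are typically performed with Kaplan-Meier curves and a log-rank test. Below is a minimal, hypothetical Python sketch using the lifelines package; the follow-up times and event indicators are synthetic and purely illustrative, not patient data from the cited studies.

# Hypothetical sketch: comparing overall survival between "high" and "low" exosomal
# miR-192 groups. All survival data below are synthetic.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Synthetic follow-up times (months) and event indicators (True = death observed).
t_high = rng.exponential(scale=20, size=40)  # "high miR-192" group, shorter survival
t_low = rng.exponential(scale=40, size=40)   # "low miR-192" group
e_high = rng.random(40) < 0.8
e_low = rng.random(40) < 0.8

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="exo-miR-192 high")
print("median OS, high group:", kmf.median_survival_time_)
kmf.fit(t_low, event_observed=e_low, label="exo-miR-192 low")
print("median OS, low group:", kmf.median_survival_time_)

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p-value: {result.p_value:.3f}")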
Strategies for MicroRNA Potential Use in HCC as Therapeutic Targets
The main challenge faced by miR-based therapy is to reach the required drug levels in the tumor. However, these can be achieved with the chemical modification of therapeutic miRs [176]. Table 3 summarizes some of the miRs used in HCC therapeutics (recoverable entries include, for example, liposomal delivery of miR-124 in a DEN-induced HCC mouse model [191] and adenovirus- or herpes simplex virus-mediated delivery of let-7, miR-122, miR-124 and miR-199 in xenograft models [192][193][194][195]). Two strategies are involved in the use of miRs for cancer therapeutics. The first strategy involves inhibiting oncogenic miRs (oncomiRs), whose gain of function drives tumorigenesis, using miR antagonists such as locked nucleic acids (LNAs), antagomiRs and antimiRs. The most commonly used miR inhibitors are LNAs and antisense oligonucleotides [177]. LNAs are RNA analogs with very high affinity and specificity for the complementary miR. LNAs can be used in low doses and are more resistant to digestion by nucleases [178]. For example, the use of LNAs specific for miR-122 in non-human primates chronically infected with HCV suppressed long-term viral growth, supporting its use as a therapeutic agent in HCC [179]. Further, the antagonist of miR-122, miravirsen, was used in a multi-center phase IIA trial in HCC patients and exhibited sequestration of the mature miRNA and a reduction in viral load [180]. Additionally, the use of Morpholino anti-miR-487a oligomers effectively silenced miR-487a in mouse models, resulting in the inhibition of HCC tumor progression with no toxicity to mice in terms of weight loss, other visible impairments or animal death [90,181,182]. The second strategy is miR replacement, re-introducing tumor-suppressor miRs to restore the lost function [183][184][185][186][187][188][189][190][191][192][193][194][195]. The replacement miRs are either short double-stranded oligonucleotides or miR mimics, which are double-stranded RNA molecules with inverted bases and alkyl groups [196]. For example, the use of oligonucleotides targeting miR-221 in a pre-clinical study in an orthotopic HCC mouse model resulted in the inhibition of cell transformation and improved survival. Similarly, the AAV-mediated delivery of miR-122, miR-26a and miR-199a and the systemic restoration of miR-124, miR-29 and miR-375 (in 2′-O-methyl-modified and cholesterol-conjugated form) could inhibit tumorigenesis in HCC animal models [179,197]. Additionally, miR mimics have been used successfully as a strategy. For example, a miRNA mimic of miR-34 (MRX34) has been used in HCC patients in a phase 1 clinical trial. Despite the fact that Mirna Therapeutics terminated the trial early due to substantial immune-mediated side effects that resulted in four patient deaths, the dose-dependent regulation of key target genes shows that a miRNA-based cancer therapy can be effective. MiRs are functional in diverse cellular events and, because of these properties, several clinical trials in cancer research utilizing miRs are currently underway (available online: http://ClinicalTrials.gov (accessed on 15 August 2021)). Many studies show that miRNA-based treatments in cancer provide a proof-of-concept; however, this class of medications still needs to be researched further to prevent immune-related toxicity in patients.
Conclusions
HCC is a complex disease involving a variety of risk factors and is usually diagnosed at an advanced stage, with poor survival, frequent recurrence and limited therapy. Tumor heterogeneity, both at the clinical and molecular level, is well known in HCC and poses a challenge for the development of a targeted therapy. The lack of specific diagnostic markers for HCC presents challenges for the early detection of the disease and cancer therapy. There is an urgent need for novel diagnostic biomarkers to achieve risk stratification and earlier diagnosis of HCC. MiRs are endogenous transcriptional and posttranscriptional regulators of gene expression and have a critical role in the pathogenesis of HCC. They are expressed differentially even at very early stages of cancer and are involved in cancer heterogeneity and metastasis. The emerging role of miRs as novel clinical biomarkers is definitely going to change the face of HCC clinical evaluation through risk prediction, early diagnosis and determining the appropriate therapeutic course of action.
"Biology",
"Medicine"
] |
Genomic Analysis of wig-1 Pathways
Background Wig-1 is a transcription factor regulated by p53 that can interact with hnRNP A2/B1, RNA Helicase A, and dsRNAs, and that plays an important role in RNA and protein stabilization. In vitro studies have shown that wig-1 binds p53 mRNA and stabilizes it by protecting it from deadenylation. Furthermore, p53 has been implicated as a causal factor in neurodegenerative diseases based in part on its selective regulatory function on gene expression, including genes which, in turn, also possess regulatory functions on gene expression. In this study we focused on the wig-1 transcription factor as a downstream p53-regulated gene and characterized the effects of wig-1 down-regulation on gene expression in mouse liver and brain. Methods and Results Antisense oligonucleotides (ASOs) were identified that specifically target mouse wig-1 mRNA and produce a dose-dependent reduction in wig-1 mRNA levels in cell culture. These wig-1 ASOs produced marked reductions in wig-1 levels in liver following intraperitoneal administration and in brain tissue following ASO administration through a single striatal bolus injection in FVB and BACHD mice. Wig-1 suppression was well tolerated and resulted in the reduction of mutant Htt protein levels in BACHD mouse brain but had no effect on normal Htt protein levels or on p53 mRNA or protein levels. Expression microarray analysis was employed to determine the effects of wig-1 suppression on genome-wide expression in mouse liver and brain. Reduction of wig-1 caused both down-regulation and up-regulation of several genes, and a number of wig-1-regulated genes were identified that potentially link wig-1 to various signaling pathways and diseases. Conclusion Antisense oligonucleotides can effectively reduce wig-1 levels in mouse liver and brain, which results in specific changes in gene expression for pathways relevant to both the nervous system and cancer.
Introduction
wig-1 is a p53-regulated gene (WT p53 induced gene 1; also known as PAG608 and ZMAT3) that was originally identified in a mouse cell line using a PCR-based differential display technique to find mRNAs induced by wild type p53 [1,2]. The wig-1 gene encodes a C2H2-type zinc finger protein that localizes mainly to the nucleus [3,4]. The wig-1 structural features are shared with a small group of proteins, such as JAZ, that can positively regulate p53 transcriptional activity in a positive feedback manner [5,6]. A rat homolog of wig-1, PAG608, was independently identified by Israeli et al. [2] and human wig-1 has also been cloned and characterized [3]. Mouse wig-1 is highly homologous to the rat and human orthologs, and shares 97.9% and 87% amino acid sequence identity, respectively. Rat wig-1 (PAG608) has weak proapoptotic activity when over-expressed in human tumor cells and human wig-1 can suppress cell growth by 25-30% in a colony formation assay [2,3].
wig-1 has also been shown to interact with heterogeneous nuclear ribonucleoprotein (hnRNP) A2/B1, RNA Helicase A (RHA), and dsRNA [4,7,8]. A relationship between wig-1 and p53 was also inferred from studies in which wig-1 was suppressed by siRNA in vitro. It was shown that wig-1 binds to p53 mRNA in vitro and stabilizes it by protecting it from deadenylation. It was suggested that this effect is mediated by the U-rich region in the 3′ UTR of p53 mRNA [9]. Because p53 is involved in regulating cell death, it has the potential to play a significant role in the progression of neurodegenerative diseases including Huntington Disease (HD), where it has been found to affect phenotype in mouse models of HD [10]. Furthermore, a genetic interaction between the murine homologue of huntingtin (htt) and p53 has also been reported to cause significant reductions in the severity of the HD phenotype in mice [11]. Moreover, recent in vitro and animal studies have shown that activation of p53 can promote huntingtin transcription and up-regulation of wild-type HTT protein, suggesting that p53 and htt might interact functionally, and that changes in p53 status may alter HTT levels and, presumably, the HD phenotype [12].
The role of p53 in HD pathogenesis will likely involve different pathways, and it seems that targets of p53 activity might be responsible for different aspects of p53-related effects within neurodegenerative pathways. In this study we decided to focus on wig-1 as a potential downstream target of p53 in neurodegenerative diseases by identifying genes that are potentially under regulation by wig-1. Microarray analysis was performed on the striatum of the HD mouse model, BACHD, treated locally with or without wig-1 antisense oligonucleotides (ASOs), and on the liver following systemic treatment with wig-1 ASOs [13,14,15]. Our studies identify a number of neuronal and non-neuronal genes that are affected by wig-1 expression in mouse brain and liver. Furthermore, genes found to be impacted by wig-1 suppression have been implicated in a wide range of critical pathways involved in neuronal function such as brain development and axon guidance, cancer, and mitochondrial function.
Identification and characterization of a wig-1 antisense inhibitor in vitro and in vivo
To study the effects of wig-1 on htt and p53, we designed antisense oligonucleotides to specifically target mouse wig-1 mRNA. These chimeric ASOs contain ribonuclease H-sensitive stretches of 2′-deoxy residues flanked on both sides with a stretch of 2′-O-(2-methoxyethyl) modifications, which increases RNA binding affinity and confers nuclease resistance. The central 2′-deoxy domain produces a substrate for endogenous RNase H enzymes once hybridized to the complementary target RNA, resulting in cleavage of the target RNA and thereby lowering target protein levels [16] (Figure 1A). A series of ASOs were designed to bind within the coding region of the wig-1 mRNA sequence. Rapid-throughput screens were performed in the immortalized mouse brain endothelial cell line bEND. Cells were transfected with ASOs and harvested after 24 hours. The reduction of wig-1 expression was analyzed with real-time quantitative RT-PCR. Based on the relative reduction in wig-1 mRNA levels, 4 ASOs were selected and further characterized in a dose-response screen. Wig-1 mRNA levels were significantly decreased in a dose-dependent manner, with 90-95% reduction in wig-1 mRNA levels observed at a concentration of 45 nM (Figure 1B). These ASOs were further characterized in animals.
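A dose-response screen of this kind is commonly summarized by fitting a sigmoidal curve to the residual target mRNA levels and estimating an IC50. The following Python sketch is illustrative only: it fits a four-parameter logistic with SciPy to made-up knockdown values chosen to be roughly consistent with the 90-95% reduction reported at 45 nM; neither the concentrations nor the expression values are taken from the actual screen data.

# Hypothetical sketch: four-parameter logistic fit to ASO knockdown data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Residual wig-1 mRNA (% of mock) as a function of ASO dose (nM)."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

doses = np.array([1.6, 5.0, 15.0, 45.0])        # nM, illustrative 3-fold dilution series
expression = np.array([85.0, 55.0, 20.0, 7.0])  # % wig-1 mRNA remaining, illustrative

params, _ = curve_fit(four_pl, doses, expression, p0=[5.0, 100.0, 10.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.1f} nM, Hill slope ~ {hill:.2f}")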
In vivo activity of wig-1 ASOs was characterized following intraperitoneal (IP) administration at 50 mg/kg or saline, twice a week for 4 weeks in BALB/c mice. Animals were euthanized 48 hours following the last dose. Serum transaminases, organ weights and organ histopathology were performed as a measure of overall tolerability. Liver samples were analyzed for changes in wig-1 mRNA levels and compared to saline treatment. All four ASOs reduced wig-1 mRNA levels by approximately 80% and ASO treatment was well tolerated ( Figure 1C). One wig-1 ASO (ASO3) was selected for further studies.
Figure 1 (legend, in part): bEND cells were transfected with the indicated concentrations of wig-1 ASOs; RNA was extracted 24 hours after transfection and analyzed by RT-PCR to determine wig-1 mRNA levels. C) Male BALB/c mice were injected intraperitoneally with 4 different wig-1 ASOs at 100 mg/kg body weight per week or with saline for 4 weeks; total RNA was prepared from liver and used for real-time quantitative RT-PCR analysis to evaluate wig-1 mRNA levels. Data are expressed as means ± SEM (n = 4, ***p ≤ 0.01).
Characterization of wig-1 ASO activity in mouse brain
Whole brain extracts of mice show relatively high expression levels of wig-1. To characterize the effects and the duration of action of the wig-1 ASO in mouse brain, we administered wig-1 ASO to FVB mice (the background strain for BACHD; 8 weeks old) by a single bolus injection into the right striatum at 25, 50, and 75 µg and determined the effects on wig-1 mRNA and protein levels in brain tissue one, two, three and four weeks post injection (Figure 2). Immunohistochemistry using an antibody that reacts with the ASO confirmed ASO distribution to different regions of the brain as well as ASO uptake by different cell types (Figure 2A). Wig-1 ASO was taken up by both neuronal and glial cells, which is consistent with previous studies on ASO intrathecal administration in rats [17]. Immunostaining was present in sections of the brain adjacent to the injection site as well as in other regions, but staining decreased in intensity in a gradient as the distance from the injection site increased (Figure 2A). Hematoxylin and eosin staining of brain sections did not show any noticeable morphological abnormalities following ASO injection, nor did any animals display abnormal behavior (data not shown). The magnitude of reduction of wig-1 mRNA levels was dependent on the dose of injected ASO, with approximately 40% wig-1 mRNA reduction observed at 25 µg ASO and maximal (approximately 80%) reduction observed at 50 µg ASO. Furthermore, the duration of action of the wig-1 ASO lasted between 3 and 4 weeks following a single 75 µg dose (Figure 2B).
Figure 2. Dose-dependent reduction of wig-1 mRNA levels in mouse brain following striatal bolus administration. A) Distribution of wig-1 ASO was confirmed by staining with an antibody that recognizes oligonucleotides; at higher magnification, ASO is visible in various cells, including neurons and astrocytes. B) Male FVB mice received a 75 µg single bolus injection of wig-1 ASO or PBS in the striatum. Animals were sacrificed after one, two, three, or four weeks; striata were harvested, and total RNA was prepared from these sections and used for real-time quantitative RT-PCR analysis to evaluate wig-1 mRNA levels. Histological examination with hematoxylin and eosin (H&E) staining did not show any remarkable abnormality in the brains of treated animals compared with controls (data not shown). Data are expressed as means ± SEM (n = 4; **p ≤ 0.05; ***p ≤ 0.01).
Reduction in wig-1 levels promotes reduction in mutant HTT levels but fails to decrease p53 levels in vivo
To determine the effects of wig-1 knockdown on htt expression, wig-1 ASO was administered as a single striatal injection (75 µg) in 5-month-old BACHD mice, a model of Huntington's disease, and compared with PBS or a control oligonucleotide. BACHD mice express full length human mutant htt with 97 glutamine repeats under the control of endogenous htt regulatory elements. These mice exhibit progressive motor deficits, neuronal synaptic dysfunction, and late-onset selective neuropathology, which include significant cortical and striatal atrophy and neuronal degeneration. Animals were euthanized one or three weeks following the ASO administration, and wig-1 mRNA and protein levels were determined. Immunohistochemistry was used to confirm ASO distribution and to assess morphological changes in brain sections. Wig-1 mRNA levels were significantly reduced (~70%) at both one and three weeks following ASO treatment, whereas treatment with the control oligonucleotide had no effect on wig-1 mRNA levels (Figure 3A). Furthermore, a reduction in wig-1 protein levels was also observed in wig-1 ASO-treated samples, with reductions comparable to the observed reductions in mRNA levels (Figure 3B). Interestingly, treatment with wig-1 ASO resulted in a small (~40%) but significant lowering of mutant htt protein levels (Figure 3B). However, levels of wild-type HTT protein were not significantly changed following wig-1 ASO treatment (Figure 3B).
Figure 3. Effects of wig-1 ASO intrastriatal treatment on wig-1 and HTT levels in BACHD striatum. 5-month-old male BACHD mice received a 75 µg single bolus injection of wig-1 ASO, PBS, or control ASO in the striatum. Animals were sacrificed after one or three weeks. A) Striatum was harvested, and total RNA was prepared from these sections and used for real-time quantitative RT-PCR analysis to evaluate the expression of wig-1 mRNA. B) Tissue homogenates were prepared from striatum and used for analysis of Wig-1 and HTT protein levels. Data are expressed as means ± SEM (n = 4; **p ≤ 0.05; ***p ≤ 0.01). Letters A1-A4, B1-B4, D1-D4 refer to individual animals.
The effects of wig-1 ASO treatment on p53 mRNA and protein levels were examined next in BACHD and FVB brain, and in BALB/c liver. Surprisingly, no reduction in p53 levels was observed in wig-1 ASO-treated animals despite the marked reduction in wig-1 levels. In fact, there was a slight trend for increased levels of p53 mRNA in BACHD and FVB striatum and of p53 protein in FVB striatum in animals treated with wig-1 ASO (Figure 4A and B). Furthermore, whole cell lysates of BALB/c liver samples did not show any increase or decrease in p53 levels following wig-1 ASO treatment (Figure 4C). These findings suggest that wig-1 does not influence p53 mRNA or protein levels directly in mouse striatum and liver.
Expression profile analysis of brain samples treated with wig-1 ASO
Gene array expression analysis was performed on BACHD mice treated with wig-1 ASO versus vehicle-treated animals in order to gain insight into putative pathways affected by wig-1 suppression. Gene expression profiles of BACHD striatum were obtained through whole genome microarray analysis. Genes with significant changes in expression following wig-1 ASO treatment were identified to determine potential gene pathways under the control of wig-1 using Ingenuity Pathway Analysis software (Ingenuity Systems, CA, USA).
To improve the probability of successfully identifying novel pathways regulated by wig-1, a large number of genes displaying altered expression following wig-1 ASO treatment was obtained to support gene network analysis. We filtered on statistically significant genes (FDR cutoff 0.1) having -log10(p-values) ≥ 3.0 and absolute fold-changes ≥ 1.5. This filtering approach created a probe list of 260 genes that exhibited altered expression upon wig-1 ASO treatment. A hierarchical cluster analysis [18] of the log2 intensity profiles of the 260 genes is shown as a heatmap in Figure 5. This heatmap shows that both up-regulation and down-regulation of genes occur in BACHD mouse striatum when treated with wig-1 ASO, with a larger portion of genes showing up-regulation. In fact, using the selection criteria described, 204 genes were found to be up-regulated and 56 genes down-regulated following wig-1 ASO treatment. Many genes that were significantly down-regulated are predicted to play a role in nervous system development and function, psychological disorders and cell-cycle control (Table 1 and Figure S2). Furthermore, analysis of the up-regulated genes identified genes that are predicted to play a role in neurological disease and cell-to-cell signaling (Table 2 and Figure S3).
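The filtering and clustering steps described above can be reproduced in outline with a few lines of Python. The sketch below applies the stated cutoffs (FDR ≤ 0.1, -log10(p) ≥ 3.0, absolute fold change ≥ 1.5) and hierarchically clusters the log2 intensities of the passing probes; the input file name and column layout are assumptions, not the authors' actual analysis files.

# Hypothetical sketch of the filtering and clustering described above. Assumes a results
# table with per-probe p-values, FDR-adjusted values, signed fold changes (negative =
# down-regulated), and log2 intensity columns for each sample.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram

results = pd.read_csv("wig1_aso_vs_vehicle_results.csv")  # assumed columns used below

passed = results[
    (results["fdr"] <= 0.1)
    & (-np.log10(results["p_value"]) >= 3.0)
    & (results["fold_change"].abs() >= 1.5)
]
up = (passed["fold_change"] > 0).sum()
down = (passed["fold_change"] < 0).sum()
print(f"{len(passed)} probes pass the filters ({up} up-regulated, {down} down-regulated)")

# Hierarchical clustering of log2 intensities for the filtered probes (heatmap input).
intensity_cols = [c for c in results.columns if c.startswith("log2_")]
link = linkage(passed[intensity_cols].to_numpy(), method="average", metric="correlation")
order = dendrogram(link, no_plot=True)["leaves"]
clustered = passed.iloc[order]  # row order in which to draw the heatmap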
PCR and western blot confirmation
RT-PCR confirmation of microarray results was performed on a subset of genes identified as being down-regulated following wig-1 ASO treatment. Specifically, five genes (PKCε, AUTS2, ROBO2, PLEKHA5, and IMMP2L) were selected for confirmation. Consistent with the microarray results, all of these genes showed down-regulation in BACHD and FVB brain as well as BALB/c liver in animals treated with wig-1 ASO, as compared to control samples. For example, PKCε, which is highly expressed in both mouse brain and liver, was significantly reduced at both the RNA and protein levels in BACHD and FVB brain, and in BALB/c liver (p ≤ 0.1 and p ≤ 0.01, respectively) (Figure 6). Wig-1 ASO treatment produced significant reductions in AUTS2 mRNA levels in BACHD brain (p ≤ 0.01) (Figure 7). A trend toward reduction of AUTS2 mRNA levels was observed in BALB/c liver (p ≤ 0.1), whereas no reduction in AUTS2 levels was observed in non-BACHD (FVB) mouse brain. Autism susceptibility candidate 2 (Auts2) is a gene associated with autism and mental retardation [19,20] whose exact function is unknown, but it is expressed in different regions of the mouse brain.
Roundabout axon guidance receptor homolog 2 (ROBO2) [21,22,23] is also expressed in brain and liver. Intrastriatal injection of wig-1 ASO in BACHD and FVB mice resulted in significant reductions of ROBO2 mRNA levels in both BACHD and FVB striatum (p ≤ 0.05 and p ≤ 0.1, respectively) (Figure 7). Furthermore, systemic administration of the wig-1 ASO also resulted in a marked reduction of ROBO2 mRNA levels in BALB/c liver (p ≤ 0.1).
Another gene that was found to be down-regulated in BACHD brain and BALB/c liver (p ≤ 0.1 and p ≤ 0.05, respectively) following treatment with wig-1 ASO was Pleckstrin homology domain containing family A member 5 (PLEKHA5), which is expressed in different tissues, with the highest levels of expression in prefrontal cortex, fetal brain, uterus, and adrenal gland (Figure 7). Furthermore, a statistically significant decrease in expression of the inner mitochondrial membrane peptidase 2-like (IMMP2L) [24,25] gene was also observed in BACHD (p ≤ 0.05) and FVB (p ≤ 0.01) striatum following wig-1 ASO treatment; IMMP2L has been linked to a possible rare cause of autism and Gilles de la Tourette syndrome (GTS) [25]. Reductions in IMMP2L mRNA levels were also observed in BALB/c liver following systemic treatment with wig-1 ASO (p ≤ 0.01) (Figure 7). Glycoprotein transmembrane nmb (gpnmb) is expressed widely in tissues including brain, liver, retina, skin, placenta, and salivary gland. Following striatal wig-1 ASO treatment, a marked elevation of gpnmb mRNA levels was observed in BACHD (p ≤ 0.1) and FVB (p ≤ 0.05) striatum (Figure 8). Furthermore, elevations in gpnmb mRNA levels were also observed in BALB/c liver following systemic treatment with wig-1 ASO (p ≤ 0.1) (Figure 8).
Discussion
wig-1 is a regulatory protein that has been shown to function as a transcription factor, an RNA binding protein, and a regulator of both RNA and protein stabilization [1,4,9]. Furthermore, wig-1 has also been shown to have a complex relationship with the tumor suppressor p53, with reports indicating that p53 can control levels of wig-1 and that wig-1 can regulate levels of p53, suggesting that p53 and wig-1 tightly control each other's expression in certain cell systems [9]. Moreover, p53 has been linked with expression levels of the Huntington gene and with the Huntington phenotype in mice [11,26]. Therefore, we investigated the effects of wig-1 on p53 levels, on htt expression, and on gene expression more broadly using microarray analysis in mouse brain and liver using highly specific wig-1 antisense oligonucleotides (ASOs). ASOs were administered to normal BALB/c mice systemically and to BACHD and FVB mice by intrastriatal injection, and effects on gene expression were examined. Since ASOs do not cross the blood brain barrier [27,28], local administration is required for ASO-mediated activity in the CNS. Direct ASO administration to the striatum resulted in broad distribution with strong uptake in both neuronal and glial cells, with the intensity of ASO uptake being greatest in the proximity of the ASO injection site. Moreover, marked reductions in wig-1 mRNA and protein levels were demonstrated in the BACHD and FVB striatum and in BALB/c liver, and this reduction in wig-1 levels was well tolerated over a four-week period.
wig-1 has been reported to bind to p53 mRNA in vitro, causing stabilization of the p53 message [9]. Accordingly, siRNA-mediated reduction in wig-1 levels resulted in a corresponding reduction in p53 levels in cell culture [9]. However, in the studies reported here, suppression of wig-1 levels in mouse striatum or in mouse liver had no significant effect on p53 mRNA or protein levels. The reason behind this discrepancy is unclear, but most likely is related to differences between the cell culture system and animals, or the specific cell types examined. Certainly, cell culture may not be an accurate representation of gene regulation in animals. Additionally, the effects of wig-1 on p53 mRNA levels may differ between different cell types, such as striatal cells (i.e., neuronal and glial cells) and hepatocytes as described here relative to the osteosarcoma cells and fibroblasts that were investigated in the prior report. Interestingly, ASO-mediated suppression of endogenous wig-1 levels in the striatum of BACHD mice led to a significant reduction (approximately 50%) in mutant HTT protein levels with no significant effect on the levels of endogenous wild type HTT. The mechanisms underlying this result are not yet clear. However, the fact that we have not seen significant changes in mutant HTT mRNA levels despite reductions in mutant protein levels suggests that wig-1 may be regulating mutant HTT levels post-transcriptionally (data not shown). Nevertheless, more studies are needed to establish the link between wig-1 and mutant HTT protein expression. Despite the lack of effects on p53 mRNA and protein levels in mouse liver and striatum following wig-1 suppression, broad changes in mRNA levels were observed following wig-1 suppression in mouse striatum based on microarray analysis. Our results suggest that wig-1 can regulate gene expression through both gene repression and activation, as reflected by both down-regulation and up-regulation of genes following wig-1 ASO suppression. We identified 204 genes that were up-regulated and 56 genes that were down-regulated following wig-1 ASO treatment. These genes have been linked to a broad range of putative cellular functions including cell cycle regulation, DNA replication, cell survival, and neurological function.
Wig-1 suppression promoted confirmed changes in two genes linked to cancer. Wig-1 ASO treatment resulted in decreases in PKCe levels in both mouse brain and liver. PKCe signaling is involved in cell invasion, motility, proliferation, and survival, and has been linked to malignancies of the central nervous system [29]. The role of PKCe in cancer promotion is believed in part to involve the ras signaling pathway [30,31,32] and the regulation of expression of specific Bcl-2 family members [33]. Wig-1 suppression also caused a significant upregulation in expression of gpnmb in mouse brain and liver, a transmembrane glycoprotein also known as osteoactivin or HGFIN. Gpnmb has been implicated as a tumor suppressor, and has been reported to promote cell transformation and proliferation, and loss of cell contact dependency [34,35,36]. Furthermore, high expression of gpnmb has been associated with aggressive melanoma, glioma and breast cancer [37,38,39,40]. Interestingly, wig-1 has also been reported to regulate tumor cell apoptosis and cell proliferation [2,3]. Our findings suggest potential genes and mechanisms relevant to cancer pathways that wig-1 may be acting through.
Our results also suggest a role for wig-1 in regulating gene expression within specific pathways relevant to CNS function and disease. Wig-1 suppression led to a reduction in expression of numerous genes in mouse brain and liver. Genes down-regulated and confirmed by RT-PCR included AUTS2, ROBO2, and IMMP2L. AUTS2 (Autism susceptibility candidate 2) is a nuclear protein that is highly expressed in developing neurons of certain brain regions, notably the frontal cortex and cerebellum, and has been linked with the neuropathy of autism [19,41]. ROBO2 (roundabout axon guidance receptor homolog 2) is a receptor for SLIT1 ligand, which is critically important for axon guidance and in CNS development [42,43,44]. IMMP2L (inner mitochondrial membrane peptidase 2-like) is a mitochondrial peptidase believed to be involved in polypeptide precursor processing within the inner mitochondrial membrane and has been shown to be critical for normal mitochondrial function through knockout mouse studies (Lu et al.). Interestingly, the IMMP2L locus has been linked with Autism Spectrum Disorders (ASDs) [24] and with Tourette Syndrome [45,46,47]. Regulation of expression of both AUTS2 and IMMP2L by wig-1 suggests a possible role for wig-1 in autism.
The mechanism by which wig-1 regulates the expression of genes shown to be modulated following wig-1 ASO treatment is unknown. However, wig-1 has been shown to bind to the promoter of some genes through its C2H2 zinc finger motifs and thereby regulate transcription [1]. More recently, regulation of RNA stability through binding of wig-1 to putative RNA sequence motifs within the 3′ UTR of certain RNAs, including p53 [9], has been proposed based on similarities between the wig-1 zinc finger RNA binding motif and those of a small group of dsRNA-binding proteins [5]. Interestingly, examination of the 3′ UTR of wig-1-regulated mRNAs identified in this study revealed the presence of sequence motifs (e.g., UUAUUUAUU and AUUUAAUUUA) that have been linked with RNA:protein binding and regulation [48,49,50,51].
In summary, our studies indicate that wig-1 regulates gene expression broadly in mouse brain and liver through mechanisms that appear independent of p53. Our findings indicate that reduction of wig-1 by ≥80% in adult mouse brain and liver is generally well tolerated, and suggest that wig-1 can positively and negatively regulate the expression of genes implicated in various cellular pathways and pathologies, including cancer and neurodegeneration.
2nd generation Antisense Oligonucleotides (ASOs) chemistry
All oligonucleotides were 20 nucleotides in length and chemically modified with phosphorothioate in the backbone and 2′-O-methoxyethyl (MOE) on the wings, with a central deoxy gap ("5-10-5" design). Oligonucleotides were synthesized using an Applied Biosystems 380B automated DNA synthesizer (Perkin Elmer-Applied Biosystems, Foster City, CA, USA) and purified. A negative control oligonucleotide, which has the same chemical composition as the wig-1 ASO but no complementarity to any known gene sequence, was also included in the studies.
Identification and characterization of ISIS wig-1 ASOs in vitro and in vivo
To identify mouse wig-1 antisense inhibitors, rapid-throughput screens were performed in the bEND cell line (ATCC CRL-2299™). In brief, 80 ASOs were designed against the wig-1 mRNA sequence, all of which targeted binding sites within the coding region of the wig-1 mRNA. The reduction of target gene expression was analyzed by real-time quantitative RT-PCR after transfection of the cells with ASOs for 24 h. Based on target reduction, 4 ASOs were selected and further characterized in a dose-response screen. The most potent ASOs from the screen were chosen, and their in vivo activity was confirmed by intraperitoneal (IP) administration in BALB/c mice. The most potent ASO was chosen as the wig-1 ASO for subsequent studies.
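The screen-then-confirm workflow described above boils down to ranking candidate ASOs by how much target mRNA remains after transfection. The short Python sketch below is purely illustrative and uses invented data, ASO identifiers and column names, none of which come from the study: it ranks 80 simulated ASOs by mean knockdown and picks the four most potent for dose-response follow-up.

```python
import numpy as np
import pandas as pd

# Hypothetical screen readout: percent wig-1 mRNA remaining (untreated control
# = 100%) for each of 80 ASOs, measured by qRT-PCR in duplicate wells.
rng = np.random.default_rng(0)
screen = pd.DataFrame({
    "aso_id": [f"ASO_{i:03d}" for i in range(1, 81)],
    "pct_remaining_rep1": rng.uniform(20, 110, 80),
    "pct_remaining_rep2": rng.uniform(20, 110, 80),
})

# Rank ASOs by mean knockdown (lower % remaining = more potent inhibitor)
# and keep the four best for the dose-response screen.
screen["pct_remaining_mean"] = screen[
    ["pct_remaining_rep1", "pct_remaining_rep2"]].mean(axis=1)
top4 = screen.nsmallest(4, "pct_remaining_mean")
print(top4[["aso_id", "pct_remaining_mean"]])
```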
Animal studies
This study was conducted in accordance with the guidelines of the Institutional Animal Care and Use Committee at Isis Pharmaceuticals and approved by the committee. Protocol number and PHS Assurance numbers are P-0190 and A4318-01, respectively. Six-week-old male BALB/c mice were injected with four wig-1 ASOs at 100 mg/kg/week or saline twice/week for 4 weeks. ASOs were dissolved in PBS and administered intraperitoneally. Animals were euthanized 48 hours after the last dose. Blood, liver, kidney and spleen were collected to measure toxicity, PK, as well as RNA and protein levels.
Treatment and surgery
Groups of four FVB mice (8 weeks of age) and BACHD mice (5 months old) were treated with wig-1 ASO at a dose of 25, 50, or 75 µg delivered by a single striatal bolus injection. Control groups of four FVB and BACHD mice were similarly treated with PBS or with the negative control oligonucleotide. Mice were individually anaesthetized with 3% isoflurane and were maintained throughout the surgical procedure in an ASI small animal stereotaxic system (ASI Instruments, SAS-4100) with a gas nose cone delivering 2% isoflurane. The scalp of the animal was sterilized with iodine solution followed by 70% ethanol. A longitudinal midsagittal incision 1 cm in length was then made in the scalp. ASO was delivered with a Hamilton gas-tight 10 µL syringe with a removable 26-gauge needle by advancing the end of the needle through the skull to the appropriate coordinate. Coordinates used: 0.5 mm anterior, 2.0 mm lateral on the right and 3.0 mm deep from Bregma, with a flat-skull nosebar setting. Different concentrations of ASO in a total volume of two µl per concentration were administered by injection. After one, two, three and four weeks the mice were euthanized using isoflurane followed by decapitation. Brain tissue, including the striatum, was extracted for protein and RNA analysis. Histopathology was conducted to confirm the distribution of ASO as well as safety.
Real-time quantitative PCR
RNA was extracted using a QIAGEN RNeasy kit (QIAGEN). The mRNA was reverse transcribed to cDNA using MuLV reverse transcriptase (New England Biolabs). The abundance of transcripts was assessed by real-time PCR on a 7700 Fast Real-Time PCR System (Applied Biosystems). Each run was evaluated in triplicate for both the gene of interest and the endogenous control for mRNA levels, cyclophilin A (Figure S1). The expression data for the gene of interest were normalized for the efficiency of amplification, determined by the standard curve included in each data acquisition.
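The normalization described here (amplification efficiency from a standard curve, expression ratio relative to cyclophilin A) is a routine relative-quantification calculation. The sketch below is a generic illustration with invented Ct values and dilution points, not the study's data; the knockdown it happens to report is a property of the made-up numbers.

```python
import numpy as np

def quantity_from_ct(ct, std_log10_qty, std_ct):
    """Convert Ct values to relative quantities using a standard curve
    (Ct is linear in log10 of the input quantity)."""
    slope, intercept = np.polyfit(std_log10_qty, std_ct, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0        # ~1.0 for a perfect assay
    print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.0%}")
    return 10 ** ((np.asarray(ct) - intercept) / slope)

# Hypothetical standard curve: 10-fold dilutions spanning five logs.
std_log10_qty = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
std_ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])

# Hypothetical triplicate Cts: gene of interest and cyclophilin A,
# in saline- vs. wig-1 ASO-treated samples.
goi_saline, goi_aso = [22.1, 22.3, 22.0], [24.6, 24.8, 24.5]
ref_saline, ref_aso = [19.0, 19.1, 18.9], [19.1, 19.0, 19.2]

def normalized_expression(goi_ct, ref_ct):
    goi_q = quantity_from_ct(goi_ct, std_log10_qty, std_ct).mean()
    ref_q = quantity_from_ct(ref_ct, std_log10_qty, std_ct).mean()
    return goi_q / ref_q                           # normalized to cyclophilin A

ratio = (normalized_expression(goi_aso, ref_aso)
         / normalized_expression(goi_saline, ref_saline))
print(f"expression in ASO-treated relative to saline: {ratio:.2f}")
```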
Immunohistochemistry
For hematoxylin and eosin (H&E) staining, pieces of liver from BALB/c mice were fixed in 10% buffered formalin and embedded in paraffin wax. Brain sections were also fixed in 10% formalin. Multiple adjacent 4-µm sections were cut and mounted on glass slides. After dehydration, the sections were stained. Images of the histological sections were analyzed.
Microarray gene expression analysis
Total RNA was extracted from BACHD striatum treated with wig-1 ASO, negative control ASO, or PBS, using RNeasy Plus mini kits (Qiagen). Gene expression was analyzed by hybridization to the MouseWG-6 v2 Expression BeadChip array (Illumina), and array data analysis was performed using the Bioconductor packages lumi and siggenes [52]. The entire microarray data set has been submitted to GEO [53] and can be referenced via accession ID GSE29751. Siggenes was used to determine probes showing statistically significant differential expression at an FDR (False Discovery Rate) cutoff of 0.100. Gene network prediction was performed using Ingenuity™ Pathway Analysis (IPA) (Ingenuity Systems, CA, USA).

Figure S1. List of primers used for real-time PCR. (DOCX)

Figure S2. Gene network pathways identified by Ingenuity Pathway Analysis for down-regulated genes. The table identifies the genes within a network and includes a score, which is used to rank the networks. The genes within each network comprise genes identified by microarray analysis in addition to other genes within the network as identified by Ingenuity Pathway Analysis. The total number of genes identified as down-regulated via microarray analysis within each network is given in the column labeled "Focus Molecules". The score for each network is obtained from the −log10(p-value), where the p-value is obtained from a Fisher exact test. The score ranks the networks based on the probability of obtaining the same networks by chance when sampling a similar number of genes from the Ingenuity Knowledge Base. Network scores with a high value (≥ 2) are more significant. (DOCX)

Figure S3. Gene network pathways identified by Ingenuity Pathway Analysis for up-regulated genes. The table identifies the genes within a network and includes a score, which is used to rank the networks, as described for Figure S2, with the "Focus Molecules" column giving the total number of genes identified as up-regulated via microarray analysis within each network. (DOCX)
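Two of the quantities mentioned above are easy to make concrete: the IPA-style network score, which is simply −log10 of a Fisher exact test p-value for the overlap between the regulated ("focus") genes and a network, and an FDR cutoff applied to per-probe p-values. The sketch below is a minimal illustration with invented numbers; it uses a Benjamini-Hochberg adjustment as a stand-in for the SAM-based FDR computed by siggenes, so it is not a re-implementation of the actual pipeline.

```python
import numpy as np
from scipy.stats import fisher_exact

def network_score(focus_in_network, network_size, focus_total, universe_size):
    """-log10 of a Fisher exact test p-value for the overlap between the
    regulated ('focus') genes and the genes of one candidate network."""
    a = focus_in_network                      # focus genes inside the network
    b = network_size - focus_in_network       # other genes inside the network
    c = focus_total - focus_in_network        # focus genes outside the network
    d = universe_size - network_size - c      # everything else
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    return -np.log10(p)

# Hypothetical example: 12 of 35 network genes are among 150 down-regulated
# genes drawn from a knowledge base of ~20,000 genes; scores >= 2 are significant.
print(f"network score = {network_score(12, 35, 150, 20000):.1f}")

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment (illustrative stand-in only)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    q = np.empty_like(p)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

pvals = [0.0001, 0.004, 0.03, 0.2, 0.6]
print(benjamini_hochberg(pvals) <= 0.100)     # probes passing the 0.100 cutoff
```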
Boundary Integral Technique of 2nd Order Partial Differential Equation by Using Radial Basis Functions
The solution of a second order partial differential equation with continuously varying coefficients, obtained by forming an integral equation and then using radial basis function approximation (RBSA), is developed in this paper. Use of the boundary element method (BEM), which gives the solution of heat or mass diffusion in a non-homogeneous medium with coefficients varying smoothly in space, is also part of this article. Discretization of the boundary of the integral domain, instead of the entire domain of the problem concerned, is a further distinction of the present work. The numerical solution of some problems with known values of the variable is included at the end.
Equation (1) is valid for two-dimensional steady-state flow in an anisotropic medium or diffusion of mass in an anisotropic medium, where w is the temperature or concentration and Z_ij are the diffusion or conduction coefficients, with Z_ij = Z_ji. Furthermore, equation (2) presents the constraints for the integral-based scheme associated with equation (1). Reutskiy [3], Al-Jawary and Wrobel [4], Rangelov et al. [5], and Ferreira [6] studied the behavior of graded materials in nonhomogeneous media.
Efficiency in computation and accuracy of treatment make numerical methods based on integral equations advantageous for the treatment of such boundary value problems.
To solve the integral equations derived from such numerical techniques, trial functions play a vital role. Trial functions are of many types, such as polynomials, trigonometric functions, and radial basis functions. Radial basis functions, such as conical and multiquadric radial basis functions, have recently been found useful by a number of researchers (Lin et al. [7]). The use of radial basis functions in the modern era, as in Lin and Reutskiy [8], has revolutionized research in a number of fields. Clements [9] and Ooi et al. [10] established the solution of equation (1) for constant values of Z_ij, but when Z_ij changes continuously, the solution of equation (1) is a really challenging case.
A suitable fundamental solution of equation (1) for varying coefficients Z_ij is difficult, if not impossible, to obtain. If a fundamental solution is used to form the integral equation for the special case in which Z_ij is a constant tensor multiplied by a scalar function g(x_1, x_2), with g(x_1, x_2) a uniformly changing function and the tensor constant, then the resulting formulation is not purely a boundary integral but also contains a domain integral with w as integrand. Brebbia and Nardini [11] used the dual reciprocity method to find an approximate solution in terms of boundary integrals. Ang [12] and Tanaka et al. [13] proposed the dual reciprocity method for this special form of Z_ij.
This paper extends the use of the boundary element method (BEM) for the steady-state diffusion equation by adding a source term and taking Z_ij as a smoothly varying function. No restriction is imposed on Z_11, Z_12, and Z_22 as in the work of Ang [14], Dineva et al. [15], and Rangogni [16]; the only condition on Z_11, Z_12, and Z_22 is that they satisfy the definiteness condition (2) in the solution domain. This paper also employs radial basis trial functions to approximate w(x_1, x_2) and to convert the given diffusion equation to an elliptic diffusion equation, as in Ang et al. [17] and Fahmy [18]; the integral formulation here does not involve any domain integral. At the end, specific problems are solved for the unknown by converting the problem into a set of algebraic equations.
Steps for Solution
The steady-state anisotropic diffusion equation is taken as the governing equation, with w, Z_ij, and the source term as defined above.
Reformation
We rewrite the system of equation (3) in an equivalent form in which the coefficients h_ij are smoothly varying functions of x_1 and x_2, and the h(0)_ij are constant terms. Substitutions are then introduced in which v_1 and v_2 are each related to w; it is straightforward to verify that these relations satisfy equation (3). Equation (8) is then discretized into linear algebraic equations, and the resulting algebraic equations are solved using the boundary conditions.
Trial Function Substitution
Trial functions such as radial basis functions (RBFs) are used to approximate the unknown, to discretize the domain, and to solve partial differential equations numerically, by considering the following approximation. Here, the α^(r′)_i are constants and the φ^(r′)(x_1, x_2) are radial basis functions centered at (η^(r′)_1, η^(r′)_2). Using equation (9) in equation (7), where N is the number of interpolation points, we consider the following substitution and rearrange the resulting linear system of equations for the constants β. The relations for w^(m′), υ^(m′), and Ψ^(r′m′) are given accordingly. With the use of equation (12), equation (9) takes a new form; from equation (14) a further relation follows, and using equation (16) in equation (10) introduces g^(n′). We note that w (at the interpolation points 1, 2, ..., N) is known as the radial basis trial function for partial differential equation (7). The multiquadric radial basis function is used, and we consider the trial function found in the work of Zhang et al. [19].
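The role of the trial-function substitution can be illustrated with the multiquadric RBF in isolation. The following numpy sketch performs plain multiquadric collocation of a smooth two-dimensional field on the unit square; the shape parameter, the grid of centers, and the test field are assumptions for illustration, not the paper's choices, and the β/Ψ bookkeeping of equations (9)-(18) is omitted.

```python
import numpy as np

def multiquadric(r, c=0.5):
    """Multiquadric radial basis function phi(r) = sqrt(r^2 + c^2)."""
    return np.sqrt(r ** 2 + c ** 2)

# Interpolation centers (eta_1, eta_2) on a small grid in the unit square.
g = np.linspace(0.0, 1.0, 6)
centers = np.array([(x1, x2) for x1 in g for x2 in g])       # N = 36 centers

def target(x1, x2):
    # Stand-in smooth field playing the role of w(x1, x2); not the paper's problem.
    return np.exp(0.5 * x1) * np.cos(0.7 * x2)

# Collocation: w(x) ~ sum_r alpha_r * phi(|x - eta_r|); solve Phi * alpha = w.
r_matrix = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
alpha = np.linalg.solve(multiquadric(r_matrix),
                        target(centers[:, 0], centers[:, 1]))

# Evaluate the RBF approximation at a point that is not a center.
x = np.array([0.37, 0.61])
approx = multiquadric(np.linalg.norm(centers - x, axis=1)) @ alpha
print(f"RBF: {approx:.6f}   exact: {target(*x):.6f}")
```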
Formation of Boundary Integral Equation
The partial differential equation given by equation (8) can be converted into a boundary integral equation, equation (22), where δ(η_1, η_2) = 1 when (η_1, η_2) lies in the interior of the domain D. Substituting equation (6) into equation (22), taking the midpoint of each boundary element C_M, and applying the approximations of equations (25) and (26) to equation (24) yields equation (28), which is the boundary integral approximation of partial differential equation (8).
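The constant-element, midpoint-type approximation used above can be seen in miniature on a problem whose answer is known: approximating a line integral over the boundary of the unit square by summing the integrand at element midpoints times element lengths. The sketch below is generic and is not tied to the kernels of equations (22)-(28).

```python
import numpy as np

def boundary_elements(m_per_side):
    """Split the boundary of the unit square into constant elements and
    return the element midpoints and lengths."""
    h = 1.0 / m_per_side
    s = (np.arange(m_per_side) + 0.5) * h
    mids = np.concatenate([
        np.column_stack([s, np.zeros_like(s)]),        # bottom edge
        np.column_stack([np.ones_like(s), s]),         # right edge
        np.column_stack([s[::-1], np.ones_like(s)]),   # top edge
        np.column_stack([np.zeros_like(s), s[::-1]]),  # left edge
    ])
    return mids, np.full(len(mids), h)

def boundary_integral(f, m_per_side=40):
    """Midpoint ('constant element') approximation of the line integral of f
    over the boundary of the unit square."""
    mids, lengths = boundary_elements(m_per_side)
    return np.sum(f(mids[:, 0], mids[:, 1]) * lengths)

# Sanity checks: the integral of 1 is the perimeter (4); the integral of
# x1^2 + x2^2 is 10/3, and the error shrinks as the boundary is refined.
print(boundary_integral(lambda x1, x2: np.ones_like(x1)))
print(boundary_integral(lambda x1, x2: x1 ** 2 + x2 ** 2, 10))
print(boundary_integral(lambda x1, x2: x1 ** 2 + x2 ** 2, 80))
```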
Mathematical Methodology
The boundary conditions given in equation (4) can be expressed in terms of algebraic equations, equation (31). A further set of M linear algebraic equations is obtained by using equation (12) and the second part of equation (25), giving equation (32). Thus, for the solution of the boundary value problem, we solve the system of 4M + 2N linear algebraic equations given by equations (18), (28), (29), and (32).
Numerical Application of Boundary Integral Technique (B.I.T)
The algorithm for the solution of specific problems by the B.I.T., using the trial basis function introduced in equation (21), is used in this numerical computation. PROBLEM 1: Consider the following specific values of Z_ij and g_i. Here, the domain of the problem is 0 < x_i < 1 (i = 1, 2). To solve the problem, we apply the stated boundary conditions; the analytic solution of the problem is known. Z(0)_ij denotes the average value of Z_ij over the interior nodes. Analytical and numerical values of w are compared graphically: Figure 1 shows the comparison for different numbers of boundary elements and interior collocation points. It is worth observing here that the accuracy of the solution improves with finer discretization of the boundary curve. The value of w at some specific points, such as (10, 4) and (40, 19), is compared with the analytic solution for w at selected interior points. The w used in equation (12) is differentiated with respect to x_i (i = 1, 2) in order to derive the first-order partial derivatives of w.
PROBLEM 2: Now consider the following values of Z_ij and g_i. Here, the domain of the problem is 0 < x_i < 1 (i = 1, 2). To solve the problem, we apply the stated boundary conditions. Here, again, Z(0)_ij denotes the average value of Z_ij over the interior nodes, and its value is the same as in equation (37). The analytic solution of the problem is known. Analytical and numerical values of w are compared graphically by taking x_2 = 0.50 and varying x_1 (x_1 = 0.1, 0.2, ..., 0.9). In the graph, the analytical solution for w at the fixed value x_2 = 0.50 is compared with the approximate value of w for different values of x_1. The graph shows that the analytical solution agrees well with the numerical values (Figure 2).
Conclusion
The numerical technique used in this article requires only the boundary to be discretized for the solution of 2-D steady-state mass diffusion or heat conduction using trial function approximation. A distinctive feature is that it includes not only boundary collocation points but also interior points distributed in an orderly way. The accuracy and validity of the method are verified by applying it to problems with known solutions. The solution obtained numerically agrees well with the known results.
It is also noted that the boundary integral equation obtained and used in this method is discretized using constant-value elements, which keeps the error as small as desired. A reduction in error is also observed on increasing the number of boundary elements and related interior collocation points. The selected method, based on trial functions and boundary integral approximation, provides an effective and reliable alternative to existing mathematical techniques for the solution of heat and mass conduction in anisotropic media. The possibility of further improving the work to solve problems related to anisotropic media, as performed earlier by Fahmy [20], Marin and Lesnic [21], Baron [22], Aksoy and Senocak [23], and Dobroskok and Linkov [24], is also part of this paper.
Data Availability
The data used to support the findings of this study are included within the article.
Algorithms of Cause-and-effect Approach to Increase Service Net Efficiency
Service nets distribute goods and services, which is why their improvement is one of the important tasks of any production chain. There are many models related to this sphere; however, in many of them it is possible to see some weaknesses. At the same time, since the task mentioned is complicated and large-scale, a systematic approach should be applied, modified to take into consideration the presence in such systems of objects, processes, events and phenomena of various nature and origin. As a possible approach, the so-called cause-and-effect approach is considered, which provides a universal description of complex systems and the possibility of decision making in undefined or under-defined situations. In the article below this approach is considered, and informational logic diagrams and algorithms to increase service net efficiency are presented. Gas stations were taken as the example and sphere of practical application, and the results are discussed.
Introduction
Service nets are important parts of the present economy [1]. The statement is also true for petroleum supply [2,3], which may be considered a good example. To develop it, the existing models and optimization methods should be improved [4-6] to make management and control more efficient, the latter being considered one of the main tasks of the present century [7].
A service net (company, structure, etc.), like any complex system, works under the requirements of the upper, goal-orienting and parent systems, follows the demands of consumers, takes into consideration the possibilities of competitors and suppliers, fulfills laws, makes innovations and so on. These demands, influences and restrictions (further, factors) on/to/from a system and its surroundings may be formalized as elements of mathematical sets, which makes it possible to create in a parameter/factor space some feasible regions, goal areas and so on.
Under the process approach, if one considers gas stations as an example of service nets, one usually distinguishes the stations themselves (which serve consumers), station complexes or gas station nets (which provide the work of the stations in a region), and companies as legal entities. The investigation object is a large-scale, complex, territorially distributed, hierarchical human-machine system [8], for which the tasks of increasing efficiency are problems of multi-criteria optimization [9]. Existing models and methods are frequently not enough to resolve modern practical tasks, since they are not systemized. In addition, mainly state-level or object-level structures are considered, without due attention to regional service nets (station complexes); transportation (client) flows are modeled simplistically, which is inadequate; modern petroleum equipment such as OPTs (Outdoor Payment Terminals) is not considered; etc.
The task is to bring the efficiency or key performance indicator (KPI) K to its maximum at given and prospective factors of the system and surroundings G during ∆t, by developing structures S = (X, U, GR) and choosing control actions (C, A, X, U, R), where X is the set of control means, U the relations between them, GR the structure graphs, C the control functions, A the control algorithms, and R the variant of a control structure. In this formulation the task has not been resolved in general because of the large scale, the diversity of components, and the non-linearity of their interactions, which demands a new approach.
Method: Causal-and-effect Approach to Increase Efficiency of Complex Systems
The most important regularity of complex system behavior is historicity, or development in time [10,11]. These ideas are formulated in all fields of knowledge and practical activity [12]. At the same time, the relative simplicity and observability of cause-and-effect (causal) interactions is the reason for their particular «refusal», since, for example, «in accordance with/because of» does not exactly follow from «the earlier». This situation necessitated further research, which has been done in some spheres [13].
In general, it is possible to consider that modern scientific knowledge is based on the determination of causal interactions between objects, processes, events and phenomena (further, objects). They unify visions from the intuitive through the scientific to the philosophical, permitting formal logical description in deep investigation of the functional spheres, taking into consideration the origin of events, their vicinity and development in time, and providing the matching of knowledge that comes from various spheres of theory and practical activity.
Every object of a system, every process realized, every event as a change of state, and every phenomenon of the surroundings has its reason of origin and development, which connects it with other objects. Goals, as future states of a system, are achieved under conditions determined by the mentioned factors (condition 1). Results (effects) of causal interaction change a system and its surroundings, which produces some new conditions (condition 2). For analytical description, the cause-and-effect cell operation algebra is used, similar to the finite-state machine one [6]. The model of the cell (general, at the first level of decomposition) is built using: complements (extra- and interpolation of parameters between parts of the system with the most trustworthy data); composition (development of the whole-system model by following flowcharts of processes of known systems and models, the so-called base or etalon models); and substitution (synthesis of structures optimal with respect to the criteria).
To increase efficiency, the model of the elementary causal cell structure mentioned above was changed. Achieving a simple goal by means of an elementary control task solution is modeled by the elementary cause-and-effect cell. A part of the system at state S_A with KPI K, which is to achieve goals under the factors of surroundings G_A and control C_A by converting resources W_A, is brought by means of functions and algorithms contained in the kernel of a causal cell to the state S_B, with conditions G_B and control C_B corrected according to the goal-achieving degree (G_A − G_B), a new K* («*» denotes after interaction) and an output resource flow W_B. It is possible to express K, G and S through each other. For non-elementary cases, cause-and-effect (or reason-and-consequence, RC) complexes are created for the part or the whole of the system. Causal components interact according to the flowcharts of the base models using operations of the OC set. Therefore, the new model of the RC-cell on the first level of decomposition is presented in matrix view, where A denotes before and B after interaction. The solution of tasks in a known situation is achieved by composing general, known, or theoretically and experimentally proved RC-cells, decomposing their components to a level understandable by decision-makers using base models, checking performance in practice, and applying the necessary feedback and correction.
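To make the elementary cell concrete, the toy Python sketch below encodes one possible reading of it: a kernel function that maps (S_A, G_A, C_A, W_A) to (S_B, G_B, C_B, W_B) and an updated KPI K*. The kernel, the numbers, and the scalar state are all invented for illustration; they are not the authors' cell algebra or base models.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Kernel = Callable[[float, float, float, float],
                  Tuple[float, float, float, float, float]]

@dataclass
class RCCell:
    """Toy cause-and-effect (RC) cell: kernel(S_A, G_A, C_A, W_A) ->
    (S_B, G_B, C_B, W_B, K*)."""
    kernel: Kernel

    def step(self, s: float, g: float, c: float, w: float):
        return self.kernel(s, g, c, w)

def example_kernel(s, g, c, w):
    goal = 10.0
    s_new = s + c * w - 0.1 * g        # resources converted into a state change
    g_new = 0.95 * g                   # surroundings relax slightly
    c_new = 0.2 * (goal - s_new)       # control corrected by the goal gap
    w_new = 0.8 * w                    # part of the resource flow is passed on
    kpi = 1.0 - abs(goal - s_new) / goal
    return s_new, g_new, c_new, w_new, kpi

cell = RCCell(example_kernel)
s, g, c, w = 2.0, 1.0, 0.5, 4.0
for _ in range(5):
    s, g, c, w, k = cell.step(s, g, c, w)
    print(f"S={s:6.2f}  G={g:5.2f}  C={c:6.2f}  W={w:5.2f}  K={k:5.2f}")
```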
If there are unknown situations, or when there is not enough information about the system, the surroundings and their interaction, some parts with the most trustworthy data are determined. For them, the procedures mentioned for known situations are realized, general RC-models of the whole system are developed, and optimization tasks are resolved considering known models with the accuracy of the data available. Results are improved step by step while the system is developed and/or new data are obtained. Information about the results is put into a Petrol Data-Base (PDB). Since there are no restrictions on the types of functions and algorithms of the kernels, it is possible to describe interactions of objects of different nature.
Results (Algorithms and Diagrams) to Improve Service Station Net
The causal formulation of the optimal control parameters task is as follows: n_inp is the input transport flow, n_out the output flow, G_uv ∈ G the factors of the system and surroundings (u = 1..U the type, v = 1..V the kind), int the quasi-stationary time intervals where linear dependence or constancy is adequate and possible, α the statistical significance, and K_station the efficiency indicator to be increased, K_station ↑. The generalized algorithm to determine efficient parameters of gas stations at given factors of the surroundings is presented in Figure 1, where R is the factor/parameter space, R* the feasible region, X and Y the data of comparable objects, av the average, and ∆K the error. The «End» operator is dotted since the algorithm is cyclic, owing to the necessity of step-by-step improvement of the system. The task of synthesizing the structure of complex multi-circuit systems, optimal on K_petrol at given G, where K_petrol is brought to its maximum by developing structures and selecting control actions, is formulated in causal terms as follows: w_1..6 are resources (1 staff, 2 technology, 3 energy, 4 knowledge, 5 finances, 6 materials), P the set of processes, and GR, GR_1-GR_4 the structure graphs of, correspondingly, the infra-system (non-active and needing control), control (1), decision-making (2), organization-technical (3) and information (4) systems. It is supposed [14] that these models are enough to describe a whole system. A system structure is synthesized according to the following informational logic diagram, presented in written form.
On the I-st stage there is an analysis of the system in its surrounding conditions. a) Designation and specification of the goals as components of the X vector given by decision-makers; this depends on the factors of the surroundings G, the flow chart of processes S, the control means characteristics X and the relations between them U, i.e. X(G, S, X, U). Quantitatively they are determined by data of real working objects. b) Specification of the system in its surroundings, adding to it some controllable components from whose data decision-makers can evaluate goal achievability. c) Determination of the boundaries between controllable and control systems according to the activity (deliberate changing of information) of components; the non-active or infra-system does not have this property and needs control. d) Determination of the areas E_ef (see Figure 1).
Figure 2. Typical relations between goals, processes and objects in gas station nets.
On the II-nd stage the structure of the control system is formed. a) Formation of the process flow charts and object structures according to the known models accumulated in the PDB. b) Determination of the permitted dominative and sequence relations between objects N, processes P and goals G using the results of the sphere analysis [15] (Figure 2). In Figure 2, P_1 is petroleum product supply, P_2 operational activity, P_3 sales and consumer service, P_4 accounting and reporting, P_5 maintenance and repair, P_6 staff training, P_7 security, P_8 energy provision, P_9 transport, P_10 information service, P_11 purpose-oriented direction, P_12 procurement, P_13 analysis, P_14 decision making, P_15 control; X_pq are control means (p = 1..P the type, q = 1..Q the level); N_1..r are infra-system objects non-active in this view. c) Formation of the controllable system structure as interconnected resource-converting objects according to the process structure of Figure 2. On the III-rd stage the control system structure is formed. a) Specification of the control time periods H_k (k = 1..K), control functions C_i (i = 1..I) and control means X_pq according to the PDB models mentioned. b) Formation of the sets of elementary control tasks and of control circuits as Cartesian products. c) Creation of the control system structure model, or Ω-synthesis, and rejection of circuits with low efficiency, without the necessary automation level, or meaningless ones. On the IV-th stage there is a synthesis of the control system structure by circuit convolutions. Some better X_pq may be added, and convolutions are done up to the limits of their properties, considering efficiency and level of automation. a) C-convolution (synthesis) as integration of control functions along control circuits and designation of more C_i to a smaller number of X_pq. b) P-convolution as integration of control functions belonging to various control circuits of processes P_j by designating more C_i to be performed by the same X_pq. c) H-convolution as integration of control functions C_i over various time periods by a lower number of X_pq. On the V-th stage one looks for optimal variants of the system structure.
d) Formation of the control circuit set (Ω′, synthesized) and determination of K_petrol. e) Designation as optimal of those structures whose KPIs are closest to E_ef; if this is not achieved, one goes to Stage I. f) Formation of the organization-and-technical system structures by bringing in new and/or improving existing control means according to the control system structure (see p. 5.2) and the requirements on control means X_pq. g) Formation of the information system structure by adding data arrays and transmission channels to the control and organizational-technical system structures already formed (see pp. 5.2 and 5.3). h) Formation of the decision-making structure by assigning, to the components of the organization-and-technical and information system structures, types of decision-making acts using the PDB mentioned [15]. i) Proposals to improve the model and algorithm, and transition to Stage II.
The task of forming structures and choosing optimal control actions for service nets using the system cause-and-effect approach is resolved by the following informational logic diagram (Figure 3). Figure 3. Information-logic diagram to form the structure and choose control actions for service (gas) nets, optimal on the given criteria, using the system reason-and-effect approach.
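The written diagrams above leave the concrete KPI, factors and feasible region to the PDB and to decision-makers. As a purely illustrative stand-in (every number and the KPI formula below are invented assumptions, not outputs of the Methodology), the following sketch enumerates a tiny feasible region of station parameters and keeps the variant with the highest KPI, mirroring the evaluate-compare-iterate loop of Figure 1.

```python
from itertools import product

# Hypothetical feasible ranges for one station: dispensers, share of dispensers
# with outdoor payment terminals (OPT), and staffed cashiers.
DISPENSERS = range(1, 5)
OPT_SHARE = (0.0, 0.5, 1.0)
CASHIERS = range(1, 4)

def kpi(dispensers, opt_share, cashiers, arrivals_per_hour=60.0):
    """Invented stand-in KPI: served share of the client flow minus a cost
    penalty.  The real K and factors G of the Methodology are not public."""
    fueling_rate = dispensers * 12.0 * (1.0 + 0.10 * opt_share)   # clients/hour
    checkout_rate = cashiers * 30.0 + dispensers * 12.0 * opt_share
    served = min(arrivals_per_hour, fueling_rate, checkout_rate)
    cost = 1.5 * dispensers + 0.5 * dispensers * opt_share + 1.0 * cashiers
    return served / arrivals_per_hour - 0.02 * cost

best = max(product(DISPENSERS, OPT_SHARE, CASHIERS), key=lambda v: kpi(*v))
print("best (dispensers, OPT share, cashiers):", best, " K =", round(kpi(*best), 3))
```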
Discussion
As a result of applying the proposed causal approach, a complex of inter-related informational logic diagrams and algorithms to improve service nets was developed, as part of the methodology of rational development and continuous improvement of service station nets and efficient automatic control of processes and objects in the systems (the Methodology). Basic components of the Methodology are presented in Figure 4.
On the basis of the Methodology, some practically important tasks were solved [15].
In particular, optimal gas station parameters were found for various types of street-and-road nets. It was shown that, on the criterion of minimum outage of clients and service channels, the optimal structure (if one considers fuel sales only) is two dispensers with all of the fuels sold on the station. Outdoor payment terminals built into dispensers provide at least 10% higher productivity. During 2000-2014 the model was applied to more than 150 objects.
For gas station nets it was proved that up to 80% of the modern and prospective flows of clients may be served by a smaller quantity of stations. For various types of street-and-road nets, characteristics were found (quantity of cross-roads, distances between neighboring stations and their quantity) that provide minimal redistribution of clients between objects of the same net, for small (up to 500 thousand residents) and medium (up to 1.5 million residents) cities and towns, for a non-dominating petroleum supply company operating fewer than 25% of a region's stations, and the same on highways. Also, in some regions of middle Russia, optimal structures to serve card clients were developed, which increased sales volume sixfold. In these regions and some CIS countries, technical maintenance systems were changed, which provided cost reductions of 3-15% with better service. Moreover, efficient system structures were prepared for serving clients near pumps, security, automation, procurement counter-actions, capital construction, etc. Finally, modeling of processes on stations was performed, which permitted increasing staff training skills in newly built training centers in the cities of Saratov and Volgograd.
Conclusion
Service nets are important for an economy and require continuous improvement.
A cause-and-effect approach was suggested, and new informational logic diagrams and algorithms were developed. They are characterized by co-synthesis of the controllable and control systems, decision-making in the case of insufficiently trustworthy data about systems and surroundings, the possibility of matching objects, processes, events and phenomena of various nature, and so on.
The adequacy of the methodology is confirmed by the proximity of the known and developed models on similar feasible regions, the reliability of the results by statistical data from more than 15 years of observation, and the validity of the conclusions by results of approbation and successful multiple applications. The above permits its use for other service nets and for complex systems in general.
Conflict of Interest Statement
The authors do not have any possible conflicts of interest.
1/2 order subharmonic waves of two cavitation bubbles
In this work, the 1/2 order subharmonic wave of two coupled cavitation bubbles is investigated numerically via Fourier spectrum analysis. By analyzing the dynamics of the bubbles, we find that the mutual interaction between bubbles can affect the appearance of the 1/2 order subharmonic. The results of the parameter dependence show that the intensity of the 1/2 order subharmonic can be promoted or inhibited as the mutual interaction increases. The higher the driving amplitude or the smaller the distance between bubbles, the stronger the mutual interaction, and the greater the promotion or suppression of the 1/2 order subharmonic. Moreover, while the 1/2 order subharmonic occurs, the energy of the bubble alternates between two different peaks, and the temperature inside the bubble shows a similar fluctuation when the bubble collapses. This qualitative analysis suggests that the bubble dynamics in the multi-bubble case is complex. Understanding the generation of subharmonics in bubble dynamics is of great significance for the effective application of cavitation bubbles.
Introduction
Since the pioneering work of Lord Rayleigh [1], cavitation, as a very typical hydrodynamic phenomenon, has received a great amount of attention. A cavitation bubble has complex behavior; meanwhile, it is also very difficult to observe directly in the engineering field because of its small size and short life cycle. However, as an interesting and significant issue, acoustic cavitation generated by an ultrasonic field has been extensively studied both experimentally and numerically, and many articles have been published over the years.
The study of cavitation bubbles mainly has two aspects: the single bubble and the multi-bubble. Obviously, in addition to the influence of the inherent properties of the liquid, such as viscosity, in the single-bubble case the dynamics of the bubble is only affected by the driving sound field. In Refs. [2-8], the researchers investigated the stable dynamics of a single bubble in detail and found that the peak of the pulsation is constant throughout all driving sound cycles. Furthermore, in investigations of sonoluminescence, the brightness period of the bubble at the time of collapse is also consistent with the driving sound period. However, under the same circumstances, in the multi-bubble case the dynamics of each bubble becomes much more complex, being affected not only by the driving sound field but also by the sound pressure radiated from the surrounding bubbles. Considering the complex interaction between many cavitation bubbles, the nonlinear oscillations of multi-bubble systems have been analyzed by many predecessors [9-21]. For instance, in Ref. [21], the mutual interaction between bubbles was carefully investigated, and the cavitation noise emitted under different sonication conditions was recorded to study the dynamical behavior of bubbles. The corresponding results suggest that the oscillations of bubbles can be severely influenced by the dispersing state of the bubbles, and that the nonlinear features of the dynamics of cavitation bubbles, imposed by the mutual interaction between bubbles, gradually develop as the dispersing height decreases.
As we all know, despite the numerous studies that shed light on multi-bubble dynamics, due to the complexity of the system the bubble behavior is not yet fully understood, including period doubling, chaos, and so on. Fortunately, up to now, some researchers have carried out beneficial exploration in this respect. Like the logistic and the Lorenz systems, as one of the typical nonlinear systems, the cavitation bubble is indeed characterized by the complex behavior of bifurcation and chaos. The author of Ref. [22] investigated the chaotic behavior of bubble oscillations by using many methods, such as the Lyapunov exponent, the Poincaré map and the phase diagram. Similarly, in the research of Behnia and co-workers, the periodic and chaotic behavior of the bubble was investigated by controlling specific ranges of parameters [23], and the suppression of chaotic oscillations of a spherical cavitation bubble was probed by applying a periodic perturbation [24]. Sojahrood et al. used a comprehensive bifurcation analysis method to study the nonlinear radial oscillations of the bubble oscillator sonicated at different resonance frequencies [25].
From regular motion to chaos, an interesting transition, period doubling, which is concomitant with the generation of the 1/2 order subharmonic (SH), has always played a key role in dynamical systems. For stable cavitation, the 1/2 order SH has been used as an indicator in many areas, such as monitoring treatments [26] and BBB opening [27]. It is known that, in the single-bubble case, as the driving pressure increases the nonlinear response can become chaotic, and the bubble radius can grow beyond a limit that may finally lead to bubble destruction. However, the mechanism of period doubling still needs to be further investigated, especially in the case of multiple bubbles, where, due to the interaction between bubbles, this phenomenon becomes more complex. Based on this consideration, this article studies the generation of the 1/2 order SH in the multi-bubble case. In fact, the larger the number of bubbles in the liquid, the more difficult the study of bubble dynamics is. In this investigation we use the two-bubble model, widely applied in the literature, where it is recognized as the simplest representation of the multi-bubble case, to study the relationship between the occurrence of the 1/2 order SH and the interaction qualitatively, without considering the translation of the bubbles.
The article is organized as follows. In Section 2, we present the sketch of two coupled spherical bubbles and the corresponding equations used to investigate the bubble dynamics. In Section 3, the numerical calculations for the two-bubble case are shown, investigating the generation of the 1/2 order SH via Fourier spectrum analysis. In order to help understand the effect of the mutual interaction between bubbles on the intensity of the 1/2 order SH, the parametric dependencies of the 1/2 order SH are investigated in Section 4 for different driving amplitudes and different distances between bubbles. In Section 5, the temperature inside the cavitation bubble and the energy of the bubble are discussed while the 1/2 order SH occurs. Section 6 gives the conclusion and discussion.
Theoretical model
Fig. 1 gives a sketch of two cavitation bubbles, 1 and 2, labeled B_1 and B_2 respectively. B_2 can be seen as a neighbor of B_1, and vice versa. In this research, the bubbles are subjected to a driving ultrasound field from a conventional single-frequency source, with amplitude p_a and frequency f. The dynamics of the i-th spherical bubble, with instantaneous radius R_i in a liquid, is usually described by the Keller-Miksis equation [19,15,21], a modified Rayleigh-Plesset model, where the over-dots denote derivatives with respect to time t. The parameters ρ and c are the density and sound speed of the liquid, respectively. The pressure P_i near the i-th bubble wall is given accordingly, with σ and μ being the surface tension and viscosity of the liquid, and P_0 the hydrostatic pressure. For the i-th bubble, the symbols R_0i and h_i represent the equilibrium radius and the hard-core radius of the van der Waals gas (h_i = R_0i/8.86 for argon [28], which is used in the following investigation), respectively. The polytropic exponent γ = 5/3 describes the thermal process of the adiabatic argon gas inside the bubble.
In Eq. (2), P represents the radiative pressure [29] from the neighbor of the i-th bubble. For the single-bubble case, this term should be ignored.
where R_{3−i} is the radius of the neighbor of the i-th bubble. The parameter D is the distance between the two bubble centers.
In the present investigation, all bubbles are assumed to retain their sphericity, and the vapor pressure and mass exchange are not taken into account. Because the pulsation of the bubbles is investigated at fixed positions, the translational motion of the bubbles induced by the secondary Bjerknes force between them is not considered.
During the pulsation, the temperature inside the bubble is constantly changing. The temperature inside the i-th bubble can be calculated from [30], where N_t is the number of molecules in the bubble and k represents the Boltzmann constant. In this paper, Eq. (2) is integrated to investigate the dynamics of the two bubbles by using the fourth-order Runge-Kutta routine in the standard MATLAB adaptive solver ode45, which has a fifth-order error estimation. In the investigation, we assume that pure water is the host liquid, and the calculations are carried out using the constant values c = 1.50 × 10^3 m/s, ρ = 1.00 × 10^3 kg/m^3, σ = 7.28 × 10^−2 N/m, and μ = 1.002 × 10^−3 kg/(m·s). Unless otherwise stated, the driving amplitude p_a of the sinusoidal sound is set to 1.35 atm (1 atm ≈ 1.01 × 10^5 Pa). To avoid transient oscillations and obtain a stable resolution in the frequency spectrum, for each simulation parameter all analyses are performed within the last 100 cycles of a 1000-cycle acoustic pulse.
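The numerical setup described here can be illustrated with a deliberately simplified stand-in: the coupled incompressible (Rayleigh-Plesset-type) equations with the mutual radiation term (ρ/D) d/dt(R_j² dR_j/dt), rather than the full Keller-Miksis model of Eq. (2) with its compressibility corrections. The sketch uses Python and scipy's RK45 in place of MATLAB's ode45; the driving-phase convention, solver tolerances and sampling grid are assumptions, so quantitative results will differ from the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Liquid and gas parameters taken from the text (pure water, argon gas).
RHO = 1.00e3          # liquid density [kg/m^3]
SIGMA = 7.28e-2       # surface tension [N/m]
MU = 1.002e-3         # viscosity [Pa s]
P0 = 1.01e5           # hydrostatic pressure [Pa]
GAMMA = 5.0 / 3.0     # polytropic exponent of argon
# (the sound speed c is not used: compressibility is neglected in this sketch)

def simulate_two_bubbles(pa=1.35 * 1.01e5, f=43e3, R0=(10e-6, 10e-6),
                         D=150e-6, n_cycles=1000):
    """Radial dynamics R1(t), R2(t) of two coupled bubbles (simplified model)."""
    R0 = np.asarray(R0)
    h = R0 / 8.86                               # van der Waals hard-core radii

    def wall_pressure(R, Rdot, t):
        gas = (P0 + 2 * SIGMA / R0) * ((R0**3 - h**3) / (R**3 - h**3))**GAMMA
        return (gas - 2 * SIGMA / R - 4 * MU * Rdot / R
                - P0 - pa * np.sin(2 * np.pi * f * t))

    def rhs(t, y):
        R, Rdot = y[:2], y[2:]
        P = wall_pressure(R, Rdot, t)
        # rho*(R_i R''_i + 1.5 R'_i^2) = P_i - (rho/D)*(2 R_j R'_j^2 + R_j^2 R''_j)
        # -> solve the 2x2 linear system for the two accelerations.
        A = np.array([[RHO * R[0], RHO * R[1]**2 / D],
                      [RHO * R[0]**2 / D, RHO * R[1]]])
        b = np.array([P[0] - 1.5 * RHO * Rdot[0]**2 - 2 * RHO * R[1] * Rdot[1]**2 / D,
                      P[1] - 1.5 * RHO * Rdot[1]**2 - 2 * RHO * R[0] * Rdot[0]**2 / D])
        Rddot = np.linalg.solve(A, b)
        return np.concatenate([Rdot, Rddot])

    T = 1.0 / f
    t_eval = np.linspace((n_cycles - 100) * T, n_cycles * T, 20000)  # last 100 cycles
    sol = solve_ivp(rhs, (0.0, n_cycles * T), np.concatenate([R0, [0.0, 0.0]]),
                    t_eval=t_eval, method="RK45", rtol=1e-8, atol=1e-12,
                    max_step=T / 50)
    return sol.t, sol.y[0], sol.y[1]

if __name__ == "__main__":
    t, R1, R2 = simulate_two_bubbles(n_cycles=200)   # shorter run for a quick check
    print("max R1 over the sampled window [um]:", 1e6 * R1.max())
```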
As a valuable method, Fourier spectrum analysis is usually used to analyze the dynamics of nonlinear systems, since it can effectively present the relation between changes in the dynamics of a system and a wide range of control parameters. In this work, we use it to present, qualitatively, the evolution of the 1/2 order SH as affected by the mutual interaction between bubbles.
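As a concrete counterpart to this description, the short sketch below estimates the amplitude of the radius spectrum at the fundamental f and at f/2 from a uniformly sampled radius-time record. It is demonstrated on a synthetic period-doubled signal; in practice the input would be the last 100 driving cycles of R(t) from an integration such as the sketch above. Windowing and bin selection are implementation choices, not the paper's.

```python
import numpy as np

def spectrum_amplitude(t, R, freq):
    """Amplitude of the radius spectrum at a given frequency, from a
    Hann-windowed FFT of the uniformly sampled, mean-removed signal."""
    dt = t[1] - t[0]
    x = (R - R.mean()) * np.hanning(len(R))
    spec = np.abs(np.fft.rfft(x)) / len(R)
    freqs = np.fft.rfftfreq(len(R), dt)
    return spec[np.argmin(np.abs(freqs - freq))]

# Synthetic demo: a 43 kHz 'pulsation' with an added f/2 component.
f = 43e3
t = np.linspace(0.0, 100 / f, 20000, endpoint=False)
R = 10e-6 + 2e-6 * np.sin(2 * np.pi * f * t) + 0.7e-6 * np.sin(np.pi * f * t)
print("amplitude at f   :", spectrum_amplitude(t, R, f))
print("amplitude at f/2 :", spectrum_amplitude(t, R, f / 2))
```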
The 1/2 order SH wave of two cavitation bubbles
The dynamic behavior of a bubble is known to be affected by its coupled bubble, which is reflected in the interaction between bubbles. In this section, we investigate the generation of the 1/2 order SH, f/2, of two coupled cavitation bubbles. In this investigation, the spectrum structure contains only the 1/2 order SH and its harmonics, besides the fundamental frequency component.
Here, the distance between the two bubble centers, D, is set to 150 μm, and the two bubbles are exposed to a 43 kHz external driving pressure field. Fig. 2(a) plots the pulsation of bubble 1 with respect to the acoustic cycles when the ambient radii of bubbles 1 and 2 are R_0i = 10 μm (i = 1, 2). It is easy to see that, as time goes on, two different peaks alternate on the pulsation curve. This phenomenon indicates that, besides the fundamental frequency f, an additional frequency component is produced. In Fig. 2(b), the spectrum corresponding to the radius-time variation of Fig. 2(a) is given. The generation of a component at half the driving frequency, namely the f/2 SH, is observed. That is to say, period doubling occurs during the bubble pulsation. Because of the identical equilibrium radii, the pulsation of bubble 2 is similar to that of bubble 1 and is not given here. It is well known that, when a bubble pulsates in a sound field, it produces a radiation pressure and can be regarded as a secondary sound source. When the distance between two bubbles is small, their radiation pressures can influence each other's dynamics. Therefore, we believe that the appearance of the SH is related to the radiated sound pressure between bubbles. In order to illustrate this view, the pulsation curve for the single-bubble case is also given here and the corresponding spectrum is calculated. From Fig. 3(a), we can see that the bubble pulsates in a stable cavitation state, in which the bubble expands during the rarefaction phase and collapses in the compression phase of the ultrasound field, and the peak value of the pulsation is the same in each driving cycle. This shows that, in the single-bubble case, no SH component is created, which can be understood clearly from the corresponding frequency spectrum in Fig. 3(b), which contains only the fundamental frequency and its superharmonics. As one can imagine, for the two-bubble case, on further increasing the distance between the bubble centers, the pressure acting from one bubble on the other becomes small, and the SH component becomes weaker and weaker and finally disappears. Therefore, it is quite reasonable that the mutual interaction between bubbles, which results from the radiation pressure, is the cause of the SH.
Here, to further examine the period-doubling dynamics in Fig. 2(a), Fig. 2(c) shows the phase portrait of bubble 1 for the two-bubble case. It is obvious that the phase diagram consists of a closed trajectory that passes through the two different pulsation peaks of adjacent periods, which is different from that in Fig. 3(c) for the single-bubble case.
Of course, the interaction between bubbles does not always promote the generation of the f/2 SH; sometimes it plays a role in suppressing this SH. For example, when the driving frequency is set to 39 kHz and the other simulation parameters remain unchanged, the bubble pulsation for the two-bubble case has no f/2 SH component (see Fig. 4(a)). On the contrary, the pulsation of the bubble for the single-bubble case contains the SH wave (see Fig. 4(b)). The corresponding spectrum diagrams of Fig. 4(a) and (b) are not given here. Therefore, we can conclude that the interaction between bubbles has two effects: it sometimes promotes the production of the f/2 SH, and sometimes inhibits its appearance.
The pulsation curves on which the f/2 SH occurs are obviously characterized by period doubling, which is likely to be a sign of chaos. For instance, when the single bubble above is exposed to a 35 kHz driving pressure field, the radius-time curve behaves in a totally chaotic way and displays no periodic characteristic at all, which is reflected in the large number of points scattered on the Poincaré section (see Fig. 5). Similarly, for the two-bubble case, the bubble pulsation can also become chaotic. Fig. 6 shows the Poincaré section for bubble 1 when the driving frequency is set to 46 kHz. It is clear that the bubble's pulsation exhibits no periodicity.
Effect of amplitude p a on f/2 SH
Now, we discuss the dependence of the f/2 SH intensity on the driving amplitude p_a for the two-bubble case. In this subsection, the equilibrium radii of the two bubbles used in the simulations are the same as those of Fig. 2(a). The distance between the two bubble centers remains constant, 150 μm, for the different driving amplitudes. Fig. 7 gives the relation between the intensity of the f/2 SH and the driving amplitude p_a for two different driving frequencies, 39 kHz and 43 kHz. It can be seen qualitatively that the f/2 SH is enhanced with increasing driving amplitude when the bubbles are exposed to a 43 kHz external driving pressure field. Conversely, it can be inferred that, below a certain small amplitude, this SH component will never be generated. However, when the driving frequency is 39 kHz, the intensity of the 1/2 order SH decreases with increasing driving amplitude (see the red solid circles in Fig. 7), which is the opposite of the case at 43 kHz. This result shows that the mutual interaction between bubbles can also inhibit the generation of the f/2 SH. As a matter of fact, as the driving amplitude p_a changes, the strength of the mutual interaction between bubbles can be known qualitatively from the variation of the radiative pressure. For bubble 1, Fig. 8 plots the change of the average peak of the radiative pressure P^(1)_rad with p_a at the two driving frequencies 39 kHz and 43 kHz. It can be seen that the radiative pressure gradually becomes stronger with increasing p_a. Therefore, we believe that both the promotion and the suppression of the intensity of the 1/2 order SH in the two-bubble case are related to the increasing mutual interaction. As we all know, the pulsation of a cavitation bubble can be changed by using different driving frequencies. Fig. 9(a) and (b) show the radius-time variations of bubble 1 for the two-bubble case with driving frequencies 42 kHz and 41.4 kHz, respectively. It is clear from Figs. 2(a), 4(a), 9(a) and (b) that a small change in the driving frequency causes a large difference in the cavitation bubble pulsation, which is consistent with the results in Fig. 7: the two frequencies in Fig. 7 are not very different, but the intensities of the f/2 SH are very different.
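The amplitude dependence just described amounts to a parameter sweep: integrate the coupled model at each driving amplitude, then read off the f/2 component from the spectrum of the last cycles. A minimal version, reusing simulate_two_bubbles() and spectrum_amplitude() from the two sketches above (and therefore inheriting their simplifications), could look like this; the amplitude grid is arbitrary and the run is slow.

```python
import numpy as np

f = 43e3                                     # driving frequency [Hz]
for pa_atm in (1.20, 1.25, 1.30, 1.35):      # driving amplitudes to sweep [atm]
    t, R1, _ = simulate_two_bubbles(pa=pa_atm * 1.01e5, f=f,
                                    D=150e-6, n_cycles=300)
    sh = spectrum_amplitude(t, R1, f / 2)    # f/2 subharmonic amplitude
    fund = spectrum_amplitude(t, R1, f)      # fundamental amplitude
    print(f"pa = {pa_atm:.2f} atm   |f/2| = {sh:.3e} m   |f/2|/|f| = {sh/fund:.3f}")
```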
Effect of distance D on f/2 SH
Here, we investigate the change in intensity of the f/2 SH by adjusting the distance between the bubble centers, D, for the two-bubble case.
The equilibrium radii of bubbles 1 and 2 are both 10 μm. The driving amplitude p_a is the same as in Fig. 2(a). When the driving frequency f is set to 43 kHz, we can see from Fig. 10 that the intensity of the f/2 SH gradually decreases as the distance between the bubble centers increases. In fact, for one bubble, as the distance D becomes large, the radiative pressure exerted on it by the other bubble becomes weak, which can be understood from Eq. (5). Therefore, the mutual interaction between bubbles shows a decreasing trend. This result reflects that the intensity of the 1/2 order SH can be enhanced with increasing mutual interaction between bubbles. However, when the two bubbles are exposed to a 39 kHz external driving pressure field, we find that the intensity of the 1/2 order SH increases as the distance between the bubbles increases (see Fig. 11). This shows that the strong interaction when the distance between bubbles is small can inhibit the generation of the f/2 SH, which is consistent with one of the conclusions of the previous subsection. The dependence of the radiative pressure on the distance is not given here. The results in this subsection qualitatively verify again that the interaction between bubbles can enhance or inhibit the generation of the f/2 SH.
Temperature inside bubble
In this subsection, the change of the temperature inside the bubble is investigated qualitatively while the f/2 SH component is generated. The calculation parameters are the same as those in Fig. 2(a). We assume that the temperature of the host liquid, pure water, is T_0 = 293 K. Fig. 12 presents the change of the instantaneous temperature in the bubble for the two-bubble case. It is clear that, as the acoustic cycles proceed, the maximum collapse temperature inside the bubble alternates between two different peak values, similarly to the pulsation curve in Fig. 2(a). This is easily understood: because of the mutual interaction between bubbles 1 and 2, the bubble pulsation in each acoustic cycle is changed, which is marked by the generation of the f/2 SH. As we all know, when the pulsation is stronger, the temperature is higher in the compression phase of the ultrasound field. Therefore, we believe that, when the f/2 SH is generated in the pulsation of the bubble, the different temperatures would lead to a difference in light brightness between two consecutive acoustic cycles.
Energy of cavitation bubble
It is well known that, when a cavitation bubble pulsates in an acoustic field, its volume undergoes periodic expansion and contraction. As a bubble expands from the equilibrium radius to the maximum one, the surrounding liquid is displaced. For the two-bubble case, when the vapor pressure inside bubble 1 is not taken into account, the work done on the liquid displaced by the expansion of bubble 1, E_b1, can be calculated using Eq. (7) [31]; this quantity is also usually termed the energy of cavitation bubble 1.
With the simulation parameters the same as those in Fig. 2(a), Fig. 13 shows the variation of the energy of bubble 1 over time while the f/2 SH component is generated. Obviously, the energy of bubble 1 alternates between two different peaks, 0.012 and 0.027 mJ. The energy trend of bubble 2 is similar to that of bubble 1 and is not given here. By Fourier spectrum analysis of this energy curve, the SH component f/2 can be obtained, consistent with Fig. 13; the spectrum is not presented in this subsection. In essence, the generation of the SH in the energy curve is also related to the interaction between bubbles. From Eq. (7), we can see that the energy of the cavitation bubble is related to the maximum value of the pulsation per acoustic cycle. And from the previous subsections, it can be seen that the interaction between bubbles leads to the difference in the peak values of the pulsation, which is the cause of the generation of the SH. Therefore, in each acoustic period, the alternating change of the energy of bubble 1 is easily understood qualitatively.
Conclusion
In this paper, the 1/2 order SH wave of two interacting cavitation bubbles is investigated numerically by using Fourier spectrum analysis. Firstly, comparing with the single-bubble dynamics, we find that the f/2 SH wave in the case of two bubbles can be affected by the interaction between bubbles. This mutual interaction can sometimes promote the production of the f/2 SH, and sometimes inhibit its appearance. Secondly, the driving amplitude and the distance between bubbles both affect the f/2 SH. The higher the driving amplitude, the larger the mutual interaction, and the greater the degree of promotion or suppression of the f/2 SH. Similarly, as the distance between bubbles decreases, the mutual interaction increases, and the promotion or suppression of the f/2 SH also increases. The change of the mutual interaction can be inferred from the radiative pressure between bubbles. It should be pointed out that, when the distance between bubbles is fixed and the driving amplitude exceeds a certain value, the bubble pulsation exhibits the 1/4 order SH component, f/4. If the driving amplitude continues to increase, the pulsation eventually tends to become chaotic, leading to bubble destruction. The f/4 SH and chaos are not the topics discussed in this paper and will be studied in another article. Thirdly, when the f/2 SH occurs, by analyzing the temperature evolution inside the pulsating bubble, we find that the instantaneous temperature in the bubble for the two-bubble case varies between two different peaks. Furthermore, as time goes on, the energy of the cavitation bubble also alternates between two different peak values.
Compared with Ref. [25], our work focuses on the generation of the f/2 SH as affected by the mutual interaction between bubbles. Although the f/2 SH is used in many fields as an indicator of stable cavitation, the interaction between the cavitation bubbles that affects the SH is in fact very complex, especially when the number of bubbles is relatively large. We have qualitatively analyzed the production of the f/2 SH for the two-bubble case at present, and this interesting phenomenon still needs further investigation.
Fig. 1 .
Fig. 1.Sketch of two coupling spherical bubbles.R i is the instantaneous radius of bubble labeled by B i (i = 1, 2).
F
. Tao et al.
Fig. 2 .
Fig. 2. The change in bubble radius for the two-bubble case.(a) the radial pulsation of bubble 1; (b) the corresponding frequency spectra of (a); (c) the phase portrait diagram corresponding to (a).The equilibrium radii of two bubbles 1 and 2 are both 10 μm, and the driving frequency is 43 kHz.
Fig. 3 .
Fig. 3.The change in bubble radius for the single bubble case.(a) the radial pulsation of bubble; (b) the corresponding frequency spectra of (a); (c) the phase portrait diagram corresponding to (a).The equilibrium radius of bubble is 10 μm, and the driving frequency is 43 kHz.
F
. Tao et al.
Fig. 4 .
Fig. 4. The pulsations of bubble (a) for the two-bubble case and (b) for the single bubble case.The equilibrium radius of bubble is 10 μm, and the driving frequency is 39 kHz.
Fig. 5 .
Fig. 5.The Poincaré section for the single bubble at 35 kHz driving pressure field.
Fig. 6. The Poincaré section of bubble 1 for the two-bubble case at a driving frequency of 46 kHz.
Fig. 7. The f/2 SH intensity of one bubble's pulsation for the two-bubble case as a function of the driving amplitude p_a for two different values of the driving frequency f.
Fig. 8. The relation between the average radiative pressure peak of bubble 1 and the driving amplitude p_a at two different frequencies f for the two-bubble case. The radiative pressures are measured 150 μm away from the center of bubble 1.
Fig. 9. The pulsations of bubble 1 for the two-bubble case with driving frequencies (a) 42 kHz and (b) 41.4 kHz. The other simulation parameters are the same as those in Fig. 2(a).
Fig. 10. The intensity of the f/2 SH for different distances between bubbles 1 and 2 at a driving frequency of 43 kHz.
Fig. 11. The relation between the intensity of the f/2 SH and the distance between the bubbles at a driving frequency of 39 kHz for the two-bubble case.
Fig. 12. Collapse temperature curve of bubble 1 for the two-bubble case. The equilibrium radii of the two bubbles are both 10 μm. The driving amplitude and frequency, p_a and f, are 1.35 atm and 43 kHz, respectively.
Fig. 13. Time-varying energy of cavitation bubble 1 for the two-bubble case. The equilibrium radii of the bubbles are both 10 μm. The driving amplitude and frequency, p_a and f, are 1.35 atm and 43 kHz, respectively. Red solid circles are the results calculated using Eq. (7), and the black line is the corresponding fitting curve. | 5,804.4 | 2024-08-19T00:00:00.000 | [
"Physics"
] |
A general stability result for second order stochastic quasilinear evolution equations with memory
The goal of this paper is to discuss an initial boundary value problem for the stochastic quasilinear viscoelastic evolution equation with memory driven by additive noise. We prove the existence of a global solution and the asymptotic stability of the solution using some properties of convex functions. Moreover, our result is established without imposing restrictive assumptions on the behavior of the relaxation function at infinity.
Introduction
The quasilinear viscoelastic wave equation (1.1) describes a viscoelastic material, with u(x, t) giving the position of material particle x at time t, where D is a bounded domain in R^d with smooth boundary ∂D, ρ > 0, g is the relaxation function, f denotes the body force, and h is the damping term. The properties of the solution to (1.1) have been studied by many authors (see [1-7]). For instance, in [1], Cavalcanti et al. considered (1.1) for h(u_t) = -γu_t and f(u) = 0, where 0 < ρ ≤ 2/(d - 2) if d ≥ 3 or ρ > 0 if d = 1, 2. They proved a global existence result for γ ≥ 0 and an exponential decay result for γ > 0. Messaoudi et al. [4] studied (1.1) for h(u_t) = u_tt and f(u) = 0; they proved an explicit and general decay rate result using some properties of convex functions. Liu [5] considered (1.1) for h(u_t) = 0 and f(u) = b|u|^{p-2}u, where b > 0, p > 2. The author proved that, for a certain class of relaxation functions and certain initial data in the stable set, the decay rate of the solution energy is similar to that of the relaxation function; conversely, for certain initial data in the unstable set, there are solutions that blow up in finite time. In [6], Song investigated (1.1) for h(u_t) = |u_t|^{q-2}u_t and f(u) = |u|^{p-2}u, where q > 2 and ρ, p satisfy suitable growth conditions, and proved the global nonexistence of positive-initial-energy solutions of the quasilinear viscoelastic wave equation. Cavalcanti et al. [7] also studied (1.1) with h(u_t) = a(x)u_t and f(u) = b|u|^{p-2}u, where a(x) may vanish on a part of the boundary; they obtained an exponential decay rate for solutions. In fact, the driving force may be affected by the random environment. In view of this, we consider the stochastic quasilinear viscoelastic wave equation (1.2), where g is a positive function satisfying some conditions to be specified later, σ is locally Lipschitz continuous, W(t, x) is an infinite-dimensional Wiener random field, and the initial data u_0(x) and u_1(x) are F_0-measurable given functions. To motivate our work, let us first recall some results for ρ = 0 and g ≡ 0, in which case (1.2) reduces to the stochastic wave equation (1.3). In [8, 9], Chow considered the large-time asymptotic properties of solutions to a class of semilinear stochastic wave equations with linear damping in a bounded domain. Under appropriate conditions, the author obtained the exponential stability of an equilibrium solution in the mean-square and almost sure senses by an energy inequality. Using Lyapunov function techniques, Brzeźniak et al. [10] proved global existence and stability of solutions for stochastic nonlinear beam equations. In [11], Brzeźniak and Zhu studied a type of stochastic nonlinear beam equation with locally Lipschitz coefficients; using a suitable Lyapunov function and applying the Khasminskii test, they showed the nonexplosion of the mild solutions, and under some additional assumptions they proved the exponential stability of the solution. Kim [12] and Barbu et al. [13] investigated initial boundary value stochastic wave equations with nonlinear damping and dissipative damping, respectively. There are also many further results on stochastic wave equations; see the references in [10, 14-20].
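For reference, a hedged reconstruction of the displayed equations (1.1) and (1.2), consistent with the surrounding description but not guaranteed to be verbatim:

```latex
% Reconstruction (an assumption consistent with the text, not verbatim):
% the deterministic model (1.1),
\begin{equation*}
|u_t|^{\rho}u_{tt}-\Delta u-\Delta u_{tt}
 +\int_0^t g(t-s)\,\Delta u(s)\,\mathrm{d}s + h(u_t)=f(u)
 \quad\text{in } D\times(0,\infty),
\end{equation*}
% with u = 0 on \partial D and u(0)=u_0, u_t(0)=u_1; and its stochastic
% counterpart (1.2), driven by additive noise,
\begin{equation*}
\Bigl(|u_t|^{\rho}u_{tt}-\Delta u-\Delta u_{tt}
 +\int_0^t g(t-s)\,\Delta u(s)\,\mathrm{d}s\Bigr)\,\mathrm{d}t
 =\sigma(x,t)\,\mathrm{d}W(t,x).
\end{equation*}
```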
When ρ = 0 and g ≢ 0, (1.2) can be rewritten as the stochastic viscoelastic wave equation (1.4). For equation (1.4), the memory term makes it difficult to estimate the energy by the methods used for the stochastic wave equation. Hence, Wei and Jiang [21] studied (1.4) with σ ≡ 1 and q = 2 in another way: they showed the existence and uniqueness of the solution of (1.4) and obtained a decay estimate of the energy function of the solution under appropriate assumptions on g. In [22], Liang and Gao extended the existence and uniqueness results of [21] to σ = σ(u, ∇u, x, t). In the case σ = σ(x, t), they proved, using the energy inequality, that the solution either blows up in finite time with positive probability or is explosive in L². Furthermore, Liang and Gao [23] considered (1.4) driven by Lévy noise; they proved the global existence and uniqueness of the mild solution with an appropriate energy function and obtained the exponential stability of the solutions. Liang and Guo [24] studied (1.4) driven by multiplicative noise and proved the global existence and asymptotic stability of the mild solution via a Lyapunov function.
We note that, in the above literature, Messaoudi et al. [4], Liang and Gao [23], Chen et al. [24], and Kim et al. [25] did not discuss the optimality of the decay rate of (1.2) under the influence of a random environment. We prove the stability of solutions to (1.2) by adapting the convex-function method. The result of this paper provides an explicit energy decay formula that allows a larger class of relaxation functions g, for which the energy decay rates are not necessarily of exponential or polynomial type. This paper is organized as follows. In Sect. 2, we present some assumptions and definitions needed for our work. Section 3 contains the statement and proof of our main result.
Preliminaries
Firstly, let us introduce some notation used throughout this paper. We set H = L²(D) with the inner product and norm denoted by (·, ·) and ‖·‖₂, respectively. Denote by ‖∇·‖₂ the Dirichlet norm in V = H¹₀(D). We consider the following hypotheses.
(A1)-(A2): g : R₊ → R₊ is a nonnegative and nonincreasing function satisfying the structural conditions (2.1)-(2.2). (A3): there exists a positive function H ∈ C¹(R₊), with H(0) = 0, such that H is linear or is a strictly increasing and strictly convex C² function on (0, r] for some r < 1, and g′(t) ≤ -H(g(t)) for all t ≥ 0. A solution is understood to be a process with the stated regularity such that (1.2) holds in the sense of distributions over (0, T) × D for almost all ω.
The proof of the above theorem can be carried out by arguing as in Theorem 4.1 of [25]. Now, we introduce the "modified" energy associated with problem (1.2), where, for any w ∈ L²(D), (g ∘ w)(t) = ∫₀ᵗ g(t - s)‖w(t) - w(s)‖₂² ds. Let (Ω, F, P) be a complete probability space on which a filtration {F_t, t ≥ 0} of sub-σ-fields of F is given. A point of D will be denoted by x, and E(·) stands for the expectation with respect to the probability measure P.
W(t, x) is an H-valued Q-Wiener process on the probability space, with covariance operator Q satisfying Tr Q < ∞. Moreover, we can assume that Q has the form Qe_i = λ_i e_i, where λ_i ≥ 0 are the eigenvalues and {e_i} are the corresponding eigenfunctions with c₀ := sup_{i≥1} ‖e_i‖_∞ < ∞ (‖·‖_∞ denotes the sup-norm). To simplify the computations, we assume that the covariance operator Q and -Δ with the homogeneous Dirichlet boundary condition have a common set of eigenfunctions, i.e., (2.5) holds and {e_i} forms an orthonormal basis of H. For more details about the infinite-dimensional Wiener process and the stochastic integral, see [26, 27].
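The truncated eigenfunction expansion also gives a direct way to sample W(t, x) numerically. The following Python sketch assumes, purely for illustration, D = (0, 1) with the Dirichlet Laplacian eigenfunctions and a summable choice of eigenvalues; none of this is taken from the paper.

```python
import numpy as np

# Minimal sketch: sample a Q-Wiener process via its truncated expansion
# W(t, x) = sum_i sqrt(lambda_i) * beta_i(t) * e_i(x), with Tr Q < inf.
rng = np.random.default_rng(0)
N, n_steps, dt = 50, 1000, 1e-3
x = np.linspace(0, 1, 201)

i = np.arange(1, N + 1)
lam = 1.0 / i**2                                   # summable eigenvalues of Q
e = np.sqrt(2) * np.sin(np.pi * np.outer(i, x))    # e_i(x), orthonormal in H

# Independent scalar Brownian motions beta_i(t), sampled on a grid.
beta = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_steps, N)), axis=0)

W = beta @ (np.sqrt(lam)[:, None] * e)             # W[t_k, x_j]
print("Tr Q =", lam.sum())
```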
Stability properties of solutions
In this section, we state and prove our main stability result. As is well known, equation (1.2) is equivalent to the Itô system (3.2) for the pair (u, v) = (u, u_t), with initial data determined by (u_0(x), u_1(x)).
In order to prove our stability result, we need the following lemmas.
Lemma 3.1 Let u_0(x) and u_1(x) be F_0-measurable with u_0(x) ∈ H¹₀(D) and u_1(x) ∈ L²(D). Assume (2.1) holds, and let (u, v) be a solution of system (3.2). Then the energy identity (3.3) holds.

Proof Applying Itô's formula to (2/(ρ+2))‖v‖_{ρ+2}^{ρ+2}, we get (3.4). For the third term on the right-hand side of (3.4), we obtain (3.5), and the claim follows from (3.1).

Lemma 3.2 The functional Ψ satisfies, along the solution of (1.2), the estimate (3.8).

Proof Direct differentiation of Ψ, using (1.2) and taking expectations, yields (3.9); the resulting estimate is first derived for strong solutions and then extends to weak solutions by a simple density argument. We then estimate the second term on the right-hand side of (3.9) as in (3.10). Taking η = l/(1 - l), we obtain (3.11). Inserting (3.11) into (3.9), we get (3.8).
Lemma 3.3 Let u be a solution of (1.2). The functional defined in (3.12) satisfies, along the solution of equation (1.2) and for any δ₁, δ₂ > 0, the estimate (3.13).

Proof Differentiating (3.12) with respect to t, making use of (1.2), and taking expectations, we arrive at (3.14). We now apply the Cauchy-Schwarz, Hölder, and Young inequalities to estimate each term on the right-hand side of (3.14). The first term on the right can be estimated as in (3.15). As for the second term, the previously obtained formula (3.10) together with (a + b)² ≤ 2(a² + b²) yields (3.16). For the fourth term on the right-hand side of (3.14), it is easy to see that, for all δ₂ > 0, (3.17) holds. For the fifth term, we similarly obtain (3.18), where C_p is the Poincaré constant and δ₂ > 0. Using the Sobolev embedding and (3.7), for all t ≥ 0 we get (3.19)-(3.20), so that (3.18) takes the form (3.21). Combining (3.14)-(3.17) and (3.21), we get (3.13). The proof is completed.
Theorem 3.4
Let u_0(x) and u_1(x) be F_0-measurable with u_0(x) ∈ L²(Ω; H¹₀(D)) and u_1(x) ∈ L²(Ω; L²(D)). Assume that (A1)-(A3) hold. Then there exist positive constants k₁, k₂, k₃, and ε₀ such that the solution of (1.2) satisfies (3.22). Moreover, if ∫_{t₀} H₁(t) dt < +∞ for some choice of J, then we have the improved estimate (3.23).

Remarks. 2. Our result is obtained under a very general assumption on the relaxation function g; it covers a larger class of functions g that guarantee the uniform stability of (1.2) and comes with an explicit energy decay formula. 3. The usual exponential and polynomial decay rate estimates, proved when g satisfies (2.2) and g′ ≤ -kg^p with 1 ≤ p < 3/2, are special cases of our results; for these special cases we provide a "simple" proof. 4. Our results allow relaxation functions which exhibit neither exponential nor polynomial decay. For example, if g(t) = a e^{-t^q} for 0 < q < 1, with a chosen so that g satisfies (2.2), then g′(t) = -H(g(t)) where, for t ∈ (0, r], r < a, the function H is computed explicitly below and satisfies hypothesis (A3). Also, by taking J(t) = t^α, (3.23) is satisfied for any α > 1. For this reason, we can apply Theorem 3.4 and perform some calculations to infer that the energy decays at the same rate as g itself, i.e., (3.24). 5. From (A2) and (A3) we easily infer lim_{t→∞} g(t) = 0. This means that lim_{t→+∞}(-g′(t)) cannot equal a positive number, so it is natural to assume lim_{t→+∞}(-g′(t)) = 0. Therefore, there is t₁ > 0 large enough that g(t₁) > 0 and max{g(t), -g′(t)} < min{r, H(r), H₀(r)} for all t ≥ t₁ (3.25). As g is nonincreasing, g(0) > 0 and g(t₁) > 0, so g(t) > 0 for any t ∈ [0, t₁]. Hence, since H is a positive continuous function, a ≤ H(g(t)) ≤ b for some positive constants a and b. Consequently, for all t ∈ [0, t₁] and some positive constant d, we have g′(t) ≤ -d g(t).
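Assuming the example intended in point 4 is the standard one, g(t) = a e^{-t^q}, the function H can be computed explicitly:

```latex
% With g(t) = a e^{-t^{q}}, 0 < q < 1 (the standard example; an assumption):
%   g'(t) = -a q t^{q-1} e^{-t^{q}} = -q t^{q-1} g(t),
% and since t = (\ln(a/g(t)))^{1/q}, this reads g' = -H(g) with
\begin{equation*}
  H(s) \;=\; q\,s\,\bigl(\ln(a/s)\bigr)^{\frac{q-1}{q}},
  \qquad s \in (0,r],\ r < a,
\end{equation*}
% which vanishes at the origin and is strictly increasing and strictly
% convex near 0, as hypothesis (A3) requires.
```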
Now let us prove Theorem 3.4. It follows from (A4) that the functional involved is equivalent to EE(t). Therefore we have, for some a₀ > 0, F₀′(t) ≤ -a₀F₀^{1+α}(t), from which we easily infer (3.33). Recalling that p < 3/2 and using (3.33), we find that ∫₀^{+∞} E(s) ds < +∞. Hence, repeating the above steps with α = p - 1, we obtain the improved decay estimate. Thus the proof of Theorem 3.4 is completed.
This means that the solution is asymptotically stable and degenerates to (M + 2c)E₁. | 3,061.8 | 2020-03-18T00:00:00.000 | [
"Mathematics"
] |
VerifyThis 2015 A program verification competition
VerifyThis 2015 was a one-day program verification competition which took place on April 12th, 2015 in London, UK, as part of the European Joint Conferences on Theory and Practice of Software (ETAPS 2015). It was the fourth instalment in the VerifyThis competition series. This article provides an overview of the VerifyThis 2015 event, the challenges that were posed during the competition, and a high-level overview of the solutions to these challenges. It concludes with the results of the competition, and some ideas and thoughts for future instalments of VerifyThis.
Introduction
VerifyThis 2015 took place on April 12th, 2015 in London, UK, as a one-day verification competition at the European Joint Conferences on Theory and Practice of Software (ETAPS 2015). It was the fourth edition in the VerifyThis series, after the competitions held at FoVeOOS 2011, FM 2012, and Dagstuhl (Seminar 14171, April 2014).

The aims of the competition were:
- to bring together those interested in formal verification, and to provide an engaging, hands-on, and fun opportunity for discussion;
- to evaluate the usability of logic-based program verification tools in a restricted setting that can be easily repeated by others.

This article provides an overview of the VerifyThis 2015 event, the challenges that were posed during the competition, and a high-level overview of the solutions to these challenges. While we do not provide guidance on how to perform an in-depth evaluation of the participating tools, we highlight the main tool features that were used in the solutions. We conclude with the results of the competition, and some ideas and thoughts for future instalments of VerifyThis.

Before the VerifyThis competitions (and the related online VSComp competitions) were launched, verification systems were only evaluated according to the size of the completed project. However, due to the size and complexity of the verification efforts, such experiments could not be reproduced. Furthermore, the efficiency of the verification could not be measured, as the projects were carried out over prolonged periods of time, by multiple people, with different backgrounds, and without proper time accounting.

VerifyThis, in contrast, shifts the measurement to efficiency. Typical challenges in the VerifyThis competitions are small but intricate algorithms, given in pseudocode with an informal specification in natural language. Participants have to formalise the requirements, implement a solution, and formally verify the implementation for adherence to the specification. There are no restrictions on the programming language and verification technology used. The time frame to solve each challenge is quite short (between 45 and 90 minutes), so that anyone can easily repeat the experiment. Thus, the competition setup can be easily reproduced, the challenges are self-contained, time is controlled, and establishing the relation between specification and implementation is straightforward.

The correctness properties which the challenges present are typically expressive and focus on the input-output behaviour of programs. To tackle them to the full extent, some human guidance within a verification tool is usually required. At the same time, considering partial properties or simplified problems, if this suits the pragmatics of the tool, is encouraged. The competition welcomes the participation of automatic tools, as combining the complementary strengths of different kinds of tools is a development that VerifyThis would like to advance. Submissions are judged for correctness, completeness, and elegance. The focus includes the usability of the tools, their facilities for formalising the properties, and their ability to provide helpful output. As each solution depends on different tools and different participants, creativity is an important factor in the competition. However, correctness and completeness are relatively objective criteria, and one can estimate approximately how close a team is to a completely verified solution of the challenge.

Experience with earlier editions of VerifyThis has shown that participation leads to insight into: (i) missing tool features, (ii) useful features which helped other teams to develop their solutions, and (iii) tool features which are awkward to use and need further improvement and testing. It is difficult to quantify the concrete effects on tool development, but when judging, we see that insights obtained during earlier competitions actually lead to new tool developments. Moreover, the VerifyThis challenges are also used as verification benchmarks outside of the competition.
VerifyThis 2015
VerifyThis 2015 consisted of three verification challenges. Before the competition, an open call for challenge submissions was made. As a result, six challenges were submitted, of which one was selected for the competition (see also Section 5.5 for more details about this call and the selection criteria). The challenges (presented later) provided reference implementations at different levels of abstraction. For the first time, one of the challenges centered around concurrency.

Fourteen teams participated (Table 1). Teams of up to two people were allowed, and physical presence on site was required. We particularly encouraged the participation of: student teams (this includes PhD students); non-developer teams using a tool someone else developed; and several teams using the same tool. Teams using different tools for different challenges (or even for the same challenge) were welcome.

As in the VerifyThis 2012 competition, a post-mortem session was held after the competition, where participants explained their solutions and answered the judges' questions. In parallel, the participants used this half-day session to discuss details of the problems and solutions among each other.

The website of the 2015 instalment of VerifyThis can be found at http://etaps2015.verifythis.org/. More background information on the competition format and the rationale behind it can be found in [HKM12]. Reports from previous competitions of a similar nature can be found in [KMS+11, BBD+11, FPS12], and in the special issue of the International Journal on Software Tools for Technology Transfer (STTT) on the VerifyThis competition 2012 (see [HKM15] for the introduction).
Rules
In order to ensure that the competition proceeded smoothly, the following rules were established:
1. The main rule of the competition is: no cheating is allowed. The judges may penalise or disqualify entrants in case of unfair competition behaviour and may adjust the competition rules to prevent future abuse.
2. Solutions are to be submitted by email.
3. Submissions must state the version of the verification system used (for development versions: internal revision, time-stamp, or similar unique id).
4. It is permitted to modify the verification system during the competition. This is to be noted in the solution(s).
5. All techniques used must be general-purpose, and are expected to extend usefully to new unseen problems.
6. Internet access is allowed, but browsing for problem solutions is not.
7. Involvement of other people beyond those on the team is not allowed.
8. While care is taken to ensure correctness of the reference implementations supplied with problem descriptions, the organisers do not guarantee that they are indeed correct.
2 Challenge 1: Relaxed Prefix (60 minutes)

This problem was submitted by Thomas Genet, Université de Rennes 1, in response to the open call for challenges.
Verification Task
Verify a function isRelaxedPrefix determining whether a list pat (for pattern) is a relaxed prefix of another list a. The relaxed prefix property holds iff pat is a prefix of a after removing at most one element from pat.
Examples:
- pat = {1,3} is a relaxed prefix of a = {1,3,2,3} (standard prefix)
- pat = {1,2,3} is a relaxed prefix of a = {1,3,2,3} (remove 2 from pat)
- pat = {1,2,4} is not a relaxed prefix of a = {1,3,2,3}
Implementation notes: One may implement lists as arrays, e.g., of integers. A reference implementation is given below. It may or may not contain errors.
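The original reference implementation is not reproduced here; as an executable illustration of the intended semantics, the following Python sketch (ours, not one of the competition submissions) implements the relaxed-prefix check together with the three examples above.

```python
def is_relaxed_prefix(pat, a):
    """Return True iff pat is a prefix of a after deleting at most
    one element from pat (a sketch, not a competition submission)."""
    if len(pat) - len(a) > 1:        # even one deletion cannot help
        return False
    shift = 0                        # 0 before the deleted element, 1 after
    for i in range(len(pat)):
        if i - shift < len(a) and pat[i] == a[i - shift]:
            continue                 # current element matches
        if shift:                    # a second mismatch -> not relaxed
            return False
        shift = 1                    # drop pat[i] and keep matching
    return True

assert is_relaxed_prefix([1, 3], [1, 3, 2, 3])        # standard prefix
assert is_relaxed_prefix([1, 2, 3], [1, 3, 2, 3])     # remove 2 from pat
assert not is_relaxed_prefix([1, 2, 4], [1, 3, 2, 3])
```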
Comments on Solutions
Eleven teams (VeriFast, Why3, AutoProof, KeY, Dafny (3 teams), mCRL2, F*, KIV, and VerCors) submitted a solution to this challenge. The difficulties encountered by the participants were mainly at the specification level: getting the prefix definition correct, and making sure that all cases were covered in the postconditions. In particular, several teams forgot the case where the length of the array is less than the length of the prefix, or where the method returns false. In the overall evaluation, the solution provided by the Why3 team was the only one to obtain full marks from the judges.

During the verification, the main challenge was to find an appropriate instantiation for the existential quantifier. Different solutions for this were used: the Why3 team brought the specification into a particular syntactic shape that enabled the SMT solver to guess the instantiation (in a post-competition solution, this trick was replaced with an explicit assertion); the KeY team and the AutoProof team used an explicit return value, which avoided the need for existential quantification (the witness is computed by the program); Tim Wood, using Dafny, used an explicit hint in the form of a trigger annotation; the KIV team addressed this by manual instantiation; and Robert Kelly and Marie Farrell, using Dafny, provided a recursive definition of a relaxed prefix.
Future Verification Tasks
For those who had completed the challenge quickly, the description included a further challenge, outlined below. No submissions attempting to solve the advanced challenge were received during the competition.
Verification task: Implement and verify a function relaxedContains(pat, a) returning whether a contains pat in the above relaxed sense, i.e., whether pat is a relaxed prefix of any suffix of a.
3 Challenge 2: Parallel GCD (60 minutes)

Various parallel algorithms for computing the greatest common divisor GCD(a,b) exist (cf. [Sed08]). In this challenge, we consider a simple Euclid-like algorithm with two parallel threads. One thread performs subtractions of the form a := a - b, while the other thread performs subtractions of the form b := b - a. Eventually, this procedure converges on the GCD.
In pseudo-code, the algorithm is described as follows (the listing is reproduced at the end of the article):
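As an executable illustration of the sequentialised variant, here is a Python sketch (ours, not a competition submission) that models the nondeterministic interleaving by a random choice and checks the postcondition against math.gcd.

```python
import math
import random

def parallel_gcd_sequentialised(a, b):
    """Sequentialised model of the two-thread algorithm: repeatedly
    pick either thread's step nondeterministically (an illustration,
    not a competition submission)."""
    assert a > 0 and b > 0
    while a != b:
        if random.random() < 0.5:    # "thread 1" takes a step
            if a > b:
                a = a - b
        else:                        # "thread 2" takes a step
            if b > a:
                b = b - a
        # invariant: gcd(a, b) is preserved and a, b stay positive
    return a

for _ in range(200):
    a, b = random.randint(1, 500), random.randint(1, 500)
    assert parallel_gcd_sequentialised(a, b) == math.gcd(a, b)
```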
Comments on Solutions
Five teams (VeriFast, mCRL2, KIV, CBMC, and VerCors) submitted a solution to the concurrent version of this challenge; six teams (Why3, AutoProof, KeY, Dafny (2 teams), and F*) submitted a solution to the sequentialised variant of the challenge.

The solutions to the concurrent version of the challenge were all very different in spirit. The submissions assumed varying degrees of atomicity. In some formalisations, the atomic operations were the individual loads and stores; in others, the loop body; in yet others, the evaluation of the loop condition followed by the loop body. All formalisations assumed sequential consistency, though. All were also in some aspect partial.

Bart Jacobs (VeriFast) developed a fine-grained concurrent solution, showing the specified postcondition but assuming the necessary properties of the mathematical GCD predicate. The solution uses "shared boxes", which integrate rely-guarantee reasoning into the separation logic of VeriFast. Despite its significant complexity, the solution was judged as the best among those submitted.

Closely following was a solution by Jan Friso Groote with mCRL2. mCRL2 and CBMC are both bounded verification tools and thus only checked correctness within limits on the range of input parameters and on the loop unwinding depth, respectively. The mCRL2 solution elegantly used quantifiers to specify the GCD postcondition.

The KIV team used a global invariant proof approach but got stuck on the necessary GCD properties (realising later that they could actually have used GCD lemmas from the KIV libraries). Finally, the VerCors team submitted a solution making use of the recently added support for parallel blocks, but proving the absence of data races only. Post-competition, they extended this to a full solution.

The judges were also impressed by the attempt of the AutoProof team. In addition to proving correctness of the sequentialised algorithm, they almost succeeded in proving termination of the sequential version, assuming an appropriate fairness condition.

The KIV team was not the only team to experience that well-developed libraries can help when solving a challenge. The Why3 team solved the sequentialised version of this problem within 15 minutes, because their tool has a powerful library with the necessary GCD lemmas, while Rustan Leino (Dafny) struggled with this (sequentialised) challenge because of the lack of appropriate Dafny libraries.

After the competition, Rustan Leino developed a Dafny solution for the concurrent program by writing a program that explicitly encoded all the possible interleavings between the different threads, using explicit program counters for each thread.
4 Challenge 3: Dancing Links (90 minutes)

Dancing links is a search technique introduced in 1979 by Hitotumatu and Noshita [HN79] and later popularised by Knuth [Knu00]. The technique can be used to efficiently implement a search for all solutions of the exact cover problem, which in turn can be used to solve Tiling, Sudoku, N-Queens, and other related problems.
Suppose x points to a node of a doubly linked list; let L[x] and R[x] point to the predecessor and successor of that node. Then the operations L[R[x]] := L[x] and R[L[x]] := R[x] remove x from the list. The subsequent operations L[R[x]] := x and R[L[x]] := x will put x back into the list again. Figure 1 provides a graphical illustration of this process; an executable sketch is given below.
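A minimal executable sketch of these two operations (ours, not a competition submission) follows; note how unremove relies on x retaining its own pointers, which is why removals must be undone in reverse order.

```python
class Node:
    """Circular doubly linked list node with the dancing-links
    remove/unremove operations described above (a sketch)."""
    def __init__(self, value):
        self.value = value
        self.left = self.right = self      # a single node is circular

    def insert_after(self, other):
        self.left, self.right = other, other.right
        other.right.left = self
        other.right = self

    def remove(self):
        # L[R[x]] := L[x]; R[L[x]] := R[x]  -- x keeps its own pointers
        self.right.left = self.left
        self.left.right = self.right

    def unremove(self):
        # L[R[x]] := x; R[L[x]] := x  -- valid because x still points at
        # its old neighbours; undo removals in reverse order
        self.right.left = self
        self.left.right = self

# Usage: remove two nodes, then unremove them in reverse order.
head = Node(0)
nodes = [Node(i) for i in (1, 2, 3)]
prev = head
for n in nodes:
    n.insert_after(prev)
    prev = n
nodes[1].remove(); nodes[2].remove()
nodes[2].unremove(); nodes[1].unremove()   # reverse order restores the list

vals, cur = [], head.right
while cur is not head:
    vals.append(cur.value)
    cur = cur.right
assert vals == [1, 2, 3]                   # list fully restored
```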
Verification Task
Implement the data structure with these operations, and specify and verify that they behave in the way described above.
Fig. 1. Graphical illustration of the dancing links operations on a doubly linked list (inspired by Wim Bohm).
Comments on Solutions
Several participants reported that this had been a difficult challenge, and that in particular it had taken them time to understand the full details of the intended behaviour. Ten solutions (VeriFast, Why3, AutoProof, KeY, Dafny (2 teams), mCRL2, F*, KIV, and CBMC) to the challenge were submitted. During the competition the organisers clarified that the main challenge was in managing remove and unremove: several elements can be removed and unremoved, but they must be unremoved in reverse order (otherwise the references are not maintained). Furthermore, the list should either be considered circular, or removal of the first and last element should not be allowed.

Rustan Leino (using Dafny) was the only one to address this challenge completely within the allocated time. He reported that he found it much easier to reason about the list using integers and quantifiers rather than using recursively defined predicates. This observation was confirmed by the Why3 team after completing a post-competition solution.

The AutoProof team welcomed the Dancing Links challenge, as it was ideally suited for demonstrating their recently developed technique called semantic collaboration [PTFM14]. Semantic collaboration is intended to improve reasoning about objects collaborating as equals to maintain global consistency (rather than doing so in a strictly hierarchical manner).
Results, Statistics, and Overall Remarks
We conclude this report with various data points and summaries of results.
Awarded Prizes and Statistics
The judges unanimously decided to award prizes as follows:
- Best team: team Why3 (Jean-Christophe Filliâtre and Guillaume Melquiond)
- Best student team: team KIV (Gidon Ernst and Jörg Pfähler)
- Distinguished user-assistance tool feature, awarded to two teams:
  - Why3, for the lemma library (as demonstrated by its use in the competition)
  - mCRL2, for a rich specification language in an automated verification tool
- Best challenge submission: Thomas Genet, for the Relaxed Prefix problem, which was used as Challenge 1 in the competition
- Tool used by most teams: Dafny
The best student team received a 500 Euro cash prize donated by our sponsors, while the best overall team received 150 Euros. Smaller prizes were also awarded for the best problem submission and the distinguished user-assistance tool feature.
Statistics per Challenge
- Relaxed Prefix: 11 submissions were received, of which only the submission by Jean-Christophe Filliâtre and Guillaume Melquiond (Why3) was judged as correct and complete.
- Parallel GCD: 11 submissions were received, of which the submission by Bart Jacobs (VeriFast) was judged as correct and most complete. Six of the submitted solutions were restricted to the sequential version of the challenge.
- Dancing Links: 10 submissions were received, of which only the submission by Rustan Leino (Dafny) was judged as correct and complete.
Travel Grants
The competition had funds for a limited number of travel grants for student participants.A grant covered the incurred travel and accommodation costs up to EUR 250 for those coming from Europe and EUR 500 for those coming from outside Europe.Evaluation criteria were qualifications (for the applicant's career level), need (explained briefly in the application), and diversity (technical, geographical, etc.).Six travel grants were awarded.
Post-mortem Sessions
Two concurrent post-mortem sessions were held on the afternoon of the competition (stretching into the day after the competition, given the large number of participants). These sessions were much appreciated, both by the judges and by the participants. It was very helpful for the judges to be able to ask the teams questions in order to better understand and appraise their submissions. Concurrently, all other participants presented their solutions to each other. We would recommend such a post-mortem session for any on-site competition. In future editions of the competition we intend to extend this aspect of the event, as participants reported the time used as invaluable: it provided lively discussions about the challenges, knowledge about tools gained through presenting challenge solutions to each other, and an exchange of ideas about future tool developments and solution strategies.
Soliciting Challenges
After much discussion at the previous competition on how to extend the problem pool and tend better to the needs of the participants, we issued a call for challenges. The call stipulated that:
- a problem should contain an informal statement of the algorithm to be implemented (optionally with complete or partial pseudo-code) and the requirement(s) to be verified;
- a problem should be suitable for a 60-90 minute time slot;
- submission of reference solutions is strongly encouraged;
- problems with an inherent language- or tool-specific bias should be clearly identified as such;
- problems that contain several subproblems or other means of scaling difficulty are especially welcome;
- the organisers reserve the right (but no obligation) to use the problems in the competition, either as submitted or with modifications;
- submissions from (potential) competition participants are allowed.
We received six suggestions for challenges, and decided that one was suited for use during the competition. This challenge was practical, easy to describe to participants, suitable in duration for the competition, and could be easily adapted to suit different environments. However, even though we decided not to use all of the submitted challenges directly (primarily because some challenges were too complex for the available time slots), the call for submissions provided inspiration for further challenges as well as insight into what people in the community consider interesting, challenging, and relevant problems for state-of-the-art verification tools.
Session Recording
This year, for the first time, the organisers encouraged the participants to record their desktop during the competition (on a voluntary basis). The recording would give insight into the pragmatics of different verification systems and allow the participants to learn more from the experience of others in deriving a solution. The organisers provided a list of recording software suggestions, though so far only a solution for Linux (Freeseer) could be successfully tested. The main criteria are free availability, ease of installation, and low CPU load.
In general, participants agreed that recording could provide useful information, but, as far as we know, only the KIV team actually made a recording.
Related Events
VerifyThis 2015 is the 4th event in the VerifyThis competition series. Related events are the Verified Software Competition (VSComp, http://vscomp.org), held online; the Competition on Software Verification (SV-COMP [Bey15], http://sv-comp.sosy-lab.org), which focuses on evaluating systems in a way that does not require user interaction; and the RERS Challenge ([HIM+14], http://www.rers-challenge.org), which is dedicated to the rigorous examination of reactive systems, using different technologies such as theorem proving, model checking, program analysis, symbolic execution, and testing.

VerifyThis is also a collection of verification problems (and solutions). Its counterpart is VerifyThus (http://verifythus.cost-ic0701.org/), a distribution of deductive verification tools for Java-like languages, bundled and ready to run in a VM. Both were created with support from COST Action IC0701.

A workshop on comparative empirical evaluation of reasoning systems (COMPARE2012 [KBBS12]) was held at IJCAR 2012 in Manchester. Competitions were one of the main topics of the workshop.
Judging Criteria
Limiting the duration of each challenge assists the judging and comparison of the solutions. However, this task is still quite subjective and hence difficult. Discussion of the solution with the judges typically results in a ranking of the solutions for each challenge. In future editions of the competition we envisage that each team would complete a questionnaire for each challenge on submission. This would assist the judging and would also encourage teams to reflect on their solutions.

Based on earlier experiences, the criteria used for judging were:
- Correctness: is the formalisation of the properties adequate and fully supported by proofs? In essence, this is a two-valued criterion, and a correct formalisation is a must for a solution to be considered.
- Completeness: are all tasks solved, and are all required aspects covered? The judges used a rough estimate of how much of the proof was finished to assess completeness. For example, if a team can show a full solution developed the next day, this is used as an indication of being relatively close to the full solution within the time frame of the competition.
- Readability: can the submission be understood easily, possibly even without a demo? Clearly, this is a more subjective criterion, but as all the judges participated in the post-mortem session and have ample experience with formal specification, the number of questions about the formalisation is a good indication of this. Teams that used novel features are usually eager to provide this information during the post-mortem session.

A novelty for VerifyThis this year was the inclusion of a judge with a background in software model checking (the fourth author of this paper). He observed that the participants could have been more critical when reflecting on their solutions. To use the tools, expert knowledge is often necessary, and the tools are not very good at providing feedback when a proof attempt fails. For future competitions, he felt that the most interesting aspect would be new insights leading to further improvements to the tools. This aspect was also mentioned by some of the competition participants. It will be worthwhile to investigate what novelties have resulted from earlier competitions.
Post-Competition Discussion
Directly after the competition, before starting the post-mortem session, a plenary discussion was held to gather the opinions of the participants about the organisation of future competitions. The following topics were discussed.

Challenges: The participants agreed that it was interesting and timely to have a concurrency-related challenge. In general, the feeling was that it is good to have modular challenges, which can be broken down into smaller subproblems. There was also a suggestion to have challenges of the form: given a verified program, extend it to...

Tool vs. user: An interesting question remains regarding what we are actually measuring: the tool or the user. To focus more on measuring the tool, the challenge descriptions could include an informal description of the necessary invariants. However, it was also remarked that this might restrict the variety of tools participating in the competition. Another possibility, to help focus the competition on the tools, would be to create mixed teams, where one uses a tool that one does not know in advance (possibly with a tutor). As a result of this discussion, in the next edition of this competition we plan to start the day with a Dafny tutorial, followed by an out-of-competition challenge, open to anybody interested in participating.

Timing: The programme as it is now is very dense. A slightly larger break between the challenges would be welcome. Since participants often continue working on their solutions after the competition, a post-competition deadline to submit solutions would also be welcomed. The possibility of providing all challenges to the competitors at the same time was discussed, so that participants can organise their own time to work on a challenge. In that case, to avoid two-person teams having an advantage over single-person teams (because they can distribute the work), all teams would be allowed the use of only one computer.

Reporting: There was much discussion about the possibility of publishing details from these competitions. There have been several competition report papers, and there has been a special issue of STTT on the VerifyThis competition in 2012. New publications need to provide new insights. One possibility is to encourage several participants to write a joint paper about one particular challenge, in which they compare their different solutions. Another possibility is to reach an agreement with an editor to publish a series of competition reports summarising the main facts of each competition. In general, the participants agreed that it is important to make the (polished) solutions publicly available for others to inspect and compare. The solutions of the best student team prize winners, the KIV team, are available at https://swt.informatik.uni-augsburg.de/swt/projects/verifythis-competition-2015/, while the solutions of the best overall team prize winners, the Why3 team, are available at http://toccata.lri.fr/gallery/why3.en.html.
For further solutions to competition challenges we refer to http://etaps2015.verifythis.org/.
Final Remarks
The VerifyThis 2015 challenges offered a substantial degree of complexity and difficulty. A new development compared to earlier editions of the competition was the introduction of a concurrency-related challenge. Furthermore, we are happy to note that this year two teams participated using bounded verification tools (CBMC and mCRL2) to check functional properties. We hope such participation contributes to a better understanding of the strengths of different kinds of tools and opens new avenues for combining them. Two further insights demonstrated by this year's solutions were the importance of a good lemma library and of a good specification language. The similarity between the specification language of mCRL2 and those of "auto-active" verification systems was nothing short of remarkable.
A new edition of the VerifyThis competition will be held as part of ETAPS 2016.
( WHILE a != b DO
    IF a > b THEN a := a - b ELSE SKIP FI
  OD
||
  WHILE a != b DO
    IF b > a THEN b := b - a ELSE SKIP FI
  OD );
OUTPUT a

3.1 Verification Task

Specify and verify the following behaviour of this parallel GCD algorithm:
Input: two positive integers a and b.
Output: a positive integer that is the greatest common divisor of a and b.
Synchronisation can be added where appropriate, but try to avoid blocking of the parallel threads.
Sequentialisation: If a tool does not support reasoning about parallel threads, one may verify the following pseudo-code algorithm:

WHILE a != b DO
  CHOOSE(
    IF a > b THEN a := a - b ELSE SKIP FI,
    IF b > a THEN b := b - a ELSE SKIP FI )
OD;
OUTPUT a | 5,992 | 2016-10-18T00:00:00.000 | [
"Computer Science"
] |
On Dual Definite Subspaces in Krein Space
Extensions of dual definite subspaces to dual maximal definite ones are described. The obtained results are applied to the classification of $\mathcal{C}$-symmetries. The concepts of dual quasi maximal subspaces and quasi bases are introduced and studied. It is shown that the complex shift $g(\cdot)\rightarrow g(\cdot+ia)$ of Hermite functions $g_n$ is an example of quasi bases in $L_2(\mathbb{R})$.
A (closed) subspace L of the Hilbert space H is called nonnegative, positive, or uniformly positive with respect to the indefinite inner product [·, ·] if, respectively, the conditions displayed below hold. Nonpositive, negative, and uniformly negative subspaces are introduced similarly. In each of the above-mentioned classes we can define maximal subspaces. For instance, a closed positive subspace L is called maximal positive if L is not a proper subspace of a positive subspace of H. The concept of maximality for the other classes of closed subspaces is defined similarly. A subspace L of H is called definite if it is either positive or negative. The term uniformly definite is defined accordingly.
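For definiteness, the three conditions are the standard ones; a sketch of the display, reconstructed under that assumption:

```latex
% Standard conditions on a subspace L (reconstruction):
\begin{align*}
&\text{nonnegative:}        && [f,f]\ge 0   \quad\text{for all } f\in L;\\
&\text{positive:}           && [f,f]> 0     \quad\text{for all } f\in L\setminus\{0\};\\
&\text{uniformly positive:} && [f,f]\ge \alpha\|f\|^{2}
  \quad\text{for some } \alpha>0 \text{ and all } f\in L.
\end{align*}
```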
Subspaces L± of H are called dual subspaces if L− is nonpositive, L+ is nonnegative, and L± are orthogonal with respect to [·, ·], that is, [f+, f−] = 0 for all f+ ∈ L+ and all f− ∈ L−. The subject of the paper is dual definite subspaces. Our attention is mainly focused on dual definite subspaces L± under the additional assumption that their direct sum D = L+[+]L− (1.2) is dense in H. At first glance, the density of D in H should imply the maximality of the definite subspaces L± in the Krein space (H, [·, ·]). This is true when L± are uniformly definite. For the case of dual definite subspaces, Langer [13] constructed a densely defined sum (1.2) for which there exist various extensions to dual maximal definite subspaces L±^max (1.3). The decomposition (1.2) often appears in the spectral theory of PT-symmetric Hamiltonians [9] as the result of the closure of linear spans of positive and negative eigenfunctions, and it is closely related to the concept of C-symmetry in PT-symmetric quantum mechanics (PTQM) [7, 8]. The description of symmetries C is one of the key points in PTQM, and it can be successfully implemented only in the case where the dual subspaces in (1.2) are maximal. This observation gives rise to a natural question: how can one describe all possible extensions of dual definite subspaces L± to dual maximal definite subspaces L±^max? In Sect. 2 this problem is solved with the use of Krein's results on non-densely defined Hermitian contractions [4, 11]. The main result (Theorem 2.6) reduces the description of extensions (1.3) to the solution of the operator equation (2.10).
Each direct sum D^max of dual maximal definite subspaces L±^max generates an associated Hilbert space (H_G, (·,·)_G), defined in Sect. 3. If L±^max are uniformly definite, then D^max coincides with H and H = H_G [since the inner product (·,·)_G is equivalent to the original one (·,·)]. On the other hand, if L±^max are definite but not uniformly definite, then H ≠ H_G and (·,·)_G is not equivalent to (·,·). In this case, the direct sum D ⊂ D^max may be non-densely defined in the Hilbert space (H_G, (·,·)_G) constructed from D^max.
We say that dual definite subspaces L ± are quasi maximal if there exists an extension (1.3) such that the set D remains dense in the Hilbert space (H G , (·, ·) G ) constructed by D max .
In Sect. 4, dual quasi maximal subspaces are characterized in terms of extremal extensions of symmetric operators (Theorems 4.2, 4.5, Corollary 4.6). The theory of extremal extensions [2, 3] allows one to classify all possible cases: a Hilbert space H_G which preserves the density of D exists and is uniquely defined (A); exists and is non-uniquely defined (B); does not exist (C).
Section 5 deals with the operator of C-symmetry. Each pair of dual definite subspaces L± determines, by (5.1), an operator C_0 such that C_0² = I and JC_0 is a positive symmetric operator in H. The operator C_0 is called an operator of C-symmetry if JC_0 is a self-adjoint operator in H. In this case, the notation C is used instead of C_0.
Let C_0 be an operator associated with dual definite subspaces L±. Its extension to an operator of C-symmetry is equivalent to the construction of dual maximal definite subspaces L±^max in (1.3). This relationship allows one to use the classification (A)-(C) of Sect. 4 for the solution of the following problems: (i) how many operators of C-symmetry can be constructed on the basis of dual definite subspaces L±? (ii) is it possible to define an operator of C-symmetry as the extension of C_0 by continuity in the new Hilbert space (H_G, (·,·)_G)?
The concept of dual quasi maximal subspaces allows one to introduce quasi bases in Sect. 6. The characteristic properties of quasi bases are presented in Theorem 6.3 and Corollaries 6.4-6.6. Relevant examples are given; in particular, complex shifts of eigenfunctions of the harmonic oscillator are considered.
In what follows, D(H), R(H), and ker H denote, respectively, the domain, the range, and the kernel of a linear operator H. The symbol H↾D means the restriction of H onto a set D. Let H be a Hilbert space. Sometimes it is useful to specify the inner product (·,·) with which H is endowed; in this case the notation (H, (·,·)) is used.
Dual Maximal Subspaces
Let (H, [·, ·]) be a Krein space with fundamental symmetry J, and denote by H± the subspaces of the corresponding fundamental decomposition (2.1). The subspaces H± of H are orthogonal with respect to the initial inner product (·, ·) as well as with respect to the indefinite inner product [·, ·]. Here K+ : H+ → H− and K− : H− → H+ are strong contractions with domains D(K+) = M+ ⊆ H+ and D(K−) = M− ⊆ H−, respectively. The pair of subspaces L± is determined by formula (2.2), where P+ = (1/2)(I + J) and P− = (1/2)(I − J) are the orthogonal projections onto H+ and H−, respectively.
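In standard notation, and under the assumption that (2.2) is the usual angular-operator representation, these objects read as follows (a reconstruction consistent with the text):

```latex
% Fundamental decomposition and angular-operator representation
% (reconstruction):
\begin{equation*}
H_{\pm}=P_{\pm}H=\tfrac12(I\pm J)H,\qquad H=H_{+}\oplus H_{-},\qquad
[x,y]=(Jx,y),
\end{equation*}
\begin{equation*}
L_{+}=\{x_{+}+K_{+}x_{+}\,:\,x_{+}\in M_{+}\},\qquad
L_{-}=\{x_{-}+K_{-}x_{-}\,:\,x_{-}\in M_{-}\}.
\end{equation*}
```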
By construction, T_0 is a strong contraction in H satisfying (2.3). Proof It suffices to establish that the orthogonality of L± with respect to [·, ·] is equivalent to the symmetry of T_0. For every x± ∈ M±, the orthogonality condition [x+ + K+x+, x− + K−x−] = 0 is equivalent to (T_0 f, g) = (f, T_0 g) for every f = x+ + x− and g = y+ + y− from the domain of T_0. Therefore, T_0 is a symmetric operator.
The operator T_0 characterizes the 'deviation' of the definite subspaces L± from H± and allows one to describe additional properties of L±. Lemma 2.2 [10] Let L± be dual definite subspaces (i.e., the operator T_0 satisfies the condition of Lemma 2.1). Then L± are uniformly definite if and only if ‖T_0‖ < 1. By virtue of Lemmas 2.1 and 2.2, the extension of dual definite subspaces L± to dual maximal definite subspaces L±^max is equivalent to the extension of T_0 to a self-adjoint strong contraction T anticommuting with J. In this case, cf. (2.2), the subspaces are given by (2.5). In what follows we assume that the direct sum D = L+[+]L− of dual definite subspaces L± is a dense set in H.
The next result is well known and can be established by various methods (see, e.g., [6, 15]). Proof For the construction of L±^max we should prove the existence of a self-adjoint strong contractive extension T ⊃ T_0 which anticommutes with J. The existence of a self-adjoint contractive extension T′ ⊃ T_0 is well known [5, 11]. However, we cannot state that T′ anticommutes with J. To overcome this inconvenience we modify T′ by setting T = (1/2)(T′ − JT′J). The operator T is a self-adjoint contraction which anticommutes with J. Moreover, T is an extension of T_0 (since JT_0 = −T_0J). Therefore, the nonnegative/nonpositive subspaces L±^max defined by (2.5) are dual and L±^max ⊃ L±. The existence of nonzero neutral elements (i.e., elements f with [f, f] = 0) is excluded because T is a strong contraction.
Remark 2.4
It follows from the proof of Theorem 2.3 that each self-adjoint contractive extension T ⊃ T 0 is a strong contraction.
The set of all self-adjoint contractive extensions of T_0 forms an operator interval [T_μ, T_M] [5, §108], [11]. The endpoints of this interval, T_μ and T_M, are called the hard and the soft extensions of T_0, respectively. Corollary 2.5 Let L± be dual definite subspaces. Then their extension to dual maximal subspaces L±^max can be defined by (2.5) with T given by (2.6). Proof Let us prove (2.7). By virtue of [5, p. 380], the operators T_μ and T_M have the form (2.8), where T is a self-adjoint contractive extension of T_0 anticommuting with J (its existence was proved in Theorem 2.3). Due to the proof of Theorem 2.3, for the construction of T in (2.6) we may use an arbitrary self-adjoint contraction T ⊃ T_0. In particular, choosing T = T_μ and using (2.7), we complete the proof.
In general, the extension of dual definite subspaces L± to dual maximal definite subspaces L±^max is not determined uniquely. To describe all possible cases we use formula (2.9) [2, 11], which gives a one-to-one correspondence between all self-adjoint contractive extensions T of T_0 and all nonnegative self-adjoint contractions X in the corresponding defect subspace. Let X_0 = (1/2)I be a solution of (2.10). Then the nonnegative self-adjoint contraction X_1 = I − X_0 is also a solution of (2.10). Moreover, each self-adjoint nonnegative contraction X_α = (1 − α)X_0 + αX_1, α ∈ [0, 1], is a solution of (2.10). Therefore, either the dual maximal definite subspaces L±^max ⊃ L± are determined uniquely, or there are infinitely many such extensions.
II.
The above results, as well as the results in the sequel, can be rewritten with the use of the Cayley transform (2.11) of T_0. The operator G_0 is a closed, densely defined, positive symmetric operator in H satisfying (2.12). It follows from Remark 2.4 that every nonnegative self-adjoint extension of G_0 is positive. Self-adjoint positive extensions G of G_0 are in one-to-one correspondence with the set of contractive self-adjoint extensions of T_0 via (2.13). In particular, the Friedrichs extension G_μ of G_0 corresponds to the operator T_μ, while the Krein-von Neumann extension G_M is the Cayley transform of T_M. The relation (2.7) between T_μ and T_M is rewritten as (2.14) [12]. It follows from (2.11) and Lemmas 2.1, 2.2 (see also [12, Proposition 4.2]) that dual maximal definite subspaces L±^max ⊃ L± are in one-to-one correspondence with positive self-adjoint extensions G of G_0 satisfying the additional condition (2.15).
The Case of Maximal Uniformly Definite Subspaces
Let L±^max be dual maximal uniformly definite subspaces. Then the decomposition (3.1) holds. Relation (3.1) illustrates the variety of possible decompositions of the Krein space (H, [·,·]) into its maximal uniformly positive/negative subspaces. This property is characteristic of Krein spaces and is sometimes used for their definition [6].
With decomposition (3.1) one can associate a new inner product (3.2) in H. By virtue of (2.5), the relations f± = (I + T)x±, g± = (I + T)y±, x±, y± ∈ H±, hold. Taking (2.13) into account, we rewrite (3.2) in the form (3.3), where G is a bounded positive self-adjoint operator with 0 ∈ ρ(G). Therefore, the dual subspaces L±^max determine a new inner product which is equivalent to the initial one (·,·). The subspaces L±^max are mutually orthogonal with respect to (·,·)_G in the Hilbert space (H, (·,·)_G).
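For orientation, (3.2) and its operator form (3.3) presumably read as follows (a reconstruction under the standard conventions):

```latex
% New inner product associated with (3.1) (reconstruction): for
% f = f_+ + f_-, g = g_+ + g_- with f_\pm, g_\pm \in L^{\max}_{\pm},
\begin{equation*}
(f,g)_{G} \;=\; [f_{+},g_{+}] - [f_{-},g_{-}] \;=\; (Gf,g),
\end{equation*}
% where G is bounded, positive, self-adjoint with 0 \in \rho(G), so that
% (.,.)_G is equivalent to (.,.) precisely in the uniformly definite case.
```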
Summing up: the choice of various dual maximal uniformly definite subspaces L±^max generates infinitely many equivalent inner products (·,·)_G on the Hilbert space H, but it does not change the initial Krein space (H, [·,·]).
The Case of Maximal Definite Subspaces
Assume that L max ± are dual maximal definite subspaces. Then the direct sum is a dense set in the Hilbert space (H, (·, ·)). The corresponding positive self-adjoint operator G is unbounded.
Similarly to the previous case, with the direct sum (3.4) one can associate a new inner product (·,·)_G = (G·,·), defined on D(G) = D^max by formula (3.2). The inner product (·,·)_G is not equivalent to the initial one, and the linear space D^max endowed with (·,·)_G is a pre-Hilbert space.
Let H_G be the completion of D^max with respect to (·,·)_G. The Hilbert space H_G does not coincide with H. The dual subspaces L±^max are orthogonal with respect to (·,·)_G and, by construction, the new Hilbert space (H_G, (·,·)_G) admits the decomposition (3.5), where the subspaces appearing there are the completions of L±^max with respect to (·,·)_G. The decomposition (3.5) can be considered as a fundamental decomposition of the new Krein space (H_G, [·,·]_G) with the indefinite inner product (3.6). Let D[G] be the energetic linear manifold constructed from the positive self-adjoint operator G; in other words, D[G] denotes the completion of D(G) = D^max with respect to the energetic norm. The set D[G] coincides with D(√G), and the energetic linear manifold is a Hilbert space (D[G], (·,·)_en) with respect to the energetic inner product. Proof Indeed, taking (3.2) and (3.6) into account, the required relation follows on D(G), and it extends by continuity onto D[G]. Summing up: the choice of various dual maximal definite subspaces L±^max generates infinitely many Krein spaces (H_G, [·,·]_G).
Definition and Principal Results
Let L± be dual definite subspaces and let G_0 be the corresponding symmetric operator. Each positive self-adjoint extension G of G_0 satisfying condition (2.15) determines the Hilbert space (H_G, (·,·)_G). Obviously, all dual maximal definite subspaces are quasi maximal. For dual uniformly definite subspaces, the concept of quasi maximality is equivalent to maximality, i.e., quasi maximal uniformly definite subspaces must be maximal uniformly definite.
In the general case of definite subspaces, the closures of dual quasi maximal subspaces L± with respect to (·,·)_G coincide with the subspaces appearing in the fundamental decomposition (3.5), i.e., with the closures of L±^max. It is natural to expect that quasi maximality can be characterized in terms of the corresponding positive self-adjoint extensions G of G_0. For this reason, we recall the notion of extremal extension.
Remark 4.3
In general, an extremal extension G of G_0 is not determined uniquely. Let G_i, i = 1, 2, be extremal extensions of G_0 that satisfy (2.15). By virtue of (3.2), the operator W defined as Wf = f for f ∈ D(G_0) and extended by continuity onto (H_{G_1}, (·,·)_{G_1}) is a unitary mapping between H_{G_1} and H_{G_2}. Moreover, WJ_{G_1} = J_{G_2}W, where the J_{G_i} are the fundamental symmetry operators corresponding to the fundamental decompositions of H_{G_i}. Therefore, the indefinite inner products of these spaces satisfy [Wf, Wg]_{G_2} = [f, g]_{G_1}. Proof Let G_0 have a unique nonnegative self-adjoint extension G. Then G = G_μ = G_M and, by virtue of (2.14), J_G = G^{-1}J. Therefore, the operator G determines dual maximal subspaces L±^max ⊃ L±. Furthermore, G is an extremal extension (since the Friedrichs extension and the Krein-von Neumann extension are extremal). In view of Theorem 4.2, L± are quasi maximal. Thus, condition (i) ensures the quasi maximality of L±.
The condition (ii) is equivalent to (i) due to (2.14) and (2.15). The equivalence of (i) and (iii) follows from [11, Theorem 9]. The condition (i), reformulated for the Cayley transform T 0 of G 0 [see (2.11)], means that T 0 has a unique self-adjoint contractive extension T = T μ = T M . The latter is equivalent to (iv) due to [11, Theorem 6].
Assume that the dual subspaces L ± do not satisfy the conditions of Proposition 4.4. Conversely, let a hypermaximal neutral subspace M 1 be given. Then M = M 1 ⊕ J M 1 and the orthogonal projection X on M 1 is a solution of (2.10). The formula (2.9) with the given X determines the self-adjoint strong contraction T anticommuting with J , and its Cayley transform G defines the Hilbert space (H G , (·, ·) G ) in which D(G 0 ) is a dense set. Proof It follows from the proof of Theorem 4.5 that a hypermaximal neutral subspace M 1 determines dual maximal subspaces L max ± ⊃ L ± such that D is a dense set in the Hilbert space (H G , (·, ·) G ) associated with D max . Precisely, the orthogonal projection X on M 1 defines the required subspaces L max ± by the formulas (2.5) and (2.9). Therefore, one can construct infinitely many such extensions L max ± because there are infinitely many hypermaximal neutral subspaces in the Krein space (M, [·, ·]).
If X is the orthogonal projection on M 1 , then I − X is the orthogonal projection on the hypermaximal neutral subspace J M 1 . These operators are solutions of (2.10). In this case, the nonnegative self-adjoint contractions X α = (1 − α)X + α(I − X ), α ∈ (0, 1), are solutions of (2.10) and they also determine dual maximal subspaces L max ± (α) ⊃ L ± via (2.5) and (2.9). The linear manifold D cannot be dense in the Hilbert space (H G , (·, ·) G ) associated with L max + (α)[+]L max − (α) because X α loses the property of being a projection operator (since X α 2 ≠ X α ). Summing up: L ± can be extended to different pairs of dual maximal subspaces ⇐⇒ the subspaces L ± are not quasi maximal; the total amount of possible extensions L ± → L max ± is not specified (a unique extension is possible as well as infinitely many ones); the linear manifold D is not dense in the Hilbert space (H G , (·, ·) G ) associated with D max .
Auxiliary Statement
Let L ± be dual definite subspaces and let L max ± ⊃ L ± be dual maximal definite subspaces. Then { f n } ( f n ∈ D(G)) is a Cauchy sequence in (H G , (·, ·) G ) if and only if { x n } is a Cauchy sequence in (H, (·, ·)). This means that a one-to-one correspondence between H G and H can be established as follows. Let us assume that F ∈ H G is orthogonal to the set described above, where x 0 runs through D(T 0 ). By virtue of (2.3) and (4.4), γ = 0. Therefore, F γ = 0.
How to Construct Dual Quasi Maximal Subspaces?
We consider below an example (inspired by [1,13]) which illustrates a general method of the construction of dual quasi maximal subspaces.
Let {γ + n } and {γ − n } be orthonormal bases of the subspaces H ± in the fundamental decomposition (2.1). Every φ ∈ H has the representation φ = Σ n (c + n γ + n + c − n γ − n ), where {c ± n } ∈ l 2 (N). The operator T defined with respect to these bases is a self-adjoint strong contraction anticommuting with the fundamental symmetry J of the Krein space (H, [·, ·]). The subspaces L max ± defined by (2.5) with the operator T above are dual maximal definite. But they cannot be uniformly definite since ‖T‖ = 1, see Lemma 2.2.
Let us fix elements χ ± ∈ H and define the corresponding subspaces of H ± ; the resulting dual subspaces L ± in (4.8) are quasi maximal for 1/2 < δ ≤ 3/2. In particular, 1/2 < δ ≤ 1 corresponds to the case (A); the case (B) holds when 1 < δ ≤ 3/2. Proof First of all we note that D = L + [+]L − is a dense set in H for 1/2 < δ ≤ 3/2. The subspaces L ± in (4.8) are restrictions of the dual maximal subspaces L max ± = (I + T )H ± . Let us show that, for 1/2 < δ ≤ 1, the set D is dense in the Hilbert space (H G , (·, ·) G ) associated with L max ± . Due to Lemma 4.7, one should check (4.4). In view of (4.5), (4.6), the failure of (4.4) is impossible for 1/2 < δ ≤ 1. Therefore, relation (4.4) holds and L ± are quasi maximal subspaces. By [1, Proposition 6.3.9], the dual subspaces L ± have a unique extension to dual maximal subspaces when 1/2 < δ ≤ 1, which corresponds to the case (A). If 1 < δ ≤ 3/2, then the set D cannot be dense in the Hilbert space (H G , (·, ·) G ) considered above. However, for such δ, the dual subspaces L ± can be extended to different pairs of dual maximal subspaces [1, Proposition 6.
The Uniqueness of Dual Maximal Extension L max ± ⊃ L ± Does Not Mean that L ± are Quasi Maximal
Let us assume that χ + = 0 and χ − is defined as in (4.7). Then M + = H + and the subspace L + in (2.14) coincides with L max + . This means that the dual maximal definite subspaces L max ± ⊃ L ± are determined uniquely. Precisely, L max − = L [⊥] + , where L [⊥] + denotes the maximal negative subspace orthogonal to L + with respect to the indefinite inner product [·, ·].
Reasoning by analogy with the proof of Proposition 4.8, we obtain that the dual definite subspaces L max + and L − are quasi maximal (the case (A) of the classification above) for 1/2 < δ ≤ 1. If 1 < δ ≤ 3/2, then the direct sum L max + [+]L − cannot be dense in the Hilbert space (H G , (·, ·) G ) constructed by L max ± . Since the extension L max ± ⊃ L ± is determined uniquely, the subspaces L max + and L − cannot be quasi maximal (the case (C) of the classification above).
Operators of C-Symmetry Associated with Dual Maximal Definite Subspaces
An operator C 0 associated with dual definite subspaces L ± is defined as follows: its domain D(C 0 ) coincides with L + [+]L − and C 0 ( f + + f − ) = f + − f − for f ± ∈ L ± . If C 0 is given, then the corresponding dual subspaces L ± are recovered by the formula L ± = 1/2 (I ± C 0 )D(C 0 ).
Proposition 5.1
The following are equivalent: (i) C 0 is determined by dual subspaces L ± with the use of (5.1); (ii) C 0 satisfies the relation C 2 0 f = f for all f ∈ D(C 0 ) and J C 0 is a closed densely defined positive symmetric operator in H.
Proof In view of (5.1), and taking into account the definition (2.11) of G 0 and the formulas (3.2), (3.3), we obtain that G 0 = J C 0 , where G 0 is a closed densely defined positive symmetric operator in H. The Cayley transform T 0 of G 0 = J C 0 is a symmetric strong contraction in H (since G 0 is positive symmetric). Moreover, the condition C 2 0 = I on D(C 0 ) means that T 0 satisfies (2.4). Substituting T 0 into (2.2), we obtain the required dual definite subspaces L ± which generate C 0 with the help of (5.1).
We say that C 0 is an operator of C-symmetry if C 0 is associated with dual maximal definite subspaces L max ± . In this case, the notation C will be used instead of C 0 . An operator of C-symmetry admits the presentation C = J e Q , where Q is a self-adjoint operator in H such that J Q = −Q J [12]. Let C 0 be an operator associated with dual definite subspaces L ± . Its extension to the operator of C-symmetry C is equivalent to the construction of dual maximal definite subspaces L max ± ⊃ L ± . By Theorem 2.3, each operator C 0 can be extended to an operator of C-symmetry which, in general, is not determined uniquely. Its choice C ⊃ C 0 determines the new Hilbert space (H G , (·, ·) G ). If L ± are quasi maximal, then there exists a dual maximal extension L max ± ⊃ L ± such that C 0 is extended to C by continuity in the new Hilbert space (H G , (·, ·) G ).
Quasi Bases
Therefore, { f n } is an orthonormal basis in the Hilbert space (H G , (·, ·) G ). The inverse implication (ii) → (i) is obvious.
(ii) → (iii). In view of (5.2), C = J e Q and G = e Q , where Q is a self-adjoint operator anticommuting with J . In this case, the relation (6.2) takes the form (e Q/2 f n , e Q/2 f m ) = δ nm . Hence, {g n = e Q/2 f n } is an orthonormal sequence in H. Its completeness will be established with the use of Lemma 4.7. Before doing this we note that the dual maximal definite subspaces L max ± = 1/2 (I ± C)D(C) corresponding to C = J e Q are also given by (2.5) with T = − tanh(Q/2) [12]. Therefore, the bounded operator in (4.3) coincides with cosh −1 (Q/2), where cosh(Q/2) = 1/2 (e Q/2 + e −Q/2 ).
Since g n = e Q/2 f n and f n ∈ D(G) = D(e Q ), every g n belongs to the domain of definition of cosh(Q/2) and (I − tanh(Q/2)) cosh(Q/2)g n = (cosh(Q/2) − sinh(Q/2))g n = e −Q/2 g n = f n . (6.3) Comparing the obtained relation with (2.2) and taking into account that the subspaces L ± ⊂ L max ± coincide with the closures of span{ f ± n }, we conclude that D(T 0 ) = M − ⊕ M + coincides with the closure of span{cosh(Q/2)g n }. Let u ∈ H be orthogonal to {g n }. Then 0 = (u, g n ) = (cosh −1 (Q/2)u, cosh(Q/2)g n ) and hence, cosh −1 (Q/2)u belongs to the range of the operator in (4.3) intersected with H ⊖ (M − ⊕ M + ). By virtue of Lemma 4.7, cosh −1 (Q/2)u = 0. This means that u = 0 and {g n } is a complete orthonormal sequence in H, i.e., {g n } is a basis in H.
It follows from (2.2) and (6.3) that cosh(Q/2)g n belongs to one of the subspaces H ± (depending on whether f n ∈ L + or f n ∈ L − ). The same property is true for g n , because g n is obtained from cosh(Q/2)g n by applying the bounded operator in (4.3), which maps H ± into H ± .
(iii) → (ii). Since g n = e Q/2 f n belongs to H ± , we get J g n = ±g n = J e Q/2 f n = e −Q/2 J f n . Therefore, g n ∈ D(e Q/2 ) ∩ D(e −Q/2 ). This means that the sequence {cosh Q/2g n } is well defined and f n ∈ D(e Q ).
For the given Q we define the operator of C-symmetry C = J e Q and G = e Q . By analogy with (6.2), { f n } is an orthonormal sequence in (H G , (·, ·) G ). It follows from (6.4) that the sequence {G f n } is biorthogonal to { f n } (in the Hilbert space H). Hence, G f n = sign([ f n , f n ])J f n and C f n = sign([ f n , f n ]) f n . The latter means that C is an extension of the operator C 0 defined by (5.1) and the dual maximal definite subspaces L max ± determined by C are extensions of the dual definite subspaces L ± generated as the closures of span{ f ± n }. This fact and (6.3) lead to the conclusion that L ± are determined by (2.2), where D(T 0 ) = M − ⊕ M + coincides with the closure of span{cosh(Q/2)g n }.
Assume that { f n } is not complete in (H G , (·, ·) G ). Then the direct sum of L ± cannot be dense in H G and, by Lemma 4.7 (since the operator in (4.3) equals cosh −1 (Q/2)), there exists p = cosh −1 (Q/2)u ≠ 0 such that, for all g n , 0 = ( p, cosh(Q/2)g n ) = (cosh −1 (Q/2)u, cosh(Q/2)g n ) = (u, g n ), which is impossible (since {g n } is a basis of H). The obtained contradiction implies that { f n } is an orthonormal basis of H G . Moreover, g = Σ ∞ n=1 [g, C f n ] f n and e Q/2 g = Σ ∞ n=1 [g, C f n ]e Q/2 f n , (6.5) where the series converge in the Hilbert spaces (H G , (·, ·) G ) and (H, (·, ·)), respectively. Proof
Corollary 6.6
If eigenfunctions { f n } of a J -symmetric operator H form a quasi basis in H, then there exists an operator of C-symmetry C such that the operator H restricted on span{ f n } turns out to be essentially self-adjoint in the Hilbert space (H G , (·, ·) G ) generated by C.
Proof Due to Theorem 6.3 there exists an operator C such that { f n } is a basis of (H G , (·, ·) G ). The restriction of C on span{ f n } coincides with the operator C 0 defined by (5.1) (here L ± are the closures of span{ f ± n }). It is easy to see that (H f, g) G = ( f, H g) G for all f , g ∈ span{ f n }. Hence H is symmetric in (H G , (·, ·) G ) and the eigenvalues of H corresponding to the eigenfunctions f n must be real. Therefore, R(H ± i I ) ⊃ span{ f n } and the operator H is essentially self-adjoint in H G .
Examples
Quasi-bases can be easily constructed with the use of Theorem 6.3. Indeed, let us consider an orthonormal basis {g n } of H such that each g n belongs to one of the subspaces H ± in the fundamental decomposition (2.1). Let Q be a self-adjoint operator in H which anticommutes with J . If all g n belong to the domain of definition of e −Q/2 , then f n = e −Q/2 g n is a J -orthonormal system of the Krein space (H, [·, ·]). Assuming additionally that { f n } is complete in H, we get an example of a quasi basis. I. Let H = L 2 (R) and let J = P be the space parity operator P f (x) = f (−x). The subspaces H ± of the fundamental decomposition (2.1) coincide with the subspaces of even and odd functions of L 2 (R).
The Hermite functions g n (x) = (2 n n! √ π) −1/2 H n (x)e −x 2 /2 , H n (x) = e x 2 /2 (x − d/dx) n e −x 2 /2 , are eigenfunctions of the harmonic oscillator and they form an orthonormal basis of L 2 (R). The functions g n are either odd or even, so g n ∈ H + or g n ∈ H − . Since Hermite functions are entire functions, the complex shift of g n can be defined: f n (x) = g n (x + ia), a ∈ R\{0}, n = 0, 1, 2, . . . The corresponding operator Q is self-adjoint in L 2 (R) and it anticommutes with P. Therefore, { f n } is a quasi basis of L 2 (R). The functions { f n } are simple eigenfunctions of a P-symmetric operator H. Therefore, H restricted to span{ f n } is essentially self-adjoint in the new Hilbert space (H G , (·, ·) G ), where H G is the completion of span{ f n } with respect to the corresponding norm. The eigenfunctions g n are either even or odd functions.
| 7,912.8 | 2018-08-23T00:00:00.000 | ["Materials Science"] |
Hamilton type gradient estimates for a general type of nonlinear parabolic equations on Riemannian manifolds
In this paper, we prove Hamilton type gradient estimates for positive solutions to a general type of nonlinear parabolic equation concerning the V-Laplacian, (∆ V − q(x, t) − ∂ t )u(x, t) = A(u(x, t)), on a complete Riemannian manifold (with a fixed metric). When V = 0 and the metric evolves under a geometric flow, we also derive some Hamilton type gradient estimates. Finally, as applications, we obtain some Liouville type theorems for some specific parabolic equations.
Introduction
Gradient estimates are very powerful tools in geometric analysis. In the 1970s, Cheng-Yau [3] proved a local version of Yau's gradient estimate (see [25]) for harmonic functions on manifolds. In [16], Li and Yau introduced a gradient estimate for positive solutions of the heat equation, which is known as the Li-Yau gradient estimate and is the main ingredient in the proof of Harnack-type inequalities. In [10], Hamilton proved an elliptic type gradient estimate for heat equations on compact Riemannian manifolds, which is known as Hamilton's gradient estimate and was later generalized to the noncompact case by Kotschwar [15]. Hamilton's gradient estimate is useful for proving monotonicity formulas (see [9]). In [22], Souplet and Zhang derived a localized Cheng-Yau type estimate for the heat equation by adding a logarithmic correction term, which is called the Souplet-Zhang gradient estimate. After the above work, there is a rich literature on extensions of the Li-Yau gradient estimate, Hamilton's gradient estimate and the Souplet-Zhang gradient estimate to diverse settings and evolution equations. We only cite [1,8,11,12,18,19,24,28,31] here and one may find more references therein. An important generalization of the Laplacian is the V-Laplacian, a diffusion operator on a Riemannian manifold (M, g) of dimension n, where V is a smooth vector field on M. Here ∇ and ∆ are the Levi-Civita connection and the Laplacian with respect to the metric g, respectively. The V-Laplacian can be considered as a special case of the V-harmonic maps introduced in [5]. Recall that on a complete Riemannian manifold (M, g), we can define the ∞-Bakry-Émery Ricci curvature and the m-Bakry-Émery Ricci curvature as in [6,20], where m ≥ n is a constant, Ric is the Ricci curvature of M and L V denotes the Lie derivative along the direction V. In particular, we use the convention that m = n if and only if V ≡ 0. There have been plenty of gradient estimates obtained not only for the heat equation, but more generally, for other nonlinear equations concerning the V-Laplacian on manifolds, for example, [4,13,20,27,32]. In [7], Chen and Zhao proved Li-Yau type gradient estimates and Souplet-Zhang type gradient estimates for positive solutions to a general parabolic equation on M × [0, T ] with the m-Bakry-Émery Ricci tensor bounded below, where q(x, t) is a function on M × [0, T ] of class C 2 in the x-variables and C 1 in the t-variable, and A(u) is a function of class C 2 in u. In the present paper, by studying the evolution of the quantity u 1/3 instead of ln u, we derive localised Hamilton type gradient estimates for |∇u|/ √ u. Most previous studies cited in the paper give gradient estimates for |∇u|/u. The main theorems are below. Theorem 1.1. Let (M n , g) be a complete Riemannian manifold with Ric m V ≥ −(m − 1)K 1 , some fixed point x̄ in M and some fixed radius ρ. Assume that there exists a constant D 1 > 0 such that u ∈ (0, D 1 ] is a smooth solution to the general parabolic Eq (1.4). Then there exists a universal constant c(n) that depends only on n so that the stated gradient estimate holds. Remark 1.1. Hamilton [10] first obtained this gradient estimate for the heat equation on a compact manifold. We also have Hamilton type estimates if we assume that Ric V ≥ −(m − 1)K 1 for some constant K 1 > 0; notice that Ric V ≥ −(m − 1)K 1 is weaker than Ric m V ≥ −(m − 1)K 1 . Since we do not have a good enough V-Laplacian comparison for a general smooth vector field V, we need the condition that |V| is bounded in this case.
Nevertheless, when V = ∇ f , we can use the method given in [23] to obtain all results in this paper, without assuming that |V| is bounded.
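For orientation, the operator ∆ V and the two Bakry-Émery tensors used in the statements above are commonly written as follows; this is the standard convention (cf. [6,20]) and is given here only as a reminder of notation, not as a quotation of the paper's numbered formulas.

```latex
% Standard conventions for the V-Laplacian and the Bakry--Emery Ricci tensors
\Delta_V u = \Delta u + \langle V, \nabla u \rangle, \qquad
\mathrm{Ric}_V = \mathrm{Ric} - \tfrac{1}{2}\,\mathcal{L}_V g, \qquad
\mathrm{Ric}^m_V = \mathrm{Ric}_V - \frac{V \otimes V}{m-n}, \quad m \ge n .
```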
If q = 0 and A(u) = au ln u, where a is a constant, then following the proof of Theorem 1.1 we have Corollary 1.2. Let (M n , g) be a complete Riemannian manifold with Ric m V ≥ −(m − 1)K 1 , some fixed point x̄ in M and some fixed radius ρ. Assume that u is a positive smooth solution to the equation in Q ρ/2, T 1 −T 0 with t > T 0 . Using the corollary, we get the following Liouville type result. Corollary 1.3. Let (M n , g) be a complete Riemannian manifold with Ric m V ≥ −(m − 1)K 1 for some constant K 1 > 0. Assume that u is a positive and bounded solution to the Eq (1.6) and u is independent of time.
We can obtain a global estimate from Theorem 1.1 by letting ρ → ∞.
We also suppose that the corresponding assumptions hold globally. Then there exists a universal constant c that depends only on n so that the stated estimate holds. Letting A(u) = a(u(x, t)) β in Corollary 1.4, we obtain Hamilton type gradient estimates for bounded positive solutions of the equation (1.10): there exists a universal constant c that depends only on n so that the corresponding estimate holds. In the next part, our result concerns gradient estimates for positive solutions of (1.12) with the metric evolving under the geometric flow (1.13), where ∆ t depends on t and denotes the Laplacian of g(t), and S(t) is a symmetric (0, 2)-tensor field on (M n , g(t)). In [31], Zhao proved localised Li-Yau type gradient estimates and Souplet-Zhang type gradient estimates for positive solutions of (1.12) under the geometric flow (1.13). In this paper, we derive the following localised Hamilton type gradient estimates for positive solutions to the general parabolic Eq (1.12) under the geometric flow (1.13).
Assume that there exists a constant L 1 > 0 such that u ∈ (0, L 1 ] is a smooth solution to the general parabolic Eq (1.12) in Q 2ρ,T . Then there exists a universal constant c(n) that depends only on n so that (1.14) holds in Q ρ/2 ,T . Remark 1.3. Recently, some Hamilton type estimates have been obtained for positive solutions of related nonlinear parabolic equations under the Ricci flow in [26] and under the Yamabe flow in [29], where p, q ∈ C 2,1 (M n × [0, T ]), b is a positive constant and a, α are real constants. Our results generalize many previous well-known gradient estimate results.
The paper is organized as follows. In Section 2, we provide the proof of Theorem 1.1 together with the proofs of Corollary 1.3 and Corollary 1.5. In Section 3, we study gradient estimates for (1.12) under the geometric flow (1.13) and give the proof of Theorem 1.6.
Basic lemmas
We first fix some notation for convenience throughout the paper. Let h := u 1/3 and µ := h · |∇h| 2 . To prove Theorem 1.1 we need two basic lemmas. First, we derive the following lemma.
Proof. Since h := u 1/3 , by a simple computation we can derive the corresponding equation from (1.4). By direct computations, and by the elementary fact used above, it yields the required identity. The partial derivative of µ with respect to t is given by (2.6). It follows from (2.2), (2.5) and (2.6) that the desired estimate holds.
The following cut-off function will be used in the proof of Theorem 1.1 (see [2,16,22,30]). The stated inequalities hold for every b ∈ (0, 1) with some constant C b that depends on b.
Throughout this section, we employ the cut-off function Ψ constructed above, where r(x) := d(x, x̄) is the distance function from some fixed point x̄ ∈ M n .
Case 2. Suppose that d(x̄, x 1 ) ≥ ρ/2. Since Ric m V ≥ −(m − 1)K 1 , we can apply the generalized Laplace comparison theorem (see Corollary 3.2 in [20]). Using the generalized Laplace comparison theorem and Lemma 2.2, we obtain at (x 1 , t 1 ) an estimate which agrees with Case 1. Therefore, we have the required bound for some universal constant c > 0. Here we used Lemma 2.2, 0 ≤ Ψ ≤ 1 and Cauchy's inequality.
Proof of Corollary 1.3 and Corollary 1.5
Proof of Corollary 1.3.
Proof of Corollary 1.5.
From Corollary 1.4, we just have to compute Λ by its definition.
Gradient estimates for (1.12) under the geometric flow: Proof of Theorem 1.6
In this section, we consider positive solutions of the nonlinear parabolic Eq (1.12) on (M n , g) with the metric evolving under the geometric flow (1.13). To prove Theorem 1.6, we follow the procedure used in the proof of Theorem 1.1.
Basic lemmas
We first derive a general evolution equation under the geometric flow.
Next, we derive the following lemma in the same fashion as Lemma 2.1. Assume that Ric g(t) ≥ −K 2 g(t) and |S(t)| g(t) ≤ K 3 in Q ρ,T . If h := u 1/3 and µ := h · |∇h| 2 , then in Q ρ,T we have (3.1). Proof. Since u is a solution to the nonlinear parabolic Eq (1.12), the function h = u 1/3 satisfies the corresponding equation. As in the proof of Lemma 2.1, we obtain the analogous identities. On the other hand, by the equation ∂ t g(t) = 2S(t), we obtain the required bound, where we used (2.8), the assumption on the bound of Ric + S, and the preceding computations. The proof is complete.
Finally, we employ the cut-off function Ψ as above, where r(x, t) := d g(t) (x, x̄) is the distance function from some fixed point x̄ ∈ M n with respect to the metric g(t).
Then, it follows that the required estimate holds at (x 2 , t 2 ), which agrees with Case 1.
| 2,346.4 | 2021-01-01T00:00:00.000 | ["Mathematics"] |
SSA-LSTM: Short-Term Photovoltaic Power Prediction Based on Feature Matching
: To reduce the impact of volatility on photovoltaic (PV) power generation forecasting and achieve improved forecasting accuracy, this article provides an in-depth analysis of the characteristics of PV power outputs under typical weather conditions. The trend of PV power generation and the similarity between simultaneous outputs are found, and a hybrid prediction model based on feature matching, singular spectrum analysis (SSA) and a long short-term memory (LSTM) network is proposed. In this paper, correlation analysis is used to verify the trend of PV power generation; the similarity between forecasting days and historical meteorological data is calculated through grey relation analysis; and similar generated PV power levels are searched for phase feature matching. The input time series is decomposed by singular spectrum analysis; the trend component, oscillation component and noise component are extracted; and principal component analysis and reconstruction are carried out on each component. Then, an LSTM network prediction model is established for the reconstructed subsequences, and the external feature input is controlled to compare the obtained prediction results. Finally, the model performance is evaluated through the data of a PV power plant in a certain area. The experimental results prove that the SSA-LSTM model has the best prediction performance.
Introduction
In recent years, to address problems such as energy shortages and environmental pollution, the development of renewable energy has become the main direction of the global energy revolution and a key response to climate change [1].Solar energy has developed rapidly as an efficient, renewable and clean energy source.The global installed photovoltaic (PV) capacity has grown swiftly.According to the global PV report released by the International Energy Agency, by the end of 2021, the cumulative installed capacity reached 942 GW, which is an increase of 22.8% over that in 2020; as such, PV energy has great developmental potential [2].However, with the continuous increase in the proportion of PV energy, the randomness and volatility of PV outputs have become increasingly prominent, which brings certain difficulties to the operation of a power grid.Therefore, the accurate prediction of PV power generation can help the grid dispatching department to better avoid risks, improve the safety and economy of the power system and be of great significance to the stable operation of the power grid.
In numerous previous studies, scholars carried out research on photovoltaic power generation forecasting, which is mainly divided into two categories: physical models and statistical models.In physical models, the forecast value of solar irradiance and geographic location information, combined with the operation mode of photovoltaic modules, are used to carry out mathematical modeling [3], and the energy storage system is used to solve the negative effects of unstable power generation and low power supply reliability.In practical applications, errors due to power loss and other issues will inevitably occur when using photovoltaic power.Improving material properties is the most direct way to improve photoelectric conversion efficiency [4,5].At present, scholars have studied the structural characteristics of composite materials to improve the status of photovoltaic applications [6,7].The rapid development of the photovoltaic industry has brought broad application prospects to the research field of photovoltaic composite materials.In the statistical model, the historical data of photovoltaic power plants is mainly relied upon.Therefore, artificial intelligence algorithms have been favored by scholars.These include machine learning algorithms such as artificial neural networks [8,9] (ANNs) and support vector machines [10,11] (SVMs).These algorithms have been widely used in the field of PV power generation forecasting.For example, the article in [12] proposed an efficient ANN prediction model to study the relationship between meteorological data and PV power generation.The authors of [13] proposed an extended model based on an SVM to obtain a more accurate dataset.The prediction accuracy of machine learning models often depends on the quality of the given dataset and the settings of the internal hyperparameters.Likewise, small dataset differences can lead to significant changes in prediction results [14].Therefore, hybrid forecasting models have appeared one after another.By optimizing the utilized dataset and calculating the best hyperparameters, a forecasting model can obtain its best forecasting effect.Experiments have shown that the use of the SVM algorithm, after performing particle swarm optimization (PSO) for the parameters, can obtain more accurate prediction results [15].Usman et al. [16] developed an evaluation framework for short-term PV power prediction and conducted a comparative analysis among various machine learning models and feature selection methods, and the results showed that the extreme gradient boosting (XGBoost) method outperformed individual machine learning methods.According to the authors of [17], by combining XGBoost with feature engineering technology, important information was extracted from weather forecasts to achieve improved prediction accuracy.
Compared with traditional machine learning techniques, deep learning models have better fitting performance and are able to discover intrinsic connections in high-dimensional data [18].Therefore, a PV prediction model based on deep learning can better mine the intrinsic value of feature data.Deep learning models include convolutional neural networks (CNNs) [19], deep belief networks (DBNs) [20], recurrent neural networks (RNNs), generative adversarial networks (GANs) [21] and other classic models, as well as their variants and combined models.As a variant of an RNN model, a long short-term memory (LSTM) network can effectively capture the long-term dependencies of time series and has become very popular in the field of short-term PV output power prediction.For example, the experimental results in [22] showed that the performance of an LSTM-based PV power generation prediction method is better than that of multilayer perceptrons (MLPs) and deep convolutional networks.The authors of [23] used an LSTM network to predict the solar irradiance on the previous day, and its result was better than those of the backpropagation (BP) neural network and linear least-squares regression.The authors in [24] proposed a CNN-LSTM hybrid deep learning model, which uses a multilayer CNN for feature extraction and an LSTM layer for prediction, thereby effectively improving the prediction effect of the LSTM.
Regardless of the chosen prediction algorithm, the data processing step is a challenge that cannot be ignored.A PV output power sequence has nonlinear characteristics.Decomposing such a time series into multiple subsequences can effectively reduce the complexity of the data and is an effective means for improving the prediction accuracy of the utilized model [25].Common sequence decomposition methods include empirical mode decomposition (EMD), ensemble EMD (EEMD) and wavelet decomposition (WD) [26,27].However, the results of the above sequence decomposition methods cause modal aliasing, which increases the difficulty of prediction.As a method that performs sequence decomposition and reconstruction [28], singular spectrum analysis (SSA) can effectively decompose a sequence into a trend sequence, a periodic sequence and a noise sequence without selecting an a priori basis function or a complex operation process, and this technique achieves better objectivity and adaptability [29].It is suitable for various engineering disciplines and has been widely used in wind power forecasting and power load forecasting [30,31].For example, [32] decomposed a wind power series into two subsequences (a trend series and a noise series) through SSA and used the hybrid Laguerre neural network to predict the decomposed signals.In [33], a multistep advance wind speed prediction model was proposed by combining variational mode decomposition (VMD) and SSA with an LSTM model.
The processing of weather characteristic data is also an important link in PV power forecasting.Although the PV output power fluctuates, the fluctuation range of the PV output power is similar under the same weather type.Therefore, when constructing a dataset for PV forecasting, clustering the data on similar days according to the associated weather types can reduce data redundancy and forecasting errors [34].Commonly used clustering methods include K-nearest neighbors (KNN) [35] and K-means clustering (K-means) [36].The authors of [37] used the fuzzy C-means (FCM) clustering algorithm to cluster and analyze historical meteorological data and weather forecast information, and used the whale optimization algorithm and a least-squares SVM (LSSVM) to make predictions.In [38], K-means clustering was used to select similar historical data from forecasting days as training samples, and then, complete EEMD with adaptive noise (CEEMDAN) and a gated recurrent unit (GRU) were used to forecast PV power.The simulation results showed that the proposed model outperformed other models.It can be seen that when processing PV power generation datasets, whether clustering weather types or searching for similar days, establishing corresponding models for different types of data can improve the resulting prediction accuracy.The above methods slice an entire dataset into many smaller datasets for training a prediction model.When the amount of data is insufficient, the decomposed dataset may be very small, which can easily lead to an insufficient number of training samples for the algorithm and overfitting of the prediction results [39].
In summary, this paper proposes a hybrid forecasting model based on SSA-LSTM.SSA decomposition is performed on the given PV output power sequence with strong volatility; the trend sequence, periodic sequence and noise sequence of the PV output power sequence are extracted; and principal component analysis is performed on the sequence.The important components are extracted for sequence reconstruction, and LSTM prediction models are separately established for the reconstructed sequences.The purpose of this is to enable the LSTM to directly learn regular sequence data, reduce the complexity of the model and improve the prediction accuracy.Existing research lacks in-depth studies on feature information and the law of PV output power.This paper fully mines the characteristics of PV meteorological data, extracts high-quality features and improves the data quality.
To verify the validity of the model, this paper utilizes data from the Ningxia Wuzhong Sun Mountain PV power station [40].At the same time, we conduct comparative experiments under two frameworks.Model 1 is a time series prediction model, and model 2 incorporates weather features and the feature data constructed in this paper into LSTM prediction.The purpose of this test is to gain insight into the impact of feature data on prediction performance and to verify the effectiveness of the developed method.
The contributions of this paper can be summarized as follows:
• To improve the quality of the utilized dataset, the PV output power obtained under different weather conditions is analyzed, the law of PV output power is summarized, and a new feature is constructed by combining the PV output law and weather data. The aim is to achieve improved prediction accuracy by mining higher-quality feature data;
• A short-term PV prediction model (SSA-LSTM) is proposed, in which SSA decomposes nonlinear PV sequences into more regular trend sequences, periodic sequences and noise sequences, reducing the learning complexity of the LSTM; the model is combined with feature data to achieve improved prediction accuracy.
The rest of the paper is organized as follows: Section 2 analyzes the characteristics of PV output power and performs feature extraction; Section 3 introduces the forecasting methods and technical descriptions used in this paper; Section 4 presents a case study that validates the prediction model proposed in this paper using data from the Sun Mountain PV power plant in Wuzhong, Ningxia, China; and Section 5 draws conclusions.
PV Power Generation Feature Extraction
There are many factors that affect PV output power. Among them, weather factors have direct impacts on PV output power. This chapter divides weather conditions into four types (sunny, partly cloudy, cloudy and rainy); analyzes the PV output power law in detail under different weather types; and extracts eigenvalues according to the PV output power law.
Typical Form of PV Power Generation
Figure 1 depicts the PV output power produced for five days under four typical weather types: sunny, partly cloudy, cloudy and rainy.The daily comparison is conducted from 5:00 to 18:45, and the sampling interval is 15 min, with a total of 56 nodes per day.Among them, the PV output power levels on sunny days exhibit the highest similarity and are close to the same value.Due to changes in climatic conditions, the fluctuation of PV output power on cloudy, and cloudy and rainy days increases and becomes extremely irregular, and the maximum daily PV output power gradually decreases.When dealing with such problems, some scholars use algorithms to find historically similar days as a training set.The dataset is clustered and analyzed according to its meteorological features, the weather types are divided based on this, and the forecast days are predicted using the data obtained under the same weather type.However, it can be seen from the figure that even under the same weather type, the PV output law exhibits obvious differences.Therefore, it is difficult to capture the power fluctuation characteristics for a whole day based only on the daily matching of similar weather characteristics.At the same time, in a case with a small amount of data, the division of the dataset will reduce the amount of training data, which will reduce the model prediction accuracy to a certain extent.Based on the above two points, it is necessary to conduct a more detailed analysis of the characteristics of PV output power, and conduct feature screening and matching at a finer time granularity to achieve improved prediction accuracy.
As seen from the above figure, the output PV power has a strong trend on sunny days and gradually decreases after rising to the peak output, showing a hemispherical shape. Although there are no such obvious features for other weather types, from a short-term point of view, the PV output power also forms a short-term increasing or decreasing trend after fluctuation. Therefore, this feature is called the short-term trend of PV output power in this paper. Although it is difficult to find days with similar PV output power, under the same type of weather the PV output power fluctuates within roughly the same interval. Therefore, it is easier to find similar output points at the same time in history, and at the same time the quality of the dataset can be improved (that is, made more accurate). Based on the above analysis, feature data for the short-term trend of PV output power and for similar power levels at the same moment are constructed.
Short-Term Trend Correlation Analysis
According to the characteristic that PV output power forms an increasing or decreasing trend in a short period of time, this paper takes the power at the N moments before the PV output power point as features and conducts a correlation analysis on them. The purpose is to determine whether the PV output power at time t has a strong correlation with the outputs at the previous time points. P(t) represents the power at time t, and P(t−1) represents the power at the previous time node before time t. The historical measured data are used in turn to construct the power features, and SPSS software is used to carry out a correlation analysis on the constructed dataset. The results are shown in Table 1. In this paper, data with correlations exceeding 0.7 are retained. The power at time t has a strong correlation with the power at the previous seven time stamps, and the correlation strength decreases in turn. This confirms that the change in the current power has a certain internal relationship with the power at the previous moments, which is in line with the hypothesis of this paper, so these values can be used as prediction features. In this paper, the power levels at the first three moments with the strongest correlations are selected as features.
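As a rough illustration of this step (the original analysis was done in SPSS; the snippet below is only a sketch with placeholder file and column names), the lagged-power features and their Pearson correlations with the current power can be computed as follows:

```python
import pandas as pd

# Hypothetical 15-min resolution PV power series with a 'power' column.
df = pd.read_csv("pv_power.csv", parse_dates=["timestamp"], index_col="timestamp")

# Build lagged-power features P(t-1), ..., P(t-7) and measure their
# Pearson correlation with the current power P(t).
for k in range(1, 8):
    df[f"power_lag{k}"] = df["power"].shift(k)

corr = df.dropna().corr()["power"].drop("power")
print(corr)                       # correlation of each lag with P(t)
strong = corr[corr > 0.7].index   # keep lags whose correlation exceeds 0.7
print("retained lag features:", list(strong))
```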
Power Similarity Matching at the Same Moment
The purpose of similarity matching is to find similar power points at the same moment in history. When selecting the power features at the same time, the output power is greatly affected by meteorological features such as global horizontal irradiance, the ambient temperature, the humidity, etc. The above features are selected to calculate the grey correlation degree. Considering that the similarity between the forecast date and the historical date is affected by seasonality, the closer to the forecast date, the higher the probability of finding similar outputs is. Therefore, this paper only analyzes the grey correlation degree at the same time over the 30 days before the PV output power point and selects the three power data with the highest grey correlation degrees as the prediction features.
Relevance Calculation
The formula for calculating the correlation coefficients between the comparison sequence x i (k) and the reference sequence y(k) is shown in Equation (1).
ξ i (k) denotes the correlation coefficient of element k; min i min k |y(k) − x i (k)| is the minimum value of the absolute difference between all comparison sequence values and the reference sequence values. Similarly, max i max k |y(k) − x i (k)| is the maximum value of the absolute difference between the sequences; the resolution coefficient ρ is taken as 0.4 in this paper.
Grey Relation Analysis
After calculating the relation coefficient for each element in x i (k), the grey relation degree r i can be calculated by Equation (2).
r i > 0.7 indicates that the two datasets are strongly correlated; 0.5 < r i < 0.7 indicates some correlation; r i < 0.5 indicates little correlation.
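A compact implementation of the two formulas above might look as follows (a sketch only; the sequences are assumed to be already normalized to comparable scales, and ρ = 0.4 as in the paper):

```python
import numpy as np

def grey_relational_degree(reference, comparisons, rho=0.4):
    """Grey relational degree of each comparison sequence w.r.t. a reference.

    reference:   1-D array, the reference sequence y(k)
    comparisons: 2-D array, one comparison sequence x_i(k) per row
    rho:         resolution coefficient (0.4 in this paper)
    """
    y = np.asarray(reference, dtype=float)
    X = np.atleast_2d(np.asarray(comparisons, dtype=float))
    diff = np.abs(X - y)                              # |y(k) - x_i(k)|
    dmin, dmax = diff.min(), diff.max()               # global min / max differences
    xi = (dmin + rho * dmax) / (diff + rho * dmax)    # correlation coefficients, Eq. (1)
    return xi.mean(axis=1)                            # grey relational degree r_i, Eq. (2)

# Example: rank 30 historical same-moment feature vectors against the forecast moment.
# r = grey_relational_degree(forecast_features, historical_features)
# top3 = np.argsort(r)[-3:][::-1]   # indices of the three most similar moments (Pa, Pb, Pc)
```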
Process of Feature Selection
The algorithmic flow is shown in Table 2. Utilizing Equation (1) to calculate the correlations between the prediction points and the meteorological features at the same moment for the previous 30 days, the first three power points with the highest correlations are selected as the prediction features, and the powers ordered from the largest to the smallest correlation are Pa, Pb and Pc. Performing feature construction for specific similar moments can make the prediction model training process more targeted. Output: PV power, Pa, Pb, Pc, complete feature construction.
Optional Feature
To quantify the quality of the matched feature data constructed in this paper, the Pearson correlation coefficient was introduced to compare the correlation between the matched features and the original data. The original data include the actual power, global horizontal irradiance (GHI), the ambient temperature (AT), the component temperature (CT) and the relative humidity (RH). The matching features include the power at moment t−1, the power at moment t−2, the power at moment t−3, and the similar powers Pa, Pb and Pc. There are 10 vectors. The specific results are shown in Table 3. It is not difficult to see that, among the meteorological data, global horizontal irradiance has the highest correlation, followed by the ambient temperature. The component temperature and the relative humidity are weakly correlated with the actual power. The short-term power trend has been analyzed in a previous section and will not be repeated here. Among the similar powers, Pa has the strongest correlation with the actual power, which is larger than the correlation coefficient of global horizontal irradiance. The correlation between Pb, Pc and the actual power decreases, but is still stronger than the correlation coefficient of the ambient temperature. It can be seen from the correlation results that the feature data constructed in this paper can improve the quality of the dataset, and most of the data belong to the strong correlation level.
According to the correlation calculation results in Table 3, all the above matched feature data can be used as prediction data, and the specific feature quantity selection results are shown in Table 4.
Forecasting Methods
This chapter mainly introduces the forecasting method used in this paper. It presents the basic principles of singular spectrum analysis and LSTM networks, and the process of their combined use; it also briefly introduces the eigenvalue function and the model prediction process.
Singular Spectrum Analysis
SSA is an effective method that is used to analyze and predict nonlinear time series data, and its adaptive filtering property is suitable for dealing with data containing complex periodic components [41]. For PV power samples with volatility and nonlinear characteristics, SSA can decompose the original time series into several smoother series, and separate prediction models can be built according to the different volatility characteristics. The specific process is as follows.
Embedding
PV data are extracted with a sample size of N, x = (x 1 , x 2 , . . . , x N ); the length of the sequence is N (N > 2), the embedding dimensionality is set to L, and the value range of L is usually an integer with 1 < L < N/2. The trajectory matrix G of size L × K is generated, as shown in Equation (3), where the number of columns in G is K = N − L + 1.
Decomposition
Decomposition is performed on the trajectory matrix G, and the decomposition process is represented by Equation (4).
where e is the number of nonzero eigenvalues of the matrix G T G, and the eigenvalues are ranked from largest to smallest, i.e., λ 1 ≥ λ 2 ≥ • • • ≥ λ e ≥ 0; λ i and the matrices U i and V i are called the eigentriple of the trajectory matrix G, where V i = G T U i / √ λ i and U i is the eigenvector corresponding to the eigenvalue λ i .
Diagonal Averaging
The purpose of diagonal averaging is to convert the above reorganized matrix into a time series. Let Y be a matrix of size L × K and x rs be a matrix element, where L* = min(L, K), K* = max(L, K), and N = L + K − 1; when L < K, x * rs = x rs . That is, Y is converted into a time series y 1 , y 2 , y 3 , . . . , y N , and the formula is shown in Equation (6).
The SSA decomposition and reconstruction process is shown in Figure 2. For a fluctuating PV power series, the choice of the embedding dimensionality L directly determines the performance of the model. The larger L is, the better the extraction of the PV power components with regular periodic changes, but at the same time the model generates redundant components; a smaller embedding dimensionality can effectively reflect the fluctuating dynamics of the given PV power series, but its ability to mine key information is limited. Therefore, L is generally set based on the periodicity of the data. For a PV output power series with short-time-scale fluctuation characteristics and long-time-scale variation trends, L must be reasonably selected. In this paper, the L value was chosen by continuously comparing the results through experiments. In the end, the best results are achieved when L is set to 13.
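A minimal sketch of the embedding, decomposition and diagonal-averaging steps described above (illustrative only; grouping the resulting components into trend, oscillation and noise parts and reconstructing by contribution is done afterwards):

```python
import numpy as np

def ssa_decompose(series, L=13):
    """Basic singular spectrum analysis: embed the series, take an SVD of the
    trajectory matrix, and diagonally average each rank-one term back into a
    time series (one row per component). Returns the components and their
    contribution ratios."""
    x = np.asarray(series, dtype=float)
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column j is the window x[j : j+L]
    G = np.column_stack([x[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    components = np.zeros((len(s), N))
    for i in range(len(s)):
        Yi = s[i] * np.outer(U[:, i], Vt[i])          # elementary matrix of component i
        # Diagonal averaging (Hankelization) turns Yi back into a series
        for n in range(N):
            vals = [Yi[r, n - r] for r in range(max(0, n - K + 1), min(L, n + 1))]
            components[i, n] = np.mean(vals)
    return components, s**2 / np.sum(s**2)

# Usage: comps, share = ssa_decompose(pv_power, L=13); comps.sum(axis=0) ~= pv_power
```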
LSTM
LSTM is widely used in prediction problems. It is a special RNN model that can learn long-term data changes during model training. The LSTM prediction model completes the prediction task by controlling the information retention process through a forgetting gate, an input gate controlling the input information, and an output gate controlling the output information [42]. By controlling the memory unit at each time step, LSTM determines the amount of information to be transmitted at the next moment and the amount of information retained from the previous moment, so it can effectively capture the continuity of PV output power. Its structure is shown in Figure 3. The structure contains a forgetting gate f t , an input gate i t and an output gate o t . The forgetting gate is able to selectively retain information in C t−1 ; the input gate determines the amount of information preserved in C t by the input X t ; and the output gate controls the effect of long-term memory on the current output h t . The expression is shown in Equation (7).
where W f , W i , W o and W c are weight matrices; b f , b i , b o and b c are bias parameters; σ is the activation function; tanh is the hyperbolic tangent function; and h t−1 and C t denote the previous cell output and the internal candidate cell state, respectively. The LSTM network structure ensures that the error does not decay sharply as the number of learning layers increases. At the same time, the network can alleviate the gradient explosion and gradient vanishing problems that may occur during training. Compared with shallow learning algorithms, LSTM exhibits an obvious advantage [43].
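Since the models in this paper are built with TensorFlow, a single LSTM predictor of the kind described above can be sketched as follows (layer sizes and training settings here are illustrative, not the tuned values of the paper):

```python
import tensorflow as tf

def build_lstm(n_steps, n_features, units=64):
    """Small LSTM regressor: maps a window of past samples to the next PV power value."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_steps, n_features)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")   # Adam optimizer and MSE loss, as in the paper
    return model

# Example (shapes are assumptions):
# model = build_lstm(n_steps=8, n_features=10)
# model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.1)
```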
SSA-LSTM Prediction Model
For the PV power prediction problem, which possesses high volatility and stochasticity, this paper proposes a PV power prediction method combining feature matching, SSA and LSTM networks. The basic framework is shown in Figure 4. The above steps include the LSTM time series prediction model and the LSTM prediction model containing the feature vector. The PV output power sequence is analyzed by SSA, the embedding dimensionality L is determined and the PV output power sequence is decomposed into L subsequences. Sequence reconstruction is performed according to the contribution degrees of the different subsequences to generate m SSA sequences. A corresponding LSTM prediction model is constructed for each SSA sequence, the power prediction process is carried out separately in the two prediction models, and the final prediction result is obtained by superimposing the predicted power of each series. To adjust the parameters of the LSTM, this paper uses the GridSearchCV method. After setting the parameter range, this approach can automatically match the optimal parameters for each LSTM model, enabling quick parameter optimization and reducing the operation time.
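One possible arrangement of this framework, with the hyperparameter search omitted for brevity (the per-subsequence windowing and the `build_model` factory are assumptions, not the paper's exact interface):

```python
def predict_ssa_lstm(subseries_train, subseries_test_windows, build_model):
    """Train one LSTM per reconstructed SSA subsequence and sum the forecasts.

    subseries_train:        list of (X, y) training window pairs, one per subsequence
    subseries_test_windows: matching list of test input windows
    build_model:            factory such as build_lstm(n_steps, n_features)
    """
    total = None
    for (X, y), X_test in zip(subseries_train, subseries_test_windows):
        model = build_model(X.shape[1], X.shape[2])
        model.fit(X, y, epochs=50, batch_size=32, verbose=0)
        pred = model.predict(X_test).ravel()
        total = pred if total is None else total + pred   # superimpose component forecasts
    return total
```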
Case Study
Data Preparation
The simulation data in this paper are obtained from PV power and climate feature sequence samples collected from 1 June 2020 to 31 July 2020 at a domestic PV power plant, and each sample point contains 10 items: power, irradiance, humidity, ambient temperature, plate temperature and the newly constructed power features. The data are selected from 5:00 to 18:45, which is defined as the effective power output period, with a total of 56 data points per day and a sampling interval of 15 min. The sample ratio of the training set to the test set is 8:2. All models in this paper are implemented using the Python programming language, TensorFlow is used to build the deep learning models, and sklearn's GridSearchCV is used to realize parameter optimization [44]. The optimal hyperparameter configuration is crucial to the prediction performance. In this paper, the search ranges for the batch size and number of epochs are set in advance, and the search range is refined based on the best output parameters until the optimal result is obtained. The loss function adopts the mean square error (MSE), and Adam is used for training to prevent overfitting.
Error Analysis and Comparison
In this paper, the mean absolute error (MAE), root mean square error (RMSE) and coefficient of determination (R 2 ) are used as the error indicators for photovoltaic power station output prediction. Although the mean absolute percentage error (MAPE) is widely used in experiments, its results are asymmetric; that is, errors higher than the original value lead to greater absolute percentage errors [45]. The MAE, RMSE and R 2 are calculated as follows: where N is the number of predicted data points, and the time interval between each pair of data points is 15 min; P Mi and P pi are the actual output power and predicted power at prediction point i, respectively; and P M is the average of the actual power values of all samples.
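These three indices correspond to standard library metrics; a minimal helper could be:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    """MAE, RMSE and R^2 as defined above."""
    mae = mean_absolute_error(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    r2 = r2_score(y_true, y_pred)
    return mae, rmse, r2
```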
Result and Discussion
First, SSA decomposition is performed on the PV output sequence, which is decomposed into 13 feature components, and principal component analysis is performed on the components.The results are shown in Figure 5.It can be seen that: the contributions of the components decrease in order; the first component has a much higher contribution than the others; the contribution values of components two and three are basically equal; and the first six components are the main components with a cumulative share of 90%.In most cases, feature component screening causes a decrease in prediction performance [46], so all feature components are retained.
To verify the effectiveness of the SSA-LSTM prediction model, the results in this paper are compared with those of an LSTM network without the SSA decomposition model, as well as the XGBoost model.At the same time, we will evaluate the performance based on two frameworks: first, a univariate model using only time and historical load data, and second, a multivariate model incorporating feature data to better verify the importance of feature variables.The test set prediction results are shown in Table 5.From the univariate experiments, it can be seen that the SSA-LSTM model has the best prediction effect compared to those of the XGBoost and LSTM models without SSA.The MAE is reduced by 65.5% and 54.8%, the RMSE is reduced by 74.3% and 62.5%, and the R 2 is improved by 10.2% and 4.1%, respectively.This shows that the SSA can smooth the outgoing power series, which in turn improves the prediction results.
After adding the feature data, the prediction effects of the multivariate models are all significantly improved.Compared with the univariate models, the MAE and RMSE are reduced, respectively, by 41.3% and 36.1% for XGBoost; 42.3% and 50.2% for the LSTM model; and 49.4% and 45.4% for the SSA-LSTM.The prediction accuracy is further improved after incorporating the constructed feature, but the accuracy achieved by the LSTM model after incorporating the features is still inferior to that of the univariate SSA-LSTM model, indicating that the singular spectrum decomposition process plays an important role in the prediction procedure and proving the effectiveness of the SSA-LSTM model.It can be seen that in univariate prediction, the LSTM algorithm is more sensitive to seasonality and data trend [47], and singular universal analysis can highlight this characteristic in time series decomposition.Thus, the SSA-LSTM univariate model can achieve an excellent prediction effect.The smooth output sequence can also improve the prediction accuracy of the multivariable prediction model, which proves the effectiveness of the SSA-LSTM model.
To test the applicability of the model for prediction under complex weather conditions, three days with large fluctuations are selected from the dataset for output prediction, as shown in Figure 6.Since the prediction performances of the multivariate models in the above experiments are all better than those of the univariate models, multivariate models are used for these experiments.In the above experiments, the prediction accuracy of the XGBoost model is not as good as that of the LSTM model.Therefore, the XGBoost algorithm will be abandoned in the following experiments, and the SSA-LSTM model characterized by meteorological data will be used as a replacement to verify the role of the newly constructed power features.The models are classified into three categories: A, B and C.Among them, model A is the LSTM model using only meteorological data as its features.Model B is the SSA-LSTM model, which also uses only meteorological data as its features.Model C is an SSA-LSTM model with meteorological features and the new power features as its features, and the experimental results are as follows.From the simulation results in Figure 6, it can be concluded that the curve fit of model C is the highest and the prediction accuracy is significantly improved over that of models A and B. The specific experimental results are shown in Table 6.By comparing the error evaluation indices of model A and model B, it is seen that the prediction accuracy of the LSTM model is significantly improved after adding SSA decomposition, with MAE reductions of 70.5%, 75.1% and 85.2%, respectively; RMSE reductions of 72.7%, 74.8% and 71.7%, respectively; and significant R2 improvements, which fully verifies the effectiveness of the SSA model applied to LSTM.In addition, the prediction accuracy of model C is improved once again on the basis of model B, and the accuracy remains high under fluctuating weather conditions, which proves the effectiveness of the newly constructed features.
The results are compared and analyzed in Tables 5 and 6.From the comparison results of model A and multivariate LSTM, it can be seen that after adding new features, the MAE and RMSE indicators decrease, and the R 2 indicator improves significantly.This shows that the new features can effectively improve the prediction accuracy of the LSTM model.Therefore, after the above analysis, we believe that the new features and SSA can effectively improve the prediction accuracy of the model when they act on the LSTM model alone.Compared with the multivariate SSA-LSTM model, the prediction accuracy of model C is slightly lower.Because weather with strong volatility is more difficult to predict, it is easy to cause the accuracy of the prediction model to decline.The same problem arises in [37].
Secondly, when making single-day predictions, only the data before the prediction day can be selected as a training set, and the reduction of the training sample size will cause the prediction accuracy to decrease.
To better reflect the differences between the models, and at the same time verify the reliability of the prediction models, a residual analysis is conducted on the prediction results shown in Figure 6, and the specific results are shown in Figure 7. One column represents the test results for the same day, and the same row represents the same prediction model. From the figure, we can see that after adding the SSA decomposition model, the LSTM error range is significantly reduced. At the same time, it can be seen from the figure that the residual values are randomly distributed on both sides of the zero line, and there is no obvious trend or regularity, which verifies the reliability of the model.
Conclusions
In this paper, we propose a short-term PV prediction model based on feature matching (SSA-LSTM) by combining collected PV power generation data with PV power generation patterns observed under different climatic conditions; through example simulations, we can see the following: (1) Through the SSA decomposition method, the original high volatility and stochastic PV power output curve is decomposed into a series of smoother subsequences, which makes PV power prediction under fluctuating weather conditions much less difficult and improves the resulting prediction accuracy; (2) A reasonable feature selection process can highlight the key features of the input data, and the dataset obtained after feature matching can effectively improve the model prediction accuracy; (3) In an experimental results comparison, this paper adopts comparison tests between single-input models and multi-input models to evaluate the integrated prediction accuracy, and the results show that the SSA-LSTM prediction effect achieved after incorporating the new features is optimal.
Figure 1. Typical weather-PV output power curves: (a) typical weather for sunny days; (b) typical weather for partly cloudy days; (c) typical weather for cloudy days; (d) typical weather for rainy days.
Figure 2. Flowchart of the SSA model.
Figure 4. Flowchart of the combined forecasting model.
Figure 5. SSA and principal component analysis.
Figure 6. Forecasting results: (a) forecast result on 5 July; (b) forecast result on 7 July; (c) forecast result on 10 July.
Table 1. Correlation coefficients for the first 7 moments.
Table 2. Similarity feature selection process: 2. select the PV output at time t as the reference series; 3. perform grey relational analysis with each t-moment of the previous 30 days; 4. sort the r values from largest to smallest, and retain the powers of the three moments with the strongest correlations; 5. advance t by one moment and repeat steps 2-4 until the last data point is reached.
Table 6. Error evaluation indices of each prediction model. | 8,054.2 | 2022-10-21T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
All-plus helicity off-shell gauge invariant multigluon amplitudes at one loop
We calculate one loop scattering amplitudes for an arbitrary number of positive helicity on-shell gluons and one off-shell gluon treated within the quasi-multi-Regge kinematics. The result is fully gauge invariant and possesses the correct on-shell limit. Our method is based on embedding the off-shell process, together with the contributions needed to retain gauge invariance, in a bigger, fully on-shell process with an auxiliary quark or gluon line.
Introduction
Although the high energy limit of Quantum Chromodynamics (QCD) (see e.g. [1] for a review) has been studied for over forty years, the confrontation of various small-x approaches with experimental data is still not fully conclusive (here x ∼ 1/√s is the longitudinal fraction of hadron momentum carried by a parton and s is the center-of-mass energy). On one hand, the experimental data relevant to the small-x regime can often be explained by collinear factorization, supplemented however with parton showers or other types of resummations and multi-parton interactions. On the other hand, certain types of reactions, for example Mueller-Navelet jet production [2], give strong hints towards the need to include small-x effects [3]. In addition, collisions of protons with heavy nuclei provide further hints, as observed for instance in [4] for the forward dijet production case.
In order to provide more solid statements regarding the need for small-x approaches, one needs higher order corrections for various components of small-x calculations, in particular for high energy partonic amplitudes. As a matter of fact, in collinear factorization, any partonic amplitude can at present be calculated at NLO automatically using computer software. This is still to be achieved in the small-x domain, and our work is a step towards that goal.
The concept of k_T-factorization is based on an analogy with collinear factorization, but here both the hard part and the soft hadronic part depend on parton transverse momenta, i.e. explicit higher powers of k_T/Q are present in the hard matrix elements (here, Q is the largest scale present in the process). Thus, instead of the leading twist, the accuracy is set by the leading power in 1/√s. The momenta of the partons defining the hard amplitude may now be off-shell, with vector or spinor indices projected onto the components dominating in the high energy limit.
In the present work we shall consider multigluon amplitudes with a single gluon being off mass shell. Such amplitudes are primarily used in forward particle production (see e.g. [40]) and have a large phenomenological impact (see e.g. [41,42,43,44,45,46,47,48,49] for various applications in forward jet production processes at the LHC). The momentum of the off-shell gluon has the form k^µ = x p^µ + k_T^µ (1), where p^µ is the light-like momentum typically associated with the colliding hadron, x is the fraction of this momentum carried by the scattering parton, and k_T^µ is a transverse component satisfying k_T · p = 0. The off-shell gluon couples eikonally, i.e. its vector index is projected onto p^µ (the propagator is included in the amplitude), see Fig. 1. The standard diagrams contributing to the off-shell amplitude defined in that fashion are however not gauge invariant. The proper definition of such amplitudes can be achieved either within Lipatov's high energy effective action [50,51] or by explicitly constructing the additional contributions required by gauge invariance, the high energy kinematics and the proper soft and collinear behavior. The latter method is very useful in automated calculations at tree level and a few approaches exist: using the Ward identities [52], embedding in a bigger on-shell process [53] (see also [54] for an earlier application to a 2 → 2 process), or using matrix elements of straight infinite Wilson lines [55]. In particular, the embedding method [53] has proved to be very effective in numerical calculations and is implemented in a Monte Carlo generator [56]. Also, it has been generalized to one-loop level, with a proof of concept given in [57]. The great advantage of this method is that it can be used to extract the high energy off-shell amplitudes from existing on-shell one loop results. We will review the method in detail in Section 2.
In order to apply the embedding method at one-loop level, and in particular to validate the general concept of [57], it is reasonable to start with the simplest one-loop helicity amplitudes. In the on-shell case, these are the amplitudes with all helicities being the same, say 'plus' (we use the convention that all momenta are outgoing). Such amplitudes vanish at tree level, but are non-zero at loop level. Thus, in the present work, we shall calculate one-loop amplitudes with all-plus helicity gluons and one off-shell gluon, consistent with gauge invariance and the high energy limit of QCD. Our result will be presented for an arbitrary number of gluons, say n. In particular, we find that for n = 3 our general result coincides with the existing result obtained from Lipatov's effective action [39].
As a basis for our calculation we shall use existing one-loop results for (− + · · · +) helicity on-shell amplitudes, where the first pair of particles is either a gluon pair or a quark-antiquark pair. The particles with helicities + and − will provide an auxiliary quark or gluon line, with the corresponding external spinors parametrized in a way that, upon taking a proper limit, will guarantee both the high energy kinematics (1) and the eikonal coupling for the internal off-shell gluon attached to it.
Figure 1: In high energy factorization for forward jets (hybrid factorization [58,40]) the multigluon amplitude has one incoming momentum off mass shell, with the off-shell propagator projected onto the light-like momentum p^µ (typically the momentum of the hadron to which the gluon couples). The momentum of the off-shell leg has only one longitudinal component in the high energy kinematics. Such an amplitude is in general not gauge invariant and additional terms are required to define it properly.
We shall focus on the so-called color ordered amplitudes that correspond to planar diagrams and use the spinor helicity method (see [59] for a review). At tree level, the color decomposition of a full gluon amplitude into color ordered amplitudes is M^{a_1,...,a_n}_{λ_1,...,λ_n}(k_1, . . . , k_n) = Σ_{perm.(2···n)} Tr(t^{a_1} t^{a_2} · · · t^{a_n}) A_{λ_1,...,λ_n}(k_1, . . . , k_n) (2), where t^a are color generators, k_i is the momentum of the i-th gluon with helicity projection λ_i, and the sum goes over all non-cyclic permutations of the arguments of the trace and of the color ordered amplitudes A. At one-loop level, additional double trace terms are present. They can, however, be obtained as linear combinations of the leading trace contributions. It is known that on-shell (± + · · · +) one-loop amplitudes have a rather simple structure, given by a rational function of spinor products. Consider for instance the all-plus on-shell leading trace color ordered amplitude. It has a remarkably simple closed form for an arbitrary number of gluons (conjectured by Z. Bern, G. Chalmers, L. J. Dixon and D. A. Kosower in [60,61] and demonstrated by G. Mahlon in [62]). Here the spinor products are defined as ⟨ij⟩ = ū_−(k_i) u_+(k_j) and [ij] = ū_+(k_i) u_−(k_j), where u_±(k) are the spinors of helicity ± for an on-shell momentum k. The above result is most easily understood within the unitarity methods (see e.g. [63]), or, more generally, the on-shell methods (see [64] for a comprehensive review). The off-shell gauge invariant amplitudes we calculate in the present work inherit this rational structure. Our paper has the following structure. In the following section, we will describe the embedding method in more detail. Next, in Section 3 we will present the main results for the amplitudes. In Section 4 we recalculate the amplitudes using the embedding method with an auxiliary gluon line as a verification of our results. In Section 5 we will investigate the on-shell limit of the obtained off-shell amplitudes. Finally, in Section 6 we shall summarize our work and discuss further perspectives.
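For reference, the well-known closed form referred to above can be written as follows; normalization conventions for the overall prefactor vary across the literature, so only the structure should be read off here:

A^{(1)}_{n}(1^{+},2^{+},\dots,n^{+})
  \;\propto\;
  \frac{\displaystyle\sum_{1\le i_{1}<i_{2}<i_{3}<i_{4}\le n}
        \langle i_{1} i_{2}\rangle\,[i_{2} i_{3}]\,
        \langle i_{3} i_{4}\rangle\,[i_{4} i_{1}]}
       {\langle 1\,2\rangle\,\langle 2\,3\rangle \cdots \langle n\,1\rangle}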
The method
The method to obtain off-shell amplitudes we are about to use has been covered in [53]. Here we shall apply it to obtain one loop scattering amplitudes for an arbitrary number of positive helicity on-shell gluons and one off-shell gluon with the high energy kinematics (1) (also called the quasi-multi-Regge kinematics). Let us briefly recall how the method works. The basic idea is to calculate the amplitude with the off-shell gluon using an on-shell amplitude with an auxiliary quark-antiquark pair, which follows specific kinematics. Ultimately, the auxiliary quark and antiquark spinors are decoupled, ensuring gauge invariance of the off-shell amplitude. Schematically, the gauge invariant off-shell amplitude, denoted A*, is obtained by multiplying the on-shell amplitude with the auxiliary pair by the factor x|k_T|/g_s and taking the limit Λ → ∞ (Eq. (5); see also Fig. 2), where X stands for the other on-shell particles involved in the hard scattering process and Λ is a real parameter parametrizing the momenta of the auxiliary quarks (see below). The momenta p_1^µ and p_2^µ of the auxiliary quarks are parametrized in terms of Λ, p^µ, k_T^µ and an arbitrary light-like momentum q^µ such that q · k_T = 0 and q · p > 0. Note that p_1^µ and p_2^µ are light-like and satisfy p_1^µ + p_2^µ = k^µ, where the latter is the momentum of the off-shell gluon as defined in Eq. (1). In the limit Λ → ∞ the coupling of the gluons to the quark line becomes eikonal, consistent with the high energy limit. The factor 1/g_s in Eq. (5) corrects the power of the coupling, and the factor x|k_T| provides the correct matching to k_T-dependent PDFs in a cross section. In particular, the factor |k_T| makes sure the amplitude is finite for |k_T| → 0.
Figure 2: Gauge invariant off-shell amplitudes can be obtained by considering a process with an auxiliary quark-antiquark pair, with momenta parametrized in terms of a parameter Λ in such a way that, upon taking the limit Λ → ∞, the coupling to the quark line becomes eikonal and the momentum of the off-shell gluon has the high energy form (1).
In practice, instead of using the above definitions of p_1^µ and p_2^µ, we will use their expansion in Λ. In order to use the helicity method, we need to express k_T^µ in terms of spinors: it can be decomposed into the spinors of p^µ and q^µ with complex coefficients κ and κ*. Note that k_T^µ is a four-vector with a negative square, and k_T² = −κκ*. The spinors of p_1^µ and p_2^µ can likewise be decomposed into those of p^µ and q^µ; realize that (Λ − x)/Λ β = 1 − β. We see that the spinor products are independent of Λ, while the spinors of the auxiliary quarks have a definite large-Λ behavior. In what follows, we shall call the above kinematics together with the limit Λ → ∞ the "Λ prescription". Applying it to an amplitude with auxiliary partons gives the gauge invariant off-shell amplitude.
Alternatively, the "embedding" method described above can be used with an auxiliary gluon line instead of the quark line. Indeed, the color decomposition of the (n − 2)-gluon amplitude with a quark-antiquark pair involves open strings of color generators (t^{a_1} · · · t^{a_{n−2}})_{ij} instead of traces, and it can be projected onto an (n − 1)-gluon amplitude by a contraction with (t^{a*})_{ji}, where a* represents the color index of the off-shell gluon. Now, for an auxiliary gluon pair instead of quarks, one simply needs to select only those permutations in Eq. (2) that retain the order of gluons 1 and 2 and substitute t^{a_1} t^{a_2} → t^{a*}. At one loop, the color decompositions get more complicated and are given by equation (1) in [65] and equations (2.4-5) in [66], respectively. One can, however, easily see that the same procedure goes through to extract a single gluon color from a pair of colors. In [57] it has been shown that at tree level the partial amplitudes obtained using different pairs of auxiliary partons are identical. We will see here that the same holds at one loop for the all-plus amplitudes.
All-plus off-shell gauge invariant amplitudes at NLO
In this section we present our results for one loop amplitudes for one off-shell gluon and n − 1 on-shell positive helicity gluons. We begin with several low multiplicity examples, starting with the simplest cases: n = 3 (the vertex), n = 4 and n = 5. Then, we will turn to a general result for arbitrary n. For each case, we first present the known amplitude with auxiliary quarks and the amplitude we obtain by applying the Λ prescription on it.
3-point vertex
We first consider the 3-point vertex with one off-shell gluon and two positive helicity on-shell gluons at one loop. Such a vertex has been calculated for an arbitrary helicity projection in [39] from Lipatov's effective action.
In order to calculate it from the Λ prescription, we need the 4-point amplitude for a quark, an antiquark and two gluons, which has the form given in [66], where n_f accounts for the number of Weyl fermions circulating in the loop and n_s for the number of complex scalars. Applying the Λ prescription then gives the off-shell vertex. We checked that for n_s = 0 the above result agrees with the one of [67,39], up to an overall constant and a factor xE/|k_T|, where E is the energy component of p^µ.
4-point amplitude
The 5-leg amplitude with auxiliary quarks is given in [66]. Applying the Λ prescription, we find that the first term is of the order Λ^{−1} and thus vanishes; further calculation leads to the 4-point off-shell amplitude.
For the 5-point case, applying the Λ prescription to the corresponding 6-leg amplitude with auxiliary quarks, we find that the term with the factor 1 + 1/N_c² vanishes, leading to the 5-point off-shell amplitude.
n-point amplitude
Finally, in the following section we shall derive the general expression for the one-loop amplitude with one off-shell gluon and n − 1 on-shell gluons with all helicities positive. To this end, we need the one loop amplitude for a quark-antiquark pair and n − 1 positive helicity gluons, which has been derived in [68] and is written in terms of the quantities defined there. After applying the Λ prescription we find that the term with the factor 1 + 1/N_c² is of the order Λ^{−1}, whereas the other term is of order 1 and is the one contributing to the off-shell amplitude. Eventually, we obtain a closed expression for the off-shell amplitude A*^{(1)}_n(g*, 3^+, · · · , (n + 1)^+), written in terms of a sum U*_1 over j = 3, . . . , n of spinor products ⟨p j⟩, ⟨p (j+1)⟩, ⟨j (j+1)⟩ and the slashed composite momenta K̸_{j,j+1} and K̸_{(j+1)···(n+1)} sandwiched between spinors of p. It can be readily checked that this expression recovers the amplitudes calculated previously for n = 3, 4, 5.
Verification with auxiliary gluons
In the following section we shall verify the off-shell gauge invariant amplitudes we obtained in the previous section by applying the Λ prescription to the corresponding amplitudes with auxiliary gluons instead of auxiliary quarks.
4-point amplitude
The amplitude with the auxiliary gluons is taken from [65]. Applying the Λ prescription, this expression leads to an off-shell amplitude which turns out to be equal to Eq. (22).
5-point amplitude
The six-point amplitude with auxiliary gluons is given in [68]. Applying the Λ prescription, the resulting off-shell amplitude turns out to be equal to the one obtained with the auxiliary quark line, Eq. (23). The comparison is detailed in Appendix A.
n-point amplitude
For the general case of the n-point amplitude, the on-shell gluonic amplitude A^{(1)} is taken from [68], written in terms of the quantities T_1 and T_2 defined there. Applying the Λ prescription to T_2 gives the same result as for S_2 in (27). It turns out that T_1 is equal to S_1 within the Λ prescription once one realizes that the first term in the sum over j in T_1 is of the order Λ^{−1}. In the end, applying the Λ prescription to q̄⁻q⁺g⁺ · · · g⁺ or g⁻g⁺g⁺ · · · g⁺ gives the same expression, given in Eq. (29).
On-shell limit
Now that we have obtained an expression for A*^{(1)}_n(g*, 3^+, · · · , (n + 1)^+), we should verify that in the on-shell limit, i.e. when |k_T| → 0, we obtain an on-shell amplitude with a gluon of momentum xp^µ. We expect the limit to consist of the sum of the amplitudes in which the now on-shell gluon has either negative or positive helicity. For tree-level amplitudes, this can be understood as follows. Firstly, in the on-shell limit the dominant contributions to the amplitude have a propagator with denominator k_T² = −κκ*, and have exactly the form of the first term in Fig. 1. More precisely, they can be written using the planar Feynman rules as in equation (10) of [69], where J^µ represents the off-shell current, and where we include the factor x|k_T| from the Λ prescription. Using current conservation k · J = 0, we can see that projecting onto p^µ is equivalent to projecting onto −k_T^µ/x. Secondly, using Eq. (9) to Eq. (11), the projection can be rewritten in terms of the polarization vectors ε_±, and we find that in the limit |k_T| → 0 the off-shell amplitude becomes a coherent combination of the on-shell amplitudes A^{(0)}_n(g^± X) = ε_± · J, weighted by the phase |k_T|/κ* = e^{iφ}, for some angle φ, and its complex conjugate |k_T|/κ. In [69] it is explained how such a coherent sum of amplitudes becomes an incoherent sum of squared amplitudes in a cross section.
When taking the on-shell limits in expressions consisting of spinor products and invariants involving the momentum p µ , the final step is to interpret this momentum as the momentum of the now on-shell gluon, divided by x. Since the tree amplitudes are homogeneous in p µ of degree 1, this results in the overall factor 1/x equivalent to the one coming from changing projector p µ → −k µ T /x above. The off-shell one-loop all-plus amplitudes can easily be checked to be homogeneous in p µ of degree 1 too, and the same factor 1/x will show up to eat the factor x from the Λ-prescription.
We now verify that the same limit appears for the one-loop n-point all-plus amplitudes we obtained in Section 3.4. One can notice that, in the |k_T| → 0 limit, the term of A*^{(1)}_n(g*, 3^+, · · · , (n + 1)^+) involving U*_1 already provides the contribution from the amplitude with a negative helicity gluon (in place of the off-shell one). We now need to show that the second term is actually the contribution from the amplitude with a positive helicity gluon.
To this end, we have to work on U*_3. One can show an identity for U*_3 which, injected into Eq. (43), yields the k_T → 0 limit of A*^{(1)}_n(g*, 3^+, · · · , (n + 1)^+); more details on this calculation are given in Appendix B. This is exactly what we expect from the on-shell limit of an off-shell amplitude: a contribution from an amplitude where the off-shell gluon is replaced by a positive helicity gluon and another one where it is replaced by a negative helicity gluon.
Summary
In this paper we have calculated expressions for amplitudes in high energy factorization with one off-shell gluon and any number of plus-helicity gluons at one loop level. We also obtained expressions for the specific cases of the 3-, 4- and 5-point amplitudes. To obtain these results we used the embedding method developed in [53,57]. The method relies on identifying a pair of on-shell partons as auxiliary lines which can be decoupled in the high energy limit, leaving a gauge invariant off-shell amplitude with the proper high energy kinematics. We find agreement with the existing calculation for the 3-point vertex with a Reggeized gluon in [39]. Furthermore, we explicitly demonstrated that we obtain the correct on-shell limit for all calculated amplitudes. Thus, we conclude that the embedding method works at the one-loop level, at least for amplitudes with all helicities equal.
Our future plans involve the calculation of other QCD amplitudes and, in particular, addressing also the real corrections. The ultimate goal is to automate the NLO calculations in k_T-factorization as well as in the small-x improved TMD factorization (ITMD) [70,71].
A 5-point amplitude -detailed calculation
In order to compare the off-shell gauge invariant 5-point amplitude obtained from the auxiliary quark line q̄⁻q⁺g⁺g⁺g⁺g⁺ to the one obtained from the auxiliary gluon line g⁻g⁺g⁺g⁺g⁺g⁺, we rewrite both expressions. We first rewrite the first term of the amplitude with auxiliary quarks (before applying the Λ prescription, see Eq. (23)). We then rewrite the expression for the amplitude (35): in its second term we use the identity (51); in its first term we use a spinor identity, together with momentum conservation for the factorized term in the second line; and for the last term, before applying the Λ prescription, we again use momentum conservation. In the end, if we put back the overall factor involving [p6]², κ* and s_{k6} (not written out in the intermediate steps for simplicity), we recognize the second line of Eq. (55). Thus, both approaches give the same result.
B On-shell limit calculation
In this appendix we detail the calculation that leads to Eq. (45) which implies the correct on-shell limit for the n-point off-shell amplitude we presented in Eq. (38).
In order to rewrite the expression for U*_3 so that the on-shell limit can be taken, let us come back to the expression for T_2, see Eq. (38) before applying the Λ prescription. We focus on the first term in the sum over j (i.e. for j = 3), since it is the term that leads to U*_3 when applying the Λ prescription. Let us call this term T_3; its explicit form involves κ and the spinor product ⟨p (n + 1)⟩.
This is the only term in the sum over l that has κ in the denominator, and hence the only non-vanishing term when k_T tends to 0. Putting all this together demonstrates the first relation in Eq. (45). We now have to prove the second one, i.e. we need to show that the obtained expression corresponds to the numerator of the amplitude for n − 1 gluons with positive helicity (up to an overall factor). We can now work on U*_3. Let us first express F in terms of a sum. For a direct comparison, we also use the expression of U*_3 with the following relabeling of the momenta: p → 1 and i → i − 1 for all i ∈ {3, . . . , n + 1} (momentum conservation then takes the same form).
On the other hand, the resulting expression can be rewritten through two successive equalities: the first is obtained by momentum conservation on the index l (in the second term only), and the second also by momentum conservation, this time on the index i (for terms 2 and 3). This finally proves the second relation in Eq. (45), which then leads to the expected on-shell limit for the amplitude in Eq. (29).
"Physics"
] |
The search value of a set
We study search games in which the hider may hide in a finite number of locations. We assume that the cost of searching these locations does not depend on the order in which the locations are searched. From these assumptions we derive that the cost function is submodular, thus placing search games with an immobile hider in the context of coalitional games.
Introduction
The search value of a network has previously been defined by means of a search game that takes place on the network. In the present paper we define the search value of a set V(X) by means of a search game on a set X. The payoff of the search game is given by a submodular cost function f : 2^X → R_{≥0}. We prove that f(X)/2 < V(X) ≤ f(X) and that an optimal search corresponds to a directed random walk on a Hasse diagram. We conjecture that an optimal hider strategy is contained in the core of the game, which is defined as the polyhedron of probability vectors p_i satisfying the constraints Σ_{i∈A} p_i ≤ f(A) for all A ⊂ X.
We consider a zero-sum two-player game between Searcher and Hider. Hider chooses a place to hide from a finite number of locations. Searcher then goes through these locations one by one, and the game ends as soon as Searcher selects Hider's location. This game is known as a search game on discrete locations with an immobile hider. There exists an extensive literature on the topic, see Alpern and Gal (2003, Ch. 3) and Alpern et al. (2013, Ch. 1), but this concerns games on networks or metric spaces. In this paper we want to study search games that are not necessarily placed on a network, and therefore we follow an axiomatic approach, imposing general conditions only. Our results are motivated by a recent study of Alpern and Lidbetter on expanding search Alpern and Lidbetter (2013).
We number the hiding locations 1 to n. So Hider chooses an element from X = {1, . . . , n} and Searcher chooses a permutation π on n elements. The cost function f : 2 X → R represents the cost of the search operation. We assume that the cost depends only on the locations that have been searched, and not on the order in which they have been searched. If Hider's location is π( j), then the Hider receives the payoff f ({π(1), . . . , π( j)}). In this game, Hider wants to maximize and Searcher wants to minimize the total cost of the search operation.
To simplify our notation, we will often omit brackets for singletons and write f (x) instead of the more accurate f ({x}).
Conditions on the payoff function
We assume that the search is carried out by a team of agents who can coordinate their search operation, which is translated into the following informal conditions on f:
(i) Searching nothing costs nothing;
(ii) Searching more costs more;
(iii) Searching costs less if more has been searched.
Since the game ends as soon as Searcher finds Hider, the value f(∅) never occurs as a payoff in the game, and condition (i) could be omitted. However, if f(∅) ≠ 0 then we can simply redefine the cost function by subtracting f(∅) from every other value. Therefore, we may impose condition (i) without loss of generality. Condition (ii) says that searching is never to Searcher's benefit. This is a natural condition, but one can imagine situations in which this is not true, when there are benefits other than finding the Hider. It may be worthwhile to study games in which (ii) is omitted. The third condition says that if A′ ⊂ A, then the marginal cost of searching B after searching A is at most equal to the marginal cost of searching B after searching A′.
The formal conditions on the cost function are: (i) f(∅) = 0; (ii) f(A) ≤ f(B) whenever A ⊆ B; and (iii) f(A ∪ B) + f(A ∩ B) ≤ f(A) + f(B) for all A, B ⊆ X. Our translation into the mathematical conditions (i) and (ii) is obvious, but (iii) demands an explanation. Let f_A be the marginal cost function defined as f_A(B) = f(A ∪ B) − f(A). The informal condition (iii) says that f_A(B) ≥ f_{A′}(B) whenever A ⊆ A′. However, this is equivalent to the condition that we impose here, which says that f is submodular. Such functions arise naturally from many different optimization problems Fujishige (2005) and in cooperative games Shapley (1971). There exists an extensive literature on such functions. That is why we state mathematical condition (iii) in this form. Of course, we still need to prove that the submodularity of f is equivalent to the condition that the marginal cost function is non-increasing in this sense.
Definition 1 Let f : 2^X → R_{≥0} be a cost function that satisfies our axioms. The search value V is equal to the value of the search game with cost function f.
The game's strategies
We describe the strategies of the players. Hider selects an element x ∈ X . Searcher selects an increasing chain ∅ = A 0 ⊂ A 1 ⊂ · · · ⊂ A k = X . If A i is the first element of the chain that contains x, then f (A i ) is the payoff. It is Hider's reward and Searcher's cost. We will allow the cardinality of elements of the chain to increase by more than one: if the search is carried out by a team, different locations may be searched at the same time.
To describe such chains, we use lattices. A family of sets L is a lattice if it is closed under union and intersection. L can be illustrated by a Hasse diagram, which is a directed graph with vertex set L and edges between A, B ∈ L if A ⊂ B and there is no C ∈ L such that A ⊂ C ⊂ B. In other words, B covers A. The Hasse diagram is directed upward from the root ∅ to the top X. A chain ∅ = A_0 ⊂ A_1 ⊂ · · · ⊂ A_n = X corresponds to a path in the Hasse diagram that starts at the root and ends at the top. So a pure search strategy corresponds to a walk on the Hasse diagram from the bottom ∅ to the top X. If Hider chooses x ∈ X, then Searcher's payoff is f(A_i) for the minimal i such that x ∈ A_i. We add weights to the edges to compute this payoff. If B covers A then the edge AB has weight f_A(B), i.e., the marginal search cost of B if A has been searched. The payoff is equal to the sum of all the weights in the path up until the first vertex that contains Hider's location.
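A small sketch, in our own notation, of how the payoff of a pure search strategy (an increasing chain of subsets) against a fixed hiding location could be computed for a given cost function:

def payoff(chain, cost, hider):
    """Payoff of a pure search strategy (an increasing chain ending in X)
    against a fixed hiding location: f(A_i) for the first A_i containing it."""
    for subset in chain:
        if hider in subset:
            return cost(subset)
    raise ValueError("chain does not cover the hiding location")

# Example: 'searching three for the price of two' with x = y = z = 1,
# so f(A) is the sum of the two largest individual costs in A.
def f(subset):
    return min(len(subset), 2)

chain = [frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
print(payoff(chain, f, hider=3))   # the hider is found last and pays f(X) = 2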
Each lattice of sets L can be represented by 2^Y, where Y is the set of atoms of L. So we may restrict our attention to chains that increase one by one. However, allowing chains to increase by more than one element makes it easier to define the search game, as we will illustrate in the third example in the next section. If L is equal to 2^X, then a pure search strategy corresponds to a permutation π, and π(1), π(2), . . . , π(n) give the order in which the locations are searched. We will consider search games on the lattice 2^X only, unless explicitly stated otherwise. If we want to emphasize that the chain increases one by one, then we say that the search is sequential.
Some sample games
Sequentially searching three locations for the price of two
The cost function is defined on subsets of {1, 2, 3}. We write f(1) = x, f(2) = y, f(3) = z and we assume that x ≤ y ≤ z. We define the function to be additive on doubletons, but f({1, 2, 3}) is the sum of the two largest costs; hence the name 'searching three for the price of two'. We leave it to the reader to verify that f satisfies conditions (i), (ii), (iii). Searcher has six pure strategies. If Searcher chooses a permutation with π(3) = 1, then she pays the maximal cost y + z unless she finds the Hider immediately. So switching π(2) and π(3) does not harm Searcher if π(3) = 1. Therefore, the pure strategies in which π(3) = 1 are dominated. We may assume that location 1 is never the last to be searched, reducing the strategy space to four permutations. The game can thus be represented by the Hasse diagram in Fig. 1.
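As an illustration of how such a small game could be solved numerically, the following sketch (with illustrative cost values of our own choosing, and assuming SciPy is available) builds the payoff matrix over the undominated search orders and computes the value and an optimal Hider strategy by linear programming:

import numpy as np
from itertools import permutations
from scipy.optimize import linprog

costs = {1: 1.0, 2: 1.5, 3: 2.0}                 # illustrative values with x <= y <= z

def f(subset):
    """'Three for the price of two': additive on small sets, but the full set
    only costs the sum of the two largest individual costs."""
    vals = sorted((costs[i] for i in subset), reverse=True)
    return sum(vals[:2]) if len(subset) == 3 else sum(vals)

searcher = [p for p in permutations([1, 2, 3]) if p[2] != 1]   # undominated orders
hider = [1, 2, 3]
M = np.array([[f(set(order[:order.index(h) + 1])) for order in searcher]
              for h in hider])                    # payoff matrix (Hider maximizes)

# Hider's LP: maximize v subject to M^T p >= v, sum(p) = 1, p >= 0.
n = len(hider)
res = linprog(c=np.r_[np.zeros(n), -1.0],
              A_ub=np.c_[-M.T, np.ones(len(searcher))],
              b_ub=np.zeros(len(searcher)),
              A_eq=[[1.0] * n + [0.0]], b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])
print("value:", -res.fun, "optimal hider strategy:", res.x[:n])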
In the degenerate case that x = y = z, the unique optimal mixed strategy for Hider is to hide equiprobably in one of the three locations. It is optimal for Searcher to select one of the three vertices 1, 2, 3 equiprobably, but the choice of the second vertex is unimportant, since the cost of searching two is the same as the cost of searching three in this degenerate case. It follows that the optimal search strategy is not unique.
The more general game of searching n for the price of k seems to be very difficult to solve. A further analysis of a related search game is contained in Fokkink et al. (2015).
Expanding search on a tree
The expanding search game on edge-weighted networks was introduced by Alpern and Lidbetter (2013). The game on a weighted tree was completely solved in that paper. In this game, the hiding locations X are the leaves of the tree. For A ⊂ X the search cost f(A) is defined as the total weight of the edges in the minimal subtree containing A and the root of the tree. We denote this minimal subtree by T(A). The tree in Fig. 2 illustrates this game. For instance, searching the subset {1, 3} costs a + b + d + e, and searching {1, 2} costs a + b + c.
To show that this game fits into our framework, we need to prove that f is submodular, i.e., that the weight function defined by the minimal subtrees T(A) is submodular.
Restricted expanding search
We now give an example of a game with a lattice other than 2^X. If we modify the previous game by forcing Searcher to inspect locations 2 and 3 simultaneously, then the lattice L is generated by the atoms {1}, {2, 3}, {4}. If we denote the two locations 2, 3 by a single element x, then we get a sequential search game with locations {1, x, 4}. This game can still be described by expanding search on a tree. For instance, if a ≤ d then the cost function corresponds to the expanding search of the tree in Fig. 3.
Multiple objects
We conclude with another example of a search game in which the lattice is not the full lattice. Suppose Hider can select more than one location, so he can hide multiple objects as in Lidbetter (2013). For instance, suppose that there are two hidden objects. Then we have a search game on the product lattice {A × A : A ⊂ X } ⊂ 2 X ×X . The game ends as soon as A contains both hidden objects.
Directed random walk and bounds on the value
In this section we derive some properties of the optimal strategies, from which we obtain our bounds on the search value V. A mixed Hider strategy is a probability distribution P on the set of locations X.
Lemma 2 Let P be an optimal Hider strategy and suppose that the cost function f is strictly increasing. Then all strategies are active, i.e., P(x) > 0 for all x ∈ X.
Note that by submodularity f is strictly increasing if and only if f (X ) > f (A) for all proper subsets A ⊂ X .
Proof We argue by contradiction and suppose that P(N) = 0 for some non-empty N ⊂ X. We need to show that P is not optimal. Let ∅ = A_0 ⊂ A_1 ⊂ · · · ⊂ A_n = X be a pure search strategy that is a best response against P. The cost of this strategy is equal to Σ_{x∈X} P(x) f(A_x), where A_x denotes the first element of the chain which contains x. We modify the chain so that N is searched last. Define A′_i = A_i \ N and adjust the indices such that ∅ = A′_0 ⊂ A′_1 ⊂ · · · ⊂ A′_k = X \ N increases one by one. Extend the chain by adding the elements of N in an arbitrary order. Then A′_x ⊆ A_x for all x ∉ N and we find the inequality Σ_{x∈X} P(x) f(A′_x) ≤ Σ_{x∈X} P(x) f(A_x). Since the original search chain is a best response, the modified search chain cannot improve on it, and this must in fact be an equality. We conclude that f(A′_x) = f(A_x) for all x ∉ N. But then A′_x = A_x by our assumption that f is increasing. We conclude that in any best response, N is searched last.
An optimal Searcher strategy must mix between pure strategies that are best responses to P. So if P is optimal, then N is searched last. Suppose w ∉ N with P(w) > 0 and z ∈ N, and select a Hider strategy P′ that is equal to P for all elements, except that P′(z) = P(w) and P′(w) = 0. A search that starts in X \ N will have the strict inclusion A_w ⊂ A_z. Against P′ the search cost changes by P′(z) f(A_z) − P(w) f(A_w) > 0, which contradicts our assumption that P is optimal.
The condition that f (X ) > f (A) is sufficient for all Hider strategies to be active, but it is not necessary. In the 3-for-the-price-of-2 game, all Hider strategies are active if x = y = z, but f ({1, 2, 3}) = f ({1, 2}). For a cost function f that is not strictly increasing, there does not seem to be a simple condition which guarantees that all strategies are active.
A pure search strategy corresponds to a path in the Hasse diagram, or equivalently, a permutation of the locations. The strategy space thus has cardinality n!. We prove below that we may limit mixed Searcher strategies to random walks on the Hasse diagram. Since the Hasse diagram is a graph with n·2^{n−1} edges, this presents a modest reduction in the number of Searcher strategies.
A mixed search strategy is a probability distribution on paths across the Hasse diagram. For each edge AB in the Hasse diagram, let p_{AB} be the probability that the Searcher's path contains AB in this mixed strategy. For a fixed A other than the root and the top, it is clear that Σ_B p_{BA} = Σ_C p_{AC}, where B runs over the sets covered by A and C over the sets that cover A, since both sums represent the probability that Searcher visits A. In other words, if we take the p_{AB} to represent the flow through the edge AB, then the equation says that in-flow equals out-flow. Thus, a mixed strategy induces a flow on the Hasse diagram. Since the out-flow from the root and the in-flow to the top are equal to 1, it is a flow of unit size.
A flow of unit size corresponds to a directed random walk on the Hasse diagram: if Searcher reaches A, then the probability that she moves to B is proportional to p AB . It is perhaps more convenient to think of a flow as a random walk, since a random walk corresponds to a probability distribution on the paths of the Hasse diagram. So, in turn, a random walk corresponds to a mixed search strategy. Thus we find that a mixed search strategy induces a random walk on the Hasse diagram, which is a special mixed search strategy. We now show that the random walk produces the same payoffs as the original mixed strategy.
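A small sketch, in our own notation, of how a unit flow on the Hasse diagram could be turned into the corresponding directed random walk and sampled:

import random

def random_walk_from_flow(flow, root, top):
    """Sample one root-to-top path of the directed random walk induced by a
    unit flow.  `flow` maps edges (A, B), with B covering A, to probabilities
    p_AB; at each vertex the next step is drawn proportionally to the
    outgoing flow, as in the construction described above."""
    path, current = [root], root
    while current != top:
        out_edges = [(a, b) for (a, b) in flow if a == current and flow[(a, b)] > 0]
        weights = [flow[e] for e in out_edges]
        _, current = random.choices(out_edges, weights=weights, k=1)[0]
        path.append(current)
    return path

# Toy example on X = {1, 2}: half the time search 1 first, half the time 2 first.
empty, one, two, full = frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})
flow = {(empty, one): 0.5, (empty, two): 0.5, (one, full): 0.5, (two, full): 0.5}
print(random_walk_from_flow(flow, empty, full))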
Theorem 1 For every mixed search strategy there exists a directed random walk on the Hasse diagram that produces the same payoffs.
Proof It suffices to show that the random walk produces the same payoff against each pure Hider strategy. Suppose the Hider location is x ∈ X. Let S be the family of all subsets that do not contain x and let T be the family of subsets that contain x. Then (S, T) forms a cut in the Hasse diagram. The edges AB that connect S to T all add x to A. Let E_x be the set of all these edges. We claim that the payoff of the mixed search strategy against x is equal to Σ_{AB ∈ E_x} p_{AB} f(B). To see why this equation holds, observe that a pure search strategy is a single path. It gives payoff f(B), where B is the first subset of vertices in the path which contains x. In other words, it gives payoff f(B) for the unique edge AB in E_x that is contained in the path. A mixed search strategy is a weighted sum of paths. Each path crosses E_x at a unique element. The probability that the path crosses AB is equal to p_{AB}, and thus we obtain our equation. If we replace the mixed strategy by a random walk, then we do not alter the probabilities p_{AB}, so the random walk produces the same payoff.
So we have reduced the optimization of the search strategy to a network flow problem. There exists an extensive literature on this topic, but there does not seem to be a ready made solution for our problem. We turn to the analysis of this problem in the next section. We conclude this section with some examples of mixed strategies.
The double tour
A Searcher pure strategy is a permutation π of the hiding locations. Let π′ be the permutation in which Searcher goes through X in the reverse direction, i.e., π′(j) = π(n + 1 − j). If Searcher's strategy is to select π or π′ equiprobably, then we say that she performs a double tour. It is analogous to the double tour of a graph, which is an optimal strategy in a wide variety of network search games, see Gal (1979). If the chain ∅ = A_0 ⊂ A_1 ⊂ · · · ⊂ A_n = X corresponds to π, then ∅ = A_n^c ⊂ A_{n−1}^c ⊂ · · · ⊂ A_0^c = X corresponds to π′. The payoff against a pure hiding strategy x ∈ X is ½(f(A_x) + f(A_x^c ∪ {x})), where A_x is the first element of the chain of π that contains x. Since A_x ∪ (A_x^c ∪ {x}) = X and A_x ∩ (A_x^c ∪ {x}) = {x}, submodularity shows that the payoff of a double tour is bounded from below by ½(f(X) + min_x f(x)). This is close to the lower bound on the game in our following theorem:
Theorem 2 The search value satisfies f(X)/2 < V(X) ≤ f(X).
Proof The upper bound is obvious, since f(X) is the maximum payoff. To prove the lower bound, we let X = {1, . . . , n} and let A_k = {1, . . . , k}. Define w_j = f(A_j) − f(A_{j−1}) and consider the submodular function w(A) = Σ_{j∈A} w_j, which is in fact modular (additive). It is well known that w(A) ≤ f(A) for all A ⊂ X, see Schrijver (2003, page 771). So the value W of the game with cost function w satisfies W ≤ V. It suffices to show that W ≥ f(X)/2. Consider the mixed strategy in which Hider is at j with probability p(j) = w_j / Σ_j w_j. Now if Searcher goes through the locations 1 to n in some arbitrary order, then the probability of finding Hider at k is equal to p(k), which is proportional to w_k, and the cost is equal to the sum of all w_j over the locations j that have been searched up to and including k. So, regardless of the order of the search, the total cost is Σ_{k≥j} w_k w_j / Σ_j w_j. This can be rewritten as ((Σ_j w_j)² + Σ_j w_j²) / (2 Σ_j w_j) = ½ (f(X) + Σ_j w_j² / f(X)) > f(X)/2, since Σ_j w_j = f(X).
A remark on the lower bound
We may try to improve the lower bound by optimizing over the chain ∅ = A_0 ⊂ A_1 ⊂ · · · ⊂ A_n = X used in the proof. If δ denotes the minimal increment f(A_j) − f(A_{j−1}) along the chain, the lower bound on V can thus be improved to (f(X) + δ)/2. The minimal increment of the chain δ does not exceed min{f(x) : x ∈ X}, so this is a small improvement only. However, it is the best possible lower bound: the value of searching 3 for the price of 2 is 3x/2 if all costs are equal so that x = y = z. In this case f(X) = 2x and δ = x.
The core of the game
The reader familiar with coalitional games will notice that an optimal Hider strategy P has properties that are similar to the Shapley value. P satisfies four properties that correspond to the four axioms defining the Shapley value Shapley (1953): our first three properties are equivalent to Shapley's axioms regarding efficiency, symmetry, and the null player, respectively, while the fourth property is weaker than the linearity axiom and concerns replacing the cost function f by f − c for a constant c. Note that in property (4) we violate our condition that f(∅) = 0, but the empty set is irrelevant since the game continues until the Hider is caught. If f − c remains submodular, which may happen if c is sufficiently small, then we can replace it by the cost function g that is equal to f − c for all non-empty A ⊂ X and g(∅) = 0. This is called a Dilworth truncation of f − c, see Lovász (1983). More specifically, the strategy P remains optimal if the Dilworth truncation of f − c only attains a different value at the empty set.
In analogy to coalitional games we define the core of the game C f as the polyhedron of Hider strategies that are bounded by the cost function.
Definition 2 Without loss of generality we may suppose that f(X) = 1. The core of the game is the polyhedron of probability vectors C_f = {P : Σ_{i∈A} P(i) ≤ f(A) for all A ⊂ X}. In a coalitional game, the core may be empty, but the core of a search game is always nonempty, a fact that we have already used in the proof of Theorem 2. We conjecture that the core always contains the optimal Hider strategies.
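A brief sketch (our own helper, not from the paper) of how core membership could be checked by brute force over all subsets, which is feasible for small location sets:

from itertools import combinations

def in_core(p, cost, X, tol=1e-9):
    """Check whether the probability vector p (a dict location -> probability,
    summing to 1) satisfies sum_{i in A} p_i <= f(A) for every subset A of X."""
    for r in range(1, len(X) + 1):
        for A in combinations(X, r):
            if sum(p[i] for i in A) > cost(frozenset(A)) + tol:
                return False
    return True

# Example with the normalized 'three for the price of two' cost (x = y = z):
def f(A):
    return min(len(A), 2) / 2.0            # f(X) = 1 after normalization

print(in_core({1: 1/3, 2: 1/3, 3: 1/3}, f, X=(1, 2, 3)))   # True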
Conjecture 1 Optimal Hider strategies are contained in the core.
If the conjecture holds, then computing the optimal Hider strategy becomes easier. In particular, the theorem below shows that it simplifies the analysis of cost functions that are direct sums Lovász (1983), i.e., functions f for which there exists a non-empty proper subset A ⊂ X such that f(B) = f(B ∩ A) + f(B ∩ A^c) for all B ⊂ X; as before, we normalize f(X) = 1. Let V(A) be the value of the search game restricted to A, and similarly let V(A^c) be the value of the game on A^c. This theorem expresses V(X) in terms of V(A), V(A^c), f(A) and f(A^c). Before we turn to the proof of this theorem, let us remark why it is useful. Submodular functions can be reduced by repeated Dilworth truncation and decomposition into direct sums, until they become irreducible Cunningham (1983). Optimal strategies are invariant under truncation, while direct sums are easy to handle by this theorem (if our conjecture is true). To find an efficient algorithm to compute the search value of a set, we thus need to focus on games with an irreducible cost function, such as our sample game of searching n for the price of k.
Proof Searcher plays as follows. Either she first exhaustively searches A before searching A^c, or she does the exact opposite and exhaustively searches A^c first. Of course, Searcher adopts the optimal mixed strategies on A and A^c. If Hider hides in A, then the payoff is V(A) if Searcher selects A first, and f(A^c) + V(A) if she selects A^c first. If Hider hides in A^c the payoff is similar, switching A and A^c. We represent this as a 2 × 2 matrix game in which Hider chooses between A and A^c and Searcher chooses which of the two sets to search first. We need to show that Searcher cannot improve this strategy. Consider the Hider strategy P in which he hides in A with probability f(A) and in A^c with probability f(A^c). Of course, his strategy of hiding in these sets is optimal. Let P_A be an optimal Hider strategy for the game restricted to A. By our conjecture, the optimal Hider strategy P_A on A satisfies P_A(B) ≤ f(B)/f(A) for every B ⊂ A. This implies that the Hider strategy P is in the core of the game on X. If Searcher first performs an exhaustive search of A or of A^c against P, then the expected search cost is f(A)V(A) + f(A^c)V(A^c) + f(A)f(A^c) in either case. We need to show that Searcher can do no better.
Suppose that a best response to P is to first search A 1 ⊂ A, then B 1 ⊂ A c , and then A 2 ⊂ A etc. More specifically, let A = A 1 ∪ · · · ∪ A k and A c = B 1 ∪ · · · ∪ B k for disjoint subsets A i , B j . Searcher alternately searches A and A c , switching from A i to B i and back to A i+1 . Without loss of generality, we may assume that k is as small as possible, and that it is optimal to search A first. If Searcher decides to switch and search B 1 before A 1 , then the | 6,036.8 | 2016-06-27T00:00:00.000 | [
"Economics",
"Mathematics"
] |
Semantic Role Labeling Meets Definition Modeling: Using Natural Language to Describe Predicate-Argument Structures
Introduction
Commonly regarded as one of the key ingredients for Natural Language Understanding (Navigli, 2018), Semantic Role Labeling (Gildea and Jurafsky, 2002, SRL) aims at identifying "Who did What to Whom, Where, When, and How?" within a given sentence (Màrquez et al., 2008). More precisely, for each predicate in the sentence, the task requires: i) selecting its most appropriate sense from a predetermined linguistic inventory; ii) identifying its arguments, i.e., those parts of the sentence that are semantically related to the predicate; and, iii) assigning a semantic role to each predicate-argument pair, as shown in Figure 1. Due to the potential uses of these semantically rich structures, the research community has seen steady progress in the task, and SRL has been shown to be beneficial for an increasingly wide range of applications in Natural Language Processing (NLP), such as Question Answering (Shen and Lapata, 2007), Information Extraction (Christensen et al., 2011), Machine Translation (Marcheggiani et al., 2018), and Summarization (Mohamed and Oussalah, 2019), as well as in Computer Vision for Situation Recognition (Yatskar et al., 2016) and Video Understanding (Sadhu et al., 2021), inter alia.
Figure 1. A: SRL annotations using predicate sense and semantic role labels (top) compared with their natural language definitions (bottom). B: the semantics of sense and role labels is undefined for out-of-inventory predicates (e.g., the inventories used for CoNLL-2009 and CoNLL-2012 do not include an entry for "google"), but we can still use valid natural language definitions.
An important yet often overlooked aspect of SRL is that, since its conception, the formulation of the task has generally relied upon predetermined linguistic resources, such as FrameNet (Baker et al., 1998), PropBank (Palmer et al., 2005), VerbNet (Kipper Schuler, 2005) and, more recently, VerbAtlas (Di Fabio et al., 2019), which provide the labels to be used for tagging predicates and their arguments with senses and semantic roles, respectively. Therefore, to this day, SRL has been framed predominantly as a classification task in which systems assign discrete labels to portions of a sentence (Figure 1A, top). Although recent systems have achieved impressive results on standard benchmarks (Hajič et al., 2009; Pradhan et al., 2012) in English (Shi and Lin, 2019; Marcheggiani and Titov, 2020) as well as in multilingual SRL (He et al., 2019; Conia et al., 2021), we observe and emphasize that relying upon discrete labels raises the following critical questions:
• The assumption that both predicate senses and semantic roles can be unequivocally categorized into distinct classes has long been, and still is, at the center of numerous discussions because the boundaries between meanings are not always clear-cut (Tuggy, 1993; Hanks, 2000); unsurprisingly, disambiguation approaches that are not tied to specific inventories have been gaining momentum (Bevilacqua et al., 2020; Barba et al., 2021a,b).
• FrameNet, PropBank, and VerbNet are heterogeneous, non-overlapping resources that have led, consequently, to specialized techniques that are more effective on PropBank's rather than FrameNet's labels, or vice versa.
• Relying on any predetermined inventory hinders the ability to generalize to out-of-inventory instances. For example, some rare senses or neologisms may not be covered by the inventory of choice, which, therefore, does not define either their possible senses or their corresponding semantic roles (Figure 1B, top).
Furthermore, recent progress in NLP at large has primarily pursued state-of-the-art results without giving much importance to why a system may have a predilection for one particular option over the alternatives, thus making it difficult for a human to interpret its output. And SRL is no exception to this. In this paper, instead, we put forward a generalized formulation of Definition Modeling, the task of defining the meaning of a word or multiword expression in context, to reframe SRL as the task of describing sentence-level semantic relations between a predicate and its arguments using natural language definitions only. More specifically, our contributions can be summarized as follows:
1. We move away from discrete labels and introduce a novel formulation of SRL that reframes the problem as the task of using natural language to describe predicate-argument structures (Figure 1A, bottom).
2. We propose DSRL (Descriptive Semantic Role Labeling), a simple yet effective conditional generation model to produce such natural language descriptions, dropping discrete labels while also demonstrating how to use these descriptions to retrieve standard SRL labels and achieve competitive or even state-of-the-art results on gold benchmarks.
3. In contrast to previous work, our approach provides an interpretable output in natural language, can seamlessly produce descriptions according to different linguistic theories and annotation formalisms, and naturally admits descriptions for out-of-inventory instances (Figure 1B, bottom).
4. We provide an in-depth analysis of the strengths and pitfalls of our approach, showing where there is still room for improvement.
We hope that our semantically-driven descriptions in natural language, free of resource-specific labels that require expert knowledge of SRL, will not only enable easier integration of sentence-level semantics into downstream applications but also provide valuable insights to NLP researchers.
Related Work
Linguistic resources for SRL. As mentioned above, SRL is generally associated with a linguistic theory and a corresponding linguistic resource, which defines an inventory of predicate senses and semantic roles (Baker et al., 1998; Palmer et al., 2005; Kipper Schuler, 2005); hereafter, for simplicity, we follow PropBank and call them senses and semantic roles, respectively, independently of the resource. These inventories are a rich and diverse source of expert-curated knowledge; however, aligning sense and semantic role labels across such resources using manual or automatic techniques (Giuglea and Moschitti, 2006; Palmer, 2009; Lopez de Lacalle et al., 2014; Stowe et al., 2021; Conia et al., 2021) is far from trivial due to their heterogeneous nature, variable degree of coverage, and different granularity. Perhaps it is this complexity that has led researchers towards the development of approaches that are effective mainly in just one of the task "styles", usually PropBank-style SRL (Marcheggiani et al., 2017; Cai et al., 2018; Strubell et al., 2018; Shi and Lin, 2019; Blloshmi et al., 2021; Conia and Navigli, 2022, inter alia) or FrameNet-style SRL (Swayamdipta et al., 2017; Peng et al., 2018; Lin et al., 2021; Pancholy et al., 2021, inter alia). To sidestep this situation, recent studies have analyzed the feasibility of moving away from rigorous linguistic resources and have looked into capturing predicate-argument relations as question-answer pairs, with promising results in the production of questions through slot-filling templates and generative models (He et al., 2015; FitzGerald et al., 2018; Pyatkin et al., 2021). In this paper, instead, we reframe SRL as a generalization of Definition Modeling and directly generate human-readable descriptions of the semantic relations between a predicate and its arguments, replacing discrete labels with natural language definitions to overcome the heterogeneities of linguistic inventories.
Recent approaches in SRL. Independently of the linguistic inventory of choice, given the complexity of the task, early work often employed separate systems for each step of the SRL pipeline (Roth and Lapata, 2016; Marcheggiani et al., 2017). However, in recent years, researchers have successfully managed to develop end-to-end approaches (Cai et al., 2018; He et al., 2018), especially thanks to the increasing expressiveness of recent neural architectures. Since then, the attention of the community has mainly focused on when syntactic features are useful (Strubell et al., 2018) or can be dispensed with (Conia and Navigli, 2020). Further to this, several studies have also investigated the effectiveness of their proposed approaches on different annotation formalisms, namely, dependency- and span-based SRL (Li et al., 2019; Marcheggiani and Titov, 2020). Most recently, sequence-to-sequence models have found renewed traction by learning to directly generate predicate-argument structures as linearized sequences (Blloshmi et al., 2021; Paolini et al., 2021). Although the focus of our approach is to generate natural language descriptions, we stress that it can be flexibly employed to perform SRL in its traditional formulation, jointly tackling predicate sense disambiguation, argument identification and labeling in a syntax-agnostic fashion for both span- and dependency-based formalisms, the key difference being that our method also produces human-readable and, therefore, interpretable descriptions of the semantics of a sentence.
Definition Modeling. The task of Definition Modeling was originally concerned with producing a natural language definition for a given word and its corresponding embedding (Noraset et al., 2017). The formulation of the task was later generalized to take polysemy into account, as the same word may convey different meanings depending on the context it appears in. Although introduced only a few years ago, Definition Modeling has attracted significant interest (Ni and Wang, 2017; Ishiwatari et al., 2019) and has found success in semantic tasks (Huang et al., 2019; Bevilacqua et al., 2020) such as Word Sense Disambiguation (Bevilacqua et al., 2021, WSD) and Word-in-Context (Pilehvar and Camacho-Collados, 2019, WiC). Motivated by the success of Definition Modeling, we propose a novel generalization of its formulation, in which the objective is to use natural language not only to define a target word in context but also to describe its semantically-relevant sentential constituents.
3 Describing Predicate-Argument Structures using Natural Language
In this Section, we introduce our novel reformulation of the SRL task (Section 3.1), describe DSRL, a simple yet effective autoregressive approach for it (Section 3.2), and show how to use DSRL to perform standard SRL (Section 3.3).
Task Formulation
Taking inspiration from Definition Modeling, we propose addressing predicate sense disambiguation, argument identification, and argument classification in an end-to-end fashion as the task of describing the argument structure of a predicate p in a sentence s by generating a natural language description t_p that defines not only p but also the semantic relations that connect p to its arguments a_1, a_2, . . ., a_{|A|}, where A is the set of arguments of p. For example, if we consider the predicate p = "gave" in the sentence s = "Mary gave the book to John", then a valid natural language description of p and its argument structure could be represented as t_p = "give: transfer. [Mary]{giver} gave [the book]{thing given} [to John]{entity given to}". Indeed, such a sequence contains i) the predicate definition for predicate sense disambiguation, ii) all the arguments of p in s within square brackets for argument identification, along with iii) a definition of the semantic role of each argument within curly brackets.
Description Generation
To tackle our SRL formulation, we introduce a simple end-to-end autoregressive approach that, given an input sentence s and a predicate p in s, generates the natural language description t_p of its argument structure. In particular, we devise a sequence-to-sequence model whose input sequence s_p is obtained by enclosing the predicate within two special markers: s_p = w_1, . . . , w_{i−1}, <p>, w_i, . . . , w_{i+k−1}, </p>, w_{i+k}, . . . , w_{|s|}, where w_i is the i-th word in the original sentence s, while <p> and </p> are two special markers that indicate the beginning and the end, respectively, of the predicate p, with k > 1 if p is a multiword expression. Correspondingly, we instruct the model to generate a semantically-augmented sentence t_p in which: i) the sense definition of p is prepended to the original sentence, ii) the arguments of p are enclosed within square brackets, and iii) each argument is followed by its semantic role definition within curly brackets. More formally, t_p prepends the words d_1^p, . . . , d_{k′}^p of the definition of p to the sentence and rewrites each argument a_j as [ w_1^{a_j}, . . . , w_{m_j}^{a_j} ]{ d_1^{a_j}, . . . , d_{m′_j}^{a_j} }, where p_i is the i-th word of the predicate p, d_i^p is the i-th word of the definition of p, w_i^{a_j} is the i-th word of the j-th argument of p, and d_i^{a_j} is the i-th word of the definition of the semantic role of the j-th argument of p, while k′, m_j and m′_j are the length of the definition of p, the length of the argument a_j, and the length of the definition of the semantic role for a_j, respectively. With this encoding, we then train our sequence-to-sequence model to learn the factorized probability p(t_p | s_p) = Π_{i=1}^{|t_p|} p(t_{p,i} | t_{p,<i}, s_p) by minimizing the cross-entropy loss with respect to the reference natural language description.
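A minimal sketch of how the input and target sequences could be assembled; the helpers below are our own illustration of the encoding described above, not the authors' code:

def build_input(words, pred_span):
    """Wrap the predicate span (start, end indices, end exclusive) in <p> ... </p>."""
    start, end = pred_span
    return " ".join(words[:start] + ["<p>"] + words[start:end] + ["</p>"] + words[end:])

def build_target(words, sense_def, args):
    """Prepend the sense definition and mark each argument with its role definition.

    args: list of (start, end, role_definition), non-overlapping and sorted."""
    out, i = [sense_def + "."], 0
    for start, end, role_def in args:
        out += words[i:start]
        out.append("[" + " ".join(words[start:end]) + "]{" + role_def + "}")
        i = end
    out += words[i:]
    return " ".join(out)

words = "Mary gave the book to John".split()
print(build_input(words, (1, 2)))
print(build_target(words, "give: transfer",
                   [(0, 1, "giver"), (2, 4, "thing given"), (4, 6, "entity given to")]))
# -> give: transfer. [Mary]{giver} gave [the book]{thing given} [to John]{entity given to}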
From SRL to Natural Language and Back
Given a dataset annotated with predicate sense and role labels from an inventory that defines such labels in natural language, we note that it is always possible to convert such a dataset to our formulation. Moreover, although the main objective of our approach is to generate an output sequence that describes sentence-level semantics, in several scenarios it is still useful to work with discrete labels for predicate senses and semantic roles, e.g., to assess the quality of the generated structures on gold benchmarks with their standard metrics. We stress that our formulation generalizes standard SRL; casting the descriptions generated by our model to standard SRL labels is only possible if the label inventory of choice defines a suitable sense for the target predicate, which is not the case in Figure 1B (top), as the verb "to google" is not covered by PropBank. If the predicate is covered by the inventory, we can easily select the sense or role label ȳ whose natural language description d_ȳ is most similar to the definition d_• generated for the predicate p or for one of its arguments a_j. We select ȳ as follows:

ȳ = argmax_{y ∈ Y} σ(f(d_•), f(d_y))

where σ(·) is a similarity function (e.g., cosine similarity), f(·) provides a vector representation of a definition, Y is the set of labels, and d_y is the definition of y as provided by the inventory of choice. We note that, for simplicity, we do not apply any post-processing to enforce the validity of the generated output, leaving more complex strategies (e.g., constrained decoding) as future work.
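A minimal sketch of this casting step is shown below; the embedding function is a placeholder for any sentence encoder (the paper uses SimCSE, see Section 4.2), and the inventory layout is an assumption.

```python
# Sketch of casting a generated definition back to a discrete label: pick the inventory
# entry whose definition embedding is most cosine-similar to the generated one.
import numpy as np

def cast_to_label(generated_definition, inventory, embed):
    """inventory: dict label -> definition string; embed: any sentence encoder f(.)."""
    g = embed(generated_definition)
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(inventory, key=lambda label: cosine(g, embed(inventory[label])))
```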
4 Experiments and Results
Data
We train and evaluate DSRL on three widely adopted benchmarks for English SRL, namely: i) CoNLL-2009 (Hajič et al., 2009) for dependency-based PropBank-style SRL, ii) CoNLL-2012 (Pradhan et al., 2012) for span-based PropBank-style SRL, and iii) FrameNet 1.7 (Baker et al., 1998) for span-based FrameNet-style SRL. While CoNLL-2009 is a collection of finance-related news from the Wall Street Journal, CoNLL-2012 is a more heterogeneous corpus comprising news, conversations, and magazine articles. FrameNet 1.7, instead, provides a relatively small dataset of annotated documents; following the literature (Swayamdipta et al., 2017; Peng et al., 2018), we include in the training set "exemplar" sentences extracted from partially annotated usage examples from the lexicon itself. We provide a broader look at the characteristics of each dataset in Appendix B and further details about semantic role definitions in Appendix D.
Implementation Details
We implement DSRL using Sunglasses.ai's Classy. As our underlying sequence-to-sequence model, we use BART-large (Lewis et al., 2020), a Transformer-based neural network (400M parameters) pretrained with denoising objectives on massive amounts of unlabeled text. We do not modify its architecture except for the embedding layer, where we add the special tokens used to indicate predicates and their arguments, as described in Section 3.2. We train our model using RAdam (Liu et al., 2019) as the optimizer for a maximum of 500,000 steps with a batch size of 2048 tokens and a standard learning rate of 10^-5.
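The following sketch shows one way to reproduce this setup with the Hugging Face transformers library; the paper itself uses the Classy library, so the exact wiring (model handle, optimizer construction, batching) here is an approximation rather than the authors' code.

```python
# Approximate reproduction of the setup with Hugging Face transformers (the paper uses
# the Classy library; the model handle and the training-loop details are assumptions).
import torch
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
tokenizer.add_tokens(["<p>", "</p>"], special_tokens=True)     # predicate markers (Section 3.2)
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
model.resize_token_embeddings(len(tokenizer))                   # new embedding rows, randomly initialized

optimizer = torch.optim.RAdam(model.parameters(), lr=1e-5)      # RAdam is available in recent PyTorch

s_p = "Mary <p> gave </p> the book to John"
t_p = "give: transfer. [Mary]{giver} gave [the book]{thing given} [to John]{entity given to}"
inputs = tokenizer([s_p], return_tensors="pt")
labels = tokenizer([t_p], return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss                      # token-level cross-entropy
loss.backward()
optimizer.step()
optimizer.zero_grad()
```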
We measure the F1 score on the validation set at the end of each training epoch, adopting an early stopping strategy to interrupt the training process if the F1 score does not improve for 10 consecutive epochs. We do not modify any of the hyperparameters of BART compared to its pretraining phase, and, more generally, we do not run any hyperparameter search due to the cost of fine-tuning the language model. The training process is carried out on a single GPU (a GeForce RTX 3090) and requires about 10 hours for FrameNet, 15 for CoNLL-2009 and 20 for CoNLL-2012. We recall that, in order to evaluate our system with standard scoring scripts, we have to cast our descriptions to the discrete labels of the target inventory (see Section 3.3). For this step, we compute the cosine similarity between the representation of a generated description and those of the possible senses or roles, using the sentence-level embeddings of SimCSE (Gao et al., 2021).
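The early-stopping criterion described above can be captured by a few lines of framework-agnostic bookkeeping (a sketch; the callback names of the actual training framework will differ):

```python
# Framework-agnostic sketch of the early-stopping rule: stop training after 10 epochs
# without improvement of the validation F1 score.
def train_with_early_stopping(run_one_epoch, evaluate_f1, patience=10, max_epochs=10_000):
    best_f1, stale_epochs = float("-inf"), 0
    for _ in range(max_epochs):
        run_one_epoch()
        f1 = evaluate_f1()
        if f1 > best_f1:
            best_f1, stale_epochs = f1, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:
                break
    return best_f1
```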
Comparison Systems
We compare our results with the current state of the art in PropBank-style and FrameNet-style SRL. Following standard practice in PropBank-based SRL, we report the results achieved by our system using gold pre-identified (but not disambiguated) predicates, i.e., the position of a predicate (but not its sense label) is given as input to the system.
PropBank-style SRL. We consider Li et al. (2019), who first quantified the benefits of contextualized word representations in both dependency- and span-based PropBank-style SRL, later surpassed by Shi and Lin (2019), who used BERT instead of ELMo, and Conia and Navigli (2020), who designed and took advantage of complex language-agnostic components. We also take into account some studies for PropBank-style SRL that found success by leveraging syntactic features, such as He et al. (2019), who devised a strategy to cleverly prune a sentence based on its syntactic dependency tree, and Marcheggiani and Titov (2020), who exploited graph convolutional networks to encode syntactic relations. Most recently, Blloshmi et al. (2021) proposed a simple and general approach to tackle SRL as a sequence-to-sequence task, in which, however, a system is still required to generate a linearized sequence of discrete labels.
FrameNet-style SRL. Although the research community has generally focused on PropBank-style SRL, especially due to the widespread adoption of PropBank in several CoNLL tasks (Carreras and Màrquez, 2005; Surdeanu et al., 2008; Hajič et al., 2009; Pradhan et al., 2012) and in other resources such as Abstract Meaning Representation (Banarescu et al., 2013, AMR), FrameNet-style SRL has also been at the center of notable studies such as Swayamdipta et al. (2017), who investigated the effect of joint learning of syntactic and semantic features, and Peng et al. (2018), who instead showed the advantages of learning from disjoint data sources. Finally, we also consider recent work by Pancholy et al. (2021), who developed a data augmentation strategy using frame relations, and the above-mentioned Marcheggiani and Titov (2020), who introduced a graph-based neural architecture to tackle FrameNet-style SRL.
Main Results
Here, we first evaluate the robustness of DSRL in achieving strong or even state-of-the-art results on standard benchmarks, and then its flexibility in performing dependency- and span-based, PropBank- and FrameNet-style SRL. Remarkably, our model achieves even better results when jointly trained on dissimilar annotation formalisms and linguistic resources, despite their heterogeneous characteristics.
PropBank-style SRL. We first discuss the results obtained by DSRL on the gold standard benchmarks provided as part of the CoNLL-2009 and CoNLL-2012 Shared Tasks, annotated with PropBank sense and role labels. As can be seen in Table 1, we observe strong results in dependency-based SRL, reaching an F1 score of 92.5% in the English test set of CoNLL-2009. Therefore, despite having to cast our natural language descriptions to discrete labels, our approach performs in the same ballpark as the most recent state-of-the-art systems proposed by Conia and Navigli (2020) and Blloshmi et al. (2021); the fact that our approach is able to slightly outperform the latter (+0.1% in F1 score) is particularly meaningful, as they adopt the same pretrained language model (BART-large). We can observe the same behavior in span-based SRL, where our model, without any task-specific modifications, marginally surpasses (+0.1% in F1 score) that of Blloshmi et al. (2021) on the English test set of CoNLL-2012, as shown in Table 2. Thus, the key observation here is that a natural language output does not necessarily hurt performance.
FrameNet-style SRL. As shown in Appendix E, PropBank definitions for predicate senses and semantic roles are quite short, and therefore one may wonder whether our task reformulation is feasible in practice when using longer definitions from richer sources, such as FrameNet, in which the label definitions are up to three times longer. From our experiments, this is, indeed, the case: our approach achieves state-of-the-art results in full-structure extraction (Baker et al., 2007) on the test set of FrameNet 1.7, obtaining 79.3 in F1 score (Table 3). We note that the results are not directly comparable with previous work, as DSRL employs a language model (BART) that is different from that of other approaches, e.g., Marcheggiani and Titov (2020) used RoBERTa. However, the results achieved by DSRL still indicate the performance that a generative approach can obtain in frame-semantic parsing (Das et al., 2014), which might be considered more complex than PropBank-based SRL. Indeed, predicates in FrameNet usually have a higher degree of polysemy, and the semantic roles are sparser, e.g., there are more than 2000 different semantic roles.
5 Quantitative Analysis
Rare and Unseen Senses
The probability with which a word assumes one of its possible senses follows Zipf's distribution (Kilgarriff, 2004), and thus it is very skewed towards the most frequent senses. Here, we analyze the bias that our system shows in predicting the most frequent predicate senses on the following partitions of the CoNLL-2009 and CoNLL-2012 test sets: i) MFS, all the instances containing predicates that are annotated with their most frequent sense; ii) LFS, all the instances containing predicates that are not annotated with their most frequent sense; iii) UNSEEN, all the instances containing predicates that are annotated with a sense that is not present in the training set.
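A sketch of how such partitions can be computed from training-set sense counts is given below; the instance representation (lemma, sense) and the tie-breaking are assumptions, not the authors' exact procedure.

```python
# Sketch: split test instances into MFS / LFS / UNSEEN according to training-set
# sense frequencies; instances are (predicate lemma, gold sense) pairs (an assumption).
from collections import Counter

def partition(test_instances, train_instances):
    counts = Counter(train_instances)
    seen_senses = {sense for _, sense in train_instances}
    most_frequent = {}                               # lemma -> its most frequent sense
    for (lemma, sense), _ in counts.most_common():
        most_frequent.setdefault(lemma, sense)
    parts = {"MFS": [], "LFS": [], "UNSEEN": []}
    for lemma, sense in test_instances:
        if sense not in seen_senses:
            parts["UNSEEN"].append((lemma, sense))
        elif most_frequent.get(lemma) == sense:
            parts["MFS"].append((lemma, sense))
        else:
            parts["LFS"].append((lemma, sense))
    return parts
```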
As we can see from Table 5, the performance of our system on predicate sense disambiguation is strong in the MFS partition (more than 98.5% in both CoNLL-2009 and CoNLL-2012), since the vast majority of predicates are annotated with their most frequent sense. This bias justifies the difference in F1 score between the MFS and LFS partitions, i.e., -11.9% and -9.3% on CoNLL-2009 and CoNLL-2012, respectively. As far as the UNSEEN partition is concerned, on the other hand, we observe that our approach seems to be capable of generating and retrieving senses that it has never seen at training time with a relatively low decrease in performance (-6.6% and -13.9% compared to the results on the LFS partition). Interestingly, the results on argument labeling are comparable between MFS and LFS predicates. However, there is still large room for improvement in the argument labeling of UNSEEN predicates, whose argument structure represents a more challenging zero-shot setting.
Data efficiency
Considering the large expense entailed in manually annotating text with sense and role labels, we deem it indispensable to also evaluate the flexibility of a system in terms of its scalability on fewer training instances. Therefore, we analyze the results of our model by gradually reducing the training set to 75%, 50%, 25%, and 10% of its original size, and compare this learning curve with that of GSRL (Blloshmi et al., 2021). Notwithstanding the significant differences between the two approaches, both show similar learning curves on CoNLL-2009 and CoNLL-2012 (Figure 2), confirming that manually annotating more sentences eventually ceases to provide large improvements: in fact, the enormous effort of doubling the training instances of CoNLL-2012 by annotating another 100,000 predicates (from 50% to 100% of its original size) results in less than a 1.0% gain in F1 score. Interestingly, our system shows higher data efficiency in the lowest data regime, especially for span-based SRL, with a 2.6% gain in F1 score over GSRL when they are both trained on 10% of the original dataset. We argue that our novel formulation better leverages the pretraining of the underlying language model in lower-data scenarios. However, when more training data is available, task-specific approaches are eventually able to close the gap. Finally, we investigate whether our approach is still capable of handling multiple inventories at the same time in low-data regimes. To this end, we trained the model with several combinations of inventories on 10% of their training data (Table 6).
6 Qualitative Analysis
Generation Examples
In Table 7, we provide some examples of the descriptions generated by our system. Given an input sentence, we compare its gold standard sequence (ĝ) with the one generated automatically (g). We find that, in some cases, the automatic descriptions are more contextual than the gold ones, occasionally overcoming the limitations of the linguistic inventories. In Example 1, for instance, the gold definition of the predicate brandish.01 is only applicable to weapons; instead, the model-generated sequence is preferable as the entity brandished is a flag. In other cases, such as in Example 2, our approach generates more descriptive definitions, e.g., depictor instead of agent, and thing described rather than theme. Furthermore, we show some examples in which the model generates semantically-appropriate natural language descriptions for out-of-inventory, and thus unseen, predicates. Even in this setting, the model often generates semantically-appropriate natural language descriptions. This is the case with Example 3, in which the model describes the semantics of nibble.01 (unseen at training time) by taking advantage of a similar predicate, namely, peck.01 (seen at training time). This is also true for noun predicates, as shown in Example 4.
Classes of Error
We identify three main classes of error: the first is directly connected to our system (Disambiguation Errors), and the other two (Out-of-Inventory Descriptions and Retrieval Errors) concern the noisy process we use to cast natural language descriptions to discrete class labels.
Disambiguation errors occur when the model generates a definition that does not describe the correct sense of a predicate in a given context. For example, the system provides the wrong definition for the predicate "bumble" in the following sentence s, misclassifying it as "speak quietly":
s: Shane survived the week only to have an executive bumbling his way into a criminal investigation.
• Gold: speak or move in a confused way
• Pred: speak quietly
We note that, given the autoregressive nature of the model, producing a wrong sense definition often compromises the entire argument structure.
Out-of-inventory descriptions may be produced by our approach since it is not strictly tied to the vocabulary of a predefined linguistic resource. Even when our model generates predicate-argument structures not present in the inventory, they can still provide correct semantic explanations. For instance, in the following sentence, the reference and the generated definitions convey the same semantics:
• Gold: dupe: trick. He meets [a French girl]{tricker} who dupes [him]{tricked} [into providing a home for her pet and then steals his car]{induced action}.
• Pred: dupe: deceive. He meets [a French girl]{deceiver} who dupes [him]{victim} [into providing a home for her pet and then steals his car]{tricked into}.
Associating "victim" with "tricked" is far from trivial, and such cases often result in retrieval errors, i.e., errors that are caused by the inability of the sentence embedding model (SimCSE in our case) to correctly capture the semantic similarity between the gold and generated definitions.
Conclusion
Recent progress in SRL has mainly revolved around the development of state-of-the-art systems which, however, are bound to specific predicate-argument inventories. In this paper, instead, we proposed a novel task formulation that takes a step towards putting interpretability and flexibility in the foreground: we reframed SRL as the task of describing the predicate-argument structure of a sentence using natural language only, which is human-interpretable by definition. Our experiments, supported by in-depth analyses, demonstrated that prioritizing interpretability does not come at the expense of performance. Furthermore, our approach is flexible enough to achieve competitive or even state-of-the-art results on popular gold standard benchmarks for SRL, showing that natural language can act as a bridge between heterogeneous linguistic resources, e.g., PropBank and FrameNet, and also annotation formalisms, e.g., dependency- or span-based SRL. We hope that our model will foster research in high-performance yet interpretable systems in NLP, and provide a means towards achieving easier integration of sentence-level semantics into downstream applications.
Limitations
Generation. Although our model achieves results on gold standard benchmarks that are on par with or even better than the current state of the art, its generative nature certainly makes it slower than previous work based on discriminative approaches (He et al., 2019; Shi and Lin, 2019; Conia et al., 2021). Indeed, our model generates the entire semantically-augmented sentence, i.e., the input sentence with its predicate-argument structures in natural language, autoregressively. While this issue also affects our most direct competitor (Blloshmi et al., 2021), which generates discrete labels, this is still a limitation (or, more precisely, a weakness) we would like to remark. Indeed, before deploying our system in production environments, one should carefully weigh the advantages of our method against its slower inference times. The degree of slowdown will inevitably depend on the hardware, but we estimate that a generative approach could be several times slower than a discriminative one. However, this could also be a matter for further research on the topic; for example, non-autoregressive generative models are steadily narrowing the performance gap (Gu and Tan, 2022) while mitigating the weaknesses of current autoregressive approaches.
Evaluation. Section 6 and Table 7 provide a qualitative analysis of the behavior of our proposed approach on out-of-inventory instances, which may also include rare predicates or neologisms. We acknowledge that a quantitative analysis of how our model really performs on out-of-inventory instances would provide sounder evidence of the benefits of our approach. However, we do not possess the economic and human resources required to create a benchmark large enough for this purpose. We believe that such a benchmark could be a great contribution to the area of SRL, but the endeavor of annotating a significant number of out-of-inventory instances will require further study.
Multilinguality. Extending our work to multiple languages is still a challenge and may require more effort than current approaches, such as that proposed by Conia et al. (2021), which uses language-specific decoders on top of a shared cross-lingual encoder. One could consider pursuing a similar strategy, i.e., using a shared cross-lingual encoder and multiple language-specific autoregressive decoders. However, the main limitation here is the availability and the structure of current linguistic inventories in other languages and, therefore, of definitions in languages other than English. For instance, the Chinese PropBank inventory provided as part of the CoNLL-2009 Shared Task lacks definitions for the majority of the predicate senses, whereas the latest version is not freely distributed. Fortunately, the attention to multilingual SRL is increasing; for example, it would certainly be interesting to analyze the feasibility of our approach on the recently released Global FrameNet project.
Ethics Statement
Pretrained language models have been shown to manifest undesirable biases, inherited from the corpora on which they have been trained using self-supervision strategies. We train our model starting from the weights of BART (Lewis et al., 2020) and, therefore, there is a high probability that these biases are also inherited, or even exaggerated, by our final models. However, we did not investigate such biases in this work; hence, we advise against using our model in a production environment without a careful analysis beforehand. Finally, we remark that the test sets of CoNLL-2009, CoNLL-2012, and FrameNet 1.7 also contain relatively old documents about economics, politics, and past events that do not reflect the current situation. Therefore, the results of such benchmarks are intended only as a basis for comparison with previous approaches and not as a measure of the performance of our model in real-world applications.
A Data License
Both the CoNLL-2009 and CoNLL-2012 datasets are distributed by the Linguistic Data Consortium (LDC) and can be used under the LDC license. FrameNet 1.7, the linguistic resource and its annotated dataset, is freely available upon request. We note that the original Shared Task of CoNLL-2012 was concerned with the task of Coreference Resolution; however, given its SRL annotations, it soon also became a popular benchmark for span-based SRL.
B Data Statistics
In Tables 8, 9, and 10, we provide an overview of the statistics of the train, validation and test sets, respectively, for the datasets we use in our experiments, namely, the English splits of CoNLL-2009, CoNLL-2012, and FrameNet 1.7. In particular, for each dataset, we report the number of sentences and their average length in tokens, with FrameNet having the longest sentences on average (+20% over CoNLL-2009 and +40% over CoNLL-2012). We also report the number of annotated predicates for each dataset; interestingly, FrameNet features around 6 arguments per predicate, a value that is much larger than those of CoNLL-2009 and CoNLL-2012, which feature around 2.5 arguments per predicate. These are probably the reasons why the FrameNet dataset is particularly challenging, even for modern neural-based models.
Finally, we can also appreciate the heterogeneity between the characteristics of PropBank-style and FrameNet-style SRL. Indeed, FrameNet clusters predicate senses into frames, resulting in a smaller number of predicate classes (around 1,000) compared to PropBank (5,000 to 8,000). At the same time, the frame-specific semantic roles of FrameNet result in a much larger number of role classes compared to the coarse-grained semantic roles of PropBank.
C Training Sequence Statistics
In Table 11, we report the average length in characters of the sequences used to train our model. As we can see, FrameNet 1.7 features the longest sequences among the three datasets we take into account, in line with what we report in Appendix B.
D Argument Modifiers Definitions
The English PropBank features two categories of semantic roles: core and adjunct. If we define a semantic role as the relationship between an action or event (predicate) and one of the participants (argument), then the former category includes all those semantic roles that mark an important participant in the event, one that is expected to take part in it. In PropBank, these core roles are identified using the labels ARG0, ARG1, ..., ARG5, and their definitions change from predicate sense to predicate sense. Instead, the second category, namely the adjunct roles or argument modifiers, comprises general roles whose semantics is not specific to a particular predicate and, therefore, can be used to tag general arguments, e.g., the time of the action (ARGM-TMP) or the place of the event (ARGM-LOC). We use the PropBank guidelines to translate such labels into natural language. In Tables 12 and 13, we list the argument modifier definitions that we use to train our model on CoNLL-2009 and CoNLL-2012, respectively. While we aimed at creating argument modifier definitions that are homogeneous with the core role definitions, we remark that we did not perform a search for better definitions. As one can see, some of the definitions reported in Tables 12 and 13 are the natural language equivalent of the labels (e.g., ARGM-ADV and its definition "adverbial modifier", ARGM-LVB and its definition "light verb", or ARGM-PRD and its definition "secondary predication", among others). We believe that a possible avenue for future research is looking into how we can create better definitions for such semantic roles.
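For illustration, the modifier definitions quoted in this appendix can be collected in a small lookup table that falls back to the predicate-specific roleset definitions for core roles; only the examples actually mentioned in the text are included here, so the table is a partial sketch rather than the full contents of Tables 12 and 13.

```python
# Partial lookup table of modifier definitions mentioned in this appendix; core roles
# (ARG0..ARG5) keep their predicate-specific roleset definitions.
ARGM_DEFINITIONS = {
    "ARGM-TMP": "time of the action",
    "ARGM-LOC": "place of the event",
    "ARGM-ADV": "adverbial modifier",
    "ARGM-LVB": "light verb",
    "ARGM-PRD": "secondary predication",
}

def role_definition(label, roleset_definitions):
    # Prefer the predicate-specific definition; fall back to the general modifier table.
    return roleset_definitions.get(label) or ARGM_DEFINITIONS.get(label, label)
```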
E Definitions Statistics
The length of the sequence that our model generates in output is certainly dependent on the length of the definitions we use to describe the sense of a predicate and its arguments. In this Appendix, we provide a broad look at the number of unique sense and role definitions that appear in the train, validation, and test sets of CoNLL-2009, CoNLL-2012 and FrameNet 1.7.
Interestingly, the difference between CoNLL-2009 and CoNLL-2012 in the average length of the semantic role definitions is even narrower, whereas the difference in length between PropBank-style and FrameNet-style role definitions widens even further, with FrameNet using role definitions that are almost four times longer than PropBank's. The difference in length between the predicate sense and semantic role definitions of FrameNet and PropBank can be explained by the fact that, in the former resource, the definitions are richer and more detailed. For example, the agent of the predicate provide is defined just as "giver" in PropBank, whereas in FrameNet it is defined as "person that begins in possession of the theme and causes it to be in the possession of the recipient".
F Special Tokens
As mentioned in Section 3.2, we use some special tokens to instruct the model on some task-specific functions. For example, we pre-identify a predicate in an input sentence by surrounding its tokens with the special tokens <p> and </p>, indicating the start and the end of a predicate, respectively. Table 16 lists all the special tokens we use in our model in addition to the standard ones (e.g., <s> and </s> to indicate the start and end of the generated sequence). We note that some of these special tokens can be used in combination. For example, combining <propbank> and <span-srl> informs the model that we want it to generate a sentence annotated with PropBank-style definitions according to the span-based formalism; instead, combining <framenet> and <span-srl> will result in a sentence annotated with FrameNet-style definitions using a span-based formalism.
For reference, we also provide a few examples of how these special tokens are inserted in an input or output sequence in Table 17, using sentences from the training set of CoNLL-2012.
For the implementation, we simply add these special tokens to the input and output vocabulary of the underlying language model (i.e., BART). The embeddings corresponding to the special tokens are randomly initialized and updated during training.
Figure 1: A: SRL annotations using predicate sense and semantic role labels (top) compared with their natural language definitions (bottom). B: the semantics of sense and role labels is undefined for out-of-inventory predicates (e.g., the inventories used for CoNLL-2009 and CoNLL-2012 do not include an entry for "google"), but we can still use valid natural language definitions.
Table 3 :
Results (%) on precision (P), recall (R) and F1 score on the English test set of FrameNet.
Table 5 :
Predicate and argument labeling scores on the test sets of CoNLL-2009 and CoNLL-2012. We report the performance (F1) on the most frequent senses (MFS), least frequent senses (LFS) and unseen senses (UNSEEN). Support indicates the number of instances (percentage) of the corresponding class.
Table 7 :
Generation examples. Given an input sentence, we compare the gold and the system-generated sequence. Predicates are underlined.
Table 8 :
Overview of the CoNLL-2009, CoNLL-2012, and FrameNet training datasets. For each dataset we report the number of sentences (Total_s), the number of sentences with at least an annotated predicate (Annotated), the average number of tokens per sentence (Avg. Len.), the number of predicates (Total_p) and predicate senses (Senses), and also the number of arguments (Total_a) and argument roles (Roles).
Table 9 :
Overview of the CoNLL-2009, CoNLL-2012, and FrameNet validation datasets. For each dataset we report the number of sentences (Total_s), the number of sentences with at least an annotated predicate (Annotated), the average number of tokens per sentence (Avg. Len.), the number of predicates (Total_p) and predicate senses (Senses), and also the number of arguments (Total_a) and argument roles (Roles).
Table 10 :
Overview of the CoNLL-2009, CoNLL-2012, and FrameNet test datasets. For each dataset we report the number of sentences (Total_s), the number of sentences with at least an annotated predicate (Annotated), the average number of tokens per sentence (Avg. Len.), the number of predicates (Total_p) and predicate senses (Senses), and also the number of arguments (Total_a) and argument roles (Roles).
Table 11 :
CoNLL-2009, CoNLL-2012, and FrameNet training sequence statistics. For each dataset, we report the average length in characters of the sequence used for training the model.
This can be explained by the domain of CoNLL-2009, which features a significant portion of sentences about finance from the Wall Street Journal, whereas CoNLL-2012 covers a more varied set of domains. Although the number of unique sense definitions is different, the average length of these definitions between CoNLL-2009 and CoNLL-2012 is close, suggesting homogeneous definitions despite the use of two different versions of the English PropBank. This is not the case when comparing the average length of the PropBank definitions used for CoNLL-2009 and CoNLL-2012 with those of FrameNet. Indeed, predicate sense definitions in FrameNet are two to three times longer on average than PropBank's. However, the experimental results reported in Tables 3 and 6 show that our proposed generative model is still able to produce longer sense definitions. We can observe a similar picture in Table 15 for the definitions of the semantic roles.
Table 12 :
CoNLL-2009 argument modifiers definitions. We provide descriptions for argument modifiers when they are not specified in the given predicate roleset.
Table 13 :
CoNLL-2012 argument modifiers definitions. We provide descriptions for argument modifiers when they are not specified in the given predicate roleset.
"Computer Science"
] |
A Product Conceptual Design Method Based on Evolutionary Game
In this paper, an intelligent-design method to deal with conceptual optimization is proposed, given the decisive impact of the concept on the product-development cycle, cost, and performance. On the basis of matter-element analysis, an effective functional-structure combination model satisfying multiple constraints is first established, which maps the product characteristics obtained by expert research and customer-requirements analysis onto the function and structure domains. Then, the Evolutionary Game Algorithm (EGA) was utilized to solve the model, in which the strategy-combination space is mapped to the solution-search space of the conceptual-solution problem, and the game-utility function is mapped to the objective functions of concept evaluation. Constant disturbance and Best-Response Correspondence were applied alternately and repeatedly until the optimal equilibrium Pareto state corresponding to the global optimal solution was obtained. Finally, the method was simulated in MATLAB 8.3 and applied to the design of a fixed winch hoist, which greatly shortened its design cycle.
Introduction
Research on product conceptual design is booming with regard to the direct influence of the concept on the quality of the final product, and the vast majority of researchers agree that how to scientifically evaluate candidate concepts and how to express a product concept with an accurate model are two vital tasks in conceptual design [1]. Hence, advanced models and effective evaluation systems have been intensively addressed by researchers worldwide. Danni et al. [2] presented an evaluation and selection method composed of three modules: data mining, concept reconstruction, and decision support, to improve the efficiency of concept review and evaluation. Sun et al. [3] established an effective conceptual model for new-product concept development from two theoretical backgrounds about organizational learning, and the model was applied to the design of a large scramjet with satisfying results. Wang et al. [4] proposed an optimization decision model for product conceptual design to help enterprises select key technical characteristics under the condition that cost and time maximally meet customer requirements. Christoph F. et al. [5] presented methodology integration with a knowledge model for conceptual design in accordance with model-driven engineering, and the work extended Gero's Function-Behavior-Structure model. Based on Bunge's Scientific Ontology, Chen et al. [6] developed an explicit and complete conceptual foundation for the establishment of a new conceptual design model. Varun Tiwari et al. [7] proposed a novel way of performing design-concept evaluations, where instead of considering the cost and benefit characteristics of the design criteria, the work identifies the best concept that satisfies constraints imposed by the team of designers, as well as fulfilling as many of a customer's preferences as possible. To obtain the best comprehensive performance of mechanical products, Wang et al. [8] established an evaluation model for product conceptual design based on the principle of maximum-entropy value, and solved the model by constructing a Lagrange function.
The above work mainly focuses on product-model expression and product conceptual evaluation. However, many concepts can be generated through the combinatorial nature of conceptual design, and evaluating a large number of concepts one by one is very difficult, although many novel and effective methods of concept evaluation have been proposed [6][7][8]. As a result, the best design concept cannot easily be obtained, and making the conceptual-design process intelligent becomes critical.
Computational intelligence, which consists of evolutionary computation, neural networks, and fuzzy logic, is a novel technology aiming to bring intelligence into computation [9]. Attempts have been made in recent years to apply computational intelligence. Manu Augustin [10] proposed a framework that uses a fuzzy inference process for evaluating each initial concept against identified decision criteria, to select and/or evolve improved concepts. Integrated with ACO, Ma et al. [11] presented a mathematical programming model to quantitatively predict change-propagation impact, and improved the intelligence of change-propagation prediction during the design process. Ming-Chyuan et al. [12] proposed an integrated procedure that involves neural-network training and genetic-algorithm simulations within the Taguchi quality-design process to aid in searching for an optimal solution with more precise design-parameter values for improving product development. Oliviu Matei et al. [13] addressed the automated product-design problem with two distinct evolutionary approaches: genetic algorithms and evolutionary ontology. S.H. Ling [14] developed intelligent particle-swarm optimization (iPSO), where a fuzzy-logic system, developed based on human knowledge, is proposed to determine the inertia weight for the swarm movement of the PSO and the control parameter of a newly introduced cross-mutated operation.
Although the above methods greatly contribute to making the conceptual-design process more intelligent, their main focus is to study the commonality of various problem models [4,6,13,14]. While such a model can be solved to obtain a feasible solution, they ignore the individuality of the problem. If we choose or design a specific algorithm to solve a specific problem, the efficiency and accuracy of the solution are improved [8]. In view of this, we explore the establishment of a constraint model for product design, focusing on the functional variables and constraints of the model, and use the Evolutionary Game Algorithm (EGA) to search the model for the optimal or approximately optimal combination of functional variables. In order to accurately express information during conceptual design, product characteristics are first extracted via customer-requirement analysis and the application of expert knowledge, and the Analytic Network Process (ANP) is used to assess their importance. Then, a model of product conceptual design is established by mapping product characteristics to the functional and structural domains while comprehensively taking all constraints of product conceptual design into account. Finally, to quickly solve the model, the intelligent algorithm EGA, with its fast convergence speed, is used [15], and the optimal solution is obtained after multiple evolutions.
The paper is organized as follows. Section 1 introduces the process of how a product-optimization design model is established. Section 2 briefly introduces the EGA. Section 3 provides a practical example to illustrate how the method performs. Section 4 concludes the paper.
Matter-Element Description
The matter-element model is a representation of objects for computer storage, recognition, and operation, which is widely employed in product design and reliability assessment. Yue et al. [16] applied matter-element theory to ecological-risk assessment, and successfully evaluated the Gannan Plateau. To solve the formal description in the modular design of mechanical products, Huang et al. [17] introduced extension theory into Reconfiguration Design Technology (RDT), and built the matter-element model. Based on the model, the selection, matching, and transformation of a mechanical product and its modules were researched. Liu [18] proposed an assessment approach combining extension and ensemble empirical-mode decomposition (EEMD) to describe the bearing performance-degradation (BPD) process that was denoted by the matter-element model. Lv et al. [19] presented a new method for equipment-criticality evaluation based on a fuzzy matter-element model.
In this paper, a model of product conceptual design based on matter-element analysis was constructed. Firstly, the function tree and structure tree could be obtained by mapping product characteristics to the functional and structural domains, before which the product characteristics and their importance must be obtained through expert investigation and customer-requirement analysis. Then, in order to obtain the utility function of a product, various constraints in conceptual design are comprehensively considered, and the utility vector of the product characteristics is given to each substructure with the knowledge of the expert team. Finally, a matter-element model of product conceptual design is established, with both utility attributes and design-constraint attributes invested to express product information.
The above model can be described as Pro = (S_Attrib, U_Attrib, C_Attrib, Cl_Attrib), where U_Attrib denotes the model-evaluation information of each product concept; Cl_Attrib denotes the hierarchical information of the matter element; C_Attrib denotes the constraint information, including functional, structural, and relational constraints during product conceptual design; and S_Attrib denotes product-feature information. To express it more clearly, the matter-element model of product conceptual optimization design assigns each matter element attribute values v_i (i = 1, 2, 3, 4): the larger v_1, the better the concept; v_2 denotes the hierarchical information of the matter element; v_3 indicates whether the solution is a feasible solution, for example, v_3 = 0 means that the solution is feasible without breaking any constraint; and v_4 is the combined information of the substructures for achieving a functional unit. The detailed information of each attribute is expressed via its sub-matter elements, and the process of finding the optimal solution is transformed into the process of searching for a matter element of a product concept with a maximum v_1 under constraint condition v_3 via the combination of sub-matter elements.
Matter-Element Description
Product-characteristics set PC is obtained through the brainstorming of experts and technicians involved in all phases of the product life cycle, with customer requirements being taken into consideration (the i-th element in PC is denoted by PC_i). The original PC should be processed to obtain the new one, as the relationships among its elements may be inclusion, cross, or independence. Generally speaking, there are mutual relations between elements in PC, and between customer-requirement set CR (the j-th element in CR is denoted by CR_j) and PC_i, which should all be taken into account when synthetically analyzing the importance of PC. The Analytic Network Process (ANP) method is a widely used decision-making algorithm, mainly to determine the relative importance of a group of inter-related elements in a multiobjective decision-making problem; therefore, it is adopted to analyze a PC and calculate its importance.
1. Analyzing the importance of a PC driven by CR
Assume that each PC_i is independent from the others. Importance vector w_s = (w_1, w_2, ..., w_m) is obtained according to the customers' preference for each requirement. For each PC_i, relative importance matrix R_i between CR and PC_i is evaluated by an expert team; element r_ij ∈ [0, 9] in R_k indicates the importance of CR_i relative to CR_j when pursuing PC_k; if r_ij ≠ 0, then r_ji = 1/r_ij; otherwise, r_ij = r_ji = 0. In addition, the Analytic Hierarchy Process (AHP) [20] is used to obtain relative-importance vector w_i = (w_1i, w_2i, ..., w_mi), where ∑_{j=1}^{m} w_ji = 1, and importance matrix W_cr-pc of a PC driven by CR can finally be obtained,
where w_ij denotes the impact degree of CR_i on PC_j, and vector w^(1) = w_s × W_cr-pc denotes the importance of a PC driven by CR.
2. Gaining mutual importance among elements of PC
Relative-importance degree matrix R'_i, which is similar to R_i, is obtained when considering the correlations between PC_i and the others; r_ij in R'_i indicates the importance of PC_i for PC_j when pursuing PC_k, and importance vector w^(2) = (w_1, w_2, ..., w_n) is also obtained by AHP, where ∑_{j=1}^{n} w_j = 1.
3. Gaining the importance of PC
The importance of PC_i is obtained, as shown in Equation (5), by comprehensively considering the two relationships mentioned above.
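As a sketch of the AHP step used above, the priority vector of a reciprocal pairwise-comparison matrix can be taken as its normalized principal eigenvector; the combination step of Equation (5) is left out here, so the code below covers only the weight extraction and the example matrix is illustrative.

```python
# Sketch of the AHP priority-vector computation used for w_i and w^(2): the weight vector
# is the normalized principal eigenvector of the reciprocal comparison matrix R (r_ji = 1/r_ij).
import numpy as np

def ahp_weights(R):
    R = np.asarray(R, dtype=float)
    vals, vecs = np.linalg.eig(R)
    principal = vecs[:, np.argmax(vals.real)].real
    w = np.abs(principal)
    return w / w.sum()

# Example: a 3 x 3 comparison matrix on a nine-point scale.
print(ahp_weights([[1,   3,   5],
                   [1/3, 1,   2],
                   [1/5, 1/2, 1]]))
```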
Multidomain PC mapping
With the fuzzy, complex, and tedious relationships between PC and the product structure, inaccuracy of information mapping and loss of information occur if we directly map the PC to the product-structure domain. Therefore, considering the correspondence between product function and structure in axiomatic design [21], the functional domain is introduced as an intermediate medium between PC and product structure, guiding the mapping of the PC to the product domain and completing the product-structure design for each specific PC_i.
Function Decomposition
The process of PC multidomain mapping is shown in Figure 1, where the product-function tree is obtained by progressively decomposing the product function into the smallest independent functional units; the structure tree corresponding to the function tree is obtained by an expert team that enumerates the component structure corresponding to each function in the product-design database; the cell located at the bottom of the structure tree is called a substructure.
Concept Modeling
Both functional units and substructures are denoted by matter elements after finishing the multidomain mapping of PC. The optimal design concept is obtained by solving the model through the EGA via mapping product features to game players. A mechanical-product concept is expressed in Figure 2; it has n functional units, and the i-th functional unit has k substructures, where the information of the entire product concept is denoted by the matter-element model; for example, the specific information of the i-th functional unit of the product, which includes structural information S_Attrib, constraint information C_Attrib, utility information U_Attrib, and hierarchical information Cl_Attrib, is denoted by a second-level matter element. The substructures to achieve a functional unit are denoted by third-level matter elements. It should be noted that the third-level (substructure-layer) matter elements are alternative substructures, which are optional strategies of the game player; since the effectiveness of structural combinations has not yet been judged, no constraint information is required for them.
Values of Obtained Matter-Element Attributes
Linguistic terms such as 'very unimportant' and 'medium' are usually used to assess an attribute's importance, as they are always fuzzy during product design. Some linguistic terms should be transferred to crisp numbers for accurate analysis and calculations.
• Strategy variables and utility vectors are obtained
For m substructures s_ij (j = 1, 2, ..., m) corresponding to a functional unit f_i (i = 1, 2, ..., n), one of them must be chosen to achieve f_i during conceptual design, and the choice information of f_i for the m substructures can be denoted by the value of S_Attrib. For example, if m = 8 and the fourth substructure is chosen, then the value of S_Attrib of f_i is v_4 = 00010000, and the utility vector for the PC of substructure s_i4 is used to calculate the utility value of the product concept. A typical mapping relationship between product features and matter-element attribute values is expressed in Figure 3, where u_ij is the utility vector of the j-th substructure of the i-th functional unit, provided by experts and designers based on a nine-point scale [22], which denotes the utility index of s_ij; x_i is the strategy variable of functional unit f_i, which denotes the choice information of the alternative substructures. In the matter-element model proposed above, x_i is the value of S_Attrib for a game player. The frequently used nine-point scale is shown in Figure 4.
C (b, c) = {(Wheel break, Wheel coupling), (Disc break, Disc coupling)} (7) This shows that the wheel brake must be matched with the wheel coupling; otherwise, the number of constraints on the current composition strategy increases.If the number of constraints in the current combination strategy is i, then the value of C_Attrib v3, which is used to decrease the utility • C_Attrib value is obtained Product-design constraints, including functional, structural, and related constraints are ultimately embodied in the portfolio optimization of product substructures.In the optimization model proposed in this paper, a uniform expression C was used to specify dependency constraints that can denote the multiple constraint forms, and a dual constraint is taken as an example, shown in Equation (4).
where x i and x j are variables denoting the constraint relationship between functional units i and j, and the ranges of x i and x j are expressed as u ik and u jp , respectively.C indicates that j must choose the pth substructure if i chooses the kth.The constraint between a fixed winch hoist coupling and its service brake is used as an example.
C(b, c) = {(Wheel brake, Wheel coupling), (Disc brake, Disc coupling)} (7)
This shows that the wheel brake must be matched with the wheel coupling; otherwise, the number of constraints violated by the current composition strategy increases. If the number of violated constraints in the current combination strategy is i, then the value of C_Attrib v_3, which is used to decrease the utility value of a concept in the evolutionary game, is i (a check of this form is sketched after this list).
• Cl_Attrib value is obtained
The Cl_Attrib attribute in the model mainly denotes the hierarchical information of the matter element. As shown in Figure 2, v_2 = 1 indicates that it is just a matter element of the product concept rather than a component.
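The sketch below illustrates the constraint check behind v_3: it counts how many dependency rules a substructure combination violates, treating each rule as a set of allowed pairs, which is a simplification of the conditional form of C described above; all names are illustrative.

```python
# Sketch of the C_Attrib computation: count dependency-constraint violations for a
# substructure combination (the count becomes v_3 and lowers the concept's utility).
# Representing each rule as a set of allowed pairs is a simplification of C(x_i, x_j).
def count_violations(choice, constraints):
    """choice: dict functional_unit -> chosen substructure;
    constraints: list of (unit_i, unit_j, allowed_pairs)."""
    violations = 0
    for unit_i, unit_j, allowed_pairs in constraints:
        if (choice[unit_i], choice[unit_j]) not in allowed_pairs:
            violations += 1
    return violations

brake_rule = ("service brake", "coupling",
              {("Wheel brake", "Wheel coupling"), ("Disc brake", "Disc coupling")})
print(count_violations({"service brake": "Disc brake", "coupling": "Wheel coupling"},
                       [brake_rule]))   # -> 1 (the rule of Equation (7) is violated)
```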
Benefits
• By focusing on the functional variables and constraints of the model, the obtained solution is the optimal solution that satisfies the constraints.
• Comprehensively considering PCs and CRs makes products perform well in terms of performance and personalization.
• A modular product function acts as a player in the EGA, which performs well on combinatorial optimization problems and quickly obtains the optimal solution.
Introduction of Evolutionary-Game Algorithm
Considering that product conceptual design is actually a combinatorial optimization problem, the EGA was employed to solve the above optimization model, as it is effective in solving combinatorial optimization problems [23]. The optimal solution is obtained through the game among functional-unit-layer matter elements, the combination of substructure-layer matter elements, and the comparison between matter elements in the conceptual layer.
The EGA is a novel kind of intelligent computation algorithm based on economic game theory and dynamic evolution calculation, which takes maximum utility as its optimization objective and searches the whole solution space by combining the strategies of game players, simultaneously considering local and global performance. Compared with the selection process of a stochastic genetic algorithm, the EGA converges to the global optimal solution with probability 1, and is more deterministic in its evolution [24].
Fundamental Theorems
A basic game consists of game player i, strategy set S, and utility u; the two fundamental theorems for EGA are shown as follows.
•
If strategy combination S* satisfies Equation (8) for any strategy s_i ∈ S_i of any game player i, then S* is called a Nash equilibrium, where S_i is the strategy set of i. The specific form of Equation (8) is as follows:
u_i(s_i*, S_-i*) ≥ u_i(s_i, S_-i*) (8)
where S_-i is the strategy combination of the players other than i, S_-i* is the Nash equilibrium strategy combination of the players other than i, and s_i* is the optimal strategy for i in the Nash equilibrium. It is called a strict Nash equilibrium when the inequality holds strictly for every s_i ≠ s_i*.
• Assuming that S_-i = ∏ S_k, where k = 1, 2, ..., n and k ≠ i, if Equation (10), established as follows, is satisfied, then B_i is called the Best-Response Correspondence for player i:
B_i(S_-i) = {s_i ∈ S_i : u_i(s_i, S_-i) ≥ u_i(s_i', S_-i) for all s_i' ∈ S_i} (10)
Underlying the meaning of the Best-Response Correspondence is a process where i chooses the strategy with the maximum utility in the current situation. The dynamic process in which all game players complete a Best-Response Correspondence in turn is called the Optimal-Response Dynamic.
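A minimal sketch of the Best-Response Correspondence and of one round of the Optimal-Response Dynamic follows; the utility signature utility(situation, i) is an assumption standing in for the game-utility function of Equation (11).

```python
# Sketch of the Best-Response Correspondence B_i and one round of the
# Optimal-Response Dynamic; utility(situation, i) stands in for player i's utility.
def best_response(i, situation, strategy_sets, utility):
    def utility_if(strategy):
        trial = dict(situation)
        trial[i] = strategy
        return utility(trial, i)
    return max(strategy_sets[i], key=utility_if)

def optimal_response_round(situation, strategy_sets, utility):
    updated = dict(situation)
    for i in strategy_sets:            # every player responds in turn
        updated[i] = best_response(i, updated, strategy_sets, utility)
    return updated
```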
EGA Expression
The specific form of the evolutionary-game algorithm is expressed as EGA = {G, S0, α, β, τ}, and each member of the EGA is described in detail as follows.
• Game structure G
The game structure is described as G = [I, S, U], where I, S, and U denote the game players, the current situation, and the utility, respectively. For the model described in this work, game-player set I is obtained by mapping functional units to strategy variables, and the k substructures realizing a functional unit are mapped to the strategy set of the corresponding player.
The mathematical description is s_ij ∈ {0, 1}, where i ∈ I (1 ≤ i ≤ n) and 1 ≤ j ≤ k; for example, if k = 7 and the third substructure is selected when functional unit i generates a strategy, then, according to Section 2.3.3, the strategy of player i transfers to the binary code 0010000. The strategy combination of the n players constitutes a solution S (also called a situation) in the above model. Equation (11) is the form of the utility function f(S) that is used to calculate the utility value U of the current situation.
where u_ij is the utility vector of substructure s_ij; m is the number of elements of PC; W_PC is the importance degree of the m PCs calculated by Equation (5); W_i is the importance degree of game player i in product conceptual design, with the W_i summing to 1 over the n players; n is the number of game players; and f_max is the maximum utility value of the current evolutionary generation. Compared with penalty functions in other algorithms, whose forms are difficult to determine, f_max can be directly calculated. It should be noted that game player i is only bound by the constraint rules associated with itself during the game.
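The sketch below illustrates one plausible reading of this utility evaluation: each selected substructure is scored against the weighted PCs, and violated dependency constraints reduce the concept's utility via a penalty scaled by f_max. The exact penalty form is not given in the text, so the combination used here is an assumption.

```python
import numpy as np

def concept_utility(selection, u, w_pc, w_player, constraints, f_max):
    """selection[i] = index j of the substructure chosen for player i.
    u[i][j]        = utility vector of substructure j over the m PCs.
    w_pc           = importance of the m PCs (from Equation (5)).
    w_player[i]    = importance of player i, summing to 1.
    constraints    = list of (i, j_i, k, j_k): if i picks j_i, k must pick j_k.
    f_max          = best utility in the current generation (penalty scale).
    """
    base = sum(w_player[i] * float(np.dot(w_pc, u[i][selection[i]]))
               for i in range(len(selection)))
    # Count violated dependency constraints (the C_Attrib mechanism).
    violations = sum(1 for (i, j_i, k, j_k) in constraints
                     if selection[i] == j_i and selection[k] != j_k)
    # Assumed penalty form: each violation subtracts f_max, so any feasible
    # solution outranks any infeasible one, as the text requires.
    return base - violations * f_max
```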
• Initial situation S0
The EGA starts with S0, which is initialized by a randomization method.
• Optimization operator α
Game theory is based on the assumption that all game players are rational economic agents; in the process of evolution, each game player pursues the maximum utility value. Hence, the Best-Response Correspondence, by which a game player attains its maximum utility value, is called the optimization operator.
• Equilibrium perturbation operator β
In order to ensure that the solution obtained by the EGA is globally optimal, the equilibrium perturbation operator β is employed to break the Nash equilibrium state reached after several iterations; a new Nash equilibrium state is then obtained by performing the Best-Response Correspondence of each player after the balance state is broken. The specific calculation form of β is shown in Equation (12):
β(s_i) = { s_i, if X_i ≥ p_i; Z_i, if X_i < p_i } (12)
where p_i is the perturbation probability assigned according to the importance degree of each functional unit. Functional units that contribute more to the utility value of a solution are more likely to drive the system away from its original state, so a higher disturbance probability should be attached to them, while functional units with less contribution should be given a lower probability. X_i is a decimal randomly generated from 0 to 1; s_i indicates that the disturbance operator changes nothing and the former strategy is maintained; Z_i is the disturbance operator, meaning a strategy randomly selected from the strategy set of player i replaces the current one.
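A minimal sketch of the perturbation operator β under the reading above (the threshold direction is an assumption consistent with Equation (12)):

```python
import random

def perturb(situation, strategy_sets, p):
    """Equilibrium perturbation operator beta: with probability p[i],
    player i's strategy is replaced by a random one from its set (Z_i);
    otherwise the former strategy s_i is kept."""
    for i, s_set in strategy_sets.items():
        x = random.random()                 # X_i, uniform on [0, 1)
        if x < p[i]:                        # important units perturbed more often
            situation[i] = random.choice(list(s_set))
    return situation
```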
• Termination condition τ
In a given situation, the process of the Optimal-Response Dynamic is called one round, and the Nash equilibrium state of the situation is reached after two rounds; two rounds achieving a Nash equilibrium state are defined as a generation. The iteration termination condition is set as τ ≥ T, where T is the preset number of iteration generations.
EGA Process
The specific process of EGA is shown in Figure 5.
Step 1: Set parameters.
First, the maximum iteration number T and the disturbance probabilities p_i are set.
Step 2: Initialization.
Update the game structure to G = G0 with the strategies randomly initialized; then, initial situation S0 is generated, and the system starts to evolve from S0 when τ = 0.
Step 3: Calculate the current-situation utility value.
Calculate the U of the current situation based on f(S).
Step 4: Application of optimization operator α.
α is first used to estimate the updated player utility, and then to update the strategy combination of the game players from S_j to S_j+1 when the updated one is better than the previous; otherwise, S_j is kept unchanged.
Step 5: Stability of the situation.
If the situation at time τ = τ(i) satisfies U_i+1 = U_i, then strategy S_i is stable and its corresponding solution is a Nash equilibrium solution.
Step 6: Equilibrium perturbation.
A new situation is achieved by applying β to the current situation; then, situation S_j is updated to S_j+1 and the utility value of S_j+1 is calculated. Finally, whether it is a stable evolution strategy is estimated again.
Step 7: Estimation of the termination condition.
The algorithm terminates when τ ≥ T is satisfied; otherwise, it returns to Step 6.
The EGA steps can be regarded as a stochastic process in the Nash equilibrium solution space that continuously updates the current stable solution with a better Nash equilibrium until the optimal equilibrium situation is reached. Since the main operation of the EGA is only to compare utility values between different strategy combinations, the global optimal solution can always be obtained by reasonably setting the number of iterations, because the utility of the global optimal solution is greater than that of the other feasible solutions, and the utility of all feasible solutions is greater than that of infeasible solutions. Compared with frequently used evolutionary algorithms, such as the Genetic Algorithm, the Ant-Colony Algorithm, and Artificial Neural Networks, which involve complex mutation operations, path calculations, and network learning, respectively, the speed and efficiency of the EGA are obvious advantages.
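Putting Steps 1–7 together, a compact loop might look like the following sketch; it reuses the hypothetical optimal_response_dynamic and perturb helpers sketched above, defines a stand-in for f(S), and is not the authors' implementation.

```python
import random

def total_utility(situation, utility):
    """Overall utility of a situation, assumed to be the sum of per-player
    utilities, standing in for f(S) in Equation (11)."""
    return sum(utility(i, situation) for i in situation)

def ega(strategy_sets, utility, p, T, rounds=2):
    """EGA skeleton tying Steps 1-7 together: random initial situation S0,
    Optimal-Response Dynamics (alpha) to reach a Nash equilibrium,
    perturbation (beta) to escape it, and termination at tau >= T."""
    situation = {i: random.choice(list(s))               # Step 2: S0
                 for i, s in strategy_sets.items()}
    best, best_u = dict(situation), float("-inf")
    for tau in range(T):                                 # Step 7: tau >= T
        situation = optimal_response_dynamic(            # Steps 3-5: alpha
            situation, strategy_sets, utility, rounds)
        u = total_utility(situation, utility)
        if u > best_u:                                   # keep the better equilibrium
            best, best_u = dict(situation), u
        situation = perturb(situation, strategy_sets, p) # Step 6: beta
    return best, best_u
```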
Case Study
A 3600 kN fixed winch hoist supported by the Sinohydro hydraulic machinery company was taken as an example to validate the method described above. PC set F for the fixed winch hoist was obtained through product investigation, customer-requirement analysis, the technical, economic, and social environments, and, finally, an expert team: F = {(1) low complexity, (2) manufacturability, (3) assembly ability, (4) reliability, (5) mechanical strength, (6) environment-friendly, (7) brake, (8) low noise, (9) lifting stability, (10) low cost, (11) synchronicity, (12) high energy-conversion efficiency, and (13) lightweight}. How the matter-element model of a fixed winch hoist is expressed and implemented is introduced below.
Design Knowledge
A fixed winch hoist is a heavy-tonnage lifting machine used in the water-conservancy and hydropower industries, and it is composed of 10 components. As shown in Figure 6, only the structure of the movable pulley is related to the lifting force; the other components could have different structures according to different PCs. Hence, the process of conceptual design according to a PC is transformed into the process of selecting the optimal structure of each component based on product characteristics. In order to solve the problem of fixed-winch-hoist conceptual design from the perspective of product characteristics, the function tree was first obtained by an engineer through functional decomposition, with the substructure set for each functional unit enumerated as shown in Table 1. Then, the knowledge and customer-requirement constraints were obtained as shown in Figure 7. Finally, a model for product conceptual design was established, as shown in Figure 8.
Acquiring PC Importance
Analyzing the importance of PC driven by CR: according to customer preferences, the weight vector for the five customer requirements CR = {(1) maintainability, (2) long service life, (3) work stability and reliability, (4) energy utilization rate, (5) environment-friendly} was obtained as w_s = (0.29, 0.30, 0.31, 0.09, 0.05), and the PC importance relationship matrix W_cr-PC driven by customer requirements was obtained using the Analytic Hierarchy Process. The relative importance matrix R_i between CR and PC_i was evaluated by an expert team, with R_1 taken as an example. The Analytic Hierarchy Process was then used to obtain the mutual importance vector w(2) among the elements in PC. The relative importance matrix R'_i among PC_i was evaluated by the expert team, with R'_1, between PC_1 and the others, taken as an example.
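As an illustration of how such AHP weights can be derived, the sketch below computes a priority vector from a pairwise comparison matrix using the geometric-mean (approximate eigenvector) method; the 3×3 example matrix is hypothetical, not the paper's R_1.

```python
import numpy as np

def ahp_weights(R):
    """Priority vector of a pairwise comparison matrix R via the
    geometric-mean method, a standard AHP approximation."""
    g = np.prod(R, axis=1) ** (1.0 / R.shape[0])  # row geometric means
    return g / g.sum()                            # normalize to sum to 1

# Hypothetical reciprocal comparison matrix: R[i, j] states how much more
# important criterion i is than criterion j.
R = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(R))  # approximately [0.65, 0.23, 0.12]
```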
Obtaining PC Substructure Utility Vector
According to expert analysis, the impact of each candidate substructure on the PCs was quantified on a scale from 0 to 9. As shown in Table 2, the larger the number, the greater the impact; in particular, 0 indicates that the substructure has no effect on the corresponding index.
Results and Discussion
Under the same conditions, the hoist concept was designed by the company's designers using the company's empirical design system. As shown in Figure 11, the result was: single fold with center rope; balanced pulley placement; sliding bearing; wheel coupling; safety brake; and motor and fixed pulley on different sides. Comparing the concept designed by the company engineers with that achieved by the method proposed in this paper, the main difference was that the engineers consider that placing the motor and fixed pulley on different sides makes the structure more compact, whereas there is neither a CR nor a PC related to compactness. From this point of view, the proposed design method is less advanced in the application of expert knowledge. A comprehensive comparison was made in terms of occupation, design cycle, experiential knowledge, result reliability, and economy; the results are shown in Table 3.
Obviously, this method needs improvement in the acquisition and learning of empirical knowledge, but it performs well in the other aspects.
Conclusions
Product conceptual design was investigated in this paper. Based on the experimental results achieved, the following conclusions are drawn: the problem could be effectively solved by the method proposed in this paper. Using this process, the design cycle was reduced to 0.5 hours, and occupation and economy were greatly improved.
The method does not perform well in the application of empirical knowledge. Hence, our future work will focus on how to more accurately acquire design knowledge and objective-subjective expert knowledge.
Figure 2. Matter-element model of a mechanical product.
Figure 7. Constraint expression of hoist conceptual design.
Table 1. Function units and their alternative substructure. | 8,708 | 2019-03-05T00:00:00.000 | [
"Computer Science"
] |
Shuanghuanglian Injection for Viral Pneumonia: A Protocol for Meta-analysis
Background: Viral pneumonia is inflammation (irritation and swelling) of the lungs due to infection with a virus. Rapidly progressing viral pneumonia is associated with considerable mortality, representing a severe threat and imposing a substantial financial burden worldwide. Specific treatments for viral pneumonia have not yet been determined. Recently, Shuanghuanglian injection, a Traditional Chinese Medicine, has been used to treat viral pneumonia. However, no systematic review has evaluated its efficacy and safety for viral pneumonia. Methods: We will search four English databases (PubMed, Web of Science, Embase, and the Cochrane Library) and four Chinese databases (China National Knowledge Infrastructure, Wanfang Database, Chinese Biomedical Literature Database, and Chinese Science and Technology Periodical Database) for all randomized controlled trials of Shuanghuanglian injection for the treatment of viral pneumonia up to 11 December 2020. Two reviewers will individually extract data from the included randomized controlled trials (RCTs). Data will be synthesized by either the fixed-effects or random-effects model according to a heterogeneity test. Methodological quality assessment and risk of bias will be assessed using the Cochrane risk-of-bias tool. Meta-analysis will be performed using RevMan 5.3.5 software provided by the Cochrane Collaboration. Results: Viral pneumonia has become a disease with substantial mortality. A systematic review assessing the beneficial and harmful effects of Shuanghuanglian injection for viral pneumonia is needed. This study will compare the different outcome indicators of various studies directly and indirectly. This analysis will provide a high-quality synthesis of the effectiveness and safety of Shuanghuanglian injection treatment for viral pneumonia. The main outcome indicators will include mortality, cure rate, efficacy, and adverse events confirmed by imaging diagnosis. Systematic review registration: INPLASY2020120047.
Background
Viral pneumonia is an interstitial pulmonary pneumonia caused by upper respiratory viruses (parainfluenza virus, adenovirus, respiratory syncytial virus, etc.). The patient may have severe cardiopulmonary dysfunction, which poses a huge threat to the patient's life [1]. The disease occurs frequently in winter or spring, and it can spread widely [2]. Recently, the global pandemic of the novel coronavirus pneumonia has attracted the attention of the public once again [3]. The virus causes damage to the patients' bronchial epithelial cells, ciliary dyskinesia, and destruction of the phagocytic function of neutrophils, which leads to a decline in respiratory defenses. Patients with viral pneumonia usually have features such as fever, cough, and rales on auscultation of both lungs [4]. Some severe patients may even have clinical symptoms such as dyspnea, shortness of breath, and chest tightness. Because the virulence of the pneumonia virus, the age of the patient, and the autoimmune function state are closely related to the occurrence of viral pneumonia, its pathological basis, pathogenesis, and clinical features are diverse [5]. Due to the genetic mutation of the virus, it is difficult for the human body to form a stable, long-term specific immunity, and the incidence of viral pneumonia is relatively high.
At present, the treatment of viral pneumonia is mainly based on Western medicine [6]. The commonly used antiviral drugs in Western medicine include amantadine and its analogues, and neuraminidase inhibitors.
Amantadine and its analogues have a good therapeutic effect on viral pneumonia caused by influenza A virus, but have no effect on viral pneumonia caused by influenza B virus [7], and they cannot prevent infection with influenza A virus [8]. Studies have found that neuraminidase inhibitors have a good therapeutic effect on influenza A and B viruses: the drug resistance rate of patients is extremely low, the duration of clinical symptoms is short, complications are few, and the length of hospitalization is short [9]. However, when the human immune system is basically normal, the effective rate of such drugs in preventing influenza is approximately 70%–93% [10]. Until now, there has been no uniform regulation on whether hormone therapy can be used for viral pneumonia, but it is certain that glucocorticoids are not effective against respiratory syncytial virus [11]. Furthermore, in the treatment of varicella-zoster virus and Hantavirus pneumonia, the use of hormones for anti-inflammatory treatment will aggravate the condition [12].
Clinicians often use high-dose hormone shock therapy to treat viral pneumonia [13]. During the treatment process, the side effects of hormones are quite obvious; many patients have increased mortality, severe hypertension, and femoral head necrosis. Because specific antiviral drugs are still lacking, supportive and symptomatic treatment of viral pneumonia is still the main focus. Studies have reported that, after immunoglobulin vaccination, it is possible to prevent respiratory syncytial virus infection [14]. Sensitive and specific diagnostic methods for viral pneumonia still remain to be developed.
Traditional Chinese medicine (TCM) can achieve good clinical effects in the prevention and treatment of viral pneumonia [15,16]. TCM is not only inexpensive, easy to obtain, and associated with fewer side effects, but can also regulate the patient's own immune function and eliminate or reduce the patient's clinical symptoms.
Correcting immune disorders and reducing inflammation is key to the treatment of viral pneumonia.
Shuanghuanglian injection is composed of medicinal materials such as honeysuckle, Scutellaria baicalensis, and forsythia. It has anti-inflammatory, antiviral, and immune-regulating functions. In recent years, Shuanghuanglian injection has become one of the first-choice drugs for the treatment of respiratory tract infectious diseases in Chinese medicine hospitals. However, no systematic review has evaluated its effects and safety for viral pneumonia. The aim of our study is to objectively provide helpful evidence of whether Shuanghuanglian injection would reduce the mortality and incidence of viral pneumonia. A better understanding of Shuanghuanglian injection can guide the treatment of viral pneumonia.
Methods
We will strictly abide by the requirements of the Preferred Reporting Items for Systematic Review and Meta-analysis Protocols in reporting the meta-analysis [17].
Protocol and registration
The protocol registration number was INPLASY2020120047.
Type of study.
Studies will be randomized controlled trials (RCTs) using Shuanghuanglian injection for viral pneumonia in adult patients. The language will be limited to Chinese or English. Non-RCTs, animal experiments, and studies with unclear outcome indicators, such as images and other non-quantitative indicators, will be excluded. For articles published repeatedly in Chinese and English journals, the latest published article will be taken.
Type of Participants.
Participants will be adults aged 18 years and older with a diagnosis of viral pneumonia in the general population, regardless of gender, ethnicity, race, and disease stage. Children, patients with severe cardiovascular diseases or mental illnesses, pregnant women, breastfeeding women, and cancer patients will be excluded.
Interventions
Interventions included Shuanghuanglian injection alone or in combination with conventional therapy (CT) and/or biological agents for at least 2 weeks. The controls included no treatment, placebo, and CT alone.
Outcome indicators
The main outcome indicators include the cure rate, mortality, efficacy, and adverse events confirmed by imaging diagnosis. The secondary outcome indicators include the odds ratio, risk ratio, hazard ratio, standardized incidence ratio, standardized mortality ratio, and the associated 95% confidence intervals (CIs).
Data sources and search strategies
The following databases will be searched: PubMed, Embase, Web of Science, the Cochrane Library, the China National Knowledge Infrastructure, Wanfang Database, Chinese Biomedical Literature Database, and Chinese Science and Technology Periodical Database. All RCTs on the treatment of viral pneumonia with Shuanghuanglian injection will be collected, and references in the related literature will be searched manually. The retrieval period is from the inception of each database to December 10, 2020. The language is limited to Chinese and English. The search strategy for PubMed, including all search terms, is listed. Other searches will be conducted based on these results, and the search strategy will be modified as required for the other electronic databases.
Selection of studies
All initial records from the electronic databases will be imported into the web-based systematic review software Rayyan [10]. Two authors will independently complete the following process: complete document retrieval according to the above search strategy and import the records into Rayyan; then, according to the inclusion and exclusion criteria, filter the literature by reading the titles and abstracts. If it is not possible to determine whether an article meets the requirements based on the inclusion and exclusion criteria, the full text will be read. All procedures will be carried out by two independent reviewers, who will complete a cross-check. Any conflict will be resolved by discussion with a third author. The process of study selection is shown in Fig. 1.
Data extraction
The data will be extracted by two independent reviewers in accordance with the Cochrane Handbook for Systematic Reviews of Interventions. Two investigators will independently screen all included studies and extract the following data: name of the first author, publication year, study design, country, intervention, control group, study period, sample size, number of outcomes, age at enrollment, sex, duration of follow-up, adjustments, and effect estimates. The reviewers will fill the extracted information into a pre-built Excel table. If necessary, we will contact the trial authors for further information.
Dealing with missing data
If there are missing data in an included study, we will contact the original authors of the article to obtain the original information. If the missing data are still not available, the existing data will be analyzed and a sensitivity analysis will be performed to address the potential impact of the missing data.
Risk of bias assessment and quality of selected studies
Two researchers will independently evaluate the risk of bias of the randomized controlled trials in accordance with the Cochrane Handbook for Systematic Reviews of Interventions, including the following items: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other biases. The quality of the studies will be classified as being at high, unclear, or low risk of bias. After completion, they will recheck their assessments. In the case of a disagreement, they will discuss; if no agreement can be reached, a decision will be made in consultation with a third-party researcher.
Statistical analysis
Statistical analyses will be performed using RevMan 5.3 software. For dichotomous data, the relative risk (RR) will be calculated with 95% confidence intervals (CIs). For continuous data, a fixed-effect mean difference (MD) with 95% CI will be calculated for outcomes reported on the same scale, and the standardized mean difference (SMD) with 95% CI will be calculated for outcomes reported on different scales.
Assessment of heterogeneity: statistical heterogeneity will be calculated using the I² statistic, with I² > 50% considered substantial. For meta-analyses with non-significant heterogeneity, we will apply a fixed-effect model (FEM); otherwise, subgroup analyses will be performed to explore the heterogeneity, and a random-effects model will be applied.
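To make the fixed-effect and heterogeneity logic concrete, the following is a minimal sketch (not the RevMan workflow used in this protocol) of inverse-variance pooling of log relative risks together with Cochran's Q and the I² statistic; the per-study numbers are hypothetical.

```python
import numpy as np

def pool_fixed_effect(log_rr, se):
    """Inverse-variance fixed-effect pooled estimate, with Q and I^2."""
    w = 1.0 / se**2                       # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (log_rr - pooled)**2)  # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se_pooled, i2

# Hypothetical per-study log relative risks and standard errors
log_rr = np.array([np.log(0.70), np.log(0.85), np.log(0.60)])
se = np.array([0.20, 0.15, 0.25])

est, se_est, i2 = pool_fixed_effect(log_rr, se)
lo, hi = est - 1.96 * se_est, est + 1.96 * se_est
print(f"Pooled RR = {np.exp(est):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f}), I^2 = {i2:.0f}%")
```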
Subgroup analysis: when heterogeneity is detected, we will judge the source of heterogeneity through subgroup analysis (e.g., different types of Chinese medicine therapies, research quality, publication age, participant population, and length of treatment). In addition, we can also observe the relationship between the effect values and the grouping variables.
Publication bias
The Cochrane Collaboration's risk-of-bias tool will be used to assess bias [18]. Seven domains of risk will be assessed: sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessors, incomplete outcome data, selective reporting, and other bias (baseline balance and funding/conflict of interest). Publication bias for meta-analyses of ≥ 10 studies will also be assessed by using funnel plots and the Egger test.
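For illustration, a hedged sketch of the Egger regression test follows: the standardized effect is regressed on precision, and an intercept significantly different from zero suggests small-study (publication) bias. The helper name and example usage are hypothetical.

```python
import numpy as np
from scipy import stats

def egger_test(effect, se):
    """Egger's regression: effect/se regressed on 1/se with an intercept.
    Returns the intercept and its two-sided p-value (t distribution)."""
    y = effect / se                       # standardized effects
    x = 1.0 / se                          # precisions
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - 2
    sigma2 = resid @ resid / dof          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_int = beta[0] / np.sqrt(cov[0, 0])  # t statistic for the intercept
    p = 2 * (1 - stats.t.cdf(abs(t_int), dof))
    return beta[0], p

# Hypothetical study effects (log RR) and standard errors
intercept, p = egger_test(np.array([-0.36, -0.16, -0.51, -0.05]),
                          np.array([0.20, 0.15, 0.25, 0.30]))
print(intercept, p)
```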
Assess the quality of evidence
The evaluation of the strength of the evidence will be based on the Grading of Recommendations Assessment, Development, and Evaluation system; there are four levels of evidence strength: high, medium, low, or very low.
Discussion
Respiratory tract infections are important components of respiratory diseases and have become one of the major death threats worldwide [19]. Viral pneumonia is a common respiratory disease caused by viral infection. It can easily cause obvious respiratory symptoms due to inflammatory changes in the affected lung tissue, trachea, and bronchi, which can affect the normal life and work of patients [20]. TCM has the characteristics of multi-component, multi-target, and multi-channel treatment. It has a long history of treating viral diseases and has remarkable efficacy. Although Shuanghuanglian injection has significant advantages of multiple approaches and multiple targets in the treatment of viral pneumonia, due to the complex composition of Chinese medicines, potential safety hazards may exist. Therefore, the purpose of this meta-analysis is to systematically summarize and evaluate the efficacy and safety of Shuanghuanglian injection in the treatment of viral pneumonia. It will help clinicians adopt timely treatment methods based on the diagnosis results and prevent further expansion of viral pneumonia, which has important clinical significance for the early treatment and rehabilitation of patients. Abbreviations: 95% CI = 95% confidence interval, IF = inconsistency factor, RCT = randomized controlled trial, PRISMA-P = Preferred Reporting Items for Systematic Review and Meta-analysis Protocols, TCM = traditional Chinese medicine, CT = conventional therapy, COVID-19 = novel coronavirus pneumonia.
Not applicable
Authors' contributions: CH is the guarantor. CH, YY, and QZ contributed to the conception of the study. The manuscript presenting the protocol was drafted by YY and revised by CH. The search strategy was developed by all authors and will be run by QZ and MH, who will also independently screen the potential studies, extract data from the included studies, and assess the risk of bias. ZS and SX will conduct and finish the data synthesis. QZ and ZL will arbitrate in cases of disagreement and ensure that no errors occur during the study. All authors critically revised the draft and approved the final manuscript as submitted.
Funding
This work was supported by the Department of Science and Technology of Sichuan Province, grant number 2016JY0008.
Availability of data and materials
All data generated or analyzed during this study will be included in the published article and its supplementary files. Should any additional information be required, it will be made available from the corresponding author on reasonable request.
Ethics approval and consent to participate Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Author details | 3,253.6 | 2020-12-08T00:00:00.000 | [
"Medicine",
"Biology"
] |
Variable Switching Frequency for ZVS over Wide Voltage Range in Dual Active Bridge
The Dual Active Bridge (DAB) converter is known for its advantageous characteristics, including bidirectionality, galvanic isolation, and soft-switching operation. However, achieving Zero Voltage Switching (ZVS) across the complete operation range is not guaranteed, particularly over a wide input–output voltage ratio. This paper explores the integration of the switching frequency as a control variable in the DAB converter to ensure ZVS operation across a wide voltage range, employing Single Phase Shift (SPS) modulation. The study evaluates the RMS and reactive currents under variable switching frequency, presenting the advantages of this approach. Moreover, it includes a design-oriented analysis of the ZVS limits and their relationship with the switching frequency, aiming to ensure ZVS at any operating point. Experimental results validate the theoretical analysis, while presenting the main advantages of the variable switching frequency implementation.
Introduction
The Dual Active Bridge (DAB) converter is widely utilized in several applications, including grid power transmission and automotive applications. Despite having been studied for several decades [1–6], the DAB converter remains popular today and is extensively employed in diverse applications, including battery chargers for electric vehicles. It offers several attractive characteristics, including bidirectional power transfer, galvanic isolation, high power density, and soft-switching capability.
The DAB converter consists of two Full Bridges (FBs) connected through an inductor and a transformer that provides galvanic isolation. Its electrical schematic is shown in Figure 1, where the sum of the inductor and the transformer leakage inductance is represented by Lk, V1 and V2 are the input and output DC voltages, respectively, and v1 and v2 are the AC voltages at their respective FBs. In this analysis, Single Phase Shift (SPS) modulation is utilized, which is widely employed in industry due to its simplicity and ease of implementation. It consists in controlling the power flow with a phase shift between both FBs, and it is further discussed in the next section.
There have been numerous studies aimed at expanding the soft-switching operation range of the DAB converter. Among these studies, several modulation strategies have been proposed using the Pulse Width Modulation (PWM) of the FBs as an additional control variable. One such strategy is the Double Phase Shift (DPS) modulation [7], which consists in varying the phase shift between the branches of one FB, as well as the phase shift between both FBs. Additionally, it is possible to vary the phase shift of the branches in both FBs, along with the phase shift between the FBs, adding additional degrees of freedom and resulting in a more complex modulation known in some literature as Triple Phase Shift (TPS) [8,9]. Some studies combine all the previously mentioned modulation techniques along an operation profile [10,11]. These techniques may also be employed to optimize the performance of the converter, e.g., to minimize conduction losses [12]. All of these are modulation methods that can be easily adapted to any design of the converter and any change in the operating point; one of their main drawbacks is the increased implementation complexity. Besides modulation strategies, the soft-switching range of the DAB converter may also be extended by hardware methods. Some literature considers using the transformer magnetizing current to provide the energy for the parasitic capacitances; others consider using several inductors controlled through additional switches to modify the inductance [13], or directly modifying the inductance value using a variable inductor [14]. These solutions have some drawbacks, including the need for additional components or the limited adaptability of the design to other operating conditions.
The control strategies based on varying the switching frequency in the DAB can be classified into three categories [6]: those which employ the switching frequency as a control variable for regulating the power; those which adopt the switching frequency for extending the maximum power of the DAB; and those which use the switching frequency for expanding the soft-switching operational range. All of them share relative simplicity as their main benefit; in all cases, the resulting control law is a linear relationship, which is advantageous compared to phase-shift modulations. However, there are some limitations to highlight. These techniques have in common the main disadvantages already reported when the control varies the frequency: an increase in the current harmonic content (and therefore a more complex design of EMI filters, which leads to bulkier stages); a more complex design in general for estimating the power loss (and hence for obtaining an optimum design); and practical limitations in the frequency range (the lowest frequency is limited by the audible range, i.e., 8 kHz, and the highest frequency is restricted by the switching characteristics of the power transistor technology used).
In the case of those solutions which use the switching frequency for extending the soft-switching operational range, there have been several studies with different approaches applied to the DAB. In [15], variable frequency is employed for the DAB converter operating at a fixed input–output voltage ratio; the optimum switching frequency for different output powers is calculated based on a power loss model. In [16], variable frequency is used to improve the efficiency at light loads, although it is used in combination with a particular transformer, denominated a dual leakage transformer. In [17], the switching frequency is used in a control algorithm to increase the power range and extend Zero-Voltage Switching (ZVS); it consists of using two switching frequencies, the lower frequency to increase the power of the converter, and the higher frequency to maintain ZVS at low power. In [18], an optimal, full-operating-range ZVS modulation scheme is presented, which includes variable frequency, although the DAB converter is part of a two-stage ac-dc converter. In [19], variable frequency is employed in SPS modulation at low power to keep the converter working with ZCS; a power loss analysis is performed using variable frequency and compared to constant frequency. In [20], a modulation strategy which includes variable frequency is presented; it considers three modulation strategies, for low-, medium-, and high-power levels, oriented to minimize conduction losses, where the switching frequency varies at medium power, transitioning from the maximum frequency at low power to the minimum frequency at high power. In [21], a variable frequency control is presented which includes Maximum Power Point Tracking (MPPT) using the perturb-and-observe technique to minimize the RMS currents; it develops a generalized state-space average model to obtain a new small-signal model between the switching frequency and the output voltage. In [22], a Variable Frequency Modulation (VFM) is presented, with a closed-form algorithm allowing the converter to operate with ZVS over a wide power range with minimum reactive currents; it discusses the phase-drift phenomenon, presenting a compensation scheme to ensure the operation of the VFM. In [23], an optimization technique is presented, where the converter works at an optimal constant phase shift, defined by the voltage ratio, to obtain minimum RMS currents, and the switching frequency is varied to modulate the power. In [24], a full-ZVS control is proposed for a single-stage semi-DAB ac-dc converter, in which variable frequency control is implemented to extend the ZVS operation. In [25], the primary side works at half the switching frequency of the secondary side, using a modulation method denominated in the article as asymmetric half-frequency modulation (AHFM). In [26], the EMI in the DAB converter using variable switching frequency modulation (VSFM) is analyzed, and an improved VSFM is proposed. There are also studies that combine variable frequency with modulation techniques such as Extended Phase Shift (EPS) [27–29].
As its main contribution, this paper presents a design-oriented analysis of the DAB converter, with the aim of ensuring ZVS operation at any operating point over a wide voltage range. This is done by including the switching frequency as a control variable. Moreover, it presents the behavior of the RMS and reactive currents analyzed at constant power and variable switching frequency. Analyzing the converter at constant power facilitates the understanding of working with variable switching frequency, emphasizing the dependence between the input–output voltage ratio and the phase-shift angle. This is a general analysis that can be applied to any operating point and extended to any application. This paper is organized as follows: in Section 2, a brief description of the operating principles of the DAB converter is presented; in Section 3, the RMS and reactive currents are discussed, along with the ZVS limits; the experimental results are discussed in Section 4, and the conclusions are presented in Section 5.
DAB Converter Operating Principles
One of the main advantages of this topology is its ability to achieve soft-switching operation, either with ZVS or Zero Current Switching (ZCS). ZVS is achieved using reactive currents, which enable a soft turn-on of the MOSFETs during the dead-time interval. ZCS can be achieved by implementing modulation methods (DPS, TPS, etc.) that allow the converter to switch the devices when the inductor current is zero. It is well known that ZVS is not guaranteed for every operating point, as it depends mainly on the power managed and on the input–output voltage ratio, M. Note that the definition of M in this paper is the voltage ratio, regardless of the power transfer direction, and is not to be confused with the converter gain.
The power flow in the DAB converter has been widely studied, and it depends on several parameters, some of which can be used as control variables. The average power flow using SPS can be described by the following:
P = (V1 · V2 · φ(π − φ)) / (2π² · fs · Lk) (1)
where V1, V2, and Lk can be considered constant for a given operating point, leaving the switching frequency, fs, and the phase shift, φ, as the control variables. It would also be possible to control the voltages, e.g., by adding additional converter stages connected through a dc-link. The inductor can also become a control variable, either by using more than one inductor connected through switches [13], or by using a variable inductor [14], as exposed in the previous section.
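As a quick numeric illustration of Equation (1), the sketch below evaluates the transferred power and inverts the quadratic in φ to find the phase shift needed for a target power; the component values are hypothetical, and the voltages are assumed referred to the same transformer side.

```python
import math

def dab_power(v1, v2, phi, fs, lk):
    """Average DAB power under SPS, Equation (1); phi in radians, 0..pi/2."""
    return v1 * v2 * phi * (math.pi - phi) / (2 * math.pi**2 * fs * lk)

def phase_for_power(p_target, v1, v2, fs, lk):
    """Smaller root of phi*(pi - phi) = 2*pi^2*fs*lk*P/(V1*V2)."""
    c = 2 * math.pi**2 * fs * lk * p_target / (v1 * v2)
    disc = math.pi**2 - 4 * c
    if disc < 0:
        raise ValueError("target power exceeds the converter capability")
    return (math.pi - math.sqrt(disc)) / 2

# Hypothetical operating point: 400 V / 380 V, 100 kHz, 20 uH, 2 kW
phi = phase_for_power(2e3, 400.0, 380.0, 100e3, 20e-6)
print(math.degrees(phi))                          # ~10 degrees
print(dab_power(400.0, 380.0, phi, 100e3, 20e-6)) # ~2000 W, sanity check
```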
The simplest modulation method is SPS, which consists of controlling the power transfer by modifying the phase shift between both FBs, while their PWMs are kept constant at a duty cycle of 50%. This modulation technique has several advantages, such as its simplicity, its low computational demand, and its capability to achieve full power, while other modulation methods limit the converter's maximum power. In Figure 2, the SPS modulation waveforms are presented considering a positive power transfer, from FB1 to FB2. The phase-shift angle, φ, between v1 and v2, generates the inductor current iLk. The current values at the switching instants of the FB1 and FB2 power devices, i1 and i2, respectively, are shown in the figure and can be defined as follows:
i1 = (π(V1 − V2) + 2φV2) / (4π · fs · Lk) (2)
i2 = (π(V2 − V1) + 2φV1) / (4π · fs · Lk) (3)
with the voltages referred to the same side of the transformer.
Figure 2. The converter waveforms, from top to bottom: voltages V1 and V2, the current through the inductor, iLk, the input current, ii, and the output current, io. The waveforms are referred to FB1.
The output and input current waveforms, io and ii, respectively, are shown in Figure 2, where the shaded area represents the reactive currents, with their values at the switching instant of the corresponding FB shown in the respective color. The reactive currents are discussed in the next section.
This modulation technique achieves the maximum power transfer at a phase-shift angle of π/2, although a lower phase-shift angle is usually chosen for achieving the maximum rated power at the design stage of the converter. This is due to the loss of linearity at angles greater than π/3, which makes the converter harder to control, as well as the lower increment in power transfer with respect to the phase shift. It is also important to note that this modulation has high reactive currents, and the fact that they increase with the phase-shift angle implies that lower phase-shift angles are to be preferred, as this benefits the converter regarding conduction losses. However, as explained in the next section, for operating points where M ≠ 1, there is a minimum phase shift that limits the ZVS operation of the converter.
RMS and Reactive Currents
ZVS operation in the DAB converter working with SPS modulation depends on the inductor current at the switching instant being high enough to charge/discharge the parasitic capacitances of the MOSFETs. The inductor current values at the switching instants, i1 and i2 in Figure 2, must be positive to guarantee ZVS in the respective FB. These current values are also shown for the output and input currents in the last two waveforms from top to bottom. These are the reactive currents that flow through the body diodes of the MOSFETs during the dead time, allowing ZVS operation in their respective FBs.
The RMS current through the inductor, iLk,RMS, can be calculated using (2) and (3) and is defined in [30]. Considering constant-power operation, it is possible to vary the switching frequency as a function of the phase shift and obtain iLk,RMS. Figure 3 shows the normalized iLk,RMS for different values of M, as a function of the phase shift and the normalized switching frequency, with the latter shown in red and referred to the right y-axis. The dashed lines represent the ZVS limit for each case in the corresponding color; these ZVS limits are obtained using (8). The figures presented in this paper are generated using Matlab R2022b. Note that as the phase shift tends to zero, so does the switching frequency. Therefore, it is important to define a minimum switching frequency, which is limited by the magnetic devices, as low frequencies may increase the magnetic flux density in the core, causing high power losses and potentially the saturation of the core. The currents are normalized with respect to the average input current, given by the power and the input voltage for each value of M. The switching frequency is normalized with respect to the maximum switching frequency for each value of M, where fs,max is the switching frequency at the maximum phase shift for constant-power operation. The minimum iLk,RMS is marked with an 'x' on each curve in the respective color. The figure shows that for M = 1, the RMS current always increases with the phase shift, and the lowest value occurs at the lowest possible phase shift. However, for M ≠ 1, the minimum current value occurs at higher phase-shift angles. An important observation is that the minimum iLk,RMS, for a given value of M, occurs at the same phase-shift angle regardless of the power, and depends only on M. This means that, for any power, the minimum RMS current can be achieved by using variable switching frequency. This is an advantage, as it makes it possible to know the phase-shift angle at which the lowest value of iLk,RMS occurs only by knowing the voltages. This facilitates a possible control algorithm oriented to minimize conduction losses, in which the phase shift is set as a function of M to work with the minimum iLk,RMS, and the frequency is used to modulate the power transfer. Another observation is that the minimum iLk,RMS always lies inside the ZVS limit.
Reactive currents in the DAB converter are necessary for ZVS operation, as they provide the energy for soft switching. During the dead time, the reactive currents flow through the body diodes of the MOSFETs. These currents need to be large enough to provide the energy to charge and discharge the parasitic capacitances of the MOSFETs and turn the devices on with ZVS. The minimum currents needed for ZVS, i1,min and i2,min, are defined in [10,31], where Coss,eq1 and Coss,eq2 are the total parasitic capacitances at FB1 and FB2, respectively, and Lk1 and Lk2 represent the inductance Lk from Figure 1 referred to FB1 and FB2, respectively. The currents i1,min and i2,min are also referred to their respective FBs. It is possible to estimate the MOSFET Coss at a given operating point [32,33]; however, for simplicity, in this analysis it is considered constant, and its value is approximated using the Coss value given by the datasheet. These currents define the ZVS limits, which are discussed in the next subsection.
Figure 4 shows the normalized reactive currents for different values of M. The average reactive currents can be obtained from the shadowed area in Figure 2, where iq1 and iq2 correspond to the reactive currents in FB1 and FB2, respectively. The normalization is done using the average input current as the base value. The reactive currents in FB1 and FB2 are shown in blue and orange, respectively, while the ideal ZVS limits are marked with a vertical dashed line. It is evident that the reactive current becomes zero at the ZVS limit of its respective FB, as can also be deduced from Figure 2, and both reactive currents are minimum at this point. Like the RMS currents, for M = 1 the reactive currents increase with the phase shift, and as M gets farther away from 1, the phase shift at which they reach their minimum increases. For M = 1, the currents in both FBs are equivalent.
It can be concluded that operating the converter with phase-shift angles close to the ZVS limit yields the lowest reactive current values. Note that Figure 4 shows reactive currents to the left of the ideal ZVS limit; however, these do not provide ZVS operation, as they flow through the device after it has been turned on.
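To make the ZVS condition concrete, the sketch below computes the switching-instant currents of Equations (2) and (3) and compares them against an energy-based minimum current; the criterion i_min = sqrt(2·Coss,eq·V²/Lk) is an assumption standing in for the paper's Equations (5) and (6), which are not reproduced here.

```python
import math

def switching_currents(v1, v2, phi, fs, lk):
    """i1 and i2 from Equations (2) and (3), voltages referred to one side."""
    i1 = (math.pi * (v1 - v2) + 2 * phi * v2) / (4 * math.pi * fs * lk)
    i2 = (math.pi * (v2 - v1) + 2 * phi * v1) / (4 * math.pi * fs * lk)
    return i1, i2

def zvs_ok(v1, v2, phi, fs, lk, coss_eq):
    """Check both bridges against an assumed energy-balance minimum:
    0.5*Lk*i^2 >= 0.5*(2*Coss_eq)*V^2  ->  i_min = sqrt(2*Coss_eq*V^2/Lk)."""
    i1, i2 = switching_currents(v1, v2, phi, fs, lk)
    i1_min = math.sqrt(2 * coss_eq * v2**2 / lk)
    i2_min = math.sqrt(2 * coss_eq * v1**2 / lk)
    return i1 > i1_min and i2 > i2_min

# Hypothetical point: 400 V / 380 V, 10 deg, 100 kHz, 20 uH, Coss_eq = 200 pF
print(zvs_ok(400.0, 380.0, math.radians(10), 100e3, 20e-6, 200e-12))
```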
ZVS Boundaries
The limits for ZVS operation depend on the reactive current, as mentioned previously. Considering SPS modulation, and defining a minimum current that will provide the energy for the parasitic capacitances, the minimum angle that guarantees ZVS can be obtained by using (5) and (6) to solve (2) and (3). The effect of the parasitic capacitance modifies the ZVS limits, as can be seen from (5) and (6) [2]. The switching frequency as well as the voltages also affect the ZVS limits, as can be seen from (7). Figure 5 shows the modification of the ZVS limits when fs varies from 20 kHz (left) to 120 kHz (right). As previously highlighted, (11) establishes the minimum phase shift to obtain ZVS. This value varies linearly with the switching frequency: the higher fs, the higher the minimum phase shift. Hence, the ZVS limit moves towards higher values of phase shift when fs increases, and the resulting phase shift can become significant enough to be non-negligible. Even so, if we consider the effect of the frequency when plotting the ZVS limits, every operating point will have its own ZVS limit. Therefore, for simplicity in presenting this analysis, these effects are neglected and the ZVS limits are considered ideal. This implies that ZVS operation holds when i1 > 0 and i2 > 0. Thus, (7) can be simplified and rewritten as below:
φ_ZVS,min = (π/2) · max(1 − M, 1 − 1/M) (8)
Note that now the ZVS limits are independent of the voltage values and depend only on their ratio, M. This simplification is beneficial for the converter design process, allowing the inclusion of other parameters at later design stages.
Having simplified the equations for calculating the ZVS limits, it is easier to plot different curves on the same plane using the ideal ZVS limits as a reference. To visualize the converter operation at different frequencies, it is useful to represent a constant power curve, 'M vs. phase shift', as it provides insight into the relationship between M and the ZVS limits. This curve can be obtained from (1) and is described as follows: where P const is the constant power. Note that the curve must be referred to one of the voltages, which is selected arbitrarily, as it determines the 'slope' of the curve. For this analysis, each value of V 2 will have its own curve and M will vary as a function of V 1 .
Figure 6 shows the converter operating at constant power, evaluated at two different switching frequencies. The power curves are presented for three different values of V 2 . Solid lines represent the converter working at an arbitrary switching frequency, f n , and the dashed line curves represent it working at 2 f n . The colored areas represent the variation of the voltage range V 1 for each power curve in its respective color. The voltages are normalized. The figure shows the power curves in red, blue, and green, for V 2 = 0.75, 1, and 1.25, respectively. The voltage V 1 ranges from 0.8 to 1. From this figure, the influence of the switching frequency on the position of the converter operating points with respect to the ZVS limits is clearly appreciated, as at 2 f n the converter operates within the ZVS limits throughout the whole voltage range. That is not the case for the switching frequency f n , as it can be seen that for V 2 = 1.25, the whole operation range is outside the ZVS limits, and for V 2 = 0.75, it partially operates within the limits. It is observed that as M grows farther away from 1, higher frequencies are needed to keep the converter operating within the ZVS limits.
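As a rough numerical illustration of these constant power curves (a sketch only: the paper's (1) and the Table 1 values are not reproduced here, so the standard SPS power expression P = n·V 1 ·V 2 ·φ·(π − φ)/(2·π 2 ·f s ·L k ), the definition M = n·V 2 /V 1 , the classical ideal SPS ZVS boundaries and the inductance value are all assumptions), the following Python snippet plots 'M vs. phase shift' curves at two frequencies against the ideal ZVS limits:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed standard SPS power law: P = n*V1*V2*phi*(pi - phi) / (2*pi^2*fs*Lk)
def v1_for_power(P, V2, phi, fs, Lk, n):
    """Primary voltage delivering power P at phase shift phi (rad)."""
    return 2 * np.pi**2 * fs * Lk * P / (n * V2 * phi * (np.pi - phi))

Lk, n, P = 110e-6, 2.0, 10e3             # illustrative values, not Table 1
phi = np.linspace(0.02, np.pi / 3, 300)

fig, ax = plt.subplots()
for fs, style in [(20e3, "-"), (40e3, "--")]:
    for V2 in (400.0, 500.0, 600.0):
        V1 = v1_for_power(P, V2, phi, fs, Lk, n)
        M = n * V2 / V1                  # conversion ratio (definition assumed)
        ax.plot(np.degrees(phi), M, style, label=f"V2={V2:.0f} V @ {fs/1e3:.0f} kHz")

# Classical ideal SPS ZVS boundaries (assumed form of the simplified limits)
ax.plot(np.degrees(phi), 1 - 2 * phi / np.pi, "k:")
ax.plot(np.degrees(phi), 1 / (1 - 2 * phi / np.pi), "k:")
ax.set(xlabel="phase shift (deg)", ylabel="M", ylim=(0.4, 2.0))
ax.legend(fontsize=7)
plt.show()
```

With these assumptions, increasing the frequency moves every constant power curve towards larger phase shifts, i.e., towards the interior of the ZVS region, which is the behavior discussed above.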
Figure 6. M vs. phase shift constant power curves. Power curves are referred to V 2 values: 0.75 (red), 1 (blue) and 1.25 (green). The colored areas represent the variation of V 1 from 0.8 to 1. Solid lines and dashed lines correspond to a switching frequency of f n and 2·f n , respectively. The ideal ZVS limits are shown in black dotted lines.
Minimum Switching Frequency
When working at a certain power, the phase shift is a consequence of the given operating point, making it necessary to consider the switching frequency as a variable to modify the phase shift. This allows the power curve to move in any direction, making it possible to work at any given value of M without losing ZVS. According to the analysis presented in the previous subsection, it becomes necessary to define the minimum frequency needed to operate within the ZVS limits for any given operating point. From Figure 6, it is stated that higher frequencies are needed to move the power curves inside the ZVS limits. However, increasing the switching frequency too much may lead to higher conduction and switching losses. Therefore, it is beneficial to work at low phase-shift angles, close to the ZVS limits, where the RMS currents are low, as well as the switching frequency.
The minimum switching frequency needed to guarantee ZVS at any operating point can be found by substituting φ min from (12) into (13), resulting in the following: noting that the equation must also be referred to one of the voltages, V 2 in this case. The minimum frequency for the entire range of operation can be found by substituting the variables V 2 and M with V 2 max and M max , respectively, for M > 1, and with V 2 min and M min , respectively, for M < 1.
Figure 7 shows three power curves at 10 kW referred to a voltage V 2 = 500 V. The curves have a switching frequency of 50 kHz (blue), 41 kHz (green), and 30 kHz (red). It is evident that by modifying the switching frequency, the converter is also able to modify the phase shift, and thus work with the minimum phase shift while staying within the ZVS operation limits. The switching frequency must be increased when the operating point is outside the ZVS limits (red operating point) and decreased to obtain lower reactive currents (blue operating point). The green operating point is on the ZVS limits, where it operates at the minimum switching frequency and minimum phase shift.
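A minimal sketch of this minimum-frequency calculation is given below, assuming the classical ideal SPS ZVS boundaries and the same standard SPS power expression as before (the exact form of the paper's (12)–(14) may differ); the inductance is an illustrative value.

```python
import numpy as np

def phi_min_ideal(M):
    """Classical ideal SPS ZVS boundary, used here as the assumed phi_min(M)."""
    return np.pi * (M - 1.0) / (2.0 * M) if M >= 1.0 else np.pi * (1.0 - M) / 2.0

def fs_min_for_zvs(P, V1, V2, Lk, n):
    """Smallest switching frequency that places the operating point on the ideal
    ZVS boundary, from P = n*V1*V2*phi*(pi - phi)/(2*pi^2*fs*Lk)."""
    M = n * V2 / V1                      # conversion ratio (definition assumed)
    phi = phi_min_ideal(M)
    if phi <= 0.0:                       # M = 1: any positive phase shift gives ZVS
        return 0.0
    return n * V1 * V2 * phi * (np.pi - phi) / (2.0 * np.pi**2 * Lk * P)

# Example: 10 kW, V2 = 500 V, V1 = 750 V, n = 2, illustrative Lk = 110 uH
print(f"fs_min = {fs_min_for_zvs(10e3, 750.0, 500.0, 110e-6, 2.0) / 1e3:.1f} kHz")
```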
Experimental Results
This section has two main objectives: to validate the theoretical analysis regarding the RMS currents and to assess the implementation of the switching frequency as a control variable to ensure ZVS operation over a wide voltage range. The validation process is performed with the DAB converter prototype shown in Figure 8, whose characteristics are described in Table 1. The converter maximum power, at 20 kHz and maximum voltages, is 43 kW, although the switching devices are designed to operate at a nominal power of 20 kW. The tests and analysis are performed at a constant power of 10 kW. This value is selected to allow the converter to work throughout the voltage and frequency ranges, maintaining the operating points around the ZVS limits. The switching frequency, f s , ranges from 20 kHz to 70 kHz. The input voltage, V 1 , ranges from 300 V to 500 V and the output voltage, V 2 , ranges from 650 V to 800 V.
The converter is tested using two different bidirectional voltage sources connected to the converter ports. The switching devices are SiC MOSFETs on both FBs, and the transformer turns ratio is n = 2. The converter is controlled by a Microcontroller Unit (MCU), Texas LaunchPad F28379D.

Figure 9 shows the experimental validation of the theoretical i Lk RMS normalized curves. The curves show the theoretical values, and the operating points from the experimental results are marked with an 'x' in their respective color. The ideal ZVS limits are shown in dashed vertical lines for each value of M, in their respective color. Each operating point varies the phase shift along with the switching frequency to maintain a constant power of 10 kW. The results match the theoretical curves, validating the presented analysis regarding the RMS currents. The operating points in Figure 9 are described in Table 2, showing the voltages, the phase shift, the switching frequency, and the RMS current for each operating point. The table is sorted from smallest to largest phase shift for each value of M. The efficiencies shown in Table 2 are for reference, as the converter is not optimized, and there are several factors that may impact the power losses. The currents in Figure 9 are shown normalized (i Lk RMS in per unit).
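The normalized RMS current can also be estimated from first principles by sampling the piecewise-linear SPS inductor current over one period, as in the sketch below; the inductance value and the per-unit base are assumptions, since Table 1 and the exact per-unit definition are not reproduced here.

```python
import numpy as np

def ilk_waveform(V1, V2, n, phi, fs, Lk, npts=4000):
    """Steady-state SPS ac-link current, obtained by integrating the piecewise
    square-wave voltage across the series inductance (zero-mean steady state)."""
    w = 2 * np.pi * fs
    theta = np.linspace(0.0, 2 * np.pi, npts, endpoint=False)
    v1 = np.where(theta < np.pi, V1, -V1)
    v2p = n * V2 * np.where((theta - phi) % (2 * np.pi) < np.pi, 1.0, -1.0)
    i = np.cumsum((v1 - v2p) / (w * Lk) * (2 * np.pi / npts))
    return theta, i - i.mean()

def ilk_rms(V1, V2, n, phi, fs, Lk):
    return float(np.sqrt(np.mean(ilk_waveform(V1, V2, n, phi, fs, Lk)[1] ** 2)))

# Sweep at a constant 10 kW for one voltage pair (Lk and the base are assumed)
V1, V2, n, Lk, P = 750.0, 500.0, 2.0, 110e-6, 10e3
for fs in (30e3, 41e3, 50e3):
    c = 2 * np.pi**2 * fs * Lk * P / (n * V1 * V2)     # phi*(pi - phi) = c
    phi = (np.pi - np.sqrt(np.pi**2 - 4 * c)) / 2
    i_base = n * V2 / (2 * np.pi * fs * Lk)            # assumed per-unit base
    print(f"{fs/1e3:.0f} kHz: phi = {np.degrees(phi):5.2f} deg, "
          f"iLk_RMS = {ilk_rms(V1, V2, n, phi, fs, Lk) / i_base:.3f} pu")
```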
Figure 10 shows constant power curves for two different cases. Each case compares two different operating points. Figure 10a,c,e corresponds to the first case, operating at 10 kW, V 1 = 750 V, V 2 = 500 V, at two different switching frequencies: 20 kHz and 42.5 kHz.
Figure 10a shows both operating points marked with an asterisk and their corresponding efficiencies. The first operating point switches at 20 kHz (blue) outside the ZVS limits, and the second switches at 42.5 kHz inside the ZVS limits. As expected, the operating point moves within the ZVS limits by increasing the switching frequency. Figure 10b,d,f shows the second case, where the converter is operating at 10 kW, V 1 = 800 V, V 2 = 300 V, at two different switching frequencies: 20 kHz and 38 kHz. Like the first case, Figure 10b shows both operating points marked with an asterisk and their corresponding efficiencies. The first operating point switches at 20 kHz (blue) outside the ZVS limits, and the second switches at 38 kHz inside the ZVS limits. In the same manner as the previous case, increasing the frequency allows the operating point to move within the ZVS limits. This can also be appreciated from the converter current waveforms. Figure 10c-f shows the following waveforms for 20 kHz, 42.5 kHz, 20 kHz, 38 kHz, respectively: the AC voltages v 1 , v 2 , the inductor current i Lk , and the current values at the switching instant i 1 and i 2 .
For both cases, it can be appreciated from the inductor current at the switching instant that, for the lower frequencies, the value is negative, therefore the converter is not operating with ZVS. According to (3), the current value needs to be positive to be in ZVS operation. At higher frequencies, the value of the current at the switching instants becomes positive, implying the converter is soft-switching.
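This sign check can be reproduced with the closed-form switching-instant currents of the piecewise-linear SPS model, as sketched below; the inductance value is an assumption, and the signs are chosen so that positive values indicate ZVS, consistent with the statement above.

```python
import numpy as np

def switching_instant_currents(V1, V2, n, phi, fs, Lk):
    """Inductor current at the two switching instants under SPS modulation,
    signed so that ZVS of both bridges corresponds to i1 > 0 and i2 > 0."""
    V2p, denom = n * V2, 4 * np.pi * fs * Lk
    i1 = (V1 * np.pi - V2p * (np.pi - 2 * phi)) / denom      # FB 1 instant
    i2 = (V2p * np.pi - V1 * np.pi + 2 * phi * V1) / denom   # FB 2 instant
    return i1, i2

# Operating point in the style of Figure 10, case 1 (Lk is an assumed value)
V1, V2, n, Lk, P = 750.0, 500.0, 2.0, 110e-6, 10e3
for fs in (20e3, 42.5e3):
    c = 2 * np.pi**2 * fs * Lk * P / (n * V1 * V2)
    phi = (np.pi - np.sqrt(np.pi**2 - 4 * c)) / 2
    i1, i2 = switching_instant_currents(V1, V2, n, phi, fs, Lk)
    print(f"fs = {fs/1e3:4.1f} kHz: i1 = {i1:6.1f} A, i2 = {i2:5.1f} A, "
          f"ZVS = {i1 > 0 and i2 > 0}")
```

With these illustrative values, the 20 kHz point yields a negative current at the FB 1 switching instant, while the higher frequency makes both currents positive, mirroring the waveform behavior described above.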
Figure 11a,c,e shows a case for M = 1, in which ZVS operation should apply throughout the whole operation range (considering ideal ZVS limits). The operating points are defined by V 1 = 700 V, V 2 = 350 V, at two different switching frequencies: 20 kHz and 50 kHz. In Figure 11a, both operating points are marked with an asterisk and their efficiency is shown in their respective colors. The following waveforms are shown in Figure 11c,e for 20 kHz and 50 kHz, respectively: the AC voltages v 1 , v 2 , the inductor current i Lk , and the current values at the switching instant i 1 and i 2 . Both operating points in Figure 11a are within the limits, as the current values i 1 and i 2 are both positive in Figure 11c,e.
Figure 11b,d,f shows the operating point at 10 kW, V 1 = 800 V, V 2 = 500 V. For this figure, the real ZVS limits are estimated for FB 1 using (7) and (5). This operating point is analyzed with two different frequencies, 38 kHz (blue) and 48 kHz (green). The ZVS limits are shown for both frequencies in their respective colors. Figure 11d,f show the waveforms for the operating points switching at 38 and 48 kHz, respectively. It can be seen that both currents i 1 and i 2 are positive in both (d) and (f); however, in (d), i 1 is approximately 2.5 A, while in (f) it is approximately 6 A. In (d), the value of i 1 is close to i min , and the converter is operating at the ZVS limit, as can also be seen in (b), where the operating point of 38 kHz (blue) is at the ZVS limit. The efficiency measured at these operating points is 94.21% and 95.61% for 38 and 48 kHz, respectively. Regarding the ZVS limits, the minimum current needed for ZVS from (7) will be around the same (assuming constant C oss ), leaving the frequency as the main factor affecting them in a relevant way.
Conclusions
This study analyzed the DAB converter, incorporating the switching frequency as an additional control variable. The influence of the switching frequency on the converter operating points is clearly presented, considering constant power operation. The RMS and reactive currents and the ZVS limits were analyzed, showing the impact of the switching frequency on them, as well as their dependence on M. Furthermore, a DAB prototype was used for validating the analysis.
In conclusion, implementing the switching frequency as a control variable provides an additional degree of freedom to the converter. It makes it possible to vary the phase shift while maintaining constant power operation. This allows the converter to work within the ZVS limits at any operating point throughout a wide voltage range. Moreover, it provides the ability to reduce the RMS and reactive currents.
The study reveals that the minimum i Lk RMS , for a given value of M, occurs at a specific phase-shift angle, determined by M and independent of the power. Importantly, this value is always within the ideal ZVS limits. These findings offer valuable insights for both the converter design and the control algorithms.
Figure 2. The converter waveforms from top to bottom: voltages v 1 and v 2 , the current through the inductor, i Lk , the input current, and the output current. The waveforms are referred to FB 1.
Figure 4. DAB converter normalized reactive currents in FB 1 (blue) and FB 2 (orange), for different operating points. The ideal ZVS limit is shown in a vertical dashed line (black).
Figure 5. The influence of the switching frequency considering a variation from 20 kHz to 120 kHz. The ideal ZVS limits are shown in a solid black line.
Figure 9. Experimental results. Normalized i Lk RMS at different operating points at a constant power of 10 kW and variable switching frequency. The operating points from the experimental results are marked with an 'x' in their respective color. The vertical dashed lines represent the ideal ZVS limits for each value of M in their respective color.
Figure 10. Converter operating at constant power and variable switching frequency at different operating points: (a) 20 kHz (blue) and 42.5 kHz (green); (b) 20 kHz (blue) and 38 kHz (green); (c,e) correspond to the operating point of (a) at 20 kHz and 42.5 kHz, respectively. (d,f) correspond to the operating point of (b) at 20 kHz and 38 kHz, respectively. (c-f) show the inductor current, i Lk , in red. The current values at the switching instants are indicated in black as i 1 and i 2 and the voltages v 1 and v 2 are shown in green and blue, respectively.
Figure 11. Converter operating at constant power and variable switching frequency at different operating points: (a) 20 kHz (blue) and 50 kHz (green); (b) 38 kHz (blue) and 48 kHz (green); (c,e) correspond to the operating point of (a) at 20 kHz and 50 kHz, respectively. (d,f) correspond to the operating point of (b) at 38 kHz and 48 kHz, respectively. (c-f) show the inductor current, i Lk , in red. The current values at the switching instants are indicated in black as i 1 and i 2 and the voltages v 1 and v 2 are shown in green and blue, respectively.
Table 2. Constant power operating points. | 11,989.6 | 2024-05-07T00:00:00.000 | [
"Engineering"
] |
The Interconnection and Damping Assignment Passivity-Based Control Synthesis via the Optimal Control Method for Electric Vehicle Subsystems
The interconnection between optimal control theory and the theory of energy-shaping control is described in our paper. For linear and nonlinear systems, the application of the theory of optimal control for the synthesis of parameters of energy-shaping control matrices is demonstrated in detail. The use of a Riccati equation allows us to form an optimality criterion and to synthesize the energy-shaping control system that provides the desired transient processes. The proposed approach was applied to the synthesis of control influences for electric vehicle subsystems, such as a two-mass system and a permanent magnets synchronous motor. The results of computer simulation studies, as well as those conducted on real experimental installations, are given in this paper.
Introduction
In recent decades, there has been a significant gap between the development of the theory of automatic control and the practical application of the established methods of the synthesis of control influences in technical systems. Modern systems, including electromechanical ones, are complex nonlinear objects. The use of nonlinear control theory [1][2][3] methods in such systems (especially feedback linearization, backstepping, and passivity-based control, as shown in [4][5][6]) creates new opportunities to synthesize effective control algorithms and to improve the dynamic and static characteristics of the systems themselves. At the same time, the mentioned methods of control system synthesis are quite complex from a mathematical point of view, requiring specialist knowledge for their understanding and application; thus, their widespread use has so far been limited. A common feature of the above-mentioned methods is the formation of control influences based on state variables [1,7], which will ensure the stability of the synthesized system. In the first case, the feedback coefficients based on the full state vector are determined in a new coordinate basis in which the system is linear, and then there is a transition back to the main coordinate basis. In the other two, the control synthesis is based on an iterative process [8,9]. In the backstepping method, synthesis occurs by increasing the complexity of the system and using a special type of Lyapunov function. In the passivity-based control method, synthesis is performed by splitting the trajectory into individual sections and finding the optimal control over each time interval based on the application of the Fréchet derivative [10]. Unlike the methods of feedback linearization and backstepping, the method of passivity-based control, as a representative of energy-based approaches, is based on the physical laws of energy transfer and conversion [11]. This makes its application promising in electromechanical systems, particularly in electric vehicles [12,13], where the energy flow control system is at the core. As noted in [14], the main difficulties in the application of energy-shaping control are both the selection of the structure of the matrices of interconnections between subsystems and damping and the synthesis of the parameters of these matrices. For a linear system, as described in [15], the relationship between the theory of optimal control and interconnection and damping assignment passivity-based control (IDA-PBC) is shown, and the problems that need to be solved are formulated. In [11], for the synthesis of the control influence, the Riccati equation for a single point in the state space was applied to form the control as a combination of the control influences obtained at individual points. The formulation of the problem of energy-shaping control synthesis, based on the theory of optimal control for a linear system, is motivated primarily by the fact that the nonlinear theory of optimal control requires solving the Hamilton-Jacobi-Bellman equation [16] for the synthesis of the control influence, which is also difficult in complex systems. The linear theory of optimal control makes it possible to obtain a solution to the problem in the form of a matrix of feedback coefficients on the state variables, which corresponds to the principles of energy-shaping control.
At the same time, the application of fuzzy set theory makes it possible to consider certain classes of nonlinear systems, including electromechanical systems, as a family of dynamic linear systems, and to synthesize a fuzzy controller based on the methods of classical control theory [17]. Thus, the synthesis of IDA-PBC via the linear theory of optimal control and its extension to nonlinear systems is an important task, especially for electric vehicles.
Due to the continuous development of electric vehicles in recent years [18], electric vehicles can be chosen as an example to which we can apply the proposed ideas. In a modern electric car, the most crucial factor is an effective use of the battery charge [19][20][21], which is an energy management challenge that can be correctly formulated and solved with energy-based approaches. An electric vehicle is a complex electromechanical system which consists of different types of subsystems, and each of them is important, particularly the wheels, shaft, electric motor, inverter, traction battery, internal combustion engine, generator, etc. [22]. The mechanical parts of the whole powertrain (whether they be wheels, hubs, motors, or shafts) can be considered as constituting a two-mass subsystem [23]. Despite the use of different types of motors in electric vehicles (Direct Current Motor, Brushless Direct Current Motor, Alternating Current Motor, and Switched Reluctance Motor) [24], the most popular is the Permanent Magnet Synchronous Motor (PMSM). This is because of its high power rating and efficiency. The PMSM is a nonlinear system that requires a more complex control system design [25].
Considering the feasibility of passivity-based control in electromechanical systems, where providing optimal energy efficiency and energy flow management are key issues, this article aims to solve the problem of synthesis for control system interconnection and damping matrices with the use of the classical theory of optimal control.
Synthesis of Energy-Shaping Control in the Case of a Linear System
A linear system in the well-known state-space representation is the following: dx(t)/dt = A · x(t) + B · u(t), and, when moving to a desired state, it transforms to: d(x(t) − x z )/dt = A · (x(t) − x z ) + B · (u(t) − u z ), where x z is the desired state vector and u z is the desired input vector.
In this case, for the integral quality criterion, we obtain the following expression: where R 1 and R 2 are positive definite matrices and λ(t) is an indefinite Lagrange multiplier.
Taking that x * = x − x z and u * = u − u z , and also given that dx z (t)/dt = 0, the formed criterion (2) can be written as follows: When t 1 → ∞, an optimal control of the object can be formulated in the form of a linear law u * = −K · x * , where the matrix of feedback coefficients K is determined by the formula K = R 2 −1 · B T · P, where P is the only non-negative symmetric solution of the algebraic Riccati equation: Then, the optimal control, which transits the system from any arbitrary state to a desired state, is determined as follows: In the case of energy-shaping control (in particular IDA-PBC), when considering the system as a port-controlled Hamiltonian (PCH) one, the model of the linear system will look like: dx1/dt = (J − R) · ∂H/∂x1 + G · u, where J = −J T is a skew-symmetric matrix that reflects the interconnections in the controlled object; R = R T ≥ 0 is a symmetric positive matrix that reflects the loss (damping) in the controlled object; H = 1/2 · x1 T · D −1 · x1 is the total energy function (Hamiltonian); x1 is the state vector in the PCH representation, the elements of which are various energy impulses; D is a diagonal matrix of inertia coefficients; G is the port matrix in the PCH representation. According to the IDA-PBC procedure, the control system synthesis is reduced to determining the structures of the new internal energy interconnections J a and damping R a that provide the necessary behavior of the system [26]. The introduction of additional interconnections is carried out in order to change the flow of energy between the subsystems. It will lead to new forces that will move the system to a given point of equilibrium. The introduction of damping is carried out for the purpose of natural redistribution of energy, which leads to the damping of oscillations in the system and ensures its asymptotic stability. The model of the desired asymptotically stable closed-loop Hamiltonian control system is described by the following equation: dx1/dt = (J d − R d ) · ∂H d /∂x1, where J d = J + J a = −J d T is the matrix that reflects the interconnections in the desired system; R d = R + R a = R d T ≥ 0 is the matrix that reflects the loss (damping) in the desired system; H d = 1/2 · (x1 − x1 0 ) T · D −1 · (x1 − x1 0 ) is the energy function of the desired closed-loop control system for the equilibrium point x1 0 .
Then, the equation for the control influences of the control system, in terms of partial derivatives, will look like: Given that x1 = D·x and x1 0 = D·x z , while ∂H/∂x1 = x and ∂H d /∂x1 = x − x z , we will receive: and In order to find the control according to the energy-shaping approach, the expression should be multiplied by D −1 : Taking into account that u z = −B −1 · A · x z , by analogy with the system of optimal control, the control influence, which transits the system from any arbitrary state to a desired state, can be written as follows: It should be noted that the state of the system with optimal control and with energy-shaping control is determined by different state vectors: in the first case, the coordinates of the state, and in the second, energy impulses. Given the relationship between the vector of energy impulses and the coordinates of the state, it can be written that K = D −1 · K1, and then: Let W = J a − R a = −G · D · R 2 −1 · B T · P. Then, taking into account that J a = −J a T is a skew-symmetric matrix and R a = R a T ≥ 0 is a symmetric matrix, we will receive: In energy-shaping control, the matrix J a forms energy flows between individual subsystems. If we were to take for a linear system that J a = 0, then the damping matrix is defined as follows: Thus, IDA-PBC provides the formation of optimal control influences. In the case of the control object being a linear system, it could be synthesized using the theory of optimal control.
Study of the Efficiency of Synthesized Control in a Two-Mass System
Consider the application of the proposed approach to the synthesis of a control system for a two-mass subsystem of the electric vehicle. The traditional model of a two-mass system looks like this [6]: where J 1 and J 2 are the moments of inertia of the motor's rotor and of the mechanism, respectively; ω 1 and ω 2 are the angular velocities of the motor and the mechanism, respectively; M is the torque of the drive mechanism (electromagnetic moment of the motor); M c1 and M c2 are the static moments acting on the motor itself and on the mechanism, respectively; b 1 and b 2 are the coefficients of external viscous friction of the motor and the mechanism; c is the transmission stiffness factor; ∆φ is the twist angle; β is the coefficient of internal viscous friction.
There is a more accurate representation of the last equation in (9) using the Caputo-Fabrizio operator, where the elastic moment is formed in the following way: On the other hand, our system is not a positional system, which, according to [27], allows us to use the traditional representation of a two-mass system (9). Accordingly, in the vector-matrix form (1), the model of a two-mass system (9) will take the form: When writing the controlled object as a PCH (5), the system model will look like: The Hamiltonian of the system will be as follows: Then, given ∂H/∂x1 = [ω 1 ω 2 ∆ϕ] T : and accordingly, based on (6), the following matrices can be found: Thus, the matrix of control influences B (9), which is obtained from the model in the PCH representation, differs from the traditional one obtained from the representation of the system in the state-space form. The presence of the fictitious control influence allows, unlike writing the system in the form of state variables, for the finding of u z as the solution of the system A · x z + B · u z = 0. It is also worth noting that, in the energy-based approach, the control influence is formed as the sum of all influences that operate at a given point in the system, taking into account the sign. Given the above, a model of the system (9) in the form of state variables can be written as: or, separating the control and perturbing influences traditionally used in the synthesis of control systems, this model can be represented as: Given that the system has only one control influence, the quality criterion (3) will take the following form: Assume that R 1 is the identity matrix. Then, Riccati's Equation (4) will take the form: The matrix of feedback coefficients, based on (7), will have the form: Let the investigated two-mass system have the parameters [28]: J 1 = 1 kg·m 2 , J 2 = 3 kg·m 2 , c = 20,000 N·m, b 1 = 0.25 N·m·s, b 2 = 0.25 N·m·s, and β = 10 N·m·s. Then, based on (11), the matrix P for α = 0.5 will be the following: and the matrix of synthesized coefficients based on state variables (12) will appear accordingly as: Given that x z = −A −1 B · u z , the synthesized control influence will be equal to: Figures 1 and 2 show the change of the system state coordinates for u z = 10 and the coefficients synthesized based on system state variables via optimal control theory.
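A numerical sketch of this synthesis is given below. It assumes the standard two-mass state-space form with state (ω 1 , ω 2 , ∆φ) and input M, takes R 1 as the identity matrix, and treats α = 0.5 as the scalar input weight R 2 (where exactly α enters the criterion is an assumption); SciPy is used to solve the algebraic Riccati equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Two-mass drive parameters given in the text
J1, J2, c, b1, b2, beta = 1.0, 3.0, 20_000.0, 0.25, 0.25, 10.0

# Assumed state-space form, state x = (w1, w2, d_phi), input u = M
A = np.array([[-(b1 + beta) / J1,        beta / J1, -c / J1],
              [       beta / J2, -(b2 + beta) / J2,  c / J2],
              [             1.0,              -1.0,     0.0]])
B = np.array([[1.0 / J1], [0.0], [0.0]])

R1 = np.eye(3)                     # state weight (identity, as in the text)
R2 = np.array([[0.5]])             # input weight; alpha = 0.5 placement assumed

P = solve_continuous_are(A, B, R1, R2)      # algebraic Riccati equation (4)
K = np.linalg.solve(R2, B.T @ P)            # K = R2^-1 * B^T * P
print("K =", np.round(K, 3))

u_z = 10.0                                   # desired input, as in Figures 1 and 2
x_z = -np.linalg.solve(A, B) * u_z           # x_z = -A^-1 * B * u_z
print("x_z =", np.round(x_z.ravel(), 4))

# The control u = u_z - K (x - x_z) yields a stable closed loop
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A - B @ K), 3))
```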
In the case of energy-shaping control, based on (8), we obtain the following matrices of the control system: and the general control system, synthesized using the general IDA-PBC approach [26] with the selected structure of the matrices J a and R a , will have the form: The received energy-shaping control (13) provides the same behavior, as shown in Figures 1 and 2.
Synthesis of Optimal Control Based on the Riccati Equation Written in Terms of Energy-Shaping Control
The main problems in the synthesis of optimal control are related to the selection of the matrices R 1 and R 2 and to finding the solution of the algebraic Riccati equation, the matrix P.
In the case of energy-shaping control, the algebraic Riccati equation for finding the matrix P can be represented as follows: Given the transposition properties of the matrices, the Riccati equation can be written as: If, as a particular case, we take P = γ · D, Riccati's equation will take the form: and, accordingly: Taking into account the fact that the control signal is supplied to only one port, then: Let R 2 be the identity matrix, and let R 1 for the studied two-mass system have the following form: Then, the solution of the algebraic Riccati equation is: In the case of energy-shaping control, the matrix of new internal energy interconnections J a = 0 and the matrix of formed damping is equal to: In this case, energy-shaping control (13) transforms to the form: For a traditional system of optimal control, the feedback matrix based on state variables is: The conducted studies confirmed that the systems received from both the optimal control and IDA-PBC approaches provide the same behavior of the controlled object.
Synthesis of Energy-Shaping Control Parameters in the Case of a Nonlinear Electromechanical System
In the case of a nonlinear system, the system model is given in the form: The optimal control is based on the solution of the Hamilton-Jacobi-Bellman equation which, under the condition V = V T , has the form: for the condition V(x, T) = S 2 (x), where S 1 and S 2 are the formed objective functions. The solution of the Hamilton-Jacobi-Bellman Equation (14) is quite complex due to its nonlinear nature. In the case of a linear system, the Hamilton-Jacobi-Bellman equation is transformed into the well-known Riccati Equation (4).
When using energy-shaping control, the nonlinear system can also be represented in PCH form (15). The skew-symmetric matrix J(x1) = −J T (x1) in the case of an electromechanical system can contain both elements that depend on the state variables and elements that do not depend on the state variables of the system x1, and can be written as J(x1) = J * + J ** (x1). Similarly, a symmetric matrix R(x1) = R * + R ** (x1) can be written. If the matrix G does not depend on the state variables, the model of the nonlinear system (15) can be written as follows (16): The model of the desired asymptotically stable closed-loop PCH is described by the equation dx1/dt = (J d (x1) − R d (x1)) · ∂H d /∂x1 and, taking into account the desired equilibrium point, can be written as: where J d * = J d (x1 0 ) and R d * = R d (x1 0 ) are the values of the matrices at the point of the state space which is determined by the state vector x1 0 .
With forms such as (16) and (17), we come to the synthesis of optimal control as for linear systems. The synthesized control influence u * for energy-shaping control, as shown above, is equal to u * = u z − K1 · (x − x z ). In the case of control based on the full state vector of the system, we obtain a control influence that additionally compensates the nonlinearities in the electromechanical system.
Studies of the Efficiency of the Proposed Approach in the Example of a Permanent Magnet Synchronous Motor Control System
Consider the application of the proposed approach on the example of PMSM control. The model of the PMSM in an orthogonal rotating coordinate system d-q, where the d axis is oriented along the rotor flux vector, has the form [14]: where L d and L q are the inductances of the armature (stator) winding along the d and q axes, respectively; R s is the active resistance of the armature phase winding; p p is the number of pole pairs; ω is the angular speed of the rotor; Φ is the amplitude of the flux linkage of the armature winding with a pair of poles of the rotor permanent magnets; I m is the total moment of inertia; b is the coefficient of external viscous friction; M L is the moment of static load.
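For reference, a minimal simulation sketch of this d-q model is given below, using the standard PMSM equations and the experimental SPMSM parameters reported later in the text; the 3/2 factor in the torque expression and the constant open-loop excitation are assumptions, since the paper's own equations and controller are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pmsm_dq(t, x, v_d, v_q, M_L, par):
    """Standard PMSM dynamics in the rotor (d-q) frame."""
    i_d, i_q, w = x
    Ld, Lq, Rs, pp, Phi, Im, b = par
    did = (v_d - Rs * i_d + pp * w * Lq * i_q) / Ld
    diq = (v_q - Rs * i_q - pp * w * (Ld * i_d + Phi)) / Lq
    Te = 1.5 * pp * (Phi * i_q + (Ld - Lq) * i_d * i_q)   # torque convention assumed
    dw = (Te - b * w - M_L) / Im
    return [did, diq, dw]

# Experimental SPMSM parameters reported later in the text
par = (6.228e-3, 6.468e-3, 1.7143, 38, 0.1825, 2.43, 0.1)

# Open-loop illustration with fixed d-q voltages and a 5 N*m load torque
sol = solve_ivp(pmsm_dq, (0.0, 0.5), [0.0, 0.0, 0.0],
                args=(0.0, 20.0, 5.0, par), max_step=1e-4)
print("final speed (rad/s):", round(float(sol.y[2, -1]), 2))
```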
In energy-shaping control, taking into account the corresponding state vector x1 and the Hamiltonian of the system, the system model will look like: Dividing the system control and disturbing influences and considering that: we will receive: where P is the solution of the Riccati Equation (18). As a result of solving the Riccati Equation (18), the resulting matrix K has the following form: The sought matrices of interconnections and damping, in this case, will look like: For the control system based on the full state vector we obtain: The control influence p p · (L q − L d ) · i q · i d compensates for the fluctuations in the electromagnetic torque when L q ≠ L d . However, in real systems, it cannot be implemented.
After substituting the synthesized control influences (20) into the model of the PMSM, we obtain: For the control system of a PMSM with permanent magnets placed on the rotor surface and with the following parameters: n H = 500 r/min, M H = 500 N·m, R s = 0.25 Ohm, the synthesized matrices of interconnection and damping are equal to: The obtained dependences of the rotation speed change and of the stator current projection on the q axis are shown in Figure 7.
Combining this and the regular IDA-PBC approaches, the following energy-shaping control can be obtained: which results in the same control influences. In order to confirm the efficiency of the proposed approach, studies were conducted on the validated simulation model and on the real experimental installation (Figure 8); these are described in [6]. The installation consists of a computer control system (1), a PMSM (2) and a DC-machine load (3) connected through the belt (4).
Motor (2) is a multipolar SPMSM, where L d ≈ L q . It receives power from a custom inverter, built on an ATmega 128, which is the core of a low-level control system that forms the control signals for the transistor drivers. The high-level control system was implemented on the PC (1), which allows us to dynamically change the regulator's structure and parameters.
To provide the speed feedback loop, an absolute 12-bit Kübler 5862 rotary encoder was used. Processing the Gray-coded position allowed us to calculate the motor's speed. For the current feedback loops, ABB EH050AP current sensors were used.
The parameters of the main SPMSM are the following: n H = 80 r/min, M H = 80 N·m, R s = 1.7143 Ohm, Φ = 0.1825 Wb, J m = 2.43 kg·m 2 , p p = 38, L d = 6.228 mH, L q = 6.468 mH, b = 0.1. As such, the respective control matrix (19) will look like this:
The obtained dependences of the rotation speed change and of the stator current projection on the q axis are shown in Figure 9. Figure 9 shows results from the simulation model and from the experimental installation. Due to limitations of the experimental installation, particularly the limited power source, most studies were conducted at a low speed, which resulted in the additional fluctuations seen in Figure 9.
Conclusions
The conducted analysis of literature sources allows us to assert that, in electric vehicle systems, the application of the theory of passive control is especially promising, as passive control is based on the physical laws of energy transfer and conversion, while providing energy efficiency and energy flow management is a key challenge in electric vehicle systems.
The complexity of the synthesis of control system parameters inhibits the widespread application of the theory of passive control in electromechanical systems, which, in turn, creates a necessity to find new approaches capable of solving this problem.
The application of the classical theory of optimal control provides a way to synthesize parameter values of interconnection and damping matrices for energy-shaping control of linear and nonlinear electromechanical systems.
In contrast to many of the existing approaches, the application of the proposed approach for the synthesis of the parameters of the interconnection and damping matrices yields the desired transient characteristics of the system, whose shape is determined by the given quality criterion. The results of the performed studies confirm the efficiency of the applied approach to the synthesis of control influences in both linear and nonlinear electromechanical systems. The view of Riccati's equation in terms of energy-shaping control has made it possible to form an optimality criterion that corresponds to the synthesized energy-shaping control. | 5,811.4 | 2021-06-21T00:00:00.000 | [
"Engineering"
] |
AUTOMATIC MRF-BASED REGISTRATION OF HIGH RESOLUTION SATELLITE VIDEO DATA
In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration and formulates a Markov Random Field (MRF) model, while efficient linear programming is employed for reaching the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding the computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first one converging in some minutes and the second in some seconds. Regarding the registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.
INTRODUCTION
Currently the remote sensing community is expecting during the following years a paradigm shift from sparse multi-temporal to every-day monitoring of the entire planet, mainly through microsatellites at a spatial resolution of a few meters or centimeters (in the raster world), but also from other cutting-edge technology including hyperspectral sensors and UAVs. Moreover, apart from the standard imaging products, video streaming from earth observation satellites significantly expands the variety of applications that can be addressed.
In particular, high resolution satellite video sequences [Murthy et al., 2014, d'Angelo et al., 2014, Kopsiaftis and Karantzalos, 2015] have become available and enrich the existing geospatial data and products. Skybox Imaging and Urthecast are already providing high resolution video datasets with a spatial/temporal resolution of approximately 1 meter and 30 frames per second. However, due to the continuous movement of the satellite platform, the acquired frames are not registered with each other. Moreover, in order to combine and fuse information from other geospatial data and imagery for any application or analysis, their registration to a local/national geo-reference system is required. Therefore, the automated co-registration of video frames and/or their registration to a reference image/map is still an open matter.
Figure 1: The developed methodology manages to co-register the acquired video frames. Unregistered frames (left), registered frames after the application of the developed method (right). Data are from Skybox Imaging (Terra Bella).

The problem of image registration has been heavily studied and numerous approaches have been proposed [Zitova and Flusser, 2003, Sotiras et al., 2013]. The methods fall into two main categories depending on the employed model, i.e., rigid-based and non-rigid (deformable) based ones. The first category consists of descriptor-based methods, which automatically detect and match points in the pair of images and then define a global transformation to register them. A variety of descriptors, such as SIFT [Lowe, 2004], ASIFT [Morel and Yu, 2009], SURF [Bay et al., 2008], DAISY [Tola et al., 2010], FREAK [Alahi et al., 2012], etc., have
been employed for a plethora of applications like face recognition, object identification, motion tracking and satellite imagery. Under such a framework, one million satellite RGB images have been registered by Planet Labs in just one day [Price, 2015]. The second category contains non-linear registration methods. A similarity function is used to calculate the similarity of each pixel (from the first image) to a neighbourhood of pixels in the other image and find the best displacement which recovers the geometry. This kind of method has been widely used in computer vision and medical imaging [Sotiras et al., 2013], while recently validated for very high resolution satellite data [Karantzalos et al., 2014], delivering high accuracy rates for both optical and multimodal data.
In this paper, a MRF-based registration framework is proposed for the co-registration of satellite video frames and/or their registration to a reference map/image (Figure 1). In particular, the developed method calculates a deformation map, while certain similarity functions (e.g., normalised cross correlation, mutual information, sum of absolute differences, etc.) were employed for calculating the displacement of every pixel. An energy formulation through an MRF model was defined and its minimization was performed using linear programming. The methodology was applied and validated based on Skybox Imaging data and certain corresponding reference images (Table 1). Experimental results were compared with the ones obtained from a descriptor-based technique [Price, 2015] which is based on a rigid registration framework using the STAR [Agrawal et al., 2008] and FREAK [Alahi et al., 2012] algorithms for establishing and matching correspondences. These correspondences were used for defining the homography transformation parameters and registering the pair of images. Both methods have been quantitatively and qualitatively evaluated based on manually collected ground control points (GCPs).
Image Registration
Let us denote, in a pair of images, I t : Ω → R 2 as the reference/target image and I s : Ω → R 2 as the source image that should be registered. The goal of registration is to define a transformation T : Ω → R 2 which will project the source to the target in the image pair.
For the rigid registration, the displacement of each pixel in the image is calculated using the same transformation parameters.
On the other hand, for the non-rigid registration the displacement of every pixel is calculated independently, using only certain constraints for local smoothness defined by the model. Regarding the co-registration of satellite video frames, in our experiments the reference image corresponds to the first frame of the video sequence.
Rigid, descriptor-based registration
The most commonly used approach is based on a rigid registration [Le Moigne et al., 2011, Vakalopoulou and Karantzalos, 2014, Price, 2015] and calculates a global transformation for image pairs. The framework has four main components: i) the keypoint detector, which detects and holds the information about the position of every keypoint in each image, ii) the keypoint descriptor, which contains the characteristics of the keypoints, in order to be able to compare them, iii) the matcher, which matches the different keypoints in the source and target images and finally, iv) the image transformation method, which calculates the parameters of the transformation, based on the calculated correspondences.
For the evaluation of the proposed MRF-based approach, the rigid registration method employed here is based on the recently proposed approach in [Price, 2015], including: a keypoint detector, the Star Detector (STAR), based on Center Surround Extremas (CenSurE) [Agrawal et al., 2008], a keypoint descriptor, the Fast Retina Keypoint algorithm (FREAK) [Alahi et al., 2012], and as matcher the brute force matcher (BFMatcher). Last but not least, the transformation used to register the source image to the target/reference was the homography one.
Generally speaking, the STAR algorithm detects numerous keypoints in each frame. Since the consecutive frames do not change a lot, many correspondences between the two frames were created. In order to reduce the outliers, the RANSAC [Fischler and Bolles, 1981] algorithm was used with a reprojection threshold of one pixel. Additionally, the false correspondences were removed using a filter that allowed only matches below a specified threshold to participate in the transformation. The threshold was set to a fraction of the maximum distance between the matches. In all our experiments, only those matches with a distance less than or equal to 65 percent of the maximum distance participated in the formulation of the transformation.
The homography parameters are defined after the minimization of the following error (Equation 2): where h 11 , h 12 , h 13 , h 21 , h 22 , h 23 , h 31 , h 32 , h 33 are the homography parameters, x i , y i are the coordinates of the keypoint i in the reference image and x' i , y' i are the coordinates of the keypoint i in the source image.
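A minimal sketch of this descriptor-based baseline is given below; it assumes OpenCV with the opencv-contrib modules (where the STAR and FREAK implementations live) and grayscale input frames, and follows the 65% distance filter and one-pixel RANSAC threshold described above.

```python
import cv2
import numpy as np

def rigid_register(src, ref, dist_frac=0.65, ransac_thresh=1.0):
    """STAR + FREAK detection/description, brute-force matching and a RANSAC
    homography, following the rigid baseline described above."""
    star = cv2.xfeatures2d.StarDetector_create()
    freak = cv2.xfeatures2d.FREAK_create()
    kp_s, des_s = freak.compute(src, star.detect(src))
    kp_r, des_r = freak.compute(ref, star.detect(ref))

    matches = cv2.BFMatcher(cv2.NORM_HAMMING).match(des_s, des_r)
    d_max = max(m.distance for m in matches)
    matches = [m for m in matches if m.distance <= dist_frac * d_max]

    pts_s = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(pts_s, pts_r, cv2.RANSAC, ransac_thresh)
    warped = cv2.warpPerspective(src, H, (ref.shape[1], ref.shape[0]))
    return warped, H

# Usage (frames assumed to be grayscale numpy arrays):
# registered_k, H_k = rigid_register(frame_k, frame_0)
```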
The proposed MRF-based satellite video registration framework
The proposed approach is based on a deformable registration using different similarity metrics. An MRF model was defined and the solution is found by minimizing the following energy function (Equation 3) [Glocker et al., 2011]. The label space for the model contains all the possible displacements (d 1 , . . ., d n ), such that l p = [d 1 , . . ., d n ]. A graph was superimposed on the target frame, and each node was connected to a neighbourhood of pixels using an interpolation function η(.). The total energy was formulated as E(l) = Σ p V p (l p ) + λ · Σ p,q∈N V pq (l p , l q ), where p, q are nodes in the graph G, N is the neighbourhood of p, V p is the unary term, V pq is the pairwise term and λ is the weight which defines the contribution of the pairwise term in the energy minimization.
The unary term is formulated as follows:

$$V_p(l_p) = \int_{\Omega} \eta(|x - p|)\, \rho\big(S(x + d^{l_p}),\, T(x)\big)\, dx$$

where $\rho(\cdot)$ is the similarity function used (normalised cross correlation, mutual information, etc.), and $S$ and $T$ denote the source and target frames. The interpolation function $\eta$ connects the pixels with the nodes of the grid (and vice versa) with a weight proportional to their distance; a typical choice is cubic B-splines, which is the one employed here.
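The following short sketch (in Python with NumPy; the function names are ours, not from any specific library) illustrates how the similarity function ρ and the pairwise penalty could be evaluated for a single node; the full framework aggregates these terms over the B-spline grid during the MRF optimization.

```python
import numpy as np

def rho_ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalised cross correlation, one possible choice for rho()."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def pairwise_penalty(d_p: np.ndarray, d_q: np.ndarray) -> float:
    """V_pq: neighbouring nodes are penalised in proportion to the
    difference of their displacement labels."""
    return float(np.linalg.norm(d_p - d_q))

# Toy check: identical patches give rho = 1 (perfect similarity), and equal
# displacement labels give a zero pairwise penalty.
rng = np.random.default_rng(0)
patch = rng.random((11, 11))
print(rho_ncc(patch, patch))                                         # 1.0
print(pairwise_penalty(np.array([2.0, 0.0]), np.array([2.0, 0.0])))  # 0.0
```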
The pairwise term is

$$V_{pq}(l_p, l_q) = |d^{l_p} - d^{l_q}|,$$

where $V_{pq}$ penalises neighbouring nodes whose displacement labels differ, in proportion to the difference of their displacements.

Table 1: The satellite video datasets that were employed for the validation of the developed registration framework.
IMPLEMENTATION
The formulation follows a multiscale approach for both the image and the graph, meaning that the energy was calculated at different levels of the grid and of the image. Concerning the grid levels, a sparse grid was used initially and, as the grid level increased, the grid became denser and denser. At each level a number of iterations was performed in order to reach the minimum energy. At each grid level the source image was transformed and updated, so that in the next level it was closer to the target one. In this way the label space for the displacements also changed at each grid level, moving closer to the optimum. Finally, for the different image levels a subsampling of the image was performed to reduce the computational complexity.
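As an illustration of the coarse-to-fine idea, the following deliberately simplified sketch estimates a single global integer shift per level with an exhaustive, unary-only label search, halving the label space at each level. The actual framework instead optimizes a dense B-spline grid of nodes together with the pairwise smoothness term, so this is a conceptual sketch rather than an implementation of the method.

```python
import numpy as np

def ncc_score(a: np.ndarray, b: np.ndarray) -> float:
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def best_shift(src: np.ndarray, tgt: np.ndarray, max_disp: int):
    """Exhaustive search over integer displacement labels (unary term only)."""
    best, best_d = -np.inf, (0, 0)
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            score = ncc_score(np.roll(src, (dy, dx), axis=(0, 1)), tgt)
            if score > best:
                best, best_d = score, (dy, dx)
    return best_d

def coarse_to_fine(src: np.ndarray, tgt: np.ndarray, levels: int = 2,
                   max_disp: int = 8) -> np.ndarray:
    """Estimate a global shift level by level: search on subsampled frames,
    warp (here: roll) the source, then refine with a smaller label space."""
    total = np.zeros(2, dtype=int)
    for level in reversed(range(levels)):
        step = 2 ** level
        dy, dx = best_shift(src[::step, ::step], tgt[::step, ::step], max_disp)
        total += np.array([dy, dx]) * step
        src = np.roll(src, (dy * step, dx * step), axis=(0, 1))
        max_disp = max(1, max_disp // 2)  # shrink the label space per level
    return total
```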
For the Burj Khalifa Skybox video dataset the set of parameters was defined as follows. The node distance was set to 10 pixels, the grid levels to 3 and the image levels to 2, with 5 iterations at each level. The label space at each grid level was reduced to 0.8 times that of the previous one. Normalized Cross Correlation (NCC) was used as the similarity function, which, according to the literature, performs better than other functions for the registration of remote sensing data [Karantzalos et al., 2014]. Finally, the lambda parameter was set to 40, the sampling steps to 25 and cubic B-splines were used as the interpolation function. All the parameters were tuned by grid search.
Using the above set of parameters, a co-registration between smaller groups of frames was initially performed, and then all groups were registered to the first frame. In particular, three groups with a lower number of frames, and thus smaller displacements, were formed, i.e., one every 300 frames. The registration of each group was performed using the 1st, 300th and 600th frame, respectively, as the target image. Then all groups were registered to the first frame.
For the two Las Vegas Skybox video datasets, the configuration consisted, as in the previous case, of a node distance of 10 pixels, 3 grid levels and 2 image levels. Moreover, the number of iterations was set to 15, the sampling steps to 65, lambda to 15 and the label space to 0.67 times that of the previous grid level. The similarity function and the interpolation method were the same as for the Burj Khalifa sequence. Again, the registration was performed in groups: for the Las Vegas dataset the grouping was every 300 frames, and for the Las Vegas-night video dataset every 150 frames.
EXPERIMENTAL RESULTS AND EVALUATION
The proposed MRF-based methodology was evaluated both qualitatively and quantitatively. For the quantitative evaluation a number of manually collected ground control points (GCPs) were used. It is important to note that, for the descriptor-based approach, a fixed set of parameters did not perform well across all the video frames, since even the smallest shift between frames affected the keypoint detection and, consequently, the registration accuracy. For this reason, the parameters were tuned for each pair of frames using grid search. This was the main drawback of the descriptor-based framework: even though the multithreaded OpenCV implementation [Culjak et al., 2012] requires only two to three seconds per image pair, the manual tuning of the parameters required significantly more time.
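The mean displacement error over a set of GCPs reduces to the following computation; the coordinates below are hypothetical and serve only to show the metric.

```python
import numpy as np

def mean_displacement_error(gcp_ref: np.ndarray, gcp_reg: np.ndarray) -> float:
    """Mean Euclidean distance (in pixels) between corresponding ground
    control points in the reference frame and in the registered frame."""
    return float(np.linalg.norm(gcp_ref - gcp_reg, axis=1).mean())

# Hypothetical GCP coordinates, one (row, col) pair per point:
ref_pts = np.array([[120.0, 340.0], [512.5, 98.0]])
reg_pts = np.array([[121.1, 339.2], [511.9, 99.0]])
print(mean_displacement_error(ref_pts, reg_pts))  # ~1.26 px
```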
The experimental results included satellite video sequences of Burj Khalifa, Las Vegas and Las Vegas Night (Table 1) from Skybox Imaging. The main challenges for the registration of these video datasets were the relatively tall buildings, their shadows and any other moving objects (e.g., airplanes). In particular, the different angles of the sun and of the satellite acquisition affect the geometry of terrain objects and of their corresponding shadows.
For the quantitative evaluation, the results after the application of both registration methods are presented in Table 2. In all cases the proposed MRF-based approach outperformed the descriptor-based one and managed to register all the frames with a mean displacement error of less than 1.5 pixels. These errors correspond to the overall registration error across all frames, since they were calculated between the first and last frame of each video dataset. The higher registration errors of the descriptor-based approach, along with the fact that these errors were not equally distributed over the image plane, indicated a significantly lower performance than that of the proposed MRF-based approach.
Moreover, the registration of the Burj Khalifa dataset to a Google Earth image mosaic was performed using the proposed MRF-based approach. The quantitative results are quite promising, with mean displacement errors of less than 1.6 pixels (Table 3).
For the qualitative evaluation, different checkerboard visualisations are presented in Figures 2, 3, 4 and 5, along with zoom-ins at selected sub-regions. Each checkerboard visualisation is a blend of the first and last frames of the unregistered and registered datasets. A closer look at the areas marked in red shows that the unregistered data possessed large initial displacements. In particular, in Figure 2 one can observe quite large displacements between the different frames, with significant spatial discontinuities in roads, bridges and buildings (e.g., inside the red circles). The MRF-based registration recovered the geometry and managed to register the video frames accurately.
Mean Displacement Errors (in pixels)

In all cases the developed approach managed to register the satellite video frames with a mean displacement error of less than 1.5 pixels. As expected, the image regions with the most mis-registration errors were those with significant relief displacements, i.e., with very tall man-made objects, buildings and skyscrapers.
Results for two other datasets are presented with a checkerboard visualisation in Figures 3 and 4. Once again, a closer look shows the robustness of the proposed approach in recovering the scene's (frame's) geometry. Moreover, Figures 2 and 4 depict the same region at different acquisition times. Even though the satellite video dataset in Figure 4 was acquired during the night, the proposed MRF-based method performed remarkably well, resulting in an overall mean displacement error of less than one pixel along both axes (Table 2).
In order to qualitatively compare the results of the proposed MRF-based approach with the descriptor-based one, results on the same datasets after the application of the descriptor-based method are presented in Figure 5. Although a large number of correspondences had been established, the rigid nature of the transformation could not adequately recover the scene's geometry.
CONCLUSION
In this paper an MRF-based registration approach was developed for the accurate co-registration of satellite video frames, as well as for the registration of a video dataset to a reference map/image. The method was applied and validated on satellite video data from Skybox Imaging and compared with a standard descriptor-based registration framework. Experimental results indicate the great potential of the proposed approach, which managed to recover the geometry in all cases with registration errors of less than 1.5 pixels along both the x and y axes.
Figure 2: Checkerboard visualizations from the Las Vegas Skybox dataset. Frames from the unregistered dataset (a) and frames after the registration process (b) are shown in the first two rows. Zoom-in areas are shown in the third row for the unregistered (c) and registered (d) frames.
Figure 3: Checkerboard visualization from the Burj Khalifa video dataset. Unregistered (left) and registered (right) data before and after the application of the proposed methodology.
Figure 4: Checkerboard visualization from the Las Vegas-night video dataset. Unregistered (left) and registered (right) data before and after the application of the proposed methodology.
Table 2: Quantitative evaluation results after the application of the proposed MRF-based registration method.
Table 3: Quantitative evaluation results after the registration of the Burj Khalifa satellite video dataset to an image mosaic acquired from Google Earth.
"Computer Science"
] |
Soap films with gravity and almost-minimal surfaces
Motivated by the study of the equilibrium equations for a soap film hanging from a wire frame, we prove a compactness theorem for surfaces with asymptotically vanishing mean curvature and fixed or converging boundaries. In particular, we obtain sufficient geometric conditions for the minimal surfaces spanned by a given boundary to represent all the possible limits of sequences of almost-minimal surfaces. Finally, we provide some sharp quantitative estimates on the distance of an almost-minimal surface from its limit minimal surface.
Introduction
In the study of soap films under the action of gravity, one is interested in surfaces with small but non-zero mean curvature spanned by a given boundary. Indeed, as explained in section 2 below, the mid-surface M of a film of thickness 2h > 0 satisfies, in first approximation, the equilibrium condition

$$H_M = \kappa^2 h\, (\nu_M \cdot e_3) + O(h^2), \tag{1.1}$$

where $H_M$ is the mean curvature of M with respect to the unit normal $\nu_M$, $e_3$ is the vertical direction, and $\kappa^{-1}$ is the capillary length of the film, defined by

$$\kappa := \sqrt{\frac{g\,\rho}{\sigma}}. \tag{1.2}$$

Here, ρ is the volume density of mass of the film solution, σ denotes the surface tension of the film (with dimensions Newton per unit length), and g is the gravity acceleration on Earth. The interest of this equation lies in the fact that it correctly encodes several physical properties which are missed by the minimal surface equation $H_M = 0$, e.g. the fact that actual soap films cannot be formed under arbitrarily large scalings of the boundary curve.
In this setting, the first question one wants to answer is whether minimal surfaces are a good model for their small mean curvature counterparts. In this paper, we provide a general sufficient condition on the boundary data ensuring the validity of this approximation. When the model minimal surface is smooth and strictly stable, we also provide quantitative estimates for almost-minimal surfaces in terms of their total mean curvature. Since formal statements require the introduction of a few concepts from Geometric Measure Theory, we present for the moment just an informal and simplified version of our main results.

[Figure: Notice that it is not necessary that Γ be contained in a convex set, or in a mean convex set, for the accessibility condition to hold. On the right, another set of circles defining a boundary Γ which does not satisfy accessibility from infinity: indeed, there is no way to touch the smaller circle with an acute wedge containing the larger ones.]
Theorem. Let Γ be a compact, orientable (n − 1)-dimensional surface without boundary in $\mathbb{R}^{n+1}$, and let $\{M_j\}_j$ be a sequence of compact, orientable n-dimensional surfaces in $\mathbb{R}^{n+1}$ with boundaries $\Gamma_j = f_j(\Gamma)$, for maps $f_j$ converging in $C^1$ to the identity map, and such that

$$\sup_j \mathcal{H}^n(M_j) < \infty, \qquad \|H_{M_j}\|_{L^\infty(M_j)} \to 0 \quad \text{as } j \to \infty$$

(denoting by $\mathcal{H}^n$ the n-dimensional Hausdorff measure in $\mathbb{R}^{n+1}$). Assume that Γ has the following two properties:

Accessibility from infinity: each connected component of Γ contains a set of points of positive $\mathcal{H}^{n-1}$-measure at which the convex envelope of Γ can be enclosed in an acute wedge with vertex at the point.

Finiteness and regularity of the Plateau problem: there are finitely many minimal surfaces $\{N_i\}_i$ spanned by Γ, possibly including in the count "singular" minimal surfaces, whose singularities are anyway located away from Γ.
Under these two assumptions, we have the following conclusions:

No-bubbling: there exists a single minimal surface $N_i$ such that $M_j \to N_i$ as $j \to \infty$, in the sense that there exist open sets $\{E_j\}_j$ with smooth boundary, enclosed between $M_j$ and $N_i$, such that $|E_j| \to 0$ as $j \to \infty$. Here $|E|$ denotes the (n + 1)-dimensional volume of $E \subset \mathbb{R}^{n+1}$.
Strong convergence and sharp estimates: if in addition $\Gamma_j = \Gamma$, $N_i$ has no singularities, and $N_i$ is strictly stable, in the sense that, for a positive constant λ,

$$\int_{N_i} |\nabla^{N_i} u|^2 - |A_{N_i}|^2\, u^2 \, d\mathcal{H}^n \ \ge\ \lambda \int_{N_i} u^2 \, d\mathcal{H}^n \qquad \forall u \in C^1_c(N_i \setminus \partial N_i)$$

(where $|A_{N_i}|$ is the Hilbert-Schmidt norm of the second fundamental form of $N_i \hookrightarrow \mathbb{R}^{n+1}$), and if for some $p > n$ we have a uniform bound

$$\sup_j \|H_{M_j}\|_{L^p(M_j)} < \infty,$$

then there exist smooth functions $u_j : N_i \to \mathbb{R}$ with $u_j = 0$ on $\partial N_i$ and $\|u_j\|_{C^1(N_i)} \to 0$ as $j \to \infty$ such that

$$M_j = \{ x + u_j(x)\, \nu_{N_i}(x) : x \in N_i \},$$

and the following sharp estimates hold:

$$\|u_j\|_{C^0(N_i)} \le C\, \|H_{M_j}\|_{L^p(M_j)}, \tag{1.4}$$

$$\mathcal{H}^n(M_j) - \mathcal{H}^n(N_i) \le C\, \|H_{M_j}\|_{L^2(M_j)}^2, \tag{1.5}$$

for a constant $C = C(N, p)$.
Remark 1.1. As shown by simple examples (see Figure 3.2), if accessibility from infinity fails, then bubbling can occur in the convergence of $\{M_j\}_j$. In particular, $\{M_j\}_j$ could converge to a smooth minimal surface with multiplicity 2, and some pieces of the limiting minimal surface might not be part of any minimal surface spanned by the whole Γ.
Remark 1.2. In the case M j is the boundary of an open set (and thus, necessarily, Γ = ∅), and M j has almost-constant (non-zero) mean curvature, then the occurrence of bubbling is unavoidable, and its description has been undertaken in various papers, see e.g. [BC84,Str84,CM17,DMMN17,KM17,DM17]. From this point of view, the fact that we can avoid bubbling under somehow generic assumptions on the boundary data Γ is a remarkable rigidity feature of Plateau's problem.
Remark 1.3. The finiteness and regularity assumption is well-illustrated in the case when Γ consists of two parallel unit circles in R 3 , having centers on a common axis. The idea here is that, depending on the distance between the circles, there should be at most five "generalized" minimal surfaces spanned by Γ (see Figure 3.1): two parallel disks, two catenoids (one stable, the other unstable), and two singular catenoids. Each singular catenoid is formed by attaching a smaller disk to two catenoidal necks so that the disk floats at mid-distance from the two boundary circles, and the necks form three 120-degree angles along the circle. Notice that the floating circle does not count as a boundary curve, but rather as a curve of "singular" points. Observe that accessibility from infinity trivially holds in this case, while the validity of the finiteness and regularity assumption (which is formally introduced in section 3.4) is not obvious, although it seems quite reasonable to expect it to be true. If that is the case, the compactness theorem indicates that a sequence of smooth almost-minimal surfaces spanned by Γ (or with boundaries converging to Γ) must converge to one of these five minimal surfaces, without bubbling. Actually, a simple additional argument can be used to exclude that the singular catenoids are possible limits, see Remark 4.1.
Remark 1.4. Both estimates (1.4) and (1.5) are sharp. When p = ∞, (1.4) generalizes to arbitrary minimal surfaces the fact that an almost-minimal surface bounded by a circle deviates from a flat disk at most linearly in the mean curvature times the area of the disk. The interest of (1.5) is that the L 2 -norm of the mean curvature appears as the dissipation of the area along a mean curvature flow with prescribed boundary data, see for example Huisken [Hui89] and Spruck [Spr07]. Moreover, we notice the close relation between (1.5) and the main result from [DPM14], which addresses the problem of proving global stability inequalities for smooth, area-minimizing surfaces. Finally, we remark that the bound on H M j L p (M j ) for p > n is needed to enforce the graphicality of M j over N i via Allard's regularity theorem. If one knows a priori that M j is a graph over N i , then (1.4) can be proved for every p ≥ 2 with p > n/2 (for example, p = 2 works for two and three dimensional surfaces); see Theorem 5.1 in section 5 below.
The paper is organized as follows. In section 2 we discuss the equilibrium conditions for soap films with gravity, and derive (1.1) under appropriate conditions. An interesting outcome of this discussion is the idea, based on physical grounds, of formulating Plateau's problem as a singular capillarity problem. Section 3 consists in part of a preliminary review of the necessary concepts from Geometric Measure Theory, and in part of a precise formulation of our two main assumptions. In section 4 we give a precise statement and the proof of our main compactness result, see Theorem 4.1. Finally, in section 5, we explain the reduction to graph-like surfaces, and prove various sharp convergence estimates, see Theorem 5.1. These last results show that on graph-like surfaces one can work with a very weak notion of almost-minimality deficit, a fact that will likely prove useful in future investigations.
Soap films with gravity
Due to gravitational forces, surfaces with small but non-zero mean curvature arise naturally in the study of soap films hanging on a wire. This effect is usually neglected in the mathematical literature, leading to an exclusive focus on minimal surfaces. The resulting model describes correctly the physical situation of small soap films. However, as noticed by Defay and Prigogine, "gravitational forces [...] play a dominant role in determining the shapes of macroscopic surfaces"; see [DP66]. The typical length scale which separates small films from large films is given by the capillary length $\kappa^{-1} = \sqrt{\sigma/(\rho g)}$, introduced in (1.2). For a solution of soap in water at room temperature, the values of the surface tension and of the density are, respectively, σ ≃ 0.03 N/m and ρ ≃ 10³ kg/m³, while g ≃ 9.81 N/kg is Earth's gravity, so that the length-scale $\kappa^{-1}$ is of the order of 1.7 mm. The deviation of a soap film with gravity from its limit minimal surface is expected to be O(hκ), where h is the average width of the film. For typical soap films we are in the perturbative regime, since we usually have h ≃ 10⁻³ mm ≃ 10⁻³ κ⁻¹.
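The capillary length computation can be checked in a few lines, with the values quoted above:

```python
import math

sigma = 0.03   # surface tension of the soap solution, N/m
rho = 1.0e3    # mass density of the solution, kg/m^3
g = 9.81       # gravitational acceleration, N/kg

kappa_inv = math.sqrt(sigma / (rho * g))  # capillary length, metres
print(f"capillary length ~ {kappa_inv * 1e3:.2f} mm")  # ~1.75 mm
```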
Idealizing the wire frame as a smooth curve Γ without boundary in R³, and the soap film as a smooth surface M bounded by Γ, if we neglect gravity then we are led to model soap films as minimal surfaces, i.e. surfaces with vanishing mean curvature,

$$H_M = 0. \tag{2.1}$$

This condition is derived from balancing the atmospheric pressures on the two sides of the film with the Laplace pressure induced by surface tension [You05, Lap06]. Denoting by σ the surface tension, if S is a small neighborhood of x ∈ M, with outer unit co-normal $\nu^S_M$ with respect to M, then the tension exerted on S along its boundary is

$$\sigma \int_{\partial S} \nu^S_M \, d\mathcal{H}^1 = \sigma \int_S \mathbf{H}_M \, d\mathcal{H}^2.$$

Here, $\mathbf{H}_M$ denotes the mean curvature vector of M, which, once the choice of a unit normal $\nu_M$ to M is specified, defines a scalar mean curvature $H_M$ appearing in (2.1) through the equation $\mathbf{H}_M = H_M\, \nu_M$. If the atmospheric pressures on the two sides of the film are assumed to be equal, as is the case if we ignore gravity, then the Laplace pressure must vanish, and we find (2.1). Let us recall that (2.1) can also be derived from the principle of virtual works, as first done by Gauss [Gau30], by taking as the total energy of the film the area of M times σ, namely

$$E(M) = \sigma\, \mathcal{H}^2(M). \tag{2.3}$$
Equation (2.1) fails to describe macroscopic soap films in two ways:

(i) For a given contour Γ, the minimal surfaces spanned by tΓ, for a scaling factor t > 1, are simply obtained by scaling the minimal surfaces spanned by Γ. This is evidently not the case for real soap films, where there is a competition between the capillary length κ⁻¹ and the length-scale of the boundary curve Γ in determining whether a soap film is produced at all. From this point of view, $H_M = 0$ fails completely at describing the macroscopic length-scales at which soap films are actually formed. Equation (1.1), namely $H_M = \kappa^2 h\, \nu_M \cdot e_3 + O(h^2)$, does not have this problem. Indeed, the solvability of a prescribed mean curvature equation $H_M = f$ with ∂M = Γ requires a control on the size of f in terms, for example, of $\mathcal{H}^2(M_\Gamma)^{-1/2}$, where $M_\Gamma$ is the area-minimizing surface spanned by Γ; see, e.g., the papers by Duzaar and Fuchs [DF90, DF92]. In particular, the solvability of (1.1) with boundary condition ∂M = Γ depends on the relative sizes of κ²h (which measures the physical properties of the soap solution) and of the length-scale of Γ.
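To get a feeling for this competition of length-scales, one can run the following rough, order-of-magnitude computation. The film half-thickness and the omission of dimensionless constants are our own illustrative assumptions, not values taken from the analysis above.

```python
import math

# Order-of-magnitude sketch of point (i): the prescribed curvature scale
# kappa^2 h should be controlled by H^2(M_Gamma)^(-1/2). For a circular
# frame of radius r, H^2(M_Gamma) ~ pi r^2, which suggests a critical
# radius r* with kappa^2 h ~ (pi r*^2)^(-1/2), beyond which no film forms.
sigma, rho, g = 0.03, 1.0e3, 9.81
h = 1.0e-6                      # assumed film half-thickness, metres
kappa_sq = rho * g / sigma      # inverse capillary length squared, 1/m^2
f = kappa_sq * h                # prescribed mean curvature scale, 1/m
r_star = 1.0 / (f * math.sqrt(math.pi))
print(f"critical frame radius ~ {r_star:.2f} m")  # ~1.73 m
```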
(ii) Equation (2.1) is invariant under rotations, while the effect of gravity is definitely anisotropic. For example, a soap film M hanging from a circular frame Γ of radius r should be exactly a flat disk if Γ is contained in a vertical plane, whereas it should possess a non-trivial curvature if Γ is in a horizontal position, with an average vertical deviation from the flat disk of order $r^2 H_M$. This deviation is observable depending on the length scale of Γ and on κ. In the case of soap bubbles, where $H_M = 0$ is replaced by $H_M$ constant, a deviation is experimentally observed and is substantial; see [CDTR+...]. In order to take the effect of gravity into account, one might be tempted to add to the surface tension energy functional a term corresponding to the potential energy of the film, namely, to consider, in place of (2.3), the sum of the area term and of the gravitational potential energy of the film, with ρ* denoting the surface density of mass. While this would be correct for a solid elastic slab, or a rubber sheet, for a fluid it is clearly incorrect. In fact, it would amount to replacing $H_M = 0$ with the equation $H_M(x) = \kappa^2 x_3$, which would incorrectly predict that a soap film hanging from a perfectly planar wire contained in a vertical plane should have curvature and lie out of the plane! In [DP66, Section I.4], Defay and Prigogine explain how the effect of gravity should be modeled by balancing pressures. One needs to consider the finite thickness of the film, bounded by two different interfaces, and to take into account the difference in the hydrostatic pressures on the two faces caused by the gravitational pull. We now put this idea into equations, and formulate a PDE for the problem. The resulting PDE, see (2.8), justifies (1.1), which, in turn, appears in the literature when M is axially symmetric and very close (in a C¹-sense) to a plane; see e.g. [dGBWQ03, Equation (2.5)].
Consider a smooth two-dimensional surface M bounded by a smooth curve Γ in R³, and oriented by a unit normal $\nu_M$. Here M plays the role of an ideal surface lying inside the film. Given a smooth function α defined on M, we denote its graph over M by

$$M(\alpha) := \{\, x + \alpha(x)\, \nu_M(x) : x \in M \,\}.$$

The two interfaces of the soap film are described by the graphs M(α) and M(−β) for positive functions α and β. Up to replacing M with M((α − β)/2), and then setting ψ := (α + β)/2, we can actually assume that the interfaces are M(ψ) and M(−ψ), where ψ is a smooth positive function on M. However, it does not seem that the symmetric parametrization is always the most convenient, so we shall argue in terms of α and β.
Given x ∈ M, and with reference to Figure 2.1, at equilibrium the pressure p(x⁺) at the point $x^+ = x + \alpha(x)\nu_M(x)$ inside the film satisfies the Laplace law

$$p(x^+) = p_0 + \sigma\, H_{M(\alpha)}(x^+), \tag{2.5}$$

where $H_{M(\alpha)}$ is the scalar mean curvature of M(α) with respect to the unit normal pointing outside the film, $p_0$ is the atmospheric pressure, and σ is the surface tension. The pressure p(x⁻) at the point $x^- = x - \beta(x)\nu_M(x)$ satisfies, analogously,

$$p(x^-) = p_0 + \sigma\, H_{M(-\beta)}(x^-), \tag{2.6}$$

where $H_{M(-\beta)}$ is the scalar mean curvature of M(−β) with respect to the unit normal pointing outside of the film. The difference between p(x⁻) and p(x⁺) is the hydrostatic pressure

$$p(x^-) - p(x^+) = g\,\rho\,(\alpha(x) + \beta(x))\, \nu_M(x) \cdot e_3. \tag{2.7}$$

Combining (2.5), (2.6) and (2.7) we obtain the equation for minimal surfaces with gravity

$$\sigma\,\big( H_{M(-\beta)}(x^-) - H_{M(\alpha)}(x^+) \big) = g\,\rho\,(\alpha(x) + \beta(x))\, \nu_M(x) \cdot e_3. \tag{2.8}$$

If |∇α| and |∇β| are sufficiently small at x, and we consider the mid-surface parametrization, then we can assume that locally α ≡ β ≡ h, where h is a small positive constant. Denoting by {κ₁, κ₂} the principal curvatures of M, and stressing the smallness of h by requiring 0 < h < max{|κ₁|, |κ₂|}⁻¹, we thus obtain $H_{M(\pm h)} = H_M \pm h\,(\kappa_1^2 + \kappa_2^2) + O(h^2)$, and (2.8) is readily seen to imply (1.1). We now explain how (2.8) can be derived from energy considerations. The idea is to treat the problem of a soap film hanging from a wire frame as a capillarity problem. We model the wire frame as a solid δ-neighborhood of an idealized curve Γ, and denote by $A_\delta$ the complement of this closed neighborhood. We model the soap film as a set $E \subset A_\delta$ with very small volume ε = |E|, and, following Gauss' treatment of capillarity theory, we define its energy as the sum of the area of the liquid-air interface, weighted by σ, of γ times the area of the wetted portion of the wire frame walls, and of the gravitational potential energy of E; see Figure 2.2. Here γ ∈ (−1, 1) is a dimensionless parameter taking into account the ratio between the surface tension on the liquid-air interface and the surface tension on the liquid-solid interface along the wire frame walls. Assuming that E is a smooth critical point of this energy, the Euler-Lagrange equations boil down to the equilibrium condition

$$\sigma\, H_E + g\,\rho\, x_3 = \lambda \qquad \text{on } A_\delta \cap \partial E, \tag{2.9}$$

where $H_E$ denotes the scalar mean curvature of ∂E with respect to the outer unit normal to E, and λ is a Lagrange multiplier associated to the volume constraint. Equation (2.9) is coupled with Young's law, which prescribes the contact angle between the liquid-air interface and the wire frame walls in terms of γ. Under the assumption that ε/𝓗²(S) ≪ δ and that δ is sufficiently small in terms of the local and global geometric properties of Γ, it is reasonable to expect the existence of critical points E described by means of mid-surfaces M spanned by Γ. More precisely, we consider critical points E corresponding to surfaces M with ∂M = Γ in the sense that, for every x ∈ M ∩ A_δ, we can find r > 0 such that, inside $B_r(x)$, E coincides with the region enclosed between the graphs M(α) and M(−β). In this case, (2.9) computed at $y^+ = x + \alpha(x)\,\nu_M(x)$ and at $y^- = x - \beta(x)\,\nu_M(x)$ yields the pair of equations (2.11). Notice that our sign conventions on the scalar mean curvatures are such that, subtracting the two equations, we indeed deduce the validity of (2.8) as a consequence of the equilibrium condition for Gauss' capillarity energy. Notice also that the full set of equilibrium conditions is expressed by considering Young's law together with the two equations (2.11), or with the single equation (2.9), rather than by (2.8) alone. Here the role of (2.8) is stressed because, as explained above, it clearly motivates the study of surfaces with small mean curvature.
In summary, we have seen in this section how surfaces with prescribed boundary and small mean curvature, such as the ones described by equation (2.8), or by its approximation (1.1), arise naturally in the study of soap films hanging from a wire. More generally, the use of capillarity theory to model soap films provides an additional, more physical, point of view on the long-debated issue of prescribing boundary data in the mathematical formulation of Plateau's problem; see [Har04, Dav14, HP16, DPDRG16, DLGM17, GLF17, DLDRG17, ABP17, FK18, DR18] for the most recent developments on this venerable question. Leaving a more complete discussion of this last point to a forthcoming paper, we focus here on a first problem raised by this approach, namely understanding the relation between almost-minimal and minimal surfaces.
Almost-minimal surfaces
Let Γ be a compact (n − 1)-dimensional surface in $\mathbb{R}^{n+1}$ without boundary. Motivated by the study of surfaces obeying (1.1), we now consider the general question of understanding the relation between the minimal and the almost-minimal surfaces spanned by Γ. The question we want to address is the following:

In the class of surfaces spanned by Γ, is the family of minimal surfaces rich enough to describe all the possible limits of almost-minimal surfaces? (3.1)

Theorem 4.1 answers this question affirmatively under the assumptions that Γ is accessible from infinity and spans finitely many minimal surfaces without boundary singularities. The statement of the theorem is actually quite delicate, as it involves several choices and assumptions, which we address in the following paragraphs. In § 3.1 we propose various ways of measuring the almost-minimality of a surface, while in § 3.2 we review two notions of convergence for smooth surfaces arising in Geometric Measure Theory. In § 3.3 we discuss our geometric assumption on the connected components of Γ, and in § 3.4 we make precise the idea that Γ spans at most finitely many minimal surfaces.
3.1. Measuring almost-minimality. Directly motivated by the equation for minimal surfaces with gravity (1.1), we shall consider the uniform deficit

$$\delta_\infty(M) := \|H_M\|_{L^\infty(M)}$$

as our chief option to measure almost-minimality. But depending on other possible applications of almost-minimal surfaces, the family of integral deficits

$$\delta_p(M) := \|H_M\|_{L^p(M)}, \qquad 1 \le p < \infty,$$

may be more relevant. For example, δ₂(M) definitely plays a role in the study of the gradient flow defined by Plateau's problem, see [Hui89, Spr07]. At the weaker end of the spectrum, and closer to the point of view usually adopted when discussing Palais-Smale sequences in variational problems, one may consider the duality deficits

$$\delta_{-\infty}(M) := \sup\Big\{ \int_M \mathrm{div}^M X \, d\mathcal{H}^n \,:\, X \in C^1_c(\mathbb{R}^{n+1} \setminus \Gamma; \mathbb{R}^{n+1}),\ |\nabla X| \le 1 \Big\}.$$

This last definition is motivated by the tangential divergence theorem, stating that if M is a smooth compact n-dimensional surface with boundary Γ, then

$$\int_M \mathrm{div}^M X \, d\mathcal{H}^n = \int_\Gamma X \cdot \nu^\Gamma_M \, d\mathcal{H}^{n-1} - \int_M X \cdot \mathbf{H}_M \, d\mathcal{H}^n.$$

Here $\nu^\Gamma_M$ is the outer unit co-normal to Γ with respect to M, and $\mathrm{div}^M X$ is the tangential divergence of X with respect to M, that is,

$$\mathrm{div}^M X := \mathrm{div}\, X - \nu_M \cdot (\nabla X)\, \nu_M.$$

An interesting fact is that on surfaces M that are a priori known to be graphs over strictly stable minimal surfaces, the duality deficit δ₋∞(M) already controls the area deficit, see Theorem 5.1.
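As a toy illustration of these scales (using the deficit definitions just recalled, and assuming the constant-curvature simplification below), here is the arithmetic for a half-sphere, for which the mean curvature has constant magnitude; the numbers are purely illustrative.

```python
import math

# Deficit scales for the upper half-sphere of radius R in R^3 (n = 2),
# for which |H_M| = 2/R is constant.
R = 1.0
area = 2 * math.pi * R ** 2        # area of the half-sphere
H = 2.0 / R                        # magnitude of the scalar mean curvature

delta_inf = H                      # uniform deficit
def delta_p(p: float) -> float:    # integral deficit, constant |H_M|
    return H * area ** (1.0 / p)

print(delta_inf, delta_p(1), delta_p(2))
```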
3.2. Convergence of smooth surfaces. In order to provide a better insight into question (3.1), we need to discuss possible notions of limit for a sequence of smooth surfaces.
To introduce the relevant ideas, let us consider a sequence $\{M_j\}_j$ of smooth oriented n-dimensional surfaces such that

$$\partial M_j = \Gamma, \qquad \sup_j \mathcal{H}^n(M_j) < \infty, \qquad \delta_\infty(M_j) \to 0 \quad \text{as } j \to \infty. \tag{3.4}$$

Geometric Measure Theory provides two canonical ways to discuss the convergence of such a sequence $\{M_j\}_j$. Both approaches require the identification of each $M_j$ with a linear functional on a space of test functions or, equivalently, with a Radon measure on a suitable finite-dimensional space. The first approach, the theory of currents, allows one to transfer the spanning information $\partial M_j = \Gamma$ to a generalized limit surface. The second approach, the theory of varifolds, allows one to infer from $\delta_\infty(M_j) \to 0$ the existence of a limit surface that is minimal, again in a generalized sense. A subtlety lies in the fact that the generalized limit surface in the varifold sense may be larger than its counterpart in the sense of currents.
The viewpoint of currents. We regard each oriented surface $M_j$ in (3.4) as a continuous linear functional $⟦M_j⟧$ on the space $\mathcal{D}^n(\mathbb{R}^{n+1})$ of smooth, compactly supported n-dimensional differential forms, equipped with the standard topology of test functions. More precisely, if $M_j$ is oriented by a continuous choice of a unit normal vector field $\nu_{M_j}$, we set

$$⟦M_j⟧(\omega) := \int_{M_j} \langle \omega(x), \star \nu_{M_j}(x) \rangle \, d\mathcal{H}^n(x), \qquad \omega \in \mathcal{D}^n(\mathbb{R}^{n+1}),$$

where, given $\nu \in S^n$, $\star\nu$ denotes the simple unit n-vector corresponding to the n-dimensional plane $\nu^\perp$ oriented by ν, and the duality between n-vectors and n-covectors appears under the integral. Let us recall that $\star\nu_{M_j}$ induces a smooth orientation $\tau_\Gamma$ on Γ (that is, a smooth field of simple unit (n − 1)-vectors defining and orienting the tangent planes to Γ) in such a way that Stokes' theorem holds:

$$\int_{M_j} \langle d\omega, \star \nu_{M_j} \rangle \, d\mathcal{H}^n = \int_\Gamma \langle \omega, \tau_\Gamma \rangle \, d\mathcal{H}^{n-1}, \tag{3.5}$$

where dω is the exterior differential of the (n − 1)-form ω. In this setting, it is quite natural to define the "boundary" of $⟦M_j⟧$ as the continuous linear functional on $\mathcal{D}^{n-1}(\mathbb{R}^{n+1})$ defined by setting

$$\partial ⟦M_j⟧(\omega) := ⟦M_j⟧(d\omega). \tag{3.6}$$

Of course, Stokes' theorem (3.5) implies that if Γ is oriented by the orientation $\tau_\Gamma$ induced by the choice of $\nu_{M_j}$, then $\partial ⟦M_j⟧ = ⟦\Gamma⟧$. The second and third conditions in (3.4) and the compactness theorem for Radon measures imply the existence of a continuous linear functional T on $\mathcal{D}^n(\mathbb{R}^{n+1})$ such that, up to extracting subsequences, $⟦M_j⟧ \to T$ (3.7). Is the linear functional T still represented by the action on forms of an oriented surface with boundary, as the functionals $⟦M_j⟧$ are? A deep theorem of Federer and Fleming [FF60] gives a positive answer, provided that we introduce a suitable class of generalized surfaces with boundary. The key notion here is that of a rectifiable set. We say that a Borel set $N \subset \mathbb{R}^{n+1}$ is locally $\mathcal{H}^n$-rectifiable if, up to an $\mathcal{H}^n$-null set, N can be covered by countably many Lipschitz images of $\mathbb{R}^n$ into $\mathbb{R}^{n+1}$, and if $\mathcal{H}^n(N \cap B_R) < \infty$ for every R > 0. If N is locally $\mathcal{H}^n$-rectifiable, then N has a tangent plane almost everywhere, in the sense that for $\mathcal{H}^n$-a.e. x ∈ N there exists an n-dimensional linear subspace $T_x N$ of $\mathbb{R}^{n+1}$ along which N blows up at x. Analogously to the smooth setting, a Borel measurable unit vector field $\nu_N$ with $\nu_N(x) \perp T_x N$ for $\mathcal{H}^n$-a.e. x ∈ N will be called an orientation of the rectifiable set N. Coming back to (3.7), the Federer-Fleming compactness theorem shows the existence of a locally $\mathcal{H}^n$-rectifiable set N, of a Borel measurable orientation $\nu_N$, and of a function $\alpha \in L^1_{loc}(\mathcal{H}^n \llcorner N; \mathbb{Z})$ (an integer-valued multiplicity on N) such that $T = ⟦N, \star\nu_N, \alpha⟧$, i.e.

$$T(\omega) = \int_N \alpha(x)\, \langle \omega(x), \star \nu_N(x) \rangle \, d\mathcal{H}^n(x), \qquad \omega \in \mathcal{D}^n(\mathbb{R}^{n+1}). \tag{3.9}$$
Moreover, as a simple by-product of (3.6), we see that the limit current T still has boundary $⟦\Gamma⟧$, in the sense that $\partial T = ⟦\Gamma⟧$, or, more explicitly:

$$T(d\omega) = \int_\Gamma \langle \omega, \tau_\Gamma \rangle \, d\mathcal{H}^{n-1}, \qquad \omega \in \mathcal{D}^{n-1}(\mathbb{R}^{n+1}). \tag{3.10}$$

The viewpoint of varifolds. The next question is whether the rectifiable set N, found by taking the limit of $\{M_j\}_j$ in the sense of currents, is minimal, at least in some generalized sense. The starting point is the tangential divergence theorem applied on $M_j$ to fields supported away from Γ, which yields

$$\int_{M_j} \mathrm{div}^{M_j} X \, d\mathcal{H}^n = - \int_{M_j} X \cdot \mathbf{H}_{M_j} \, d\mathcal{H}^n, \qquad X \in C^1_c(\mathbb{R}^{n+1} \setminus \Gamma; \mathbb{R}^{n+1}). \tag{3.11}$$

Notice that, since $\delta_\infty(M_j) \to 0$, the right-hand side of (3.11) converges to zero as $j \to \infty$.
To pass to the limit on the left-hand side we adopt the following point of view. Let us set

$$G_n := \mathbb{R}^{n+1} \times (S^n/\equiv),$$

where $\nu_1 \equiv \nu_2$ if and only if $\nu_1 = \pm\nu_2$, and denote by [ν] the ≡-equivalence class of $\nu \in S^n$. The point (x, [ν]) ∈ $G_n$ identifies the (unoriented) n-dimensional affine plane orthogonal to ν and passing through x in $\mathbb{R}^{n+1}$. Given $X \in C^1_c(\mathbb{R}^{n+1}; \mathbb{R}^{n+1})$, we set

$$\varphi_X(x, [\nu]) := \mathrm{div}\, X(x) - \nu \cdot (\nabla X(x))\, \nu, \qquad (x, [\nu]) \in G_n.$$

The definition is well-posed, as the right-hand side is invariant when exchanging ν with −ν. In this way, each $M_j$ induces a Radon measure $V_j$ on $G_n$ (a varifold), acting on test functions by

$$\langle V_j, \varphi \rangle := \int_{M_j} \varphi(x, [\nu_{M_j}(x)]) \, d\mathcal{H}^n(x), \qquad \varphi \in C^0_c(G_n),$$

so that $\langle V_j, \varphi_X \rangle = \int_{M_j} \mathrm{div}^{M_j} X \, d\mathcal{H}^n$, and, up to extracting subsequences, $V_j$ converges to a Radon measure V on $G_n$. Given that $\delta_\infty(M_j) \to 0$, the above argument shows that $\langle V, \varphi_X \rangle = 0$ for every X compactly supported in the complement of Γ. We then ask whether the varifold V can be associated to a generalized surface, and to what extent this surface is minimal. Another deep theorem, this time due to Allard [All72], provides the following answer: there exist a locally $\mathcal{H}^n$-rectifiable set N and a function $\theta \in L^1_{loc}(\mathcal{H}^n \llcorner N; \mathbb{N})$ (a non-negative integral multiplicity on N) such that V is represented by N and θ, in symbols V = var(N, θ), in the sense that

$$\langle V, \varphi \rangle = \int_N \theta(x)\, \varphi(x, [\nu_N(x)]) \, d\mathcal{H}^n(x), \qquad \varphi \in C^0_c(G_n). \tag{3.14}$$

As noticed, under the assumption (3.4), we have $\langle V, \varphi_X \rangle = 0$ whenever $\mathrm{spt}\, X \cap \Gamma = \emptyset$.
In other words, the varifold V = var(N, θ) is minimal on $\mathbb{R}^{n+1} \setminus \Gamma$ (or stationary, in the common terminology of Geometric Measure Theory), in the sense that

$$\int_N \theta\, \mathrm{div}^N X \, d\mathcal{H}^n = 0 \qquad \forall X \in C^1_c(\mathbb{R}^{n+1} \setminus \Gamma; \mathbb{R}^{n+1}). \tag{3.15}$$

Two remarks are in order: (i) the rectifiable set N arising in the varifold convergence is in general larger than the rectifiable set obtained by taking the limit of $\{M_j\}_j$ in the sense of currents. The typical example is obtained by considering $M_j = B_1 \cap (K/j)$ (for $j \to \infty$), where K is a fixed catenoid. In this case the limit in the sense of currents is trivial, N = ∅, because the two sheets of the catenoid cancel out in the limit due to their opposite orientations; at the same time, if the limit is taken in the sense of varifolds, N is equal to a unit disk with multiplicity θ = 2. For an example with fixed boundary data, see Example 3.4 below. From this point of view, answering question (3.1) partly amounts to determining conditions under which this ambiguity between the two limits, one taken in the sense of currents and the other in the sense of varifolds, does not occur; (ii) coming back to the generalized minimal surface condition (3.15), the next classical example shows how this condition allows one to include in the theory of minimal surfaces non-smooth examples that are actually physically relevant.
Example 3.1. Let Γ = Γ₁ ∪ Γ₂ be given by two parallel circles in R³ with centers on a common axis. We can construct generalized minimal surfaces on R³ \ Γ as multiplicity-one varifolds var(N_i) := var(N_i, 1), associated to the rectifiable sets

$$N_1 = D_1 \cup D_2, \quad N_2 = K_3, \quad N_3 = K_4, \quad N_4 = K_5 \cup K_6 \cup D_7, \quad N_5 = K_8 \cup K_9 \cup D_{10},$$

where: D₁ and D₂ are two disks spanned by Γ₁ and Γ₂ respectively; K₃ and K₄ are the catenoids (one stable, the other unstable) spanned by Γ; K₅ and K₆ are two catenoid pieces meeting at a 2π/3-angle along a circle Γ₃ lying on the mid-plane between Γ₁ and Γ₂, centered on the same axis; D₇ is the disk spanned by Γ₃; K₈ and K₉ are another pair of catenoid pieces meeting at a 2π/3-angle along a circle Γ₄ lying on the mid-plane between Γ₁ and Γ₂, centered on the same axis, with the radius of Γ₄ smaller than the radius of Γ₃; D₁₀ is the disk spanned by Γ₄.
We claim that the var(N_i)'s are generalized minimal surfaces. Since N₄ and N₅ are not smooth, we need to check carefully that they satisfy (3.15). By applying the tangential divergence theorem separately on the three minimal surfaces K₅, K₆ and D₇, we find that, for every admissible X, the integral $\int_{N_4} \mathrm{div}^{N_4} X \, d\mathcal{H}^2$ reduces to a boundary term on Γ₃ involving the sum of the outer unit co-normals of K₅, K₆ and D₇ along Γ₃. The sum of these three co-normals is identically zero by the 2π/3-angle condition imposed on K₅ and K₆, and so (3.15) holds, thus showing that N₄ is minimal. The minimality of N₅ follows analogously. We also notice that every integer-valued combination

$$V = \sum_{i=1}^5 q_i\, \mathrm{var}(N_i), \qquad q_i \in \mathbb{N}, \tag{3.16}$$

satisfies (3.15), and is thus a possible limit for a sequence $\{M_j\}_j$ satisfying (3.4) with Γ = Γ₁ ∪ Γ₂. If such a limit arises with $\sum_i q_i \ge 2$, we speak of bubbling. In fact, an additional subtlety lies in the fact that varifolds of the form

$$V = q_{1,1}\, \mathrm{var}(D_1) + q_{1,2}\, \mathrm{var}(D_2) + \sum_{i=2}^5 q_i\, \mathrm{var}(N_i), \qquad q_{1,1} \ne q_{1,2}, \tag{3.17}$$

also satisfy (3.15), and thus can arise as limits of almost-minimal surfaces (and indeed do so, see Example 3.5 below, if the mean curvature deficit is sufficiently weak). A limit like (3.17) is qualitatively worse than a limit of the form (3.16), in the sense that D₁ and D₂ alone do not span the whole Γ, but just some of its connected components.
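The co-normal cancellation used here is elementary, and can be checked numerically:

```python
import numpy as np

# Three coplanar unit vectors at mutual 120-degree (2*pi/3) angles sum to
# zero, which is why the boundary terms along Gamma_3 cancel.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
conormals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(np.allclose(conormals.sum(axis=0), 0.0))  # True
```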
3.3. A geometric assumption: accessibility from infinity. Given x ∈ Γ, we say that Γ is accessible from infinity at x if there exist a unit vector e and an angle θ ∈ [0, π) such that the convex envelope $\Gamma^{co}$ of Γ is contained in the wedge with vertex x, axis e and opening angle θ, namely

$$\Gamma^{co} \subseteq x + \big\{ v \in \mathbb{R}^{n+1} : v \cdot e \ge |v| \cos(\theta/2) \big\}. \tag{3.18}$$

Notice that if (3.18) holds at a given x, then every minimal surface N spanned by Γ is automatically contained in the wedge centered at x which appears on the right-hand side of (3.18).
Definition 3.2. We say that Γ is accessible from infinity if, for each connected component $\Gamma_m$ of Γ, the set of points $x \in \Gamma_m$ such that Γ is accessible from infinity at x has positive $\mathcal{H}^{n-1}$-measure.

Example 3.4 (Negative answer to (3.1) and bubbling with uniform deficit). Consider two concentric disks S₁ and S₂ contained in a same plane, and bounded by circles Γ₁ and Γ₂, see Figure 3.2. Set Γ = Γ₁ ∪ Γ₂, so that N = S₁ \ int(S₂) is definitely a minimal surface spanned by Γ. Also, choose orientations on S₁, S₂ and Γ in such a way that the spanning condition holds for the associated currents, that is, $\partial ⟦N⟧ = \partial(⟦S_1⟧ - ⟦S_2⟧) = ⟦\Gamma⟧$. We construct a sequence of surfaces $M_j$ by slightly bending S₁ and S₂ in the radial direction, and then connecting the two pieces with a catenoidal neck, see Figure 3.2. Evidently, this can be arranged so that $\delta_\infty(M_j) \to 0$ and $\sup_j \mathcal{H}^n(M_j) < \infty$. In particular,

$$\mathrm{var}(M_j) \to \mathrm{var}(N) + 2\,\mathrm{var}(S_2) \quad \text{in the sense of varifolds.} \tag{3.20}$$

[Figure 3.3: Bubbling is possible even when Γ is accessible from infinity, if a weak notion of deficit is used. Here $M_j$ is the surface of revolution obtained by rotating the one-dimensional profile on the right; $B_{\varepsilon_j}(\Gamma_1)$ denotes an $\varepsilon_j$-neighborhood of the circle Γ₁, and $M^*_j$ is the part of $M_j$ lying outside $B_{\varepsilon_j}(\Gamma_1)$. We take $\varepsilon_j$ such that $M_j$ intersects $\partial B_{\varepsilon_j}(\Gamma_1)$ in three circles, and so that $H_{M_j}$ is uniformly small on $M_j \setminus M^*_j$. The limit surface counts one copy of K, and two copies of the disk filling Γ₁.]
On the other hand, the currents $⟦M_j⟧$ satisfy $⟦M_j⟧ \to ⟦S_1⟧ - ⟦S_2⟧ = ⟦N⟧$, because the two copies of S₂ appearing in the limit come with opposite orientations, and hence the corresponding currents cancel out. For this simple boundary curve Γ we thus have a negative answer to (3.1): indeed, as shown by (3.20), the limit of the $\{M_j\}_j$ cannot be described only in terms of minimal surfaces spanned by Γ (which, indeed, is not spanning S₂). In this example the bubbling phenomenon occurs, as part of the limit surface has multiplicity 2. Observe also that Γ is not accessible: indeed, (3.18) cannot hold at any x ∈ Γ₂. Finally, the example can be easily generalized to the situation when S₁ and S₂ are two smooth, bounded, simply connected, orientable minimal surfaces spanned by curves Γ₁ and Γ₂, with S₂ ⊂ S₁.
Example 3.5 (Bubbling under accessibility from infinity with a very weak deficit). As in Example 3.1, let Γ consist of two parallel circles Γ₁ and Γ₂ with centers on a common axis, so that Γ is accessible from infinity. We can give a negative answer to question (3.1) if a too weak notion of almost-minimality deficit is used, arguing along the following lines. Consider a catenoid K spanned by Γ, and construct a sequence $M_j$ by slightly deforming K outwards while keeping the boundary data at Γ₂, sharply turning around along Γ₁, going all the way towards the center of Γ₁, turning downwards again with a small catenoidal neck, and then almost filling Γ₁ with a disk; see Figure 3.3. Denote by $M^*_j$ the part of $M_j$ lying at distance at most $\varepsilon_j$ from Γ₁, for suitably selected $\varepsilon_j \to 0$ as $j \to \infty$. We claim that

$$\mathrm{var}(M_j) \to \mathrm{var}(K) + 2\,\mathrm{var}(D_1), \qquad ⟦M_j⟧ \to ⟦K⟧, \qquad \delta_{-\infty}(M_j) \to 0,$$

where D₁ is the disk spanned by Γ₁. Thus the limits in the sense of varifolds and currents do not agree (we observe bubbling), while an almost-minimality deficit goes to zero (although this is indeed the weakest possible deficit in our scale). To show that $\delta_{-\infty}(M_j) \to 0$, fix a vector field X compactly supported away from Γ and with |∇X| ≤ 1. We first notice that the contribution of $M^*_j$ to $\int_{M_j} \mathrm{div}^{M_j} X \, d\mathcal{H}^2$ vanishes in the limit, since $\mathcal{H}^2(M^*_j) \to 0$. If $\Gamma^*_j$ is the component of the boundary of $M_j \setminus M^*_j$ that is not Γ₂, then by our choice of $\varepsilon_j$ the tangential divergence theorem gives

$$\Big| \int_{M_j \setminus M^*_j} \mathrm{div}^{M_j} X \, d\mathcal{H}^2 \Big| \ \le\ \varepsilon_j\, \mathcal{H}^1(\Gamma^*_j) + \mathrm{diam}(M_j) \int_{M_j \setminus M^*_j} |\mathbf{H}_{M_j}| \, d\mathcal{H}^2,$$

where we have used |∇X| ≤ 1 and X = 0 on Γ to deduce: (i) that $|X| \le \varepsilon_j$ on $\Gamma^*_j$; and (ii) that $|X| \le \mathrm{diam}(M_j)$ on $M_j$. Since $\mathcal{H}^1(\Gamma^*_j) \to 3\, \mathcal{H}^1(\Gamma_1)$ by construction, we have proved our claim.
3.4. Finiteness and regularity of the Plateau problem. The second main assumption we shall consider is that Γ spans finitely many minimal surfaces. This is an idea that has to be formulated with great care, because of the singularities that minimal surfaces can exhibit.
Let Γ be an (n − 1)-dimensional compact smooth surface without boundary. As discussed in § 3.2, any varifold V = var(N, θ), corresponding to a compact $\mathcal{H}^n$-rectifiable set N in $\mathbb{R}^{n+1}$ and to a function $\theta \in L^1(\mathcal{H}^n \llcorner N; \mathbb{N})$ such that

$$\int_N \theta\, \mathrm{div}^N X \, d\mathcal{H}^n = 0 \qquad \forall X \in C^1_c(\mathbb{R}^{n+1} \setminus \Gamma; \mathbb{R}^{n+1}), \tag{3.23}$$

can arise as a possible limit of almost-minimal surfaces. Possible limits V have two other important properties: (i) as a consequence of (3.23), the support of V is bounded: indeed, an application of the monotonicity identity implies that spt V is contained in the convex hull of Γ, see [Sim83, Theorem 19.2]; (ii) given our assumptions on $M_j$, V has bounded first variation, in the sense that

$$\sup\Big\{ \int_N \theta\, \mathrm{div}^N X \, d\mathcal{H}^n \,:\, X \in C^1_c(\mathbb{R}^{n+1}; \mathbb{R}^{n+1}),\ |X| \le 1 \Big\} < \infty.$$
In particular, by differentiation of Radon measures, (3.23) always extends to an identity of the form

$$\int_N \theta\, \mathrm{div}^N X \, d\mathcal{H}^n = \int X \cdot \nu \, d\mu^* \qquad \forall X \in C^1_c(\mathbb{R}^{n+1}; \mathbb{R}^{n+1}), \tag{3.24}$$

where µ* is a Radon measure supported on Γ and singular with respect to $\mathcal{H}^n \llcorner N$, and where ν is a Borel unit vector field. Fully understanding the regularity of spt V when (3.23) holds is a major open problem in Geometric Measure Theory. What is known on this specific problem is the following. Define (for any compact set N) the set Reg(N) of regular points of N as the set of those x ∈ N such that $N \cap B_\rho(x)$ is a smooth n-dimensional surface (possibly with boundary) for some ρ > 0, and the set of singular points as Σ(N) := N \ Reg(N). We further divide Reg(N) into Reg•(N), the set of regular points of interior type (i.e., $N \cap B_\rho(x)$ is diffeomorphic to an n-dimensional disk), and Reg_b(N), the regular points of boundary type. Now, let V = var(N, θ) be such that (3.23) holds, and consider any open set A such that θ is constant on A ∩ N. Then Allard's regularity theorem [All72] shows that the regular points of interior type form a dense, relatively open subset of (A ∩ N) \ Γ. There is also a boundary regularity theorem [All75], showing the existence of ε(n) > 0 such that if θ = 1 on A ∩ N and $\mathcal{H}^n(N \cap B_\rho(x)) \le (1 + \varepsilon(n))\, \omega_n \rho^n / 2$ for some x ∈ A ∩ N ∩ Γ, then $N \cap B_{\varepsilon(n)\rho}(x)$ is diffeomorphic to a half-disk. The application of Allard's boundary regularity theorem can be quite deceptive, though. With reference to the notation of Example 3.1, it suffices to take N = D₁ ∪ K₃ with θ ≡ 1 to construct an example of V solving (3.23), with N \ Γ = Reg•(N), and with Γ₁ = Σ(N). Notice also that a similar example holds even in the "smoother" case when the measure µ* considered in the extension (3.24) of (3.23) actually agrees with $\mathcal{H}^{n-1} \llcorner \Gamma$, and when ν is $\mathcal{H}^{n-1}$-a.e. orthogonal to Γ; that is to say, when (3.24) takes the more geometric form

$$\int_N \theta\, \mathrm{div}^N X \, d\mathcal{H}^n = \int_\Gamma X \cdot \nu \, d\mathcal{H}^{n-1} \qquad \forall X \in C^1_c(\mathbb{R}^{n+1}; \mathbb{R}^{n+1}). \tag{3.25}$$

Indeed, if the distance between the circles Γ₁ and Γ₂ in Example 3.1 is such that K₃ meets D₁ along Γ₁ at a 120-degree angle, then, adding up the unit co-normals of D₁ and K₃ on Γ₁, we obtain a unit vector such that (3.25) holds; but still, the boundary regularity theorem cannot be applied at any point of Γ₁, as N \ Γ = Reg•(N) and Γ₁ = Σ(N). Summarizing, the analysis of almost-minimal surfaces spanned by Γ unavoidably leads one to consider minimal varifolds in $\mathbb{R}^{n+1} \setminus \Gamma$; but, in turn, these objects are only partially understood. Our compactness theorem will thus be conditional on assuming a rather precise structure for the minimal varifolds in $\mathbb{R}^{n+1} \setminus \Gamma$: namely, we shall require the possibility of decomposing them as linear combinations, with integer coefficients, of finitely many, unit density, connected pieces $N_i$ with unit co-normals $\nu^{co}_i$ along finite unions $\Gamma^{(i)}$ of connected components of Γ (in particular, each piece $N_i$ may be spanned by just part of Γ); moreover, when removing its singular set and Γ, each piece $N_i$ is disconnected into at most finitely many smooth connected components. As explained in Proposition 3.8 below, these assumptions hold in the fundamental case when Γ is a graph over a convex surface.
Definition 3.6 (Finiteness and regularity of minimal varifolds spanned by Γ). Let Γ be a compact (n − 1)-dimensional smooth surface without boundary in $\mathbb{R}^{n+1}$, and let $\{\Gamma_m\}_{m=1}^M$ denote the connected components of Γ. We say that Γ spans finitely many minimal surfaces without boundary singularities if there exists a finite family $\{N_i\}_i$ of compact $\mathcal{H}^n$-rectifiable sets with the following properties:

(i) for each i, $N_i \setminus \Gamma$ is connected, and there exists a finite union $\Gamma^{(i)} = \bigcup_{m \in I^{(i)}} \Gamma_m$ of connected components of Γ with

$$\Gamma^{(i)} \subseteq N_i, \qquad \Sigma(N_i) \cap \Gamma = \emptyset,$$

and such that, for some Borel vector field $\nu^{co}_i : \Gamma^{(i)} \to S^n$,

$$\int_{N_i} \mathrm{div}^{N_i} X \, d\mathcal{H}^n = \int_{\Gamma^{(i)}} X \cdot \nu^{co}_i \, d\mathcal{H}^{n-1} \qquad \forall X \in C^1_c(\mathbb{R}^{n+1}; \mathbb{R}^{n+1});$$

moreover, Reg•(N_i) has finitely many connected components $\{N_{i,\ell}\}_{\ell=1}^{L(i)}$ such that, for each ℓ, $\mathrm{cl}(N_{i,\ell}) \setminus \Sigma(N_i)$ is an orientable, smooth n-dimensional surface with boundary, whose boundary points are contained in $\Gamma^{(i)}$;

(ii) if V = var(N, θ) has bounded support, bounded first variation, and satisfies

$$\int_N \theta\, \mathrm{div}^N X \, d\mathcal{H}^n = 0 \qquad \forall X \in C^1_c(\mathbb{R}^{n+1} \setminus \Gamma; \mathbb{R}^{n+1}), \tag{3.26}$$

then there exist $q_i \in \mathbb{N}$ such that $V = \sum_i q_i\, \mathrm{var}(N_i)$.

Remark 3.7. By Allard's regularity theorem and by property (i), for each i, var(N_i) is a minimal varifold in $\mathbb{R}^{n+1} \setminus \Gamma$ with constant unit density, and thus we have $\mathcal{H}^n(\Sigma(N_i)) = 0$. Notice that we are excluding the possibility that Σ(N_i) intersects Γ: in other words, singularities are allowed, but not up to the boundary. In principle, this is the situation depicted in Figure 3.1. It is not hard, however, to observe soap films with curves of singular points extending up to the wire frame, so we do not expect this assumption to be generic.
The problem of checking Definition 3.6 on some classes of examples, or even in simple explicit situations like the one described in Example 3.1, seems delicate. In the next proposition we address the case of graphs over convex boundaries.
Proposition 3.8. If Ω ⊂ R n × {0} is a bounded connected open set with smooth and convex boundary, and if Γ ⊂ R n+1 is the graph of a smooth function u over ∂Ω, then Γ spans finitely many minimal surfaces in the sense of Definition 3.6.
Proof. Let us assume without loss of generality that 0 ∈ Ω. Let V = var(N, θ) be an integral varifold with bounded support satisfying (3.26). We first prove that spt V is contained in cl(Ω × R), where cl(A) denotes the closure of $A \subset \mathbb{R}^{n+1}$. Indeed, let $H_\Omega$ denote the mean curvature of ∂Ω with respect to the outer unit normal to Ω, and consider the open cylinders K(t) = t(Ω × R) for t > 1. Since the support of V is bounded, for t large enough we have spt V ⋐ K(t). If t* = inf{t : spt V ⋐ K(t)}, then t* < ∞, and thus there exists x = (x′, x_{n+1}) ∈ spt V ∩ ∂K(t*) such that the smooth surface ∂K(t*) touches spt V from above at x, in the ordering induced by $\nu_{\partial K(t^*)}(x) = \nu_\Omega(x^*)$, where x* := (x′/t*, 0). Let us assume that $x \in \mathbb{R}^{n+1} \setminus \Gamma$. Since ∂K(t*) is smooth, $H_{\partial K(t^*)}(x) \cdot \nu_{\partial K(t^*)}(x) = H_\Omega(x^*)/t^* \ge 0$, and V is minimal in a neighborhood of x; by the strong maximum principle of Schätzle [Sch04, Theorem 6.2] this is possible only if, locally at x, ∂K(t*) is contained in spt V. Since spt V is anyway contained in cl(K(t*)), by a continuity argument, and by the connectedness of ∂K(t*), we obtain ∂K(t*) ⊂ spt V. This would be a contradiction, since spt V is bounded. Thus it must be that x ∈ Γ, i.e. t* = 1, and spt V ⊂ cl(Ω × R).
The classical theory of the area integrand (see, e.g., [Giu03, Chapter 1]) implies the existence of a smooth extension of u to the whole of Ω, still denoted u, solving the minimal surface equation

$$\mathrm{div}\left( \frac{\nabla u}{\sqrt{1 + |\nabla u|^2}} \right) = 0 \qquad \text{in } \Omega,$$

so that G(u) = {(z, u(z)) : z ∈ cl(Ω)} is a minimal surface spanned by Γ. Setting N₁ := G(u), properties (i) and (ii) in Definition 3.6 are clearly satisfied by N₁.
We finally prove that V = q var(G(u)) for some q ∈ N. Since spt V is bounded and contained in the closure of Ω × R, we find that

$$s^* := \inf\big\{ s \,:\, x_{n+1} < s + u(z) \ \ \forall (z, x_{n+1}) \in \mathrm{spt}\, V \big\}$$

is finite. In particular, s* e_{n+1} + G(u) touches spt V from above in the ordering induced by e_{n+1}. If the touching point x does not belong to Γ, then, again by Schätzle's strong maximum principle, we find that s* e_{n+1} + G(u) ⊂ spt V. But then spt V would have a contact point with ∂Ω × R outside of Γ, where V is minimal, and thus the strong maximum principle would imply ∂Ω × R ⊂ spt V, once again against the boundedness of spt V. The touching point x of s* e_{n+1} + G(u) and spt V must thus lie on Γ, so that s* = 0, and $x_{n+1} \le u(z)$ whenever (z, x_{n+1}) ∈ spt V. An entirely similar argument, applied from below, shows that $x_{n+1} \ge u(z)$ whenever (z, x_{n+1}) ∈ spt V. We have thus proved that G(u) = spt V. The constancy theorem for integral varifolds [Sim83, Theorem 41.1] then implies that V = q var(G(u)) for a constant q ∈ N.
The compactness theorem
We are finally ready to state and prove our main compactness theorem.
Theorem 4.1 (Compactness theorem for almost-minimal surfaces). Let Γ be a smooth (n − 1)-dimensional compact orientable manifold without boundary in $\mathbb{R}^{n+1}$, and let $⟦\Gamma⟧$ be the (n − 1)-current corresponding to the choice of an orientation $\tau_\Gamma$ on Γ. Assume that Γ is accessible from infinity (see Definition 3.2) and that Γ spans finitely many minimal surfaces without boundary singularities (see Definition 3.6).
Let $\{M_j\}_j$ be a sequence of smooth n-dimensional surfaces, oriented by smooth unit normal vector fields $\nu_{M_j}$, with smooth boundaries $\Gamma_j$ oriented in such a way that $\partial ⟦M_j⟧ = ⟦\Gamma_j⟧$, and satisfying

$$M_j \subset B_R \ \text{for a fixed } R > 0, \qquad \sup_j \mathcal{H}^n(M_j) < \infty, \qquad \delta_1(M_j) \to 0. \tag{4.1}$$

Assume that $\Gamma_j$ converges to Γ, in the sense that there exist Lipschitz maps $f_j : \Gamma \to \Gamma_j$ with $\Gamma_j = f_j(\Gamma)$ and

$$f_j \to \mathrm{Id}_\Gamma \quad \text{in } C^1 \text{ as } j \to \infty. \tag{4.2}$$

Then, there exist an $\mathcal{H}^n$-rectifiable set N, given by a union of minimal surfaces as in Definition 3.6 spanning all the connected components of Γ, and Borel vector fields $\nu_N : N \to S^n$ and $\nu : \Gamma \to S^n$ with

$$\int_N \mathrm{div}^N X \, d\mathcal{H}^n = \int_\Gamma X \cdot \nu \, d\mathcal{H}^{n-1} \qquad \forall X \in C^1_c(\mathbb{R}^{n+1}; \mathbb{R}^{n+1}), \tag{4.6}$$

such that, up to subsequences,

$$⟦M_j⟧ \to ⟦N, \star\nu_N, 1⟧ \ \text{as currents}, \tag{4.5}$$

$$\mathrm{var}(M_j) \to \mathrm{var}(N, 1) \ \text{as varifolds}. \tag{4.7}$$

In particular, both limits have multiplicity one, and no bubbling occurs.

Remark 4.1. A point that we are not trying to formalize here is that, in situations like the one considered in Figure 3.1, when Σ(N), if present, is "classical", then one can actually prove that Σ(N) = ∅, thus concluding that smooth $M_j$'s cannot converge to minimal surfaces with singularities. To illustrate the idea, let Γ₁ and Γ₂ be the circles of Example 3.1, and fix orientations on Γ₁ and Γ₂ in order to define the associated currents $⟦\Gamma_1⟧$ and $⟦\Gamma_2⟧$. Suppose by contradiction that, as a limit of a sequence $⟦M_j⟧$ of almost-minimal surfaces with $\partial ⟦M_j⟧ = ⟦\Gamma⟧ := ⟦\Gamma_1⟧ + ⟦\Gamma_2⟧$, one obtains the singular minimal surface N = K ∪ K′ ∪ D formed by gluing two catenoids K and K′ to a disk D along the boundary circle Σ = ∂D with 120-degree angles. Assign orientations to K, K′ and D in such a way that

$$\partial ⟦K⟧ = ⟦\Gamma_1⟧ + \sigma_1 ⟦\Sigma⟧, \qquad \partial ⟦K'⟧ = ⟦\Gamma_2⟧ + \sigma_2 ⟦\Sigma⟧, \qquad \partial ⟦D⟧ = ⟦\Sigma⟧,$$

with $\sigma_1, \sigma_2 \in \{-1, 1\}$. The limit current T of the sequence $⟦M_j⟧$ must then satisfy

$$T = \alpha_1 ⟦K⟧ + \alpha_2 ⟦K'⟧ + \alpha_3 ⟦D⟧, \qquad \partial T = ⟦\Gamma_1⟧ + ⟦\Gamma_2⟧.$$

Since T is the limit of the currents defined by the $M_j$'s, we also have $\alpha_1, \alpha_2, \alpha_3 \in \{-1, 1\}$, which implies $\alpha_1 = 1 = \alpha_2$ and $\sigma_1 + \sigma_2 + \alpha_3 = 0$; this is impossible, given $\sigma_1, \sigma_2, \alpha_3 \in \{-1, 1\}$. A general argument along these lines can be repeated when an odd number of half-surfaces is assumed to meet along the points of Σ(N).
Before giving the proof of the theorem, we need to introduce some notation. Given an n-dimensional varifold V on $\mathbb{R}^{n+1}$, that is, a Radon measure on $G_n = \mathbb{R}^{n+1} \times (S^n/\equiv)$ as described in section 3, we denote by ‖V‖ the weight of V, i.e. the Radon measure on $\mathbb{R}^{n+1}$ obtained by pushing V forward through the projection $(x, [\nu]) \mapsto x$. Given an integral current $T = ⟦N, \star\nu_N, \alpha⟧$, we denote by $V_T = \mathrm{var}(N, |\alpha|)$ the varifold induced by T.
Proof of Theorem 4.1.
Step one: we start by discussing the varifold limit of the $M_j$'s. By the area formula and by (4.2) we have

$$\mathcal{H}^{n-1}(\Gamma_j) \to \mathcal{H}^{n-1}(\Gamma) \quad \text{as } j \to \infty. \tag{4.8}$$

Setting $V_j := \mathrm{var}(M_j)$, by the tangential divergence theorem we have

$$\int_{M_j} \mathrm{div}^{M_j} X \, d\mathcal{H}^n = \int_{\Gamma_j} X \cdot \nu^{\Gamma_j}_{M_j} \, d\mathcal{H}^{n-1} - \int_{M_j} X \cdot \mathbf{H}_{M_j} \, d\mathcal{H}^n \tag{4.9}$$

for every $X \in C^1_c(\mathbb{R}^{n+1}; \mathbb{R}^{n+1})$. In particular, (4.8) and $\delta_1(M_j) \to 0$ imply

$$\limsup_j\ \sup\Big\{ \int_{M_j} \mathrm{div}^{M_j} X \, d\mathcal{H}^n \,:\, |X| \le 1 \Big\} < \infty, \tag{4.10}$$

while at the same time $\|V_j\|(\mathbb{R}^{n+1}) = \mathcal{H}^n(M_j)$. By (4.1), the supports of the $V_j$'s are contained in a fixed ball and $\sup_j \|V_j\|(\mathbb{R}^{n+1}) < \infty$; hence, up to extracting subsequences, $V_j \to V$ as varifolds, where V is an integral varifold with bounded support and bounded first variation, satisfying

$$\langle V, \varphi_X \rangle = 0 \qquad \forall X \in C^1_c(\mathbb{R}^{n+1} \setminus \Gamma; \mathbb{R}^{n+1}). \tag{4.11}$$

Indeed, (4.2) implies that if $\mathrm{spt}\, X \subset \mathbb{R}^{n+1} \setminus \Gamma$, then $\mathrm{spt}\, X \subset \mathbb{R}^{n+1} \setminus \Gamma_j$ for every j large enough. Thus, $\delta_1(M_j) \to 0$ and (4.9) give $\int_{M_j} \mathrm{div}^{M_j} X \, d\mathcal{H}^n \to 0$ for every $X \in C^1_c(\mathbb{R}^{n+1} \setminus \Gamma; \mathbb{R}^{n+1})$, as claimed. Since Γ spans finitely many minimal surfaces without boundary singularities, (4.11) implies the existence of finitely many compact $\mathcal{H}^n$-rectifiable sets $\{N_i\}_{i=1}^k$ and of $q_i \in \mathbb{N}$ such that

$$V = \sum_{i=1}^k q_i\, \mathrm{var}(N_i), \tag{4.12}$$

where, for each i, $N_i \setminus \Gamma$ is connected, there exist a finite union $\Gamma^{(i)} = \bigcup_{m \in I^{(i)}} \Gamma_m$ of connected components of Γ with

$$\Sigma(N_i) \cap \Gamma = \emptyset \tag{4.13}$$

and a vector field $\nu^{co}_i : \Gamma^{(i)} \to S^n$ with

$$\int_{N_i} \mathrm{div}^{N_i} X \, d\mathcal{H}^n = \int_{\Gamma^{(i)}} X \cdot \nu^{co}_i \, d\mathcal{H}^{n-1} \qquad \forall X \in C^1_c(\mathbb{R}^{n+1}; \mathbb{R}^{n+1}). \tag{4.14}$$

Moreover, for each i, Reg•(N_i) has finitely many connected components $\{N_{i,\ell}\}_{\ell=1}^{L(i)}$ such that, for each ℓ, $\mathrm{cl}(N_{i,\ell}) \setminus \Sigma(N_i)$ is an orientable, smooth n-dimensional surface with boundary, whose boundary points are contained in $\Gamma^{(i)}$. As noticed in Remark 3.7, (4.14) and Allard's regularity theorem imply

$$\mathcal{H}^n(\Sigma(N_i)) = 0. \tag{4.15}$$

In particular, $N_i$ is $\mathcal{H}^n$-equivalent to Reg•(N_i), so that we can rewrite (4.12) as

$$V = \sum_{i=1}^k \sum_{\ell=1}^{L(i)} q_{i,\ell}\, \mathrm{var}(N_{i,\ell}),$$

with $q_{i,\ell} = q_i$ for every ℓ = 1, . . . , L(i).
Step two: we now take the limit of the $M_j$'s in the sense of currents. Setting $T_j := ⟦M_j⟧$, by (4.8), $\sup_j \mathcal{H}^n(M_j) < \infty$, and the Federer-Fleming compactness theorem [FF60] (see also [Sim83, Theorem 27.3]), we have that $T_j \to T$ in the sense of currents, up to extracting subsequences, where T is an integral current. The C¹-convergence of $\Gamma_j$ to Γ, $T_j \to T$, and $\partial T_j = ⟦\Gamma_j⟧$ are easily seen to imply $\partial T = ⟦\Gamma⟧$. Moreover, it is easily seen that, as Radon measures on $\mathbb{R}^{n+1}$,

$$\|T\| \le \|V\|, \tag{4.17}$$

since the mass of currents is lower semicontinuous, the weight of varifolds is continuous along sequences with uniformly bounded supports, and $\|T_j\| = \mathcal{H}^n \llcorner M_j = \|V_j\|$. By (4.12),

$$\|T\| \le \sum_{i=1}^k q_i\, \mathcal{H}^n \llcorner N_i. \tag{4.18}$$

Next we introduce the integral n-currents $T_{i,\ell} := T \llcorner N_{i,\ell}$. Notice that $N_{i,\ell}$ is a smooth, connected n-dimensional surface, and that $\mathrm{spt}(\partial T_{i,\ell}) \cap N_{i,\ell} = \emptyset$ thanks to (4.13). By the constancy theorem for integral currents (cf. [Sim83, Theorem 26.27]), we find $\alpha_{i,\ell} \in \mathbb{Z}$ and realizations $⟦N_{i,\ell}⟧$ of the $N_{i,\ell}$'s as multiplicity-one integral currents such that

$$T_{i,\ell} = \alpha_{i,\ell}\, ⟦N_{i,\ell}⟧. \tag{4.19}$$

Since $\mathcal{H}^n(\Sigma(N_i)) = 0$, (4.18) implies that $T = \sum_{i,\ell} T_{i,\ell}$. Applying the boundary operator in the sense of currents to (4.19), and recalling that $\partial T = ⟦\Gamma⟧$, we find that

$$⟦\Gamma⟧ = \sum_{i,\ell} \alpha_{i,\ell}\, \partial ⟦N_{i,\ell}⟧. \tag{4.20}$$

Recall that $\mathrm{cl}(N_{i,\ell}) \setminus \Sigma(N_i)$ is a smooth surface with boundary, with boundary points contained in $\Gamma^{(i)}$. If $\Gamma_m$ is one of the components of $\Gamma^{(i)}$, then there is exactly one ℓ such that $\Gamma_m \cap \mathrm{Reg}_b[\mathrm{cl}(N_{i,\ell}) \setminus \Sigma(N_i)] \ne \emptyset$. In particular, localizing (4.20) to $\Gamma_m$, and setting $⟦\Gamma_m⟧ = ⟦\Gamma⟧ \llcorner \Gamma_m$, we have

$$⟦\Gamma_m⟧ = \sum_{i,\ell\,:\, \Gamma_m \subset \mathrm{cl}(N_{i,\ell})} \alpha_{i,\ell}\, \partial ⟦N_{i,\ell}⟧ \llcorner \Gamma_m, \tag{4.21}$$

and since $\Gamma_m$ itself is connected, for suitable $\sigma^m_{i,\ell} \in \{\pm 1\}$ we deduce from (4.21) that

$$\sum_{i,\ell\,:\, \Gamma_m \subset \mathrm{cl}(N_{i,\ell})} \sigma^m_{i,\ell}\, \alpha_{i,\ell} = 1 \qquad \text{for every } m = 1, \dots, M. \tag{4.22}$$

Step three: we now link T to V. Let $V_T$ denote the integral varifold associated with T, that is, $V_T = \mathrm{var}(N, |\alpha|)$. Taking into account that $\Gamma_j$ converges to Γ in C¹, $V_j \to V$ as varifolds, and $T_j \to T$ as currents, we are allowed to apply White's theorem [Whi09, Theorem 1.2] to deduce the existence of an integral varifold W such that $V = V_T + 2W$. The integrality condition on W in turn yields the existence of $\beta_{i,\ell} \in \mathbb{N}$ such that

$$q_{i,\ell} = |\alpha_{i,\ell}| + 2\beta_{i,\ell}. \tag{4.25}$$

In particular, for every m = 1, . . . , M,

$$\sum_{i\,:\, m \in I^{(i)}} q_i \quad \text{is odd}. \tag{4.26}$$

Indeed, using a ≡ b mod(2) as a shorthand for saying that a and b have the same parity, (4.25) implies that $q_{i,\ell} \equiv \alpha_{i,\ell}$ mod(2); at the same time, $\sigma^m_{i,\ell}\, \alpha_{i,\ell} \equiv \alpha_{i,\ell}$ mod(2), so that, taking (4.22) into account, (4.26) holds.
Step four: by (4.12) and (4.14), the first variation of V is concentrated on Γ, with density $\sigma(x) := \sum_{i\,:\, x \in \Gamma^{(i)}} q_i\, \nu^{co}_i(x)$ with respect to $\mathcal{H}^{n-1} \llcorner \Gamma$. This implies $\sigma(x) \in (T_x\Gamma)^\perp$ for $\mathcal{H}^{n-1}$-a.e. x ∈ Γ, as well as

$$|\sigma(x)| \le 1 \quad \text{for } \mathcal{H}^{n-1}\text{-a.e. } x \in \Gamma. \tag{4.34}$$

We are now ready to prove that

$$\sum_{i\,:\, m \in I^{(i)}} q_i = 1 \qquad \text{for every } m = 1, \dots, M. \tag{4.28}$$

Thanks to (4.26), for every m ∈ {1, . . . , M} we can find p ∈ N ∪ {0} such that $\sum_{i\,:\, m \in I^{(i)}} q_i = 2p + 1$, and we want to show that it must always be p = 0. Since Γ is accessible from infinity, we can select $x_0 \in \Gamma_m$ such that (4.34) holds at $x = x_0$, and such that there exists a wedge W (strictly contained in a half-space) with vertex at $x_0$ and containing $\Gamma^{co}$. Up to rigid motions, we assume that $x_0 = 0$ and that

$$W = \big\{ x \in \mathbb{R}^{n+1} : x_1 \ge |(x_1, x_{n+1})| \cos\varphi \big\} \quad \text{for some } \varphi < \pi/2.$$

The n-plane $\pi := e_1^\perp = \{x_1 = 0\}$ is then a supporting hyperplane to $\Gamma^{co}$ at $x_0 = 0$. Furthermore, since $x_0 = 0$ is a point on $\Gamma_m \subset \Gamma$, the tangent space $T_0\Gamma$ is a linear subspace of π. We may assume that $T_0\Gamma = \{x_1 = 0 = x_{n+1}\}$. Finally, by the classical convex hull property of minimal surfaces, we have $N_i \subset \Gamma^{co} \subset W$ for every i. Now, for every i such that $m \in I^{(i)}$, $\nu^{(i)} := -\nu^{co}_i(0)$ is a unit vector in the two-dimensional plane $(T_0\Gamma)^\perp = \{x_j = 0 \text{ for } j = 2, \dots, n\}$. In the coordinates $(x_1, x_{n+1})$, thanks to $N_i \subset W$, we find that $\nu^{(i)}$ points inwards W, and thus that $\nu^{(i)} = (\cos\theta_i, \sin\theta_i)$ for some $|\theta_i| \le \varphi$.
If {i 1 , . . . , i r(m) } ⊂ {1, . . . , k} is the set of indexes i such that m ∈ I (i) , we define the vectors v 1 , . . . , v 2p+1 by setting so that, by (4.34) applied at x = x 0 = 0, has length ≤ 1. We conclude the proof by showing that, if p ≥ 1, then A proof of (4.36) is in [DR16,Lemma 6.16]. For the reader's convenience and for the sake of clarity, we verbatim repeat the argument used in [DR16]. First, we order the vectors v h in such a way that θ 1 ≤ θ 2 ≤ · · · ≤ θ 2p+1 . For every j ≤ p, set w j := v j + v 2p+2−j . Using simple geometric considerations, one immediately sees that w j is a positive multiple of the vector Since θ j ≤ θ p+1 ≤ θ 2p+2−j , the angle between the vectors w j and v p+1 is so that w j · v p+1 > 0. Then, we can use the Cauchy-Schwarz inequality to estimate This proves (4.36).
Step five: We conclude the proof. By (4.28), for every m = 1, ..., M , adding up over those i such that m ∈ I (i) , we find q i = 1. By exploiting this fact, we find that: • q i ∈ {0, 1} for every i ∈ {1, . . . , k}; in other words, it cannot be q i ≥ 2; • if q i = 1, then q i ′ = 0 for any i ′ = i such that I (i) ∩ I (i ′ ) = ∅: hence, for every m = 1, . . . , M there is one and only one i = i m with m ∈ I (im) and q im = 1; • from (4.25): since q i ∈ {0, 1} for every i, β i,ℓ = 0 for every i ∈ {1, . . . , k} and ℓ ∈ {1, . . . , L(i)}. Thus, if q i = 1 then α i,ℓ = ±1 for every ℓ; if q i = 0 then α i,ℓ = 0 for every ℓ. We can thus argue as follows. We set m 1 := 1, and let i 1 be the only index in {1, . . . , k} such that 1 ∈ I (i 1 ) and q i 1 = 1. Next, let m 2 := min{m ∈ 1, . . . , M : m / ∈ I (i 1 ) }, and let i 2 be the corresponding index. Proceeding inductively, after a finite number h of steps the set {m : m / ∈ I (i 1 ) ∪ · · · ∪ I (i h ) } will be empty. We finally set and claim that N satisfies the conclusions of the theorem. In order to verify (4.6), we define ν : Γ → S n by and use (4.14). Noticing that q i = 0 if i = i r for every r = 1, ..., h, and q i = 1 otherwise, we see that so that var (M j ) → var (N ), which is the second conclusion in (4.7); and as for the first conclusion in (4.7), T ≤ V implies with α ir,ℓ = ±1. Taking into account that H n (Γ ∪ h r=1 Σ(N ir )) = 0, we can now define a Borel orientation ν N : N → S n by setting ν N | N ir ,ℓ := α ir,ℓ ν N ir ,ℓ , where ν N ir ,ℓ is the orientation defining the current N ir,ℓ . With this definition, equation (4.37) reads which implies that M j → N, ⋆ν N , 1 in the sense of currents. This completes the proof of (4.5), thus of the theorem.
Sharp decay estimates
In this last section we refine the conclusions of Theorem 4.1 with sharp quantitative estimates under the additional assumptions that: (i) the boundaries of the surfaces M j are fixed, i.e., we assume Γ j = Γ; (ii) for some fixed p > n, (5.1) and (iii) the limit minimal surface N is classical, that is, Σ(N ) = ∅. Under these assumptions, by combining Allard's regularity theorem [All72] and the implicit function theorem one can show the existence of smooth functions u j : N → R with u j = 0 on ∂N , and such that Assumption (i) is not really needed to parameterize M j over N . Indeed, one could obtain a global parametrization (possibly with non-trivial tangential components) as soon as Γ j converges to Γ in, say, C 1,α , see [CLM16,LM17]. Assumptions (ii) and (iii) are instead needed to have the quantitative regularity estimates of Allard, and the possibility to apply them to the M j 's for proving the graphicality property. We omit the details of the argument leading to the existence of the functions u j , since it has appeared many other times in the literature: for instance, see [FM11,CLM16,LM17,KM17,CM17].
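In standard notation, with ν_N the unit normal to N, the graphical representation invoked here can be sketched as follows (a plausible form consistent with the surrounding text, not a verbatim quotation; the precise smallness and convergence statements are those required by Allard's estimates):
\[
M_j \;=\; \psi_{u_j}(N) \;=\; \bigl\{\, x + u_j(x)\,\nu_N(x) \;:\; x \in N \,\bigr\}, \qquad u_j = 0 \ \text{on } \partial N ,
\]
with \(\|u_j\|_{C^1(N)}\) small for all j large enough.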
We now collect some formulas concerning the geometry of almost-flat normal Lipschitz graphs over a smooth compact embedded orientable n-dimensional surface N ⊂ R n+1 , and prove a basic C 0 -estimate; see in particular (5.13) below, whose proof should be compared with the argument of [KM17, Section 4]. Let us consider a Lipschitz function u : N → R with u = 0 on ∂N and u C 0 (N ) + Lip(u) ≤ ε small enough depending on N . We set ψ u (p) := p + u(p)ν N (p) , p ∈ N , and let Ψ u := ψ u (N ). We also assume that Ψ u has a distributional mean curvature H Ψu ∈ L 1 (Ψ u ), so that where ν Ψu is the normal to Ψ u induced by ν N through ψ u . By the area formula, it holds for every bounded Borel measurable function g on Ψ u that For every ϕ ∈ C 1 c (N ) and t in a neighborhood of 0, we consider the variation where we denote by π N : B ε 0 (N ) → N the smooth nearest point projection of the ε 0 -neighborhood of N onto N , and where of course we are assuming ε < ε 0 . By the standard first variation formula for the area applied to Ψ u we find that Since π N restricted to Ψ u is the inverse of ψ u , we have We now want to compute H Ψu by using local coordinates. Let us cover N by open sets A ⊂ R n+1 such that at every p ∈ A ∩ N we can define an orthonormal frame {τ i (p)} n i=1 for T p N with ∇ τ i ν N = κ i τ i , where κ 1 ≤ κ 2 ≤ · · · ≤ κ n denote the principal curvatures of N . Setting ∂ i u = ∇ τ i u and Du = (∂ 1 u, ..., ∂ n u) ∈ R n , we find (5.6) Noticing that, on A ∩ N , we find that (ν Ψu • ψ u ) · ν N = 1 1 + |D * u| 2 on A ∩ N .
By (5.5) we also have, again on A ∩ N , Thus, if we test (5.4) with ϕ ∈ C 1 c (A ∩ N ), and then we integrate by parts, we obtain To understand the structure of (5.8), we compute for the ξ-gradient of G, (5.9) and for the z-derivative of G, (5.10) By exploiting u C 0 (N ) + Lip(u) < ε, we thus find that for measurable functions a i and b on N ∩ A with where c is a non-negative, bounded measurable function defined on A ∩ N . Overall (5.8) can be rewritten as where we have set for brevity d = J N ψ u / 1 + |D * u| 2 , so that d − 1 L ∞ (A∩N ) ≤ C(N ) ε. We finally formulate (5.11) as an elliptic PDE on a domain of R n . To this end, up to decrease the size of A, we can introduce coordinates on A ∩ N by means of an embedding F : U ⊂ R n → R n+1 of an open set U with smooth boundary in the unit ball of R n with A ∩ N = F (U ) and A ∩ bd (N ) = F (bd (U )). We set σ i = (∂F/∂x i ) • F −1 so that is also a frame of T p N for each p ∈ A ∩ N , and we have for the symmetric, bounded and uniformly elliptic tensor field Notice that the ellipticity of Λ relies on the facts that {v i (x)} n i=1 is a basis of T x R n , F is an embedding, and {τ i (p)} n i=1 is a basis of T p N . Thus we can understand (5.11) as wherec is non-negative and bounded, and provided q > n/2 and assuming that the right-hand side is finite. Changing variables one more time, and exploiting a covering argument, we thus find (5.13) We now assume the strict stability of N , and use the formulas above in order to obtain a sharp quantitative estimates for Lipschitz graphs which only involves a very weak notion of deficit.
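For reference, strict stability of the minimal surface N is usually expressed through the positivity of the second variation of area; a hedged form of the condition appearing in Theorem 5.1, with A_N the second fundamental form of N, is
\[
\int_N |\nabla^N \varphi|^2 - |A_N|^2\,\varphi^2 \, d\mathcal{H}^n \;\ge\; \lambda \int_N \varphi^2 \, d\mathcal{H}^n
\qquad \text{for every } \varphi \in C^1_c(N \setminus \partial N),
\]
for some λ > 0.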
Theorem 5.1 (Weak-deficit estimate on Lipschitz graphs). Let N be a smooth compact orientable n-dimensional surface in R n+1 with boundary, and let u : N → R be a Lipschitz function with u = 0 on ∂N . Consider the almost-mean curvature deficit If H N ≡ 0 and N is strictly stable, in the sense that, for some λ > 0, and if H Ψu ∈ L q (Ψ u ) for some q > n/2, then u C 0 (N ) ≤ C(N, q) H Ψu L 2 (Ψu) + H Ψu L q (Ψu) . (5.18) Proof of Theorem 5.1. We first notice that if H Ψu ∈ L 2 (Ψ u ), then Indeed, by (5.4), by Hölder inequality and by the Poincaré inequality on N , if ϕ ∈ C 1 c (N ) and π N is the normal projection over N , then so that (5.19) immediately follows. In particular, (5.17) is a consequence of (5.15), which we now prove. For the sake of clarity we shall first prove the theorem in the flat case when κ i ≡ 0 for all i = 1, . . . , n, and thus N is an open bounded set with smooth boundary in some n-plane of R n+1 . Clearly we have 0 ≤ a ≤ 1 2 δ(u) ∇u L 2 (N ) .
Having proved (5.24), we now complete the proof as follows. By (5.20), | 15,883 | 2018-07-13T00:00:00.000 | [
"Mathematics"
] |
Global Stability of an SIR Epidemic Model with Delay and General Nonlinear Incidence
An SIR model with distributed delay and a general incidence function is studied. Conditions are given under which the system exhibits threshold behaviour: the disease-free equilibrium is globally asymptotically stable if R0 < 1 and globally attracting if R0 = 1; if R0 > 1, then the unique endemic equilibrium is globally asymptotically stable. The global stability proofs use a Lyapunov functional and do not require uniform persistence to be shown a priori. It is shown that the given conditions are satisfied by several common forms of the incidence function.
1.
Introduction. The prevalence of disease in a population is often described by an SIR model where the population is subdivided into three classes: susceptibles, infecteds and recovereds (or removeds). The simplest forms of these models are ordinary differential equations (ODEs) [10,11]. In [4], a discrete delay model is given to account for transmission by vectors (e.g. mosquitoes), where the delay τ is used to account for a latent period in the vector. Allowing the vectors' latency periods to vary according to some distribution gives a model with a distributed delay [23].
The delay appears in the incidence term, which is typically the only nonlinearity and is therefore the "cause" of all "interesting behaviour". Various forms have been used for the incidence term, both for ODEs and for delay equations. Common forms include mass action βSI [2,18,23], saturating incidence βSI/(1+cI) [3,24], and standard (or proportional) incidence βSI/N [10]. Changing the form of the incidence function can potentially change the behaviour of the system.
In [14] a system of ODEs with a general incidence term f (S, I) is studied. Conditions are found on f under which the standard threshold behaviour occurs: the disease-free equilibrium is globally asymptotically stable for R 0 < 1 and the endemic equilibrium is globally asymptotically stable for R 0 > 1.
The goal of this paper is to present a similar analysis for equations with a bounded distributed delay and a general nonlinear incidence function. The conditions given here are equivalent to those given in [14] for the ODE case. In Section 7, the conditions are shown to apply to mass action, saturating incidence and, for an SI model, standard incidence.
The case of separable incidence, where f (S, I) = a(S)b(I) is studied in [13] for discrete delay. The authors study an SIR system where the delay is included in b(I), modelling vector transmission, and also an SEIR system where the delay appears in both a(S) and b(I), modelling a fixed duration of latency. The work in this paper extends their SIR analysis to distributed delay, while also allowing for a more general class of incidence functions. Example 3 in Section 7 provides the distributed delay version of their SIR analysis.
The approach here is to use a Lyapunov functional of the type used by Goh for ODE models in ecology [7]. Earlier work [17,19,20,21,22] which used a similar Lyapunov functional relied on knowing a priori that the system was uniformly persistent. This is important because the Lyapunov functional is not defined if any of the state variables are zero.
In this paper, we note that if the delay is bounded, then solutions for which the disease is initially present will move to the interior of the state space and so the Lyapunov function becomes defined. This approach greatly simplifies the work (given that the delay is bounded) since it does not require uniform persistence to be shown; rather, uniform persistence follows from the global stability. Issues related to infinite delay are mentioned briefly in a remark at the end of Section 6.
The paper is organized as follows. The model is described in Section 2. In Section 3, the basic reproduction number R 0 is determined and the equilibria are found. Local stability of the equilibria is studied in Section 4. The global dynamics are resolved for R 0 ≤ 1 in Section 5 and for R 0 > 1 in Section 6. In Section 7, examples are given of incidence functions that satisfy the assumptions that are used throughout the paper. 2. The model. A population is divided into susceptible, infectious and recovered (or removed) classes with sizes S, I and R, respectively. Recruitment of new individuals is into the susceptible class, at constant rate Λ. The death rates for the classes are µ S , µ I and µ R , respectively. The average time spent in class I before recovery (or removal) is 1/γ. Thus, the total exit rate for infectives is µ I + γ, which, for biological reasons we assume is at least as large as µ S ; that is, µ I + γ ≥ µ S . Transmission of the disease is through vectors which undergo fast dynamics. Following [4] and [23], the vectors can be omitted from the equations by including a distributed delay τ in the incidence term up to a maximum delay h > 0. The incidence at time t is β h 0 k(τ )f (S(t), I(t − τ ))dτ where k is a Lebesgue integrable function which gives the relative infectivity of vectors of different infection ages. We choose β so that h 0 k(τ )dτ = 1. It is assumed that the support of k has positive measure in any open interval having supremum h so that the interval of integration is not artificially extended by concluding with an interval for which the integral is automatically zero.
The form of the function f is of fundamental importance. In this paper we want to work with a function that is as general as possible, but still possesses the properties necessary for conclusions to be made through mathematical analysis. Because of this, we will introduce conditions on f which may appear technical. However, as shown in Section 7, many commonly used incidence functions satisfy these conditions. For now, we assume only the following.
(H1) f is a non-negative differentiable function on the non-negative quadrant. Furthermore, f is positive if and only if both arguments are positive.
The partial derivatives of f are denoted by f 1 and f 2 . In Sections 4, 5 and 6, it will be shown how extra conditions on f imply the local and global stability of either a disease-free equilibrium or an endemic equilibrium.
In order to avoid excessive use of parentheses in some of the later calculations, we use the notation S = S(t), I = I(t) and Iτ = I(t − τ). The model equations for S, I and R follow from the assumptions above. Since R does not appear in the equations for dS/dt and dI/dt, it is sufficient to analyze the behaviour of solutions of the (S, I) subsystem, which we refer to as (1). The initial condition for (1) is taken in R≥0 × C, with the I-component given by a continuous non-negative function on [−h, 0]. Standard theory of functional differential equations [8] can be used to show that solutions of (1) exist and are differentiable for all t > 0. Furthermore, the state space R≥0 × C is positively invariant.
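Based on the recruitment, death, recovery and delayed incidence terms described above, a plausible form of the subsystem (1) is (a reconstruction consistent with the text rather than a verbatim copy of the original display):
\[
\begin{aligned}
\frac{dS}{dt} &= \Lambda - \mu_S S(t) - \beta \int_0^h k(\tau)\, f\bigl(S(t), I(t-\tau)\bigr)\, d\tau,\\
\frac{dI}{dt} &= \beta \int_0^h k(\tau)\, f\bigl(S(t), I(t-\tau)\bigr)\, d\tau - (\mu_I + \gamma)\, I(t),
\end{aligned}
\]
with the recovered (removed) class satisfying dR/dt = γ I(t) − μ_R R(t).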
Note that d(S + I)/dt = Λ − μS S − (μI + γ)I ≤ Λ − μS(S + I), and so lim sup(S(t) + I(t)) ≤ Λ/μS as t → ∞. It follows that the system is point dissipative. Without loss of generality, we assume that S(t) + I(t) ≤ 2Λ/μS for all t ≥ −h. A consequence of this is that we may assume I is bounded above, which in turn implies dS/dt is positive for small S, and so S is positive for t > 0.
We say that disease is initially present if the initial condition satisfies I(θ 0 ) > 0 for some θ 0 ∈ [−h, 0]; since the initial condition is continuous, this means that I is positive on some interval about θ 0 . Then, either I(t) is positive for some t ∈ [0, θ 0 + h] or dI dt (θ 0 + h) > 0, and so I becomes positive. In either case, there exists t 1 ≥ 0 such that I(t 1 ) > 0. Then for t ∈ [t 1 , t 1 +h], we have dI dt ≥ −(µ I +γ)I(t) and so I is strictly positive on this interval. Furthermore, I(t) will remain bounded below by the exponential, Therefore, without loss of generality, we assume that any initial condition for which the disease is initially present satisfies I(θ) > 0 for all θ ∈ [−h, 0]. Furthermore, we assume that for any t ≥ 0, we have I(t + θ) bounded away from 0 for all θ ∈ [−h, 0]. This is useful in Section 6 when evaluating a Lyapunov functional along solutions.
3. Equilibria and R0. For any values of the parameters, the disease-free equilibrium is given by E0 = (S0, 0), where S0 = Λ/μS. The basic reproduction number R0 [5] for the model is given by Equation (2) and depends on the behaviour of f near E0. The presence and number of endemic equilibria depend on the form of the nonlinearity f, as well as the values of the parameters. In searching for equilibria, we note that the equilibria of Equation (1) are the same as the equilibria of the corresponding ordinary differential equation system. Sufficient conditions for the existence of an endemic equilibrium are given in [6,14,15]. Here, we give the following result.
Theorem 3.1. If R0 > 1, then system (1) has an endemic equilibrium E* = (S*, I*).
Proof. We look for solutions (S*, I*) of the equations dS/dt = 0, dI/dt = 0. We first note that dS/dt + dI/dt = 0 implies Λ − μS S* − (μI + γ)I* = 0, and so S* = (Λ − (μI + γ)I*)/μS.
The function H is continuous, and so a sufficient condition for H to have a zero in (0, Λ/(μI + γ)) is that H is increasing at 0. Thus, there is an endemic equilibrium if (3) holds. Since f(S, 0) = 0 for all S, it follows that f1(E0) = 0, and so (3) is equivalent to R0 > 1.
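With S0 = Λ/μS, the threshold quantity consistent with the computation above is presumably
\[
R_0 \;=\; \frac{\beta\, f_2(S_0, 0)}{\mu_I + \gamma},
\]
so that, since f1(E0) = 0, condition (3) reduces to β f2(S0, 0) > μI + γ, i.e., R0 > 1.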
4.
Local stability of the equilibria.
Proof. We begin by linearizing Equation (1) at E 0 . In doing so, we note that f (S, 0) = 0 for all S and so f 1 (E 0 ) = 0. The linearization is Substituting the Ansatz (s(t), i(t)) = e λt (s 0 , i 0 ) into (4) gives Cancelling e λt from each term and rearranging gives the homogeneous linear equation .
There exist non-zero solutions if and only if det(A 0 ) is zero. Thus, the characteristic equation is We show that all solutions λ have negative real part. Suppose λ has non-negative real part. Then, λ + µ S = 0. Also, and so λ cannot be a solution of (5). Hence, all characteristic roots have negative real part and therefore E 0 is locally asymptotically stable [16, Chapter 2, Theorem 4.2].
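Given the linearization at E0 and f1(E0) = 0, the characteristic equation (5) presumably factors as
\[
(\lambda + \mu_S)\Bigl(\lambda + \mu_I + \gamma - \beta f_2(E_0) \int_0^h k(\tau)\, e^{-\lambda \tau}\, d\tau\Bigr) = 0 .
\]
For Re λ ≥ 0 and R0 < 1 the second factor cannot vanish, since |β f2(E0) ∫_0^h k(τ)e^{−λτ} dτ| ≤ β f2(E0) < μI + γ ≤ |λ + μI + γ|, which matches the estimate used in the proof.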
We now give conditions on f that are used here to show that an endemic equilibrium is locally asymptotically stable, and in Section 6 to show that it is globally asymptotically stable. As a precondition, we assume that R 0 > 1, guaranteeing the existence of an endemic equilibrium E * = (S * , I * ) (see Theorem 3.1). (H4) Either f 1 (S * , I * ) > 0 or f 2 (S * , I * ) < f (S * ,I * ) In order to appreciate that the hypothesis (H4) is not very restrictive, we consider (H2) in a neighbourhood of S = S * , deducing Similarly, considering (H3) at S = S * and in a neighbourhood of I = I * , we deduce We can now see that (H4) is merely requiring that at least one of (H2) and (H3) leads to a strict inequality. On the other hand if (H4) fails to be satisfied, then the endemic equilibrium is still globally attractive (see Section 6) but is not locally asymptotically stable, as the characteristic equation of the linearization at E * will have λ = 0 as a root. Proof. The linearization of Equation (1) at an endemic equilibrium E * = (S * , I * ) is We demonstrate that all zeros of the characteristic equation have negative real part. In order to find the characteristic equation, we substitute the Ansatz (s(t), i(t)) = e λt (s 0 , i 0 ) into (8) to get Cancelling e λt from each term and rearranging gives the homogeneous linear equation .
There exist non-zero solutions if and only if det(A) is zero. Thus, the characteristic equation is
Since (H2) and (H3) hold, the inequalities (6) and (7) are satisfied. Using the equilibrium equation to replace f (E * ) in (7), it follows that Suppose λ is a solution of (10) with non-negative real part. Then, using (6) and (11), we deduce and so the characteristic equation (10) has solutions λ with non-negative real part only if all of the inequalities in (12) are in fact equality. The final inequality is strict unless f 1 (E * ) = 0 (and λ = 0). The second last inequality is strict unless f 2 (E * ) = f (E * )/I * . Assumption (H4) implies at least one is strict and therefore solutions to (10) must have negative real part. Thus, the endemic equilibrium E * is locally asymptotically stable.
5.
Global asymptotic stability for R 0 ≤ 1. The expression for R 0 given in Equation (2) depends on the behaviour of f near the disease-free equilibrium E 0 , which is locally asymptotically stable for R 0 < 1. Results on the global dynamics for R 0 less than one will necessarily require further assumptions on the form of f . Theorem 5.1. Suppose (H5) and (H6) hold. If R 0 < 1, then the disease-free equilibrium E 0 is globally asymptotically stable. If R 0 = 1, and one of (H7.1) and (H7.2) holds, then E 0 is globally attracting.
Proof. We begin by defining and differentiating the function U + (t), which will be one of the terms involved in a Lyapunov functional U . Let Note that ν(τ ) > 0 for 0 ≤ τ < h since the support of k has positive measure near h, and therefore, I ≥ 0 implies U + (t) ≥ 0 with equality if and only if I is identically zero on the interval [t − h, t].
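A choice of ν consistent with the properties used in the proof (ν(h) = 0, dν/dτ = −βk(τ), and hence ν(0) = β by the normalization of k) is presumably
\[
\nu(\tau) \;=\; \beta \int_\tau^h k(s)\, ds , \qquad 0 \le \tau \le h ,
\]
which is positive for 0 ≤ τ < h because the support of k has positive measure near h.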
We now find the time-derivative of U + .
Using integration by parts, we obtain From (13), it follows that ν(h) = 0 and dν dτ = −βk(τ ). Using these, as well as the expression for ν(0), we find Next, recall that (H1) requires that f (S, I) be positive if S and I are both positive. Combined with (H6), this implies f 2 (S, 0) > 0 for S > 0, which allows us to make the following definition without fear of division by zero. Let Then dG dx = 1 − f2(S0,0) f2(x,0) which (H5) implies changes from non-positive to nonnegative as x increases through S 0 . Thus, G is minimized at S 0 with G(S 0 ) = 0. Thus, G(x) ≥ 0 for all x > 0. Let Then U (t) is non-negative for S > 0 and I ≥ 0. Using (14) and Λ = µ S S 0 , we obtain Recalling that lim sup(S + I) ≤ Λ µS , we let A 0 = {(S, I) ∈ (0, Λ µS ] × C : dU dt = 0} and let M 0 be the largest invariant subset of A 0 . If dU dt is non-positive, then the Lyapunov-LaSalle Theorem [8, Theorem 5.3.1] implies every omega limit point is contained in M 0 . Case 1: R 0 < 1. By (H5) and (H6), we deduce with equality only if I(t) = 0. So, I is zero at each point in A 0 . Within the set A 0 , we have dS dt = Λ − µ S S, and so M 0 consists of just the point (S 0 , 0). Thus, E 0 is globally attracting. By Theorem 4.1, E 0 is locally asymptotically stable, and so we may conclude that it is globally asymptotically stable.
Case 2: R0 = 1. Recalling that the support of k has positive measure near h, that solutions are continuous and that f is positive when its arguments are positive, we must have I(t − τ) = 0 for τ near h. Since dS/dt = 0 must hold for all t, this implies I is identically zero in M0. Thus, M0 consists of only the point (S0, 0), and E0 is globally attracting. 6. Global asymptotic stability for R0 > 1. In this section, we resolve the global dynamics for R0 > 1, given that certain assumptions on f are satisfied. We recall that Theorem 3.1 implies an endemic equilibrium E* exists if R0 > 1.
Theorem 6.1. Suppose R 0 > 1, and (H2) and (H3) hold. Then the endemic equilibrium E * is unique and all solutions for which the disease is initially present tend to E * . If (H4) also holds, then E * is globally asymptotically stable.
Proof. The uniqueness of E* will follow from the fact that it is globally attracting. We now work towards demonstrating the attractivity of E*. Evaluating both sides of (1) at E* gives Λ = μS S* + β ∫_0^h k(τ) f(S*, I*) dτ (15) and β ∫_0^h k(τ) f(S*, I*) dτ = (μI + γ)I* (16), which will be used as substitutions in the calculations below. Let U denote the Lyapunov functional built from these terms; the key property is that dU/dt ≤ 0 along solutions for which the disease is initially present, and that on the largest invariant subset M of the set where dU/dt = 0, the component I is forced to be a constant, and in fact I = I* for all t. Thus, each element of M satisfies S(t) = S* and I(t) = I* for all t. We may now conclude that lim t→∞ (S(t), I(t)) = (S*, I*) = E*. By Theorem 4.2, if (H4) also holds, then E* is locally asymptotically stable, and so it now follows that it is globally asymptotically stable.
Remark 1. If Equation (1) is modified to have infinite delay, then the basic Lyapunov calculation still works, as long as the delay kernel k is bounded above by a decaying exponential function and the phase space is chosen to be an appropriate fading memory space [1,9,12]. However, it becomes necessary to prove uniform persistence. Even then, since initial conditions could involve I(·) being zero on sets of positive measure, the Lyapunov functional would not be defined for these initial conditions. Since the delay is infinite, the problem would persist for all time. Thus, it becomes necessary to do the Lyapunov calculation for solutions lying in the omega limit sets (or attractor), which by uniform persistence are bounded away from zero. This would show that solutions in the attractor limit to the endemic equilibrium E*. Then, one argues that other solutions must also limit to E*. See [20] for an example of this approach.
7. Examples. We now give examples of incidence functions for which the required hypotheses are satisfied. Example 1: Mass Action. Let f(S, I) = SI. Then hypotheses (H1)-(H6) and (H7.1) are satisfied, and so the global dynamics are determined by the magnitude of R0. The global behaviour of this model was previously studied in [2,18,23] and was fully resolved in [21].
Example 2: Saturating Incidence. Let f(S, I) = SI/(1+cI) for some constant c > 0. Then hypotheses (H1)-(H6), as well as both of (H7.1) and (H7.2), are satisfied, and so the global dynamics are determined by the magnitude of R0. The discrete delay version of this model was previously studied in [24], with the global dynamics being resolved in [22]. The discrete delay version of separable incidence is studied in [13].
Example 4: Standard Incidence. For this example, it is necessary to interpret the R class as removed. Then, the total population is S + I. In this case, standard incidence is given by f(S, I) = SI/(S+I). Hypothesis (H1) is not satisfied since f is not defined at (0, 0). However, since {S = 0} is repelling, we may relax (H1), requiring only that f ∈ C1(R>0 × R≥0 → R≥0), with f(S, I) = 0 if and only if I = 0. This condition is satisfied, as are hypotheses (H2)-(H6) and (H7.2), and so the global dynamics are determined by the magnitude of R0. | 4,844 | 2010-10-01T00:00:00.000 | [
"Mathematics"
] |
Challenges of cloud computing use: A systematic literature review
Background: In recent years, cloud computing has grown rapidly. Cloud computing represents a new model for IT service delivery that typically provides on-demand, self-service access over a network, is dynamically scalable and elastic, and uses pools of often virtualized resources. However, this new paradigm faces diverse challenges on many fronts. Methods: We conducted a systematic literature review of potential challenges of cloud computing. Documents that described challenges of cloud computing were collected, and the identified challenges were grouped into a taxonomy to support a focused international dialogue on solutions. Results: Twenty-three potential challenges were identified and classified into three categories: policy and organizational, technical, and legal. These categories are deeply rooted in well-known challenges of cloud computing. Conclusions: The simultaneous effect of multiple interacting challenges, ranging from technical to intangible issues, has greatly complicated advances in cloud computing adoption. A systematic framework of the challenges of cloud computing will be essential to accelerate the adoption of this technology and to mitigate IT-related cloud computing risks.
specialization are two important factors that the pool-based computing paradigm relies on in order to be set up. The effect of a pool-based model is that physical computing resources become 'invisible' to clients, who in general have no control or knowledge over the exact location, composition, and origin of these resources (e.g., database, CPU, etc.). For example, consumers cannot specify the location where their data will be stored in the Cloud.
Rapid elasticity: One of the characteristics of cloud computing is the elasticity of the resources. This characteristic allows users to provision new resources quickly in order to respond to a sudden rise or fall in load. It is never easy to plan the resources that will be needed for the implementation of an IT service, in particular when this need is constantly evolving. Cloud computing therefore offers a way to provision or release the computing resources necessary for the evolution, or for a peak in the use, of this service.
Measured Service: In order to measure the usage of these resources for each individual consumer via its metering capabilities, the cloud infrastructure can use appropriate mechanisms, despite the fact that computing resources are pooled and shared by multiple customers (i.e., multi-tenancy) [36].
As per NIST, the service model mainly consists of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) (as shown in Fig. 1).
Software-as-a-service provides business processes and applications [9] that enable clients to use cloud services running on cloud infrastructure [10] through either a thin client interface, like a Web browser.For instance, Google offers services such as Web applications with similar functionality to traditional office suites, including Gmail, Google Calendar, and Docs, among others.Customers do not need to control or manage the underlying infrastructure design for the reason that all new installations (software and hardware) are automatically updated by application vendors.
Platform-as-a-service delivers a computing platform as an integrated solution, solution stack or services like hardware, operating systems (OSs), and storage through an Internet connection.PaaS enables customers to develop, test, and deploy IT services over a cloud platform.By abstracting the complexity of software and infrastructure, this concept facilitates accordingly the efficient and quick development of Web applications.This model assists businesses in leasing virtual IT services through which to run existing applications, as well as to develop, test, or deploy a new application [10,11,12].
Infrastructure-as-a-service is a virtual delivering of computing resources in the form of servers, hardware, networking, and storage services.Rather than installing and purchasing the required resources in their own data center, Clients lease these resources as needed; they don't need to control the underlying cloud infrastructure [13].IaaS generates potential benefit through controlling and paying for the amount of resources demanded by customers [14].
Moreover, The Cloud model comprises four deployment models: Private cloud is a cloud computing model that involves a distinct and secure cloud based environment in which only the specified client can operate.By utilizing an underlying pool of physical computing resource and within a virtualized environment, private clouds will offer computing power as a service.However, under the private cloud model, the cloud (the pool of resource) is only accessible by a unique organization, hence providing that organization with confidentiality and greater control.But data transfer cost [15] from local IT infrastructure to a Private Cloud is still rather considerable.
Community clouds are controlled and shared by multiple organizations and support a particular community that has shared interests, such as mission, policy, security requirements and compliance considerations.It may be maintained by the organizations or a third party and may exist at on or off premise, and the members of the community share access to the data and applications in the community cloud.Community cloud users therefore seek to exploit economies of scale while minimizing the costs related to private clouds and the risks related to public clouds [3].
Public cloud is the most known model of cloud computing to many clients is the public cloud model, under which cloud services are offered in a virtualized environment, built via pooled shared physical resources, and available over a public network like the internet.To some degree they can be determined in contract to private clouds which ring-fence the pool of underlying computing resources, creating a distinct cloud platform which is operated solely within a singular organization.Public clouds deliver services to multiple customers using the same shared infrastructure [3].
Hybrid cloud is a combination of two or more clouds (private, community, or public) that stay unique entities but are related together by standardized or proprietary technology that allows data and application portability.Concerning applications with lacking rigid security, legal, compliance and service level requirements can be deployed in the public cloud, while maintaining business-critical data and services in a protected private cloud [3].
In spite of the widespread adoption of cloud computing, practitioners and researchers have been actively reporting issues and challenges with this new paradigm.Some of the challenges have the aspects of being crucial like issues with confidentiality and security.Other issues such as suboptimal performance and limited bandwidth are a natural result of pushing the barriers of this new model to achieve more [16].Unless these barriers are better understood, solutions may remain inefficient.We conducted a systematic literature review of potential challenges of cloud computing and used this evidence to group these barriers in a taxonomy that can be used as a framework to facilitate an international dialogue on solutions and instruments.The objective of our research is to gain an understanding of the type of issues and challenges that have been emerging.
Methods
In this section, we present the research steps followed to perform this review. We conducted a systematic review according to the guidelines of Kitchenham et al. [18] to elaborate the review methodology in detail and to identify documents that reported on challenges of cloud computing. The challenges were defined as obstacles that could impede or delay the adoption of cloud computing, or that could limit its usage in companies. According to these guidelines, the methodology for a systematic review should specify the strategies employed for finding the most relevant research works, such as the search strings and the chosen digital libraries. Finally, the selection of existing studies is carried out using predefined criteria.
The following search string represents our generic search query used for this SLR.
Additional documentation was identified through a variety of databases (ACM Digital Library, IEEE Xplore, DBLP, Google Scholar, Science Direct, Scopus, Springer, Taylor & Francis and Wiley Online Library), as shown in Table 1. To keep this research up to date in the area of cloud computing, a quick search strategy was used: recent 2015-2017 publications were added by using the filtering tools in the databases. After applying the quick search strategy, we considered publications from 2008 to 2017 overall, since cloud computing publications started around 2008.
As shown in Fig. 2, the initial search resulted in a total of 800 studies, which were condensed to 200 studies on the basis of their titles, and 100 studies on the base of their abstracts.After that, 100 selected studies were reviewed thoroughly for obtaining a final list of 60 studies on the basis of their content.
Sixty studies were ultimately involved in this review.These studies were primarily read and an initial list of challenge descriptions was extracted.This list was grouped into preliminary categories.Challenge descriptions then classified and generalized within their categories.A final taxonomy and description of barriers were established.For each barrier, we also categorized available evidence to identify knowledge gaps.
Results
We identified 23 potential issues of cloud computing and classified these in taxonomy of three categories, according to the organization European Union Agency for Network and Information Security (ENISA) [19,20,21,34,58]: policy and organizational, technical and legal issues (Table 2).These issues and categories represent a landscape of challenges that is highly dynamic, interconnected, and hierarchical.
Policy and organizational issues
These are business-related IT issues that companies may confront when considering cloud computing service providers [68].Such issues include lock-in, loss of governance, Compliance challenges, supply chain failure [22].
Lock-in: Vendor lock-in is one of the principal concerns declared by IT experts when considering a move from one provider's cloud environment to another [16,20,23]. Lock-in refers to the inability of a client to move their data and/or programs away from a cloud computing service provider. It is generally the result of proprietary technologies that are incompatible with those of rivals [12,32,35,73].
Loss of governance: When adopting Cloud services, the Cloud Customer necessarily concedes control to the Cloud provider on a number of issues which may impact security [20,21,58,67,73].
Compliance challenges: Customers are accountable for the security of their solution, as they can choose between providers that allow themselves to be audited by third-party organizations that verify levels of security and providers that do not [10,45,60].
Supply chain failure As per ENISA [58], a Cloud provider can deploy parts of its production chain to third parties, or even, as part of its service, use other Cloud Providers.Thus, a potential for cascading failures is produced [20].
Technical issues
These issues for the most part are well understood as part of resilient challenges of cloud computing adoption and continue to form a major obstacle to the availability and use of this technology [78].They are specified by the failures associated with the technologies and services furnished by the Cloud service vendor [68].
Malicious insiders A malicious insider in the cloud might get hands on an unusual quantity of information and on a widely scale [64] and produce various kinds of damage to a Cloud Customers' assets [9,12,19,20].
Shared technology
Cloud service providers offer their services in a scalable way by sharing infrastructure, platforms, and applications.This way, the threat of shared vulnerabilities exists in all delivery models of Cloud computing [34,39,40].
Encryption: Deficient encryption and key management of data is considered a major risk in cloud computing environments [44,45].
Multi-tenancy is a natural result of trying to achieve economic benefit in Cloud Computing by using virtualization and allowing resource sharing [62][63].However, it is a technological issue in Cloud computing [27,33,45,61].
Resource and service management One of the important features of cloud computing is the capacity of obtaining and releasing resources on-demand [20].The purpose of a service supplier in this situation is to allocate and de-allocate resources from the cloud [23].Nevertheless, it is not obvious how a service provider can achieve this goal [19,27,46].
Service level agreement (SLA): It is necessary for customers to have guarantees from suppliers on the service offered. Typically, these are provided via Service Level Agreements (SLAs) negotiated between the providers and customers [79]. The very first problem is to define SLA specifications at a level of granularity that matches what a customer expects from a provider [10,16,23]. Denial of service (DoS) attacks are attacks meant to prevent users of a cloud service from being able to access their applications or their data. The attacker (or attackers, as is the case in distributed denial-of-service (DDoS) attacks) typically floods servers, systems or networks with traffic, consuming inordinate quantities of finite system resources such as memory, processor power, disk space or network bandwidth, and thereby preventing legitimate users from being served [20,21,23]. Consequently, this produces an intolerable system slowdown and leaves the users of the victim cloud service perplexed and frustrated as to why the service is not responding [9,12,19,42,43,73,77].
Insecure interfaces and APIs: In order to manage and interact with cloud services, cloud computing providers offer consumers a set of software interfaces or APIs [23]. Provisioning, management, orchestration, and monitoring are all executed using these interfaces. The security of these basic APIs therefore impacts the availability and security of cloud services in general. Moreover, companies and third parties often depend on these interfaces to provide value-added services to clients. This increases the complexity of the new layered API; it also raises risk [6,42,43,73].
Integrity The integrity of applications, networks, databases and system software in a shared, globally available cloud environment is threatened by much vulnerability when not adequate and timely patched [4,9,16,28,54,67].
Natural disaster such as earthquakes, flooding, and tsunamis can impact the infrastructure of a Cloud vendor [20,21,24].Thus, a Cloud Customers might be affected by natural disasters taking place far away from its own location [33,34,41,50].
Availability means that the data, service, as well as infrastructure are being accessible to authorized clients immediately after a demand has been made.Availability issues may happen at the customer end or the service vendor's end.When a single provider manages a cloud computing service, this way creates a potential environment for a single point of failure [53,63,67].
Loss of backups
The ability to recover data is salient in business that this is not a guarantee of cloud computing [65].Rather than retrieval, this model depends on heavy backup, which could result privacy issues, as this is likely to lead to uninformed consent.A critical threat is that of 'data loss or leakage' where original data is deleted and cannot be recovered [40].Data may also be lost due to dishonest media or data being recorded without a link [66,67,73].
Data transfer bottlenecks arise when bandwidth is unable to accommodate huge quantities of system data at the designated data transfer rates [23]. Businesses that use cloud computing have to redesign their present technology into new information structures capable of handling dynamic and large amounts of information, new file systems and storage technologies [15,24,36].
Legal issues:
These consist of the IT-related issues that are legal in nature, and can also have a negative impact on companies using cloud computing services [16].
Legal jurisdiction Converting to Cloud Computing implicates legal restraints [16,19,20].Considering Cloud Computing providers can be multi-national, it is crucial that such vendors are aware of and stand for by national regulations where they do business [36,67,73].
Data privacy and protection: Privacy is one of the longest standing and most essential interests with cloud computing [59].Furthermore, it is a major issue in cloud computing for the reason that its very nature involves storing unencrypted data on a machine owned and operated by someone other than the original owner of the data [8,67,69].
Licensing risk there is also a challenge that firms may pay more than intended to license software on systems hosted by cloud computing service vendors [15,19,20,29,58].
Subpoena and e-discovery: If computer systems are confiscated by law enforcement authorities or through civil suits, the centralization of storage and the shared location of physical hardware expose cloud computing customers to a greater risk of undesired data disclosure [19,20].
Discussions
Using a systematic review of evidence from the reviewed literature, we identified 23 real or potential barriers grouped in a taxonomy of policy and organizational, technical and legal issues. Reflecting the criticality of the situation, many significant reports by organizations including the CSA, NIST, and the European Union Agency for Network and Information Security (ENISA) have been published to address the effects of the aforementioned issues. Reports published by these major research organizations are based on their areas of specialization; for example, the CSA published reports on the security challenges. These reports highlight the risks that apply to cloud computing.
As per [58], most of these threats surround privacy, security and service issues.In other words, above presented threats directly or indirectly impact the confidentiality and security of Cloud resources as well as services at various layers.
Nevertheless, privacy and security risk are considered as a major obstacles to cloud computing adoption that are acting as deterrent and retarding its development [80].
According to table 2, the issues that are addressing data protection and privacy, lock-in, multi-tenancy, availability, integrity, malicious insiders and interoperability have been characterized as essential.
Thus, most technical issues are deeply embedded in much larger challenges of cloud computing use. Some solutions are being developed to address some of the aforementioned issues [27]. For instance, to overcome the challenges of privacy and protection, two studies [71,72] present an architecture adopting data location strategies, trusted cloud services, and trusted service vendors. In order to handle the issues of regulatory compliance, one study [71] proposes the concept of a cloud market; by means of this, users can interact with the market and request resources conforming to the applications' needs through a cloud broker. Furthermore, to avoid vendor lock-in, a layered architecture was proposed in [17]. This architecture offers a unified resource model across different cloud environments.
Political and legal issues will require a different approach. Compared to technical challenges, these challenges are less tangible and transparent and will need to be clearly outlined. Levels of evidence also differed for each issue. Most challenges were very well documented, while no empirical evidence was available for other barriers such as supply chain failure, subpoena and e-discovery, and data transfer bottlenecks. In-depth formative research is needed to expand the evidence base for these barriers. As knowledge of these challenges increases, so will opportunities for solutions.
Conclusion
Cloud computing is one of the most advanced digital technologies and can bring various business benefits to organizations. Despite these benefits, constraints exist with the use of this model, impacting the provision and use of this technology in place of conventional in-house technologies that are physically owned and managed on premises. Hence, a considerable amount of literature on this subject has been published in a fairly short time period. In this research work, the issues of cloud computing adoption are collected and classified in a taxonomy of three categories: policy and organizational, technical and legal issues. These challenges must be addressed by researchers in order to make cloud computing work well in reality and to further gain the confidence of cloud subscribers.
Fig. 1. NIST-defined essential characteristics, service models and deployment models
Table 1. Electronic data sources
Table 2. Evidence for issues of cloud computing adoption | 4,401.6 | 2018-01-01T00:00:00.000 | [
"Computer Science"
] |
Anti-MOG Positive Bilateral Optic Neuritis and Brainstem Encephalitis Secondary to COVID-19 Infection: A Case Report
(1) Introduction: There have been numerous reports on the neuroinvasive competence of SARS-CoV-2. Here, we present a case with anti-MOG positive bilateral optic neuritis and brainstem encephalitis secondary to COVID-19 infection. Additionally, we present a review of the current literature regarding the manifestation of anti-MOG positive optic neuritis as well as anti-MOG positive encephalitis after COVID-19 infection. (2) Case Report: A 59-year-old female patient, with a recent history of COVID-19 infection, presented a progressive reduction of visual acuity and bilateral retrobulbar pain for the last 20 days. An ophthalmological examination revealed a decreased visual acuity (counting fingers) and a bilateral papilledema. An MRI scan of the brain revealed a mild thickening of the bilateral optic nerves and high-intensity lesions in the medial and right lateral pons. A high titer of IgG and IgM antibodies against SARS-CoV-2 in serum and antibodies against myelin oligodendrocyte glycoprotein (anti-MOG) in serum and CSF were revealed. The diagnosis of anti-MOG brainstem encephalitis and optic neuritis was set. (3) Conclusions: The history of COVID-19 infection should raise awareness about these autoimmune and infection-triggered diseases, such as anti-MOG antibody disease.
Introduction
Despite the fact that severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mainly affects the respiratory system and produces corresponding respiratory tract symptoms, since the outbreak of the pandemic in December 2019, there have been numerous reports of the neuroinvasive potential of SARS-CoV-2. The disease mechanisms reported so far include the direct infection, para-and post-infectious, as well as vascular, mechanisms [1]. Manifestations such as encephalitis, acute demyelinating encephalomyelitis (ADEM), myelitis, immune-mediated central nervous system (CNS) and peripheral nervous system (PNS) demyelination, and cerebrovascular disease have been reported to date [2][3][4]. However, several other neurological symptoms during the course of the disease have also been reported. Concerning optic neuritis, several cases have been published; however, only six patients were positive for antibodies against myelin oligodendrocyte glycoprotein (anti-MOG). Of these, one patient had a presumed COVID-19 infection, while the rest had a confirmed one [5][6][7][8][9][10]. Moreover, concerning encephalitis, only a few cases with the presence of anti-MOG in serum after COVID-19 infection have been reported [3,4].
Here, we report a case of a female patient with bilateral optic neuritis and brainstem encephalitis secondary to COVID-19 infection. Additionally, we present a review of the current literature regarding the manifestation of anti-MOG positive optic neuritis as well as anti-MOG positive encephalitis after COVID-19 infection.
Case Presentation
A 59-year-old female patient with a history of hypertension and anxiety disorder, and an unremarkable family history, was referred to the Neurological Emergency Department of the University Hospital of Larissa, a tertiary hospital of central Greece, due to a progressive reduction of visual acuity and bilateral retrobulbar pain for the last 20 days. Forty days prior to the current episode, the patient reported having experienced fever and a cough, and tested positive via a polymerase chain reaction (PCR) test for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), from a nasopharyngeal swab. The latter symptoms, attributed to the viral infection, had completely resolved without the need for hospitalization. Informed consent was obtained. Our study follows the principles of the Declaration of Helsinki.
The ophthalmological examination revealed a decreased visual acuity (counting fingers) and a bilateral papilledema, while the rest of the neurological examination was unremarkable. A 3 Tesla MRI scan of the brain revealed a mild thickening of the bilateral optic nerves (Figure 1a) and high-intensity lesions in the medial and right lateral pons. Routine blood tests presented no remarkable abnormalities, while serological studies with autoimmune markers, including antinuclear and anti-dsDNA antibodies, and the rheumatoid factor were not suggestive of other autoimmune diseases. A high titer of IgG and IgM antibodies against SARS-CoV-2 was detected in serum. An initial cerebrospinal fluid (CSF) analysis showed 7 cells/mm3 as well as normal protein (34.5 mg/dL) and glucose (50.9 mg/dL) levels. In addition, a CSF analysis with PCR for several viral and bacterial infections (Mumps virus, Measles virus, Human enterovirus, Parechovirus, Herpes simplex virus 1, Herpes simplex virus 2, Varicella zoster virus, Epstein-Barr virus, Cytomegalovirus, Human herpes virus 6, 7 and 8, Listeria monocytogenes, Haemophilus influenzae, Staphylococcus aureus, Streptococcus pneumoniae, Streptococcus agalactiae, Neisseria meningitidis, Borrelia burgdorferi, Escherichia coli K1, Cryptococcus neoformans, Cryptococcus gattii, West Nile virus, and SARS-CoV-2) was negative. The oligoclonal band status in CSF was negative. Furthermore, antibodies against myelin oligodendrocyte glycoprotein (anti-MOG) were revealed in serum (1/32) and CSF. According to the above, the diagnosis of anti-MOG brainstem encephalitis and optic neuritis was set.
Consequently, the patient was treated with 1000 mg of intravenous (IV) methylprednisolone for 5 days and oral phenytoin 4 mg/kg/day for 7 days as a retinoprotective agent [11]. Due to no significant clinical improvement, the patient additionally received intravenous immunoglobulin (IVIg) 2 g/kg, besides the oral prednisolone treatment. Following the treatment, the neurological examination showed no deficits, and the ophthalmological examination showed improvement in the visual acuity (6/10 in the right eye, 7/10 in the left eye) and a remission of the bilateral papilledema. The patient was therefore discharged with a prescription for oral prednisolone (30 mg/day).
Discussion
A comprehensive literature review of reported optic neuritis and COVID-19 infection cases was performed, including as many patients as possible. We used the PubMed database, with the following search terms: ''optic neuritis", ''MOG", ''encephalitis" and "COVID-19". Additional articles were identified by hand-searching references of the included literature (Table 1).
MOG antibody disease is mediated via antibodies against MOG, expressed on oligodendrocytes. MOG antibody disease is responsible for clinical manifestations like optic neuritis, transverse myelitis, encephalitis, and ADEM. Anti-MOG antibodies may be present in the bloodstream without causing symptoms until they enter the CNS. Along with lung cells, which are the main target of the SARS-CoV-2 virus, endothelial cells forming the blood-brain barrier (BBB) also express the angiotensin-converting enzyme 2 receptors (ACE2). By infecting these cells through ACE2 receptors, SARS-CoV-2 causes inflammation of the endothelium and the subsequent breakdown of the BBB. As a result, both anti-MOG and leukocytes can cross the BBB and trigger the onset of the disease [12,13].
Our case report adds to the existing bibliography on the neurological sequelae of COVID-19 [4]. With the ever-rising number of infected patients, it is reasonable to expect that these entities will also be encountered more frequently, and so clinicians must act promptly in diagnosing and treating patients. Therefore, a history of COVID-19 infection should raise awareness about these autoimmune and infection-triggered diseases, such as anti-MOG antibody disease.
Conclusions
Our case expands the spectrum of SARS-CoV-2 neurological disorders, since we here present a MOG-associated encephalitis and anti-MOG optic neuritis secondary to COVID-19 infection. While the incidence of these neuroimmune manifestations is low, early identification and initiation of corticosteroid therapy is essential to avoid disability. Despite the attempts of the international medical community to record all clinical manifestations of SARS-CoV-2, it seems that the emergence of new neuroimmunological manifestations after COVID-19 infection should raise awareness about MOG-related disease. | 2,034.2 | 2022-11-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
The specific targeting of immune regulation: T-cell responses against Indoleamine 2,3-dioxygenase
Indoleamine 2,3-dioxygenase (IDO) is an immunoregulatory enzyme that is implicated in suppressing T-cell immunity in many settings including cancer. In recent years, we have described spontaneous CD8+ as well as CD4+ T-cell reactivity against IDO in the tumor microenvironment of different cancer patients as well as in the peripheral blood of cancer patients and, to a lesser extent, of healthy donors. We have demonstrated that IDO-reactive CD8+ T cells are peptide-specific, cytotoxic effector cells, which are able to recognize and kill IDO-expressing cells, including tumor cells as well as dendritic cells. Consequently, IDO may serve as a widely applicable target for immunotherapeutic strategies with a completely different function as well as expression pattern compared to previously described antigens. IDO constitutes a significant counter-regulatory mechanism induced by pro-inflammatory signals, and IDO-based immunotherapy may consequently be synergistic with additional immunotherapy. In this regard, we have shown that the presence of IDO-specific T cells boosted immunity against CMV and tumor antigens by eliminating IDO+ suppressive cells and changing the regulatory microenvironment. This review summarizes the current knowledge of IDO as a T-cell antigen, reports initial results suggesting a general function of IDO-specific T cells in immune regulation, and discusses future opportunities.
IDO and immune suppression
The immune system is delicately balanced between immunity and tolerance to protect the host from pathogens while minimizing local damage to tissues. Indoleamine 2,3-dioxygenase (IDO) is an endogenous molecular mechanism that contributes to this immune regulation in a variety of settings. IDO seems to be critical in limiting potentially exaggerated inflammatory reactions in response to danger signals [33] and in assisting regulatory T-cell effector function [32]. In addition, IDO is an important component of a regulatory system that allows long-term control of immune homeostasis as may be required by tolerance to self or during pregnancy [27].
IDO is a major inhibitor of the effector phase of the immune response [45,50]. IDO expression can suppress effector T cells directly by degradation of the essential amino acid tryptophan. Some of the biological effect of IDO is mediated through local depletion of tryptophan, but it is in addition mediated via immune-modulatory tryptophan metabolites [4,30]. Thus, regulation of tryptophan metabolism by IDO in dendritic cells (DC) is a highly adaptable modulator of immunity. When IDO+ DC are injected in vivo, they create suppression and anergy in antigen-specific T cells in the LN draining the injection site [3,25]. Effector T cells starved of tryptophan are unable to proliferate and go into G1 cell cycle arrest [25]. An IDO-responsive signaling system in T cells has been identified, comprising the stress kinase general control nonderepressible 2 (GCN2). GCN2 responds to elevations in uncharged tRNA, as would occur if the T cell were deprived of tryptophan [24].
Another effect of IDO is mediated through enhancement of local Treg-mediated immune suppression. Constitutive IDO expression in DC provides T cells with regulatory properties that block T-cell responses to antigenic stimulation [24]. The B7 receptors on IDO+ DC bind to CTLA4 on Tregs, causing them to proliferate and induce antigen-specific anergy. Thus, IDO does not only suppress effector T cells directly but also influences Treg bystander suppressor activity [2,32,39].
It has been described that exposure of Tregs to pro-inflammatory cytokines like IL-6 induces reprogramming of mature Tregs to acquire a phenotype resembling pro-inflammatory Th17 cells [6,49,51]. IDO plays a vital role in this conversion [2,39]. IDO stimulates Treg bystander suppressor activity and simultaneously blocks the IL-6 production that is required to convert Tregs into Th17-like T cells [2,39]. The phenotype of Tregs reprogrammed after IDO blockade has been described as similar to that of "polyfunctional" T-helper cells co-expressing IL-17, IL-22, IL-2 as well as TNF-α [39]. Thus, IDO suppression of pro-inflammatory processes may dominantly block effector T-cell responses to encountered antigens. Conversely, in the absence of IDO activity, local Treg suppression may not be elicited even when strong pro-inflammatory stimuli are present.
Finally, it was recently shown that IDO has a non-enzymatic function that contributes to TGF-β-driven tolerance in non-inflammatory contexts [29].
IDO and cancer
IDO expression is widely deregulated in cancer patients. IDO may contribute critically to inhibiting or terminating inflammation and is highly overexpressed in cancer [14,22].
In cancer patients, IDO elevation occurs in a subset of plasmacytoid DC in tumor-draining lymph nodes [26]. In addition, IDO may be expressed within the tumor by tumor cells as well as tumor stromal cells, where it inhibits the effector phase of immune responses [45]. Activation of IDO in either tumor cells or nodal regulatory DC each appears to be sufficient to facilitate immune escape of tumors [24]. In this regard, it has been described that expression of IDO in tumor cells is associated with a poor prognosis [46]. In a murine model, it was observed that tumor cells transfected with IDO became resistant to immune eradication, even in mice in which a fully protective immune response had been established by immunization [45]. IDO-expressing CD19+ plasmacytoid DC isolated from tumor-draining LN mediate profound immune suppression and T-cell anergy in vivo [25,37], whereas plasmacytoid DC from normal LNs and spleen do not express IDO. In this respect, it should be noted that very few cells constitutively express IDO in normal lymphoid tissue except in the gut. It is believed that constitutive IDO expression in DC in tumor-draining LN is induced by stimulation from Tregs migrating from the tumor to the draining LN. Tregs have been shown to induce IDO via cell-surface expression of CTLA-4 [44]. The induction of IDO converts the tumor-draining LN from an immunizing into a tolerizing milieu.
All in all, IDO is a critical cellular factor contributing to immune suppression and as such is a crucial mechanism in cancer. Hence, IDO has become a very attractive target for the design of new anticancer drugs, and several IDO inhibitors are under investigation in preclinical as well as in clinical studies [16]. In particular, the compound 1-methyltryptophan (1MT) has been widely studied as an inhibitor of IDO activity. Interestingly, recent studies have shown that the stereoisomer D-1-MT has superior antitumor activity compared to the stereoisomer L-1-MT [13]. A novel indoleamine 2,3-dioxygenase (IDO)-like protein designated IDO2 was recently discovered [20]. IDO2 functions like IDO in tryptophan catabolism, but it has been found that D-1MT, but not the L-1MT isomer, selectively and potently inhibits IDO2 activity, suggesting that IDO2 activity may have a role in the inhibition of immune responses to tumors. In this respect, IDO2 expression has been found in human tumors, including gastric, colon and renal cancer; in pancreatic tumors, IDO2 expression has been found both in tumor cells and in immune cells in tumor-draining LN [47]. It is not yet known to what extent each isoform of IDO contributes to tumor-related immune suppression and how much clinical benefit (or autoimmune toxicity) targeting one isoform over another confers. Another unknown is whether IDO inhibitors influence other pathways not directly linked to IDO.
CD8 responses against IDO
Despite the fact that neoplastic transformation is associated with the expression of immunogenic antigens, the immune system often fails to respond effectively and becomes tolerant toward these antigens [21]. As described above, IDO plays a critical role in the tolerance induction and immune suppression of anti-cancer immune responses. We set out to determine whether and how IDO itself serves as a target for specific T-cell responses, which may be exploited for immune therapy. This was done by identifying and characterizing specific T cells spontaneously present among peripheral blood mononuclear cells (PBMC) isolated from patients with cancers of different origin. In this regard, we described that peptides derived from the IDO protein sequence are spontaneously recognized by cytotoxic T cells (CTL) in cancer patients (Fig. 1) [40].
First, we identified HLA-restricted peptides within the IDO protein to which spontaneous T-cell reactivity was detected in patients suffering from unrelated tumor types, i.e., melanoma, renal cell carcinoma and breast cancer, by flow cytometry using HLA/peptide tetramers as well as in ELISPOT assays after in vitro stimulation, but also in direct ex vivo assays. Such IDO-reactive CD8+ T cells were peptide-specific, cytotoxic effector cells. Thus, IDO-specific T cells effectively lysed IDO+ cancer cell lines of different origin, such as colon carcinoma, melanoma, and breast cancer, as well as directly ex vivo enriched leukemia cells. IDO-driven immune suppression is a general mechanism that has been described in a variety of human cancers, and the immune responses against IDO seem likewise to be relevant in cancers of unrelated origin, which emphasizes the immunotherapeutic potential of IDO. However, even more distinctive was our finding that IDO-specific CTL recognized and killed IDO+ mature DC; hence, IDO-specific T cells were in addition able to kill immune-regulatory cells. We could at first not detect spontaneous responses against IDO in the control group of healthy individuals. Thus, although IDO has immune suppressive functions, the constitutive upregulation of IDO expression in cancer patients seemed to induce IDO-specific T-cell responses.
IDO plays a crucial role in immune regulation and is inducible under normal physiological conditions. Thus, we found the apparent lack of tolerance against IDO intriguing, since it suggested a more general role of IDO-specific T cells in the regulation of the immune system. We hypothesized that such cells could take part in the control of immune homeostasis; IDO-specific CD8+ T cells could play an important role by eliminating IDO+ cells, thereby suppressing and/or delaying local immune suppression. Hence, we continued our search for possible IDO-specific T-cell responses in healthy donors and found that circulating IDO-specific, cytotoxic CD8+ T cells indeed were present in healthy donors, although not as frequently as in patients with cancer [41]. Furthermore, we were able to directly link the upregulation of IDO with IDO-specific T cells by showing that the addition of IDO-inducing mediators like IFN-γ and CpG ODN generated measurable numbers of CD8+ IDO-specific T cells among PBMC. To examine a possible immune-regulatory effect of IDO-specific T cells, we examined their effect on T-cell immunity against viral or tumor-associated antigens. In this respect, we found that the presence of IDO-specific CD8+ T cells boosted CD8+ T-cell responses against other antigens, probably by eliminating IDO+ suppressive cells (Fig. 2). Consequently, we suggested terming IDO-specific T cells "supporter T cells" (Tsup) due to their immune-enhancing function [41].
IDO expression contributes to the strength and duration of a given immune response due to its inflammation-induced counter-regulatory function. Thus, any "supportive" effect of IDO-specific T cells on other immune cells may well be mediated in several direct and indirect manners. In this respect, the level of tryptophan was elevated, the frequency of Tregs decreased, and the frequency of IL-17-producing cells increased when IDO-specific T cells were present, which taken together suggests an overall decrease in IDO activity. Furthermore, IDO-specific T cells increased the overall production of both IL-6 and the other pro-inflammatory cytokine TNF-α. In contrast, we observed a decrease in IL-10. Another possible effect of IDO-specific T cells could be mediated through the metabolites of tryptophan, which have been shown to be directly toxic to CD8+ T cells and CD4+ Th1 cells [11], but not Th2 cells. Hence, increased IDO activity seems to tilt helper T-cell polarization toward a Th2 phenotype [48]. The presence of activated IDO-specific, cytotoxic T cells may skew the Th response in a Th1 direction. Finally, it should be noted that IDO+ cells may be immune suppressive by other means than the expression of IDO. Hence, the same cells might express, for example, Arginase, PD-L1 or immune-regulatory cytokines (e.g., IL-10 and TGF-β). Hence, IDO-specific, cytotoxic T cells may not only reduce IDO-mediated suppression directly but may in addition reduce further immune suppression mediated by IDO+ regulatory cells.
Recently, we identified spontaneous CD8+ T-cell reactivity against the IDO analogue IDO2 in the peripheral blood of both healthy donors and cancer patients [42].
Furthermore, we confirmed that IDO2-reactive CD8+ T cells were peptide-specific, cytotoxic effector T cells. Hence, isolated and expanded IDO2-specific T cells effectively lysed cancer cell lines of different origin, that is, colon carcinoma cells as well as breast cancer cells. However, IDO2-specific T cells did not seem to kill melanoma cells although they expressed IDO2. At least, we did not observe killing of three different IDO2+ melanoma cell lines. Likewise, IDO2-specific T cells did not seem to "support" other immune responses in the same way as IDO-specific, cytotoxic T cells. Hence, the function and potential role of the IDO2-specific, class-I-restricted lymphocytes present in peripheral blood still need to be resolved.
CD4 responses against IDO
We speculated that CD4+ IDO-specific T cells releasing pro-inflammatory cytokines may play a role in the early phases of an immune response as a counter-response to the early induction of IDO. Hence, we went on to analyze whether CD4+ T cells naturally recognize IDO. Indeed, we identified detectable numbers of specific CD4+ T cells both in cancer patients and in healthy individuals [23].
We found that such IDO-specific CD4+ T cells released IFN-γ as well as TNF-α. Although we were able to detect both IFN-γ and TNF-α responses toward IDO in healthy donors, the responses were more frequent in cancer patients. The cancer relevance of these CD4+ T cells was further underlined, since IDO-reactive T cells in addition reacted toward DC pulsed with IDO+ tumor lysates. Interestingly, we detected a correlation between patients harboring CD4 and CD8 responses against IDO, which suggests that class-I- and class-II-restricted IDO responses co-develop.
Furthermore, we detected frequent IDO-specific CD4+ T-cell responses when examining IL-17 release upon stimulation with the IDO-derived CD4 epitope. IL-17 has been the focus of great interest recently, since the production of IL-17 is characteristic of a subset of CD4+ T-helper cells (Th17 cells). One of the main roles of Th17 cells is believed to be promoting host defense against infectious agents. Th17 cells are thought to be particularly important in maintaining barrier immunity at mucosal surfaces such as in the lungs, gut, and skin [28]. Interestingly, IDO is expressed at high levels in the gastrointestinal tract, although its precise role in intestinal immunity is not well understood [7]. One could speculate that a fraction of the Th17 cells that are highly prevalent in the mucosal tissues of healthy individuals [28] recognizes IDO; however, this is yet to be established. Additionally, it is well described that Th17 cells contribute to autoimmunity [6]. In cancer, Th17 cells might have a protective role in tumor immunopathology by promoting antitumor immunity. Tumor-infiltrating Th17 cells express other cytokines in addition to IL-17, which might be functionally relevant [18]. A large fraction of Th17 cells produce high levels of effector cytokines such as IL-2, IFN-γ as well as TNF [51]. IDO-specific Th17 cells seemed to exhibit a similar effector T-cell cytokine profile [23]. In contrast, we could not detect any release of the Th2 cytokine IL-4 in response to the IDO-derived peptide [23].
It was recently suggested that the Foxp3+ Treg cell lineage, in addition to mediating immune suppression, has an unappreciated helper role [38]. These "Th17-like effector cells" were distinguished by their unique ability to deliver help immediately and spontaneously, without needing prior priming or pre-activation. It was suggested that these CD4-lineage cells correspond to a pool of constitutively primed "first responder" cells [38]. IDO plays an important role in this conversion of Foxp3+ Tregs to Th17-like effector cells [2,39]. Thus, it is possible that IDO-specific T cells could in addition belong to a Foxp3+ lineage of constitutively primed "first responder" Th17-like T cells; however, it should be stressed that this remains speculation.
Naturally, some CD4-positive IDO-specific T cells could in addition be immune-suppressive Tregs. It is conceivable that IDO-specific Tregs may enhance IDO-mediated immune suppression, protecting IDO-expressing cells from an immune attack. In this regard, we have previously described specific regulatory CD8+ T cells in cancer patients, which recognized the immune-suppressive Heme Oxygenase-1 [1]. IL-10 is mainly expressed by Tregs, which have been defined as a specialized subpopulation of T cells that act to suppress activation of the immune system and thereby maintain immune system homeostasis and tolerance to self-antigens [34,35]. In some donors, we could additionally detect IL-10 release in response to the IDO-derived CD4 epitope peptide. Hence, the role of IDO-specific CD4+ T cells in immune-regulatory networks may be a complex balance between activation and inhibition depending on the microenvironment. Interestingly, in some donors we detected background IL-10 release in in vitro pre-stimulated ELISPOT assays. This enabled us to recognize that stimulation with the IDO-derived peptide in two healthy donors triggered an overall suppression of IL-10. In this regard, we have previously observed a decrease in IL-10 when IDO-specific CD8+ T cells were present [41].
Clinical perspectives
Cancer
IDO may exhibit its immune inhibitory functions both in the activation phase (in the draining lymph node) and in the effector phase (at the site of the tumor). With regard to the latter, IDO may even be induced as an inflammation-induced counter-regulatory mechanism. Counter-regulatory responses are important in the immune system as they help to limit the intensity and extent of immune responses, which could otherwise cause damage to the host. However, with regard to anti-cancer immunotherapy, counter-regulatory responses antagonize the ability to create an intense immune response against the tumor. Counter-regulation differs from tolerance in the sense that counter-regulation is a secondary event, elicited only in response to immune activation. IDO is known to be induced by both type I and II interferons, which are likely to be found at sites of immune activation and inflammation [31,36]. In this respect, it should be mentioned that the susceptibility of tumor cells to lysis by IDO-reactive T cells was increased by pre-incubation with IFN-γ [40].
Hence, in cancer immune therapy, the boosting of IDO-specific immunity could have both direct and indirect effects (Fig. 3). First of all, IDO-specific, cytotoxic T cells are able to directly recognize and kill IDO+ cancer cells. In fact, it may even be speculated that the measurable reactivity to this antigen in normal individuals contributes to immune surveillance against cancer. Furthermore, the induction of IDO-specific immune responses by therapeutic measures could be highly synergistic with additional anti-cancer immune therapy, not only by eliminating cancer cells but also immune-suppressive cells. By definition, anti-cancer immune therapies aim at the induction of immunological activation and inflammation. The therapy aims to induce as much immune activation as possible (within the limits of acceptable toxicity), and, accordingly, immune-suppressive counter-regulation is not desired.
Adoptive transfer of ex vivo expanded tumor-infiltrating lymphocytes (TIL) after host lymphodepletion has the potential to significantly improve the prognosis of patients with metastatic melanoma. The impressive clinical responses associated with adoptive transfer of TIL [9] argue that this strategy should be pursued and investigated for the treatment of other types of cancer. In this regard, patient-derived IDO-specific T cells isolated and expanded from PBMC may well be an interesting supplement to the ongoing adoptive T-cell transfer strategies.
It goes without saying that the possible induction of autoimmunity and toxicity is the major worry when targeting a molecule like IDO. However, the circulation of a measurable number of IDO-specific T cells did not seem to cause autoimmunity. Furthermore, since IDO-specific T cells can be induced by IFN or CpG, this response appears to be under solid control. In this regard, an interesting aspect of IDO is that systemic inactivation at the organism level, either pharmacologically or genetically, does not appear to cause autoimmunity [19].
We believe that the findings presented here justify and warrant clinical testing to evaluate the efficiency and safety of IDO-based vaccinations. Hence, we initiated a phase I vaccination study, which is ongoing (from June 2010) at the Center for Cancer Immune Therapy, Copenhagen University Hospital, Herlev, in which patients with non-small cell lung cancer (NSCLC) are vaccinated with an IDO-derived peptide with Montanide adjuvant (www.clinicaltrials.gov; NCT01219348).
Additional pathogenic settings
It has been suggested that IDO may rather be involved in tolerance to non-self-antigens than self-antigens in situations where immune non-responsiveness may be important, for example, during pregnancy [19]. In this respect, induction of IDO+ immune-regulatory dendritic cells (DC) has been described to occur during infection of DCs with viruses and intracellular pathogens. In Listeria monocytogenes infections, such IDO+ DC seem to be involved in protection of the host from granuloma breakdown and pathogen dissemination in advanced human listeriosis. Likewise, it was recently described that IDO is increased in lymph nodes in cutaneous Leishmania major infection [17]. IDO is implicated in suppressing T-cell immunity to parasite antigens, and IDO inhibition reduced local inflammation and parasite burdens, which suggests that IDO is of benefit for the pathogen, not the host. During HIV infection, multiple mechanisms involving both viral and cellular components contribute to enhancing IDO expression and activity in an uncontrolled manner. Among others, HIV inhibits T-cell proliferation by inducing IDO in plasmacytoid DC and macrophages [5]. Furthermore, it was recently described that IDO is increased in hemodialysis (HD) patients compared to healthy donors [10]. Furthermore, IDO suppresses adaptive immunity in HD patients as assessed by the response to HBV vaccination. Hence, the targeting of IDO could have synergistic effects in anti-viral immune therapy, for example, in Hepatitis B vaccines.
Fig. 3 Vaccine-induced IDO-specific T cells might kill IDO+ suppressive antigen-presenting cells (APC) as well as IDO+ cancer cells both at the tumor site and in the draining lymph nodes. IDO may exhibit its immune inhibitory functions both in the activation phase (in the draining lymph node) and in the effector phase (at the site of the tumor). Hence, an IDO-based cancer vaccine might work directly at the tumor site, by attacking cancer cells as well as stromal cells, and in the draining lymph node, by attacking IDO-expressing regulatory cells.
The fact that IDO may be involved in tolerance to non-self-antigens might have major implications for IDO-based immune therapy, as boosting immunity to neoantigens, but not normal self-antigens, by triggering IDO-specific T cells is very attractive. Since IDO-expressing cells might antagonize the desired effects of other immunotherapeutic approaches, targeting IDO-expressing cells by vaccination would consequently be easily implementable and highly synergistic with such therapeutic measures. However, it was recently described that although IDO might play biologically important roles in the host response to diverse intracellular infections like Toxoplasma gondii, leishmaniasis, and herpes simplex virus, the nature of this role, be it antimicrobial or immunoregulatory, might depend on the pathogen. Hence, IDO inhibition might not always benefit the host. In this regard, IDO inhibition during murine toxoplasmosis led to increased mortality with increased parasite burdens [8]. This should naturally be taken into account when exploring the possible use of IDO-specific T cells in the clinic.
Finally, it should be mentioned that CD14+ monocytes are major CMV target cells in vivo. CMV is the most immunodominant antigen encountered by the human immune system [43]. Monocytes are responsible for dissemination of the virus throughout the body during the acute and late phases of infection. CMV has been shown to induce IDO expression in monocytes, which has been suggested to confer an advantage to CMV-infected monocytes to escape T-cell responses [12]. The CD8+ T-cell response to CMV typically comprises a sizeable percentage of the CD8+ T-cell repertoire in CMV-seropositive individuals [15]. In light of this, it is possible that IDO-specific T cells might function as support for the constitutive anti-CMV CD8+ T-cell response. Naturally, this can only be speculation, but notably we found that the presence of IDO-specific CD4+ T-cell responses correlated with the presence of CMV responses [23]. | 5,538.4 | 2012-03-03T00:00:00.000 | [
"Biology",
"Medicine"
] |
Development of Learning Everywhere Class (LEKAS) Platform for Economic Education
The main objective of this research is to develop an Android-based mobile learning platform for economic education students so that they can study anywhere and anytime. Following the current trend, learning must be flexible, in accordance with learning styles that increasingly lean toward online learning. Students are more comfortable studying with their smartphones anywhere than having to carry books everywhere. The platform consists of several features, namely lecture material, evaluation results and reading progress, with material provided as PDF files and embedded videos. The design development in this study uses the ADDIE model, which consists of 5 stages, namely Analysis, Design, Development, Implementation, and Evaluation. The product was tested on 60 students of the 2019 economic education cohort in an Introduction to Macroeconomics course. The final product of this research is an Android-based mobile learning platform entitled LEKAS (Learning Everywhere Class). The evaluation of this product indicates that improvements are still needed in the ease of access across different types of Android devices. Its advantage is that students can immediately study the material that has been shared and find out the extent of their understanding by taking tests afterwards. Through this platform, students can study material and evaluate what they have read anywhere and anytime. The convenience provided by this platform can increase the effectiveness and efficiency of learning in economic education. Keywords—Android-based, Learning Platform, Economic Education, ADDIE model, Learning Everywhere Class (LEKAS)
Introduction
Learning activities are very dynamic activities in which teachers are required to have innovation and creativity in delivering material so that learning activities are not monotonous and do not reduce students' learning motivation [1]. A teacher's innovation and creativity can be enhanced by using learning models and media that are in accordance with the characteristics of the material to be delivered. Economic material has different characteristics from other social materials. Economic material contains facts, events, concepts, laws and procedures that must be understood by students. The delivery of teaching material must, of course, be guided by the learning objectives that must be achieved in accordance with the applicable curriculum in each university. The delivery of material in the economic education study program so far still uses the lecture and discussion method in its learning activities. This method lags behind the current conditions of the Industrial Revolution 4.0, which prioritizes technology in every field, including education [2]. Due to its ubiquitous expansion and ease of use, the network provides quick access to various areas of interest [3]. Mobile learning increases students' interest and motivation in learning activities [4]. In addition, mobile learning is able to bring about a pedagogical shift from class-based learning to collaborative and constructivist learning [5].
The development of technology affects each person in different ways and should be used for things that are useful. One of these benefits is to help students increase their motivation and interest in learning. Learning resources are no longer limited to printed books in the library and information provided by teachers. Students can take advantage of various learning resources available around them. In the world of education today, various platforms are used to support learning, making it easier for students to master teaching material with or without teacher intervention [1]. The face-to-face learning process involving students and teachers in the class is considered less effective and efficient, because it is limited to scheduled space and time; students therefore do not fully master the material taught by the teacher in the classroom, and when they do not understand they cannot have the same material repeated. As a result, classical learning activities are still not used effectively. In addition, teachers cannot control students in their learning activities outside the classroom. As stated in [6], the lack of communication and interaction between teachers and students is one of the main challenges for implementing learning. With only limited face-to-face meeting time, teachers often pursue material content without paying attention to the boredom that plagues students. Consequently, during face-to-face meetings in class it is not uncommon to find students playing on their smartphones, using social media or playing games to reduce boredom.
At present, the world has entered the era of the 4.0 industrial revolution, in which technological developments and sophistication have greatly influenced human life and everything seems to be without limits of space and time due to the development of the internet and digital technology. However, the concept of digitizing education still largely moves the teaching system from a conventional to a digital format [7]. The industrial revolution 4.0 era brings a new color to the world of education. The role of technology in education is undeniable, and the Government has regulated the need for technology in various laws and regulations. As stated in [8], "the use of technology is expected to increase student interest in learning because the conventional learning process is deemed unpleasant and monotonous". Learning that prioritizes teacher activities and textbooks makes students feel bored and disengaged in class. Therefore, a learning innovation is needed, one of which is learning based on information and communication technology, which is expected to make students more enthusiastic about learning.
The rapid advancement of technology has resulted in increasingly diverse types and forms of learning sources; good learning resources must be flexible and adapt to where the learning takes place. Learning sources can also be said to be all sources that contain knowledge and information and that can be packaged through computers, cellphones, the internet and other media [9]. One of the innovative learning strategies that can be implemented following current trends is to carry out learning anytime and anywhere according to the needs and desires of students in understanding the material [10]. Reference [11] classifies learning resources into six types, namely messages, people, materials, tools, techniques and settings. The six types of learning resources share a core, namely the message to be conveyed to students. Learning resources in the form of applications belong to the materials type. Materials as learning resources include print and non-print media that contain information and can help students achieve their learning goals [12]. Materials are also often referred to as software.
As stated in [13], "Android is software used on mobile devices that includes an operating system, middleware and core applications". Android, according to [8], is an operating system for smartphones and tablets. An operating system can be pictured as a bridge between a device and its user, so that users can interact with the device and run the applications available on it. Another reference, [14], argues that Android is a Linux-based operating system specifically for mobile devices such as smartphones or tablets. The Android operating system is open source, so many programmers flock to make applications for or modify this system. Programmers have a very big opportunity to be involved in developing Android applications for this open-source reason. Most of the applications on the Play Store are free and some are paid. This also motivates teachers to accommodate student learning preferences by using Android as a learning resource for students. An appropriate teaching method today is one in which students freely use technology to learn and read material repeatedly according to their learning rhythm [15].
Based on the above, one of the alternative learning media that can be offered to students is mobile-based learning. This learning medium is considered able to support the learning process in the economic education study program. There is a need for innovation in the development of alternative learning systems that are attractive and interactive in their use. Mobile learning for economic education is developed on the principles of paperless and anywhere-anytime learning. In connection with this need for innovation, it is necessary to develop a learning alternative in the form of the LEKAS (Learning Everywhere Class) platform for economic education. The problem formulated in this study is how to design Android-based mobile learning for economic education, and the goal of this study is to describe the design of Android-based mobile learning for economic education.
Relevant Literature
There is no specific definition to explain mobile learning, but there are four dimensions to explain the purpose of this mobility including technology mobility, student mobility, educator mobility and learning mobility [16]. Mobile learning was originally defined as the technology used in learning. Or in other words, mobile learning is defined as the provision of learning through a set of handheld devices, this means that it can be a cell phone, smartphone, tab, tablet or palmtop computer -a handheld computer, PDA, but not a PC with a large desktop that is not easy to carry [17]. This explains the type and technology used. Along with the development of technology that can be used in learning both hardware, software, and various file formats, this definition becomes unstable and its clarity is questionable [18]. This understanding then develops in the mobility of students, educators and learning where the community is able to convey learning to individuals, groups, communities and countries that were previously socially and geographically restricted. Another current definition explains that mobile learning is able to enrich and enhance learning activities from previous abilities. This is certainly not the end of the development of the definition of mobile learning. Along with the rapid development of science and technology, the need for learning, the pedagogic competence of educators, the shortcomings of previous mobile learning and the budgetary problems they have will continue to be the basis for the rapid development of the definition of this mobile learning.
There is a difference between e-learning and mobile learning. Based on the definition, it is known that mobile learning is learning by utilizing a variety of features and handheld devices, while e-learning makes use of larger devices such as computers with networks. E-learning also has characteristics that are able to present learning massively, can be accessed by network, equipped with various features of structured and interactive learning media. Meanwhile, according to its definition, mobile learning has the characteristics of automatic, instant, portable, personal, informal, small, lightweight [17] and even now it has been connected, customized and interactive. This limitation of use in mobile learning can be an obstacle to learning modes that require content explanation. Meanwhile, the limited connectivity will be a problem for practical materials or courses or interviews and synchronous delivery of material [17]. In other words, the seemingly technical characteristics that differentiate one set of devices from another can be a hindrance as it tends to match the set of tools being developed and used. Therefore, educators need to explore learning models, methods and tools to present fun and engaging mobile-based learning.
Online-based learning using smartphones is also known as mobile learning. Mobile learning reflects the role of new technology being developed to provide a forum for knowledge management, accessibility, and the delivery and reception of material with a design that suits the characteristics of students and learning materials [16] [19]. The use of smartphones to support learning activities is very popular and in demand by learners at various age levels, from children [20] to university students [21] [22], and is applicable and effective in improving learning outcomes [23] [24], including for social science students [25]. This is also evidenced by the data released by APJII, which show that internet penetration in Indonesia, in a sample of the population aged over 15 years, has increased to 73.7% of the population, of whom 95.4% access the internet from smartphones [26].
Aspects that encourage the growth of online-based learning include the need for distance learning that is easy, comfortable and effective to use anywhere, anytime and by anyone without being limited by gender and age [19][27] as well as wide open access both in formal and informal education [28] especially in the current pandemic era, where online learning is required.
Even though mobile learning is very suitable for millennial students because they are very close to gadgets in everyday life, one of the main things that cannot be forgotten is knowing the needs and readiness of students and educators in mobile learning [29]. To get optimal learning outcomes, careful preparation is needed in each element of mobile learning according to the learning environment. Some of the features that are very helpful in mobile learning include reminders, free settings for profiles, progress reports and download options for offline access that can allow students to be able to study material in minimal conditions even without a network [23] [27] . The limitations faced by online learning are constraints on network access, the absence of gadgets, operational capabilities and limited physical interaction between students and educators [28]. Another challenge faced by mobile learning implementers and developers is balancing internal interests, related to the development of work, and external interests related to the context and the urgency of this development [18]. Therefore, every good in the results of mobile learning development will seem subjective in accordance with the problems that each product wants to solve, with the advantages that are expected to cover any shortcomings of other similar applications, the urgency of product development can be implemented and helps in problem solving, especially in learning [30].
Research Method
This research is a development research. The product produced from this study is an Android-based platform with the title LEKAS (Learning Everywhere Class). The development design that will be used in this research is ADDIE. The researcher describes the ADDIE development design stages as follows:
Fig. 1. Steps for developing the LEKAS platform
The following is an explanation of the ADDIE development stage that the researchers will do.
Analysis
The analysis stage is the stage where the researcher analyzes the need for the development of teaching materials as well as the feasibility and development requirements. The analysis carried out by the authors covers three things, namely needs analysis, curriculum analysis, and analysis of student characteristics, in order to formulate indicators of learning achievement. The analysis was carried out through interviews with students regarding the problems faced during the teaching and learning process, covering (a) motivation to learn, (b) interest in learning, (c) learning outcomes, (d) critical thinking, (e) the use of technology in learning, (f) learning models and methods, and (g) interaction and communication between teachers and students.
The curriculum used in the economic education study program at Universitas Negeri Malang is the life-based KKNI curriculum. The students belong to Generation Z, who are characterized by high and realistic mobility. The majority of Generation Z have an interest in social networks and a high level of digital literacy. Students of the 2019 class, totaling 60 students in the Introduction to Macroeconomics course (offerings BB and A), were the research subjects.
Design
The second stage of the ADDIE model is the design stage. At this stage, a platform is designed to be developed according to the results of the previous analysis. At this stage, developers and researchers discuss the LEKAS mobile learning design which is adjusted to the analysis in the first stage.
Development
The development stage is the product realization stage. At this stage the development of the platform is carried out according to the design by the developer and then the results of the design are discussed roughly. After that, the platform will be validated by expert lecturers. After the validation process, there will be a process of fixing features that are still not suitable and adding necessary features. The researcher entered the material and also the evaluation on the LEKAS platform to then be implemented.
Implementation
The fourth stage is implementation. Implementation is limited to the students designated as research subjects, namely students of the 2019 class. Students install the LEKAS platform on their smartphones, register, and then log in to the platform for the relevant course. They then study the material that has been input by the researcher. After the learning process is complete, students take tests using the questions provided by the researcher to measure the achievement of the learning objectives.
Evaluation
At this stage, the researcher made the final revision of the platform developed based on input obtained from the response questionnaire or field notes on the observation sheet. It is intended that the platform developed is truly suitable and can be used by a wider range of students.
The data obtained from the assessment of material experts, instructional media experts, and from potential product users are qualitative and quantitative. Quantitative data were obtained from questionnaires while qualitative data were based on suggestions, input and comments from the assessment of trial experts and product users.
Results and Discussion
System interface
The development of Android-based mobile learning as a learning medium can be used as a solution to overcome learning problems, both in terms of limited time, media and broadcasting, and learning methods [31]. The use of Android-based mobile learning media through LEKAS can run well and effectively, and thus has an influence on increasing student motivation and interest in learning. This early product was developed as instructional media in the form of software.
This application uses a thumb-focused interaction model, which aims to let the application be operated with one hand without difficulty. The application can be installed on Android smartphones only, so students who have an iOS-based smartphone cannot use this platform to learn.
The flow chart design of this application is as follows:
Fig. 2. Application Registration Page
The LEKAS dashboard is where participants choose the menu they need. The features in LEKAS are: home; material, reached by selecting the course to be taken and then opening its content in the form of a PDF file or learning video; evaluation results, together with a feature for measuring the achievement of learning objectives; and a student profile feature. From the dashboard, participants can open the material menu to study, carry out an evaluation, update their personal profile, or use the logout menu to exit the application. On the course menu, participants choose the course they will follow and are then taken to the course content menu, where they select which material of the course to study. The material menu displays the content provided by the teacher, and the next figure shows the material evaluation.
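For illustration only, the content hierarchy just described — courses bundling PDF and video materials, evaluations, and a student profile with scores — could be modeled roughly as in the sketch below. All class and field names are hypothetical and are not taken from the LEKAS source code.

```python
# Hypothetical sketch of the LEKAS content hierarchy described above;
# names are illustrative and do not come from the actual implementation.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Material:
    title: str
    kind: str   # "pdf" for lecture files, "video" for embedded YouTube links
    url: str

@dataclass
class Course:
    name: str
    materials: List[Material] = field(default_factory=list)
    evaluations: List[str] = field(default_factory=list)  # titles of available tests

@dataclass
class StudentProfile:
    student_id: str
    enrolled_courses: List[Course] = field(default_factory=list)
    scores: Dict[str, int] = field(default_factory=dict)  # evaluation title -> best score
```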
Fig. 6. Course Evaluation
The figure above explains the flow of the course evaluation used by participants to carry out learning evaluation activities for the courses they are taking. Participants can choose which type of evaluation to use; there are two types, namely multiple choice and essay. By taking the evaluation, students can measure their understanding and mastery of the material, and they can repeat the evaluation until they obtain their maximum score. The average score obtained by students after doing the evaluation is in the range of 60–95. The test consists of 25 multiple-choice questions.
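As a rough illustration of how such an evaluation could be scored — keeping in mind that the actual LEKAS grading code is not shown in the paper — a multiple-choice test, a keyword-matched short answer, and a keep-the-best-attempt rule might look like this:

```python
# Illustrative grading sketch (hypothetical functions, not the LEKAS implementation).
def grade_multiple_choice(answers, answer_key):
    """Percentage score over the 25-item multiple-choice key."""
    correct = sum(1 for given, key in zip(answers, answer_key) if given == key)
    return round(100 * correct / len(answer_key))

def grade_short_answer(answer, keywords):
    """Short essay answers are credited when an expected keyword appears."""
    return 100 if any(kw.lower() in answer.lower() for kw in keywords) else 0

def best_score(attempt_scores):
    """Students may repeat an evaluation; only the highest score is kept."""
    return max(attempt_scores, default=0)
```

The keyword-matching rule in the sketch mirrors the limitation noted later in the paper, namely that short-answer grading can only check for keywords in the correct answers.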
Fig. 7. Assessment
The figure above explains the assessment flow in the LEKAS application. Participants who have carried out the evaluation can directly see the results they have obtained. The figure shows a trial run of the evaluation with a score of 48; the time remaining for the evaluation is shown in the upper right corner. When students have finished working on the 25 questions, they can submit immediately or wait until the time runs out. Once finished, students press the finish button and the score for the completed evaluation appears.
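The countdown-and-submit behaviour described above could be sketched as follows; again, the class and method names are hypothetical and only illustrate the flow (answers accepted until the deadline, early finish via the button, scoring on submission):

```python
# Hypothetical sketch of the timed-attempt flow (not the LEKAS source code).
import time

class TimedAttempt:
    def __init__(self, duration_seconds):
        self.deadline = time.monotonic() + duration_seconds
        self.answers = {}

    def remaining_seconds(self):
        """Value shown in the upper-right corner of the evaluation screen."""
        return max(0.0, self.deadline - time.monotonic())

    def record(self, question_id, answer):
        if self.remaining_seconds() > 0:   # answers are only accepted before time runs out
            self.answers[question_id] = answer

    def finish(self, answer_key):
        """Triggered by the Finish button or automatically when the timer expires."""
        correct = sum(1 for q, key in answer_key.items() if self.answers.get(q) == key)
        return round(100 * correct / len(answer_key))
```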
The effectiveness of using LEKAS applications in learning activities
The learning process in economic education still mostly uses conventional learning, and e-learning is not yet routinely used, so students could previously only learn through books and explanations from lecturers. In today's era, the use of smartphones is increasing and almost all students use them [32]. The preliminary analysis showed that the problems students face regarding learning motivation, interest in learning, and learning outcomes are strongly influenced by the use of technology in the learning process.
After the LEKAS application was tested, students filled out a questionnaire and were interviewed by the researcher. The students expressed that they were happy when learning was packaged using technology, especially smartphones. Students felt they could study the material and access the tests given by the lecturer anytime and anywhere [33]. Today's students cannot be separated from their smartphones, although they use their devices more for social media and playing games than for studying [34]. Students are easily distracted when a notification comes in while they learn on their device; this is a negative effect when learning relies entirely on applications on the device [34]. Regarding motivation and interest in learning, students are more excited when they are given assignments that involve exploring their curiosity online rather than having to look things up in reference books [35]. After using the application, they found it easy to repeatedly study material that was still not understood. Students can adjust the material to their respective learning styles, and they can also adjust their learning rhythm and speed with this application. Students do not feel worried about missing material because they can read it anytime. However, students can only focus on reading material for a limited time, because they become bored and fatigued if they read material on their devices for a long time [36]. They will even open other, more entertaining applications instead of learning continuously.
Students' understanding of the achievement of the learning objectives is measured by providing an evaluation in the form of a written test. The application's evaluation feature supports multiple-choice written tests and essays with short answers. Students take the evaluation according to the lecturer's design, and each question has a time allocation. After finishing the evaluation questions, a score appears showing the extent to which students understand the learning material. Most importantly, students perceive an improvement in the quality of their understanding after studying the material through the LEKAS application. This is in line with the results of [37], which reported an increase in productivity in using the application and an increase in interest and motivation to learn after using the LEKAS application. This suggests that the use of the LEKAS application is more effective than conventional learning.
The features designed on the LEKAS platform include a material feature that contains material as PDF files and videos embedded from YouTube. At the implementation stage, the researcher also conducted application testing on students to find deficiencies that needed to be fixed. The drawback of this application is that it can only be used on certain types of Android-based smartphones, so some students had difficulty entering the application when registering. Students who use iOS-based smartphones are still unable to use this platform as a learning medium. The features offered on this platform are still relatively simple, namely material in the form of PDFs and videos embedded from YouTube. The evaluations that can be compiled are also still relatively simple, only multiple choice and short answers, and short-answer grading can only check for keywords in the correct answers. Students also still experienced many problems when registering during the initial installation. Therefore, the next improvement is to allow all types of Android devices to access this platform.
Conclusion
Thus, it can be concluded that the LEKAS platform could be implemented well with the research subjects, namely the 2019 economic education students. This platform can be used easily and flexibly even though its features are still very simple. Material can be entered in the course menu in PDF and video form, making it easier for students to learn the material. After students study the material, they can measure their understanding and mastery by working on evaluation questions, which can be in the form of multiple choice and short answers, and they can immediately see the results. The drawback of this platform is that it can only be used on Android-based smartphones. Students are very enthusiastic about technology-based learning innovations and feel happy with lectures that can be carried out anywhere and anytime. Many improvements and additional features are still needed to refine the LEKAS platform so that the quality of learning can be improved.
This study also concludes that mobile-based learning using the LEKAS application is more effective than conventional learning, as evidenced by an increase in learning outcomes reflected in the scores obtained in the evaluations. Student responses to the application are also positive, as seen from their enthusiasm and the intensity of their interactions with it. This research is limited to economic materials, so the researcher cannot be sure whether these simple features are suitable for material in the exact sciences. Improvements and additional features will make the LEKAS application more complete and useful. | 6,013 | 2021-05-04T00:00:00.000 | [
"Computer Science",
"Education",
"Economics"
] |
Full-color structured illumination optical sectioning microscopy
Owing to its merits of enhanced resolution and fast three-dimensional (3D) optical sectioning capability, structured illumination microscopy (SIM) has found a variety of applications in biomedical imaging. So far, most SIM systems use monochrome CCD or CMOS cameras to acquire images and discard the natural color information of the specimens. Although multicolor integration schemes have been employed, multiple excitation sources and detectors are required and the spectral information is limited to a few wavelengths. Here, we report a new method for full-color SIM with a color digital camera. A data processing algorithm based on HSV (Hue, Saturation, and Value) color space is proposed, in which the recorded color raw images are processed in the Hue, Saturation and Value color channels and then reconstructed into a 3D image with full color. We demonstrate 3D optical sectioning results on samples such as mixed pollen grains, insects, micro-chips and the surface of coins. The presented technique is applicable to circumstances where color information plays a crucial role, such as in materials science and surface morphology.
Over the past few decades, a variety of optical sectioning technologies have been invented and have played increasingly important roles in biomedical research. Confocal microscopy and two-photon microscopy are the most regularly used techniques for obtaining high-quality optically sectioned images [1][2][3] . Fundamentally, both of them involve raster-scanning a point source of excitation laser and detecting the fluorescence signal with photomultiplier tube detectors. With the emergence of new fluorescent molecular probes that can bind proteins with high specificity, multiple labeling allows the visualization of multiple protein interactions in living cells simultaneously 4 . Besides, multiple labeling also provides improved imaging contrast and definition. High-end multicolor scanning microscopes developed so far are based on the multi-channel integration geometry. Multiple laser excitation sources and detectors for different color channels are employed, and the signals from each channel (red, green, and blue) are detected sequentially and combined into a single file [5][6][7] . Laser scanning microscopies have axial sectioning capability and high spatial resolution, but the high laser power may be harmful to living tissues. Besides, non-optical-sectioning methods for full-color 3D imaging, such as spectroscopic optical coherence tomography based on low-coherence interferometry 8,9 , have also been developed. However, their spatial resolutions are limited to a few microns.
Wide-field fluorescence microscopies with optical sectioning power, including light-sheet microscopy [10][11][12] and structured illumination microscopy (SIM) [13][14][15][16][17], have recently received considerable attention due to their high spatial resolution, short image recording time, and reduced photobleaching. SIM has found numerous applications in time-lapse imaging of living tissues and cellular structures. It was first introduced by Neil et al. 13 as a method of eliminating the out-of-focus background encountered in wide-field microscopy, and it has been demonstrated that the axial resolution of SIM is equal to that of laser scanning microscopy 17. Furthermore, Gustafsson et al. [18][19][20][21] exploited SIM to improve the spatial resolution of microscopy, i.e., super-resolution SIM. In this paper, we focus only on optical-sectioning SIM. By projecting a sinusoidal fringe pattern onto the specimen, SIM images the fringe efficiently only on the parts of the specimen that are in focus. The out-of-focus background can be removed by decoding the in-focus information. The most commonly used decoding algorithm was proposed by Neil et al. 13: for each slice, three raw images with mutual phase shifts of 2π/3 are obtained, and by taking the root mean square (RMS) of the differences between each pair of adjacent images, an optically sectioned image can be reconstructed. Recently, Mertz et al. 22,23 introduced an algorithm called HiLo imaging to synthesize an optically sectioned image, which uses two wide-field images acquired under structured and uniform illumination, respectively. The optically sectioned image is reconstructed by combining the in-focus high- and low-frequency components.
So far, most SIM systems use monochrome CCD or CMOS cameras to acquire images and discard the natural color information of the specimens. Here we propose a full-color SIM system that uses a color CMOS camera. A color digital camera produces color images through the use of a Bayer filter, a special filter array that arranges trichromatic color filter elements over the pixels of the image sensor. The three trichromatic channels are not independent of each other, and there is necessarily some spectral overlap between them. Because each pixel is filtered to respond to only one of the three colors, the signal from a single pixel cannot determine the trichromatic values by itself. To acquire a full-color image, color restoration (demosaicing) algorithms developed by the camera manufacturers are used to calculate a complete set of trichromatic values for each pixel, using the neighboring pixels of the relevant colors. For multicolor microscopes based on the multi-channel integration geometry, a narrow band-pass filter is placed in front of each detector, so the integrated image cannot restore the full-color information of the specimen. The use of a color digital camera could therefore provide a much broader color range than the multi-channel integration geometry.
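For illustration only, a minimal demosaicing sketch is given below. It assumes an RGGB Bayer layout and uses simple 3×3 neighborhood averaging, which is far cruder than the manufacturers' proprietary color-restoration algorithms mentioned above; all function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    """Crude bilinear demosaic of an RGGB Bayer mosaic (2D float array).

    Each output color value is the average of the samples of that color
    found in the 3x3 neighborhood of the pixel."""
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red sample sites
    masks[0::2, 1::2, 1] = True   # green sample sites on red rows
    masks[1::2, 0::2, 1] = True   # green sample sites on blue rows
    masks[1::2, 1::2, 2] = True   # blue sample sites
    kernel = np.ones((3, 3))
    rgb = np.empty((h, w, 3))
    for c in range(3):
        samples = np.where(masks[..., c], raw, 0.0)
        counts = convolve(masks[..., c].astype(float), kernel, mode="mirror")
        rgb[..., c] = convolve(samples, kernel, mode="mirror") / np.maximum(counts, 1.0)
    return rgb
```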
The CCD or CMOS cameras used in the majority of wide-field microscopes have a pixel depth of 12 or 16 bits, providing 4096 or 65536 gray scales. In most instances, this is sufficient for processing the typical number of photoelectrons that a single pixel can hold. However, because the optical-sectioning decoding algorithm executes a subtraction operation (see Eq. (1)), it reduces the number of gray scales in the images 24, which distorts the color restoration of the multi-channel integration scheme. Image processing methods can stretch the existing pixel values to fill the full dynamic range, but they cannot add any new information. As a result, when three post-processed images from the R, G and B channels occupying, for example, only 1000 of a possible 4096 gray scales are stretched to fill the full dynamic range, the resulting integrated color image exhibits a color-cast artifact.
To solve the above issue and realize full-color structured illumination optical sectioning microscopy, we propose a new scheme based on our previously developed Digital Micro-mirror Device (DMD)-based LED-illumination SIM system 16, in which a color CMOS camera and white-light LED illumination are employed to realize SIM with full natural color. In contrast to other SIM techniques, the DMD-based LED-illumination SIM is cost-effective, easily switchable between multiple wavelengths, and free of speckle noise. A new data processing algorithm based on the HSV (Hue, Saturation, Value) color space is developed, which transforms the three recorded raw images from the RGB color space into the HSV color space, calculates the sliced images in the three HSV channels separately, and then recombines them into a full-color 3D image. To our knowledge, this is the first realization of 3D SIM imaging with full natural color for both fluorescence and reflection imaging.
Results and Discussion
In order to evaluate the spatial resolution of the color SIM microscope, we used green fluorescent microspheres 170 nm in diameter (520 nm emission @ 475 nm excitation) as test samples. The size of the microspheres is far below the resolution limit of the microscope with the 20×/NA 0.45 objective. The intensity distributions of the 170 nm fluorescent beads in the lateral and axial planes are shown in Fig. 1d,e. We sliced 225 layers at an axial stepping interval of 50 nm and captured 675 raw images, giving an optical sectioning depth of 11.2 μm. Gaussian fits and the statistics of the full width at half maximum (FWHM) over 50 microspheres indicate that the lateral and axial resolutions of the system are 0.58 ± 0.02 μm and 2.4 ± 0.1 μm, respectively, which are close to the theoretical resolution limits.
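A minimal sketch of this resolution estimate is shown below: a 1D Gaussian is fitted to the intensity profile through a bead and the FWHM is read off from the fitted width. The function names and the initial-guess heuristics are illustrative and are not the authors' actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) + offset

def fwhm_from_profile(x, intensity):
    """Fit a 1D Gaussian to a bead intensity profile (both 1D arrays, e.g.
    position in micrometres vs. measured intensity) and return the FWHM."""
    p0 = [intensity.max() - intensity.min(),     # amplitude guess
          x[np.argmax(intensity)],               # centre guess
          (x[-1] - x[0]) / 10.0,                 # width guess
          intensity.min()]                       # offset guess
    popt, _ = curve_fit(gaussian, x, intensity, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])  # FWHM = 2*sqrt(2 ln 2)*sigma
```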
An experimental result for a mixed pollen grain specimen that emits strong auto-fluorescence signals under illumination with 405 nm light is shown in Fig. 2. A dichroic mirror (420 nm long-pass, 45° incident angle) is used to separate the illumination and the fluorescence signals. Another long-pass filter (420 nm long-pass, 0° incident angle) is placed before the color camera to block residual illumination light. This long-pass filter set guarantees that a broad range of auto-fluorescence signals in the visible range can be detected by the color camera. Figure 2a presents a sequence of 251 layers (1280 × 1024 pixels/layer) of a volume 50 μm in depth, sliced at an axial interval of 200 nm. The total data acquisition time is 164 seconds, that is, (215 ms exposure time + 0.031 ms DMD switching time) × 3 patterns × 251 layers + 10 ms Z-stage settling time × 250 axial slice intervals. The maximum-intensity projection of the 251 layers along the Z-axis is shown in Fig. 2b. The data acquisition speed of the color SIM system is restricted by the sensitivity and speed of the color CMOS camera. In the present system, a color CMOS camera with 10-bit gray depth and a 60 fps full-frame rate is adopted. The acquisition speed could be much higher with more sensitive and faster cameras, for instance a scientific color digital camera with a cyan-magenta-yellow filter array, which has higher transmittance efficiency.
It is known that different pollen grains have different shapes and sizes. They also emit different colors of auto-fluorescence under the same excitation. To demonstrate this feature, we used our setup to image the mixed pollen grain specimen at different positions. We then placed these 3D sectioned images together in a single volume to exhibit the colorful pollen grains. Figure 2c presents the maximum-intensity projection of 135 planes under 405 nm excitation, obtained by stitching four positions in the specimen. The differently shaped and colorful pollen grains are clearly revealed. Supplementary Movie 1 shows the "3D color image" of the mixed pollen grains after 3D reconstruction, viewed from different angles.
The maximum-intensity projections of a Congo red stained mite and of Clavicornaltica hindleg femora are shown in Fig. 3a,b, respectively. For the mite, we sliced 105 layers at 800 nm axial intervals and captured 315 raw images for each data set, excited at a wavelength of 405 nm. The whole image is stitched from 8 data sets, and the field-of-view (FOV) is 0.407 × 0.610 mm². It can be recognized that the mouth and the tentacles of the mite emit fluorescence of colors distinct from that of the body. The Clavicornaltica hindleg femora, excited at a wavelength of 450 nm, emit an auto-fluorescence signal; here a 475 nm long-pass dichroic mirror and filter are used to separate the excitation and fluorescence signals. We sliced 223 layers at 800 nm axial intervals and captured 669 raw images for each data set. The whole image is stitched from 4 data sets, and the FOV is 0.410 × 0.324 mm². Supplementary Movie 2 and Supplementary Movie 3 present the "3D color images" of the mite and of the Clavicornaltica hindleg femora after 3D reconstruction, viewed from different angles.

Since the DMD-based color SIM system uses an epi-detection geometry, it is also suitable for mapping surface morphology. For example, we used the system to acquire the 3D structure of a metallic surface that strongly reflects visible light. Here a white-light LED is applied to illuminate the metallic sample. In order to collect the light reflected from the surface, a 50/50 beam-splitter is used instead of the dichroic mirror, and the long-pass filter before the camera is also removed. Before the experiment, the white balance of the color CMOS camera was calibrated using a silver-coated mirror as a sample. Figure 4 shows a 3D color image of a piece of a DMD micro-chip rendered from 25 optically sectioned planes along the Z-axis with the 20× objective, where different colors correspond to different materials. The data acquisition time for the 25 slicing layers is 2.3 seconds, that is, (27 ms CMOS exposure time + 0.031 ms DMD switching time) × 3 patterns × 25 layers + 10 ms Z-stage settling time × 24 axial slice intervals. Because of the strong reflective signal from the sample, the camera exposure time is set much shorter than for the mixed pollen grains, and the data acquisition time is thus greatly reduced. Supplementary Movie 4 shows the "3D color image" of the micro-chip after 3D reconstruction, viewed from different angles.
For the 3D morphology of objects, techniques such as surface profiling and phase imaging are well known [25][26][27]. Both techniques rely on a phase-shifting scheme: the phase information of an object is retrieved with a phase unwrapping algorithm that recovers the true optical path difference map of the specimen. For objects with sudden and abrupt phase changes, however, such as specimens with very rough surfaces or large step heights, this method fails 28. Recently, Nguyen et al. 29 proposed a method based on a multi-view image fusion scheme for capturing natural-color 3D models of insects. It uses the color texture extracted from the specimen to map onto the 3D model of the reconstructed object, but this method cannot capture structural details such as steps or hairs. Differing from the above methods, SIM can measure objects with sudden and abrupt phase changes with high resolution and high SNR, and moreover with natural color restoration of the specimen. It is also possible to obtain the surface morphology of a large specimen whose dimensions exceed the FOV of the objective lens. To do this, we implemented an image stitching technique over an array of multiple images captured by scanning the sample on the motorized XY stage. Distinctive features are extracted from adjacent FOVs and then matched. Exact overlap among sub-images and identical exposures are crucial to avoiding conspicuous object cuts and color inconsistency 30. This technique provides submicron resolution and a large FOV of up to 2.25 mm² in the experiment. As an example, we acquired the surface morphology of a coin with the 10× objective, whose FOV is 0.457 × 0.365 mm². Figure 5 presents a stitched 3D color image of a convex star on a Chinese commemorative coin. The whole FOV is 1.505 × 1.528 mm², stitched from 20 data sets. For each data set, 173 layers of the volume at a Z-step of 500 nm were captured. In total, 10380 raw color images of 1280 × 1024 pixels were obtained, corresponding to a data volume of about 38 Gbytes. Supplementary Movie 5 shows the "3D color image" of the convex star after 3D reconstruction, viewed from different angles.
In summary, we have proposed a scheme for full-color structured illumination microscopy (C-SIM) based on DMD fringe projection and LED illumination, which has practical value for acquiring broadband fluorescence signals and the natural color of light reflected from object surfaces in three dimensions, rather than the artificial color that is usually generated by post-processing gray-scale data. With the apparatus built, we are able to obtain optical sectioning images with full natural color from fluorescent specimens or reflection-type objects. We have demonstrated 3D optical sectioning results on samples such as mixed pollen grains, insects, micro-chips and the metallic surface of coins. This technique may find applications in fields such as biology (e.g. the study of structural color mechanisms in animals 31), materials science and microelectronics, where color information plays a crucial role.

Figure 6 illustrates the schematic diagram of the proposed color SIM apparatus.
Four high-power LEDs (405 nm, 450 nm, 475 nm and white light) are applied as the illumination sources. A total internal reflection (TIR) prism is used to separate the projection and illumination paths. The LED light is reflected by the TIR prism and illuminates the DMD chip. The modulated light then passes through an achromatic collimating lens and a dichroic mirror, and is focused by the objective lens (either a 10× objective, NA = 0.45, Edmund Optics Inc., USA, or a 20× objective, NA = 0.45, Nikon Inc., Japan) to illuminate the sample. The sample is mounted on an XYZ motorized translation stage (3-M405, Physik Instrumente Inc., Germany) with a minimum movement of 50 nm. For each axial plane in the Z-scan, three fringe-illuminated raw images with mutual phase shifts of 2π/3 are captured. Volume data for different axial layers are obtained by moving the specimen axially to different Z positions to acquire the three-dimensional light intensity distribution of the specimen. Moving the specimen stage in the XY directions under the objective enables the FOV to be extended by means of the image stitching technique. A USB 3.0 color CMOS camera with a maximum full-frame rate of 60 fps (DCC3240C, 1280 × 1024 pixels, 10-bit gray depth, Thorlabs Inc., USA) is used to record the 2D images. DMD control, image collection, stage movement and image stitching are carried out by custom-developed software programmed in C++.
Optically sectioning decoding algorithms. Structured illumination is introduced into wide-field fluorescence microscopy as a method of achieving higher axial resolution and discriminating against the out-of-focus background. A sectioned image is obtained as the root of the sum of squared differences between three raw images of the specimen, each captured under sinusoidal fringe illumination with mutual phase shifts of 2π/3 13:

I_z = [(I_0 − I_120)^2 + (I_120 − I_240)^2 + (I_240 − I_0)^2]^{1/2}.    (1)

Simultaneously, the conventional wide-field image can also be obtained from the same three raw images as their sum,

I_wide = I_0 + I_120 + I_240.

This implies that both the optically sectioned image and the wide-field image of the sample can be obtained from the same raw data.
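As a concrete illustration, a minimal per-channel implementation of this decoding step might look like the sketch below (Python/NumPy). The averaging convention used for the wide-field image is an assumption made for this sketch.

```python
import numpy as np

def decode_slice(i0, i120, i240):
    """RMS optical-sectioning decoding of three phase-shifted raw images
    (Eq. (1)) plus the conventional wide-field image recovered from the
    same raw data.  Inputs are 2D arrays of a single (monochrome) channel."""
    sectioned = np.sqrt((i0 - i120) ** 2
                        + (i120 - i240) ** 2
                        + (i240 - i0) ** 2)
    widefield = (i0 + i120 + i240) / 3.0   # assumed averaging convention
    return sectioned, widefield
```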
It is known that the color gradation of an image depends on the gray scales of the digital camera, and it is always desirable to fill the available pixel depth during data acquisition. A scanning microscope measures a single pixel at a time, which means that, for example, it spends only 0.2 microseconds measuring the fluorescence signal of each pixel at a speed of 5 frames per second for an image of 1000 × 1000 pixels. For CCD/CMOS array detectors, in contrast, a two-dimensional image is captured for all pixels in parallel, which allows a much longer per-pixel measurement time at an even higher frame rate. More photons can therefore be collected and a larger dynamic range can be obtained. For these reasons, wide-field images are particularly well suited to digital image processing approaches.
Because of the subtraction operation in Eq. (1), the above RMS decoding algorithm loses part of the gray scales of the raw image 24. To illustrate the dynamic range loss caused by the RMS decoding algorithm, we first set the CMOS camera to monochrome mode and recorded the auto-fluorescence signal of a pollen grain with our setup. Figure 7a shows the wide-field image of the specimen, Fig. 7b the optically sectioned image calculated using Eq. (1), and Fig. 7c the product of the normalized Fig. 7b with Fig. 7a. Figure 7d-f gives the histograms of Fig. 7a-c, respectively. Figure 7a has the full dynamic range of the raw data, but the image appears blurred because of background fluorescence from the defocused planes. The RMS algorithm of Eq. (1) removes the background but sacrifices the dynamic range of the raw data; the loss of gray scales can also be observed in the histogram shown in Fig. 7e. This reduction of gray scales results in a color-cast problem during color restoration. To avoid this problem, we first normalize the intensity of Fig. 7b and then multiply it by Fig. 7a. In this way the reduced dynamic range is partially restored, as seen in Fig. 7f. With this treatment, the color-cast problem for the multi-channel integration of the color image is solved in the next step.
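A sketch of this restoration step, assuming the sectioned and wide-field images come from a decoding function like the one above, could read as follows; `occupied_range` is only an illustrative stand-in for inspecting the histograms of Fig. 7d-f.

```python
import numpy as np

def restore_gray_scales(sectioned, widefield):
    """Partial restoration of the dynamic range lost by the RMS subtraction:
    normalise the sectioned image to [0, 1] and multiply it by the
    wide-field image (the operation that produces Fig. 7c from Fig. 7a,b)."""
    norm = sectioned / max(float(sectioned.max()), 1e-12)
    return norm * widefield

def occupied_range(img):
    """Span of gray values actually used by an image; a crude stand-in for
    inspecting the histograms of Fig. 7d-f."""
    return float(img.max() - img.min())
```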
Color restoration in HSV space. The color of images in most electronic display products is formed by combining trichromatic (red, green and blue) light with varying intensities. Because the R, G and B components of a digital image are not fully independent of each other, and because the RGB color space does not separate the luminance component from the chrominance components, an image described in terms of RGB components will suffer from color distortion if the color channels are processed independently in RGB space. In contrast, the HSV color space is much closer to human perception of color. This model is based on the way color is recognized in human vision, in terms of hue, lightness and chroma 32,33. HSV is an intuitive model that decouples luminance information from color information and is therefore well suited to color image processing 34.
The HSV color space is represented by a hexagonal cone, as illustrated in Fig. 8. The angle around the central vertical axis corresponds to "hue", which describes what pure color is present: starting at the red primary at 0°, H passes through the green primary at 120° and the blue primary at 240°, then wraps back to red at 360°. The distance from the vertical axis corresponds to "saturation", which represents the purity of the color and takes values from 0 to 1. The height corresponds to the color brightness in relation to the saturation, with V = 0 meaning black and V = 1 meaning white 35. Figure 9 shows the flowchart of the optical sectioning decoding algorithm for one slice using the proposed full-color structured illumination microscopy. Three raw images with mutual phase shifts of 2π/3 are recorded by the color CMOS camera and transformed from the RGB color space into the HSV color space to obtain the three H, S and V components. The RMS decoding algorithm is then applied in the three HSV channels separately to obtain the sectioned image of each channel, I_iz(x, y), as well as the wide-field images I_iwide(x, y), where i = H, S, V. After that, the sectioned and wide-field images of the three HSV channels are recombined and transformed back into RGB space so that the images can be displayed on standard devices. It is essential for color restoration to multiply the normalized sectioned image with the wide-field image, in order to recover the gray scales lost through the RMS decoding algorithm.
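Putting the pieces together, a compact sketch of the per-slice pipeline of Fig. 9 is given below. It uses matplotlib's RGB↔HSV conversion for brevity; the input images are assumed to be demosaiced RGB arrays scaled to [0, 1], and the averaging convention for the wide-field image is again an assumption of this sketch.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def color_sim_slice(raw0, raw120, raw240):
    """Per-slice sketch of the flowchart in Fig. 9: three phase-shifted color
    raw images (H x W x 3 RGB arrays scaled to [0, 1]) are transformed to HSV,
    the RMS decoding and gray-scale restoration are applied to the H, S and V
    channels separately, and the result is converted back to RGB."""
    hsv = [rgb_to_hsv(img) for img in (raw0, raw120, raw240)]
    out = np.empty_like(hsv[0])
    for ch in range(3):                           # H, S, V processed separately
        i0, i120, i240 = (img[..., ch] for img in hsv)
        sectioned = np.sqrt((i0 - i120) ** 2 + (i120 - i240) ** 2 + (i240 - i0) ** 2)
        widefield = (i0 + i120 + i240) / 3.0      # assumed averaging convention
        norm = sectioned / max(float(sectioned.max()), 1e-12)
        out[..., ch] = norm * widefield           # recover lost gray scales (cf. Fig. 7)
    return hsv_to_rgb(np.clip(out, 0.0, 1.0))
```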
"Physics"
] |
Time of Arrival Complementing Method for Cooperative Localization of a Target by Two-Node UWB Sensor Network
Recently, the detection, localization and tracking of moving persons in emergency situations using ultrawideband (UWB) sensors have attracted the attention of researchers and end users alike. Experience with single UWB sensors in real applications has shown that their reliability and accuracy in person detection and localization may be considerably reduced. In contrast, improved performance of a UWB sensor-based localization system can be provided by a UWB sensor network, which benefits from cooperation among spatially distributed sensor nodes. This cooperation extends the coverage of the monitored area and improves detection capability and localization performance, especially in the case of complex environments and multiple targets. In this paper, we introduce a new approach to the cooperative localization of a target, referred to as the time of arrival complementing method (TOACOM). TOACOM, developed for a two-node UWB sensor network, is based on time of arrival (TOA) complementing and combining algorithms in combination with the conventional direct calculation method (DC). Its properties are analyzed for through-the-wall localization of a single moving person. The obtained results show the superior performance of TOACOM compared with person localization by a single UWB sensor or by a two-node sensor network. In the conclusion, we outline how the presented version of TOACOM can be further modified for a multiple-target scenario and an N-node sensor network.
Introduction
The capability to detect and localize human beings is one of the most attractive features of UWB radars. Sensors of this kind, operating in the frequency band DC-5 GHz, allow the detection and tracking of living persons not only in line-of-sight scenarios, but also of persons located behind non-metallic obstacles (e.g. behind a wall). Therefore, they can be very helpful in applications such as searching for people who have survived a natural disaster but are trapped under rubble (e.g. after earthquakes, tsunamis, landslides, avalanches, building collapses, etc.), or in the detection and tracking of criminals, terrorists, hostages and soldiers located behind a wall (e.g. in support of law enforcement and military troops) [1].
For that purpose, handheld UWB radar systems can be used to advantage. Usually, they are equipped with one transmitting and two receiving channels; hence, a trilateration method can be applied to target localization. The reliability and accuracy of UWB sensor performance depend on their construction and operational parameters (e.g. operational frequency band, emitted power level, maximum range, range resolution, radar antenna system, etc.), on the complexity of the investigated scenarios (single-person or multiple-person scenarios, operation in the presence of interference or jamming, etc.) and on the complexity of the environment in which the sensors are used (e.g. the presence of metal objects or large reflectors, non-homogeneous objects, etc.). Our experience with applying a single UWB sensor to complex scenarios and complex environments has shown that the reliability and accuracy of its performance under these conditions may be reduced. Such degraded performance of UWB sensors is characterized by a drop in detection probability and in the localization accuracy of the target. These difficult conditions are typical of standard applications of UWB sensors to person detection, localization and tracking; it has therefore been necessary to look for suitable approaches to improving their operation. It has been shown (e.g. in [2][3][4][5][6][7][8]) that a proper approach to improving target detection and localization by UWB sensors is to use a multistatic radar (more precisely, a sensor equipped with more receiving antennas than a standard handheld localization system) or several (at least two) networked independent handheld systems for monitoring the area of interest. Such a UWB sensor network benefits from the cooperation of spatially distributed sensor nodes. This cooperation extends the coverage of the monitored area and improves the detection capabilities and localization precision, especially in the case of multiple targets and complex environments.
A very interesting approach to target localization based on a multistatic localization system has been introduced in [8]. In that paper, an asynchronous elliptical position measurement system employing one transmitter (Tx) and N receivers (Rx) was proposed for line-of-sight (LOS) indoor localization. The system consists of a UWB transmitter and energy-detection receivers whose positions are known. The position measurement process starts with the locator (Tx) emitting a UWB pulse. Upon arrival, the pulse is amplified and retransmitted by the target to be located. Signals from both the locator and the target are captured by the receivers. Together with the knowledge of the transmitter and receiver positions, the absolute range travelled by the pulse is calculated. The sum of the transmitter-target range and the target-receiver range defines an ellipse, and the target resides at the intersections of several such ellipses. For the target localization, three least-squares (LS) position estimation algorithms were considered, namely the ordinary LS, the constrained LS and a combination of the LS method with the iterative Gauss-Newton method (referred to as the recursive LS). The experimental results obtained for N = 4 showed that the described approach can provide target positioning with a mean target position estimation error of about 1 cm and a standard deviation smaller than 12 cm.
On the other hand, several approaches to the cooperative localization of moving persons by UWB sensor networks have been suggested in the past. These methods include an imaging method [2], the application of 2D probability hypothesis density filters (PHD filters) [3], [4], a UWB sensor network with a centralized architecture employing a single or multiple target tracking system (STT or MTT) [5], [6] and the method of joining intersections of ellipses (JIEM) [7].
The imaging method and PHD filters are based on the creation and processing of so-called radar images or 2D PHD functions where the targets are represented by moving hot spots. These methods provide a direct approach to the fusion of data obtained from particular receiving channels of different sensors. Because they are based on 2D signal processing (2D image, 2D PHD filters), they are characterized by high computational complexity, which is their substantial disadvantage.
On the other hand, the UWB sensor network with a centralized architecture [5] is based on the fusion of data representing the target coordinates estimated by the individual sensor network nodes. In a sensor network of this kind, the STT/MTT system can be regarded as the key method for the cooperative localization and tracking of moving targets [9], [10]. STT/MTT provides data association, fusion and target tracking and hence very efficient estimation of target positions with acceptable computational complexity. However, because of the complexity of the monitored environment, there are situations in which not all receiving channels of a sensor network node are capable of detecting the targets. Under these conditions, the radar is unable to determine the target coordinates, and hence the localization and tracking efficiency of a UWB sensor network with a centralized architecture can be decreased. In order to overcome this problem, data fusion at the level of the estimated times of arrival (TOAs) associated with the targets detected by the individual receiving channels can be used. This idea has been exploited, e.g., by the JIEM algorithm [7]. This method is based on the combination of the direct calculation method (DC) and the creation of a suitable cluster of potential target positions. It has been shown in [7] that JIEM can improve target localization accuracy in comparison with DC. However, a deeper analysis of JIEM has revealed some shortcomings of this approach. We have found that outliers can occur in the estimates of the target trajectory obtained by JIEM. This behavior of JIEM, caused by the creation of imperfect clusters of possible and, at the same time, proper target positions, will also be illustrated in this paper. This deficiency of JIEM could be eliminated, e.g., by applying a more robust method for creating the cluster of proper target positions. Moreover, high computational complexity is another drawback of JIEM.
In order to overcome the outlined problems, a new approach to the cooperative localization of a target, referred to as the time of arrival complementing method (TOACOM) and developed for a two-node UWB sensor network, is presented in this paper. TOACOM combines DC employing all TOAs estimated by the sensor network nodes, the estimation of target positions by combining TOAs provided by different sensors (the TOA combining algorithm), the TOA complementing algorithm, and finally target localization (the application of the arithmetic average). The performance of TOACOM will be compared with that of simple DC (for single UWB sensors), JIEM and a two-node sensor network with a centralized architecture (SN). The comparison of these methods is carried out by processing radar signals obtained from a through-the-wall measurement with a two-node UWB sensor network in a single-person scenario. The obtained results will show the superior performance of TOACOM in terms of a higher probability of detection and better target localization accuracy in comparison with DC, JIEM and SN. TOACOM as a new cooperative localization method was originally introduced in [11] and [12]. Compared to those papers, this contribution presents not only a short description of TOACOM, but also a more detailed description of the problems to be solved by TOACOM (Sec. 2), a slightly adapted description of TOACOM (Sec. 3), a deeper analysis of the performance properties of TOACOM (Sec. 4) and an outline of the modification of TOACOM for a multiple-moving-person scenario and an N-node sensor network (Sec. 5).
Problem Statement
Let us consider the basic scenario of through-the-wall localization of a moving target by means of two UWB radar systems, denoted as radar system A (RS_A) and radar system B (RS_B) (Fig. 1). Here, every radar system is equipped with one transmitting antenna (Tx_R, R = A, B) and two receiving antennas (Rx_R,i, R = A, B, i = 1, 2). In the analyzed scenario, it is assumed that the antenna positions are known and that their coordinates are given by Tx_R = (x_R,t, y_R,t) and Rx_R,i = (x_R,i, y_R,i) for the transmitting and receiving antennas, respectively. We assume that the antennas of RS_A (RS_B) are located on the x-axis (y-axis) for x > 0 (y > 0), and the monitored area is defined as the part of the xy-plane with y > 0. In order to localize and track a moving target, the transmitting antenna emits electromagnetic waves into the monitored area, these waves are reflected from objects located there (including the target), and finally the reflected waves are received by the receiving antennas. Raw radar signals retrieved from the individual radar systems can be interpreted as a set of impulse responses of the surroundings through which the electromagnetic waves propagated [1]. Hereinafter, we assume that the sensor systems are synchronized in such a way that both radar devices are controlled by approximately the same system clock. We can then assume that the radargrams obtained from the measurements by all four receiving antennas have the same propagation and observation time axes and that the radargram samples are taken at the same time instants. No other kind of radar system synchronization is assumed. For target track estimation by a single UWB sensor (RS_A or RS_B), the complete raw radar signal processing procedure for moving person detection, localization and tracking, consisting of phases such as background subtraction, target detection, TOA estimation, wall effect compensation, trace connection, localization and tracking, can be used (e.g. [13]). In terms of the objectives of this paper, TOA estimation can be considered the most important and interesting phase of this procedure; therefore, we outline our approach to TOA estimation in the next passage.
Similarly to the positioning methods introduced in [8], the localization method we develop in this paper is also based on the estimation of the TOA corresponding to the distance Tx-target-Rx. In contrast to [8], however, we deal with through-the-wall localization (i.e. not an LOS scenario) of a tag-free moving person who does not retransmit the signals emitted by the radar (i.e. the target echo-to-noise-and-clutter ratio is very low). As was shown in [14], TOA estimation is mainly affected by noise, multipath components, obstacles and interference. In dense multipath channels, which have to be considered in our scenarios, the first path is often not the strongest, making the estimation of the TOAs challenging. Moreover, a human being represents a so-called distributed target, i.e. the same target can produce several reflections of an incident wave at the same time instant, and these reflections, propagating through a multipath environment, are received multiple times but with different TOAs.
Taking these facts into account, we have to estimate the target TOA (i.e. one TOA per target per time instant) for a time-variable, dense multipath channel (environment) and a very low target echo-to-noise-and-clutter ratio. Therefore, for TOA estimation we cannot use simple solutions based on energy detectors. The TOA estimator considered in this paper is based on the combination of a CFAR detector [15] and a trace connection TOA estimator [16]. The application of the CFAR detector provides a first rough estimate of the TOAs of the potential targets. The trace connection TOA estimator applied after CFAR then provides exactly one TOA estimate per target. For that purpose, the specific association methods described in [16] are used (association of the CFAR detector responses corresponding to the same target, and association of the outputs of the receiving channels corresponding to the same target). A detailed description of the trace connection TOA estimator is too complex to include here and is beyond the scope of this paper; the interested reader can find it in [16].
Let us return to the analyzed scenario. Let TOA_R,i for R = A, B, i = 1, 2 represent the estimate of the TOA of the electromagnetic wave transmitted by Tx_R, reflected by the target T = (x, y) and received by Rx_R,i. We presume that TOA_R,i has been estimated by the algorithms mentioned in the previous paragraph. Then the distance d_R,i between the transmitting antenna Tx_R, the target T and the receiving antenna Rx_R,i (usually referred to as the bistatic range) can be expressed as

d_R,i = c · TOA_R,i    (1)

or

√((x − x_R,t)^2 + (y − y_R,t)^2) + √((x − x_R,i)^2 + (y − y_R,i)^2) = d_R,i,    (2)

where c is the propagation velocity of the electromagnetic waves emitted by the radar; in our consideration, c is taken as the free-space propagation velocity. The expression (2) represents the equation of an ellipse with the foci Tx_R = (x_R,t, y_R,t) and Rx_R,i = (x_R,i, y_R,i) and with the length of the semimajor axis equal to d_R,i/2. Thus, a group of four ellipses for the Tx_R-Rx_R,i pairs (R = A, B, i = 1, 2), with foci at Tx_R and Rx_R,i, can be created for all possible values of d_R,i [7]. Since the target coordinates have to satisfy (1) and (2), and the coordinates of the transmitting and receiving antennas are known, the target coordinates can be determined as the intersection of the ellipses formed by two different Tx_R-Rx_R,i pairs. These intersections can be obtained by solving the pair of corresponding nonlinear equations of the form (2). If the target coordinates are computed as the intersection of two ellipses corresponding to the same radar system, this approach is referred to as DC [7]. The outlined approach to target localization through the evaluation of ellipse intersections is usually referred to as the geometrical interpretation of target localization. Since this approach is very visual, it will be used throughout this paper. Of course, an analytical calculation of the intersections of two ellipses (in general located in any mutual position) is still necessary for target localization. Because of the limited scope of this paper, a detailed solution of this mathematical problem is not provided here; readers can find a comprehensive solution of this mathematical task, e.g., in [7].
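For illustration only, a numerical (least-squares) alternative to the closed-form ellipse intersection referenced to [7] might look like the sketch below; the propagation-speed constant and the initial guess are assumptions of this sketch, not values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # assumed propagation velocity (free-space speed of light), m/s

def dc_localize(tx, rx1, rx2, toa1, toa2, guess=(0.0, 2.0)):
    """Direct-calculation (DC) sketch: find the point whose summed distances
    to the foci (tx, rx_i) equal the bistatic ranges d_i = c * TOA_i for both
    receiving channels of one radar node, i.e. the intersection of the two
    ellipses, solved here numerically instead of in closed form."""
    tx, rx1, rx2 = map(np.asarray, (tx, rx1, rx2))
    d = np.array([C * toa1, C * toa2])

    def residuals(p):
        p = np.asarray(p, dtype=float)
        return [np.linalg.norm(p - tx) + np.linalg.norm(p - rx1) - d[0],
                np.linalg.norm(p - tx) + np.linalg.norm(p - rx2) - d[1]]

    return least_squares(residuals, guess).x  # estimated target coordinates (x, y)
```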
Let us return to the basic scenario outlined in Fig. 1. Using TOA_R,i for R = A, B, i = 1, 2, four ellipses E_i for i = 1, 2, 3, 4 can be constructed. A possible system of ellipses E_i for perfect estimates of TOA_R,i is sketched in Fig. 2. It can be seen in this figure that there is only one joint intersection of all the ellipses in the monitored area. This intersection, estimated by solving (2), represents the target position. Unfortunately, TOA_R,i is in practice never estimated with zero error. This is due to the finite range resolution of the radar, the radar antenna layout, the low target echo-to-noise-and-clutter ratio, etc. The scenario for imperfect estimates of TOA_R,i for R = A, B, i = 1, 2 is outlined in Fig. 3. It can be observed from this figure that there are four intersections representing possible positions of the target. Moreover, there are situations in which the target is not detected by some receiving channel. This is usually due to the complexity of the environment (e.g. shadowing effects, the presence of relatively large metal components and other strong reflectors in the monitored area, etc.), the radar antenna patterns, the low target echo-to-noise-and-clutter ratio, etc. If the target is not detected, it is not possible to estimate the corresponding TOA, and hence some TOAs, and their corresponding ellipses, are missing. Depending on which TOAs are missing, the target can be localized using DC by only one sensor (when the pair of TOAs corresponding to the same sensor is available), or the target cannot be localized using DC at all (when the pair of TOAs corresponding to the same sensor is not available).
Summarizing these facts, during the standard operation of a two-node UWB sensor network applied to person localization, we can create 0-4 ellipses having 0-4 intersections (i.e. 0-4 possible positions of the target) in the monitored area. The problem to be solved in this paper is estimating the target coordinates in this scenario. For that purpose, we introduce TOACOM in the next section.
TOA Complementing Method
Let us assume that the raw radar data gathered by the individual radar systems have been processed by the radar signal processing procedure described in [13]. With the exception of the localization phase, the other phases of this procedure are independent of the number of radar systems applied to target tracking. Therefore, in this section we focus on the solution of the localization task, i.e. the estimation of the target coordinates T = (x, y). It is assumed that the input data of the localization phase are represented by the set of TOA_R,j for R = A, B, j = 1, 2 obtained as the result of the TOA estimation phase. As mentioned in the previous section, some TOAs can be missing. Depending on the number of estimated TOAs, the localization of the target by TOACOM can be described as follows (a numerical sketch of the key complementing step is given after this list):

1. No TOA or only one TOA is estimated. In this case, the target position cannot be estimated.

2. TOA_A,1 and TOA_A,2 from RS_A have been estimated; TOA_B,1 and TOA_B,2 from RS_B are missing. Thus, there is a pair of ellipses E_1 and E_2 for the Tx_A-Rx_A,i pairs (i = 1, 2). The target position T is given by the intersection of the ellipses E_1 and E_2 (Fig. 4). The target coordinates can be obtained by DC [7].

3. TOA_B,1 and TOA_B,2 from RS_B have been estimated; TOA_A,1 and TOA_A,2 from RS_A are missing. Thus, there is a pair of ellipses E_3 and E_4 for the Tx_B-Rx_B,i pairs (i = 1, 2). The target position T is given by the intersection of the ellipses E_3 and E_4 (Fig. 5). The target coordinates can be obtained by DC [7].

4. Only one TOA_A,i (i = 1 or i = 2) from RS_A and only one TOA_B,i (i = 1 or i = 2) from RS_B have been estimated; the other TOAs are missing. Then the target position is given by the intersection of the ellipse E_1 or E_2 with the ellipse E_3 or E_4 (Fig. 6). The ellipses E_1 and E_2 are determined by the pairs Tx_A-Rx_A,i (i = 1 or i = 2) and the ellipses E_3 and E_4 by the pairs Tx_B-Rx_B,i (i = 1 or i = 2). The target coordinates T = (x, y) can be determined using Bezout's theorem [17]. This step of TOACOM is referred to as the TOA combining algorithm.

5. Three TOAs have been estimated and one TOA is missing. Let us assume, e.g., that TOA_A,2, TOA_B,1 and TOA_B,2 have been estimated and TOA_A,1 is missing. Then the ellipses E_2, E_3 and E_4, determined by the pairs Tx_A-Rx_A,2, Tx_B-Rx_B,1 and Tx_B-Rx_B,2, can be created. The potential target position estimate, referred to as T_B, is given by the intersection of E_3 and E_4 (Fig. 5); the coordinates of T_B can be obtained by DC [7]. In order to use the estimated TOA_A,2 for target localization by RS_A, TOA_A,1 also has to be determined. For that purpose, the following algorithm, referred to as the TOA complementing algorithm, can be used. First, the intersection of E_2 with E_3, referred to as P_1, and the intersection of E_2 with E_4, referred to as P_2, are computed (Fig. 7); ellipse intersections located outside the monitored area are removed. In the next step, point P is constructed (Fig. 7); its coordinates are given as the average of the corresponding coordinates of points P_1 and P_2. After that, the missing TOA_A,1 can be computed as TOA_A,1 = (|Tx_A, P| + |P, Rx_A,1|)/c, where the symbol |X, Y| denotes the Euclidean distance between points X and Y. Then, using the computed TOA_A,1, the ellipse E_1 can be constructed. The potential target position estimate, referred to as T_A, is obtained as the intersection of E_1 and E_2. The final estimate of the target coordinates T = (x, y) is obtained as the arithmetic average of the corresponding coordinates of points T_A and T_B (Fig. 3).

6. All four TOA_R,i for R = A, B and i = 1, 2 have been estimated; no TOA is missing. In this case, the potential target positions T_A and T_B are obtained as the intersections of E_1 and E_2, and of E_3 and E_4, respectively. In contrast to JIEM, the final estimate of the target coordinates T = (x, y) is obtained only as the arithmetic average of the points T_A and T_B (Fig. 3). Because these independently estimated points represent, with high probability, positions of the target and not ghosts, we expect the performance of TOACOM to be more robust than that of JIEM, which uses almost all intersections of the four ellipses.
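As a rough illustration of cases 5 and 6, the sketch below chains a numerical ellipse intersection in the spirit of the DC sketch given earlier; it omits the removal of intersections outside the monitored area, and all names, units and the initial guess are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def ellipse_intersection(f1a, f1b, d1, f2a, f2b, d2, guess=(0.0, 2.0)):
    """Numerical intersection of two ellipses, each given by a pair of foci
    and a bistatic range (sum of distances to the foci)."""
    foci = [tuple(map(np.asarray, (f1a, f1b))), tuple(map(np.asarray, (f2a, f2b)))]
    d = (d1, d2)

    def residuals(p):
        p = np.asarray(p, dtype=float)
        return [np.linalg.norm(p - fa) + np.linalg.norm(p - fb) - di
                for (fa, fb), di in zip(foci, d)]

    return least_squares(residuals, guess).x

def toacom_three_toas(tx_a, rx_a1, rx_a2, tx_b, rx_b1, rx_b2, d_a2, d_b1, d_b2):
    """Sketch of TOACOM case 5: TOA_A,1 is missing and the other three TOAs
    are available, passed in here as bistatic ranges d = c * TOA."""
    # T_B from the complete pair of radar-B ellipses (plain DC).
    t_b = ellipse_intersection(tx_b, rx_b1, d_b1, tx_b, rx_b2, d_b2)
    # TOA complementing: intersect E2 with E3 and with E4, then average to P.
    p1 = ellipse_intersection(tx_a, rx_a2, d_a2, tx_b, rx_b1, d_b1)
    p2 = ellipse_intersection(tx_a, rx_a2, d_a2, tx_b, rx_b2, d_b2)
    p = 0.5 * (p1 + p2)   # intersections outside the monitored area should be discarded first
    # Reconstruct the missing bistatic range and hence the ellipse E1.
    d_a1 = np.linalg.norm(p - np.asarray(tx_a)) + np.linalg.norm(p - np.asarray(rx_a1))
    t_a = ellipse_intersection(tx_a, rx_a1, d_a1, tx_a, rx_a2, d_a2)
    # Final estimate: arithmetic average of the two potential positions.
    return 0.5 * (t_a + t_b)
```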
Experimental Results
To evaluate the properties of TOACOM, we performed a measurement aimed at through-the-wall localization of a moving person by two M-sequence UWB radar systems (Figs. 8-10). The analyzed scenario is outlined in Fig. 10. The thicknesses of the first and the second brick wall were 24 cm and 28 cm, respectively. The person to be localized and tracked was walking inside a fully furnished room (Fig. 8) from reference position P1, through positions P2, P3, P4, P5, P6 and P7, up to position P8 (Fig. 10). The moving person was detected and localized by means of two M-sequence UWB radar systems, each equipped with one transmitting and two receiving antennas [1]. The radar antenna layouts are outlined in Fig. 10. The system clock frequency of both radar devices was about 4.5 GHz, which results in an operational bandwidth of about DC-2.25 GHz. The impulse responses provided by the radars cover 511 samples regularly spread over 114 ns. The measurement rate was 13.5 impulse responses per second. The total power transmitted by each radar was about 1 mW.
The raw radar data acquired by RS_A and RS_B have been processed by the radar signal processing procedure described in [13]. The true and estimated TOA_R,i for R = A, B and i = 1, 2 are depicted in Fig. 11 and Fig. 13. In these figures, the intervals of missing TOAs can be clearly identified; therefore, the TOA complementing algorithm has been used for the estimation of the missing TOAs. The true and complemented TOA_R,i for R = A, B, i = 1, 2 are then depicted in Fig. 12 and Fig. 14. Using the estimated and complemented TOAs (the latter in the case of TOACOM only), the target trajectory has been estimated by DC for RS_A (DCA), DC for RS_B (DCB), SN, JIEM and TOACOM. In the case of SN, the target coordinates have been estimated as the arithmetic average of the target coordinates estimated independently by RS_A and RS_B. The true and estimated trajectories obtained by these methods are given in Figs. 15-19. The standard approach to increasing the accuracy of the target position estimates obtained by the localization phase is to apply tracking filters. Because we have dealt with a single-target scenario, we have applied STT for that purpose. This approach is based on the combination of a data-gating algorithm and linear Kalman filtering. The target tracks obtained by applying this approach to the target trajectories obtained by DCA (STT DCA), DCB (STT DCB), SN (STT SN), JIEM (STT JIEM) and TOACOM (STT TOACOM) are given in Figs. 20-24. Finally, using the true and estimated TOAs and the target trajectories and tracks, a set of suitable indicators illustrating the performance properties of the tested localization methods has been evaluated and summarized in Tab. 1. The set of indicators includes the probability of target localization (PrL) and the mean (ME) and root mean square (RMSE) values of the target localization errors for the estimated positions.

Now, after this short summary of the obtained results, we can discuss some outcomes in detail. Let us begin with a comparison of the true and estimated TOAs given in Fig. 11 and Fig. 14. It can be seen in these figures that RS_A has been able to estimate the TOAs quite well. The probability of target localization by RS_A is about 0.78. The target trajectory and track follow the true direction of the target motion, but they are shifted along the y-axis. On the other hand, the performance of RS_B based on DCB is the worst among all the tested approaches. In this case, quite a number of TOAs have been missing, and hence the probability of target localization has only been 0.52. The estimated trajectory of the target is spread out. The target track tries to follow the true direction of target motion, but it is shifted along the x-axis. We assume that the shifts of the trajectories and tracks estimated by RS_A and RS_B could be due to the impact of the wall effect [18]. Unfortunately, the impact of that effect is clearly visible in Fig. 11 and Fig. 14, even though the wall effect compensation method of the 1st kind has been used in order to decrease the TOA estimation error [18]. Better results could be provided here by applying a more efficient wall effect compensation method (e.g. the wall effect compensation method of the 2nd kind [18]), but at the cost of higher computational complexity. Summarizing these facts, it can be concluded that the reliability and accuracy of person localization using a single radar system depend strongly on the radar placement, and hence a robust performance of a single sensor system in complex scenarios cannot be expected.
On the other hand, cooperative localization methods such as SN, JIEM and TOACOM are capable of providing a more robust performance than a single sensor system. This is confirmed by the estimates of the target trajectories and tracks (Fig. 18-19, Fig. 23-24). A deeper comparison of these trajectories and tracks and of the performance indicators (Tab. 1) has shown that the best performance is provided by SN and TOACOM. This is highlighted especially by the probabilities PL (PT) for TOACOM, which indicate that for almost 60 % (70 %) of the target coordinate estimates at the output of the localization (tracking) phase the error is smaller than 0.60 m. The better performance of TOACOM compared with DC, SN and JIEM is also confirmed by the values of further indicators such as the ME and RMSE of the estimated target positions (Tab. 1). In spite of the fact that the wall effect compensation method of the 1st kind has also been used for wall effect compensation in the SN and TOACOM applications, no significant shift of the estimated trajectories and tracks in the xy-plane, such as that observed for DCA and DCB, can be identified. The values of the performance indicators for TOACOM are somewhat better than those for SN. This results from the fact that TOACOM benefits from the TOA complementing algorithm, whereas SN is based only on the fusion of the target coordinates estimated by RS_A and RS_B.
Conclusion
In this contribution, we have dealt with through-the-wall localization and tracking of a moving person using a two-node UWB sensor network. For that scenario, we have suggested using TOACOM for target localization. TOACOM is a cooperative localization method based on the fusion of data retrieved from the individual sensors at the TOA level, using the TOA combining and TOA complementing algorithms.
The results obtained for single moving person localization and tracking have clearly confirmed our assumption that cooperative methods of target localization (a two-node sensor network) can provide a more robust performance and, at the same time, better accuracy than a single UWB sensor. We have also shown that TOACOM can provide better performance than the other tested cooperative and non-cooperative methods of target localization.
In this contribution, we have developed TOACOM for single target localization and a two-node sensor network only. This version of TOACOM can be further modified for a multiple-target scenario and an N-node sensor network with N > 2. The creation of proper clusters of intersections of all the created ellipses associated with the individual targets will be the new key part of this modification. We assume here that one cluster will include the intersections associated with the same target; the TOACOM presented in this paper will then be applied in successive steps to the individual clusters. It is well known that in the case of multiple moving person localization, the probability of target detection can be dramatically decreased due to mutual shadowing of the targets [19]. Therefore, we believe that the outlined modification of TOACOM will be able to provide a further improvement of TOACOM efficiency in comparison with the other target localization methods mentioned in this paper. The development of the outlined modification of TOACOM will be carried out in our subsequent research.
Fig. 1. The basic scenario of target localization by a two-node UWB sensor network.
Fig. 22. Target tracking by SN.
"Engineering",
"Computer Science"
] |
Membrane Topology and Essential Amino Acid Residues of Phs1, a 3-Hydroxyacyl-CoA Dehydratase Involved in Very Long-chain Fatty Acid Elongation*
Yeast Phs1 is the 3-hydroxyacyl-CoA dehydratase that catalyzes the third reaction of the four-step cycle in the elongation of very long-chain fatty acids (VLCFAs). In yeast, the hydrophobic backbone of sphingolipids, ceramide, consists of a long-chain base and an amide-linked C26 VLCFA. Therefore, defects in VLCFA synthesis would be expected to greatly affect sphingolipid synthesis. In fact, in this study we found that reduced Phs1 levels result in significant impairment of the conversion of ceramide to inositol phosphorylceramide. Phs1 proteins are conserved among eukaryotes, constituting a novel protein family. Phs1 family members exhibit no sequence similarity to other dehydratase families, so their active site sequence and catalytic mechanism have been completely unknown. Here, by mutating 22 residues conserved among Phs1 family members, we identified six amino acid residues important in Phs1 function, two of which (Tyr-149 and Glu-156) are indispensable. We also examined the membrane topology of Phs1 using an N-glycosylation reporter assay. Our results suggest that Phs1 is a membrane-spanning protein that traverses the membrane six times and has an N terminus and C terminus facing the cytosol. The important amino acids are concentrated in or near two of the six proposed transmembrane regions. Thus, we also propose a catalytic mechanism for Phs1 that is not unlike mechanisms used by other hydratases active in lipid synthesis.
Sphingolipids are abundant lipid components of eukaryotic plasma membranes that have roles in a wide range of biological processes, such as proliferation, apoptosis, differentiation, cell cycle control, adhesion, and intracellular trafficking (1)(2)(3). Ceramide (CER), the backbone of sphingolipids, is composed of a long-chain base (LCB) attached to a fatty acid (FA) via an amide bond. In mammals, FA chain length ranges from C14 to C26, except in certain tissues such as skin, testis, and sperm, which have even longer FAs (4-7). In most tissues, however, C16 (C16:0) FA is predominant, with the C24 (C24:0 and C24:1) species following. In contrast, in the yeast Saccharomyces cerevisiae the chain length of the FA moiety in CER is predominantly C26. FAs with a chain length of 20 carbons or longer are known as very long-chain FAs (VLCFAs), and they themselves function in numerous cellular processes, including glycosylphosphatidylinositol anchor biogenesis (8), maintenance of a functional nuclear envelope (9,10), protein transport (11), and production of signaling molecules such as arachidonic acid (12). In fact, yeast mutants deficient in VLCFA production are inviable (13,14), indicating that VLCFAs perform essential functions that cannot be substituted for by more common long-chain FAs (LCFAs) such as C16 and C18. Furthermore, any impairment of VLCFA production also affects the function of those sphingolipids that carry these long chains.
In yeast or mammals, the synthesis of LCFAs is carried out by a multienzyme known as soluble fatty-acid synthase (FAS) (15). FA elongation occurs by cycling through a four-step process (condensation, reduction, dehydration, and reduction), during which the FA chain is bound covalently to the acyl carrier protein (ACP) domain of FAS. Mammalian and yeast FAS (type I FAS) incorporate all catalytic activities of the cyclic reaction as discrete domains on one and two polypeptide chain(s), respectively (15,16). In bacteria, plants, and mitochondria, however, FAS (type II) is a dissociated system wherein each component is encoded by a separate gene (17). Mammalian and yeast LCFAs can be further converted to VLCFAs by an endoplasmic reticulum (ER) membrane-bound elongase complex (18). This reaction is also carried out by cycling through a four-step process similar to that performed by FAS (Fig. 1); however, malonyl-CoA and acyl-CoA are not covalently attached to the complex but exist as separate compounds. The yeast elongase complex is composed of at least four different polypeptide enzymes. The 3-hydroxyacyl-CoA dehydratase Phs1, which is responsible for the third step, catalyzes the dehydration of the 3-hydroxyacyl-CoA (18).
Phs1 was first reported as a factor involved in sphingolipid metabolism (19). This study found that decreases in Phs1 levels were accompanied by increases in the LCBs dihydrosphingosine (DHS) and phytosphingosine (PHS) and in their phosphorylated forms, the LCB phosphates (19); in fact, LCB accumulation is a general phenotype of VLCFA synthesis mutants (14,20). Phs1 is highly conserved among eukaryotes, constituting a novel protein family. Mammals have two homologs, PTPLA and PTPLB. Expression of PTPLA mRNA is restricted to heart and muscle (21), whereas PTPLB mRNA is ubiquitously expressed (22). PTPLA has been linked to certain muscle diseases, and disruption of the dog gene PTPLA results in centronuclear myopathy (23). Moreover, mutations in the human PTPLA gene were found in patients with arrhythmogenic right ventricular dysplasia (21), although the relationship between the mutation and the disease is not clear.
Despite having a similar activity, Phs1 shares no sequence similarity to either the 3-hydroxyacyl-ACP dehydratase of FAS II or to the 3-hydroxyacyl-ACP dehydratase domain of FAS I. The dehydratase active sites of the FASs share a conserved His residue (17); however, no conserved His residue exists in Phs1 family members. Furthermore, Phs1 enzymes are predicted to be multispanning membrane proteins, whereas the FASs are soluble. Therefore, both the overall structures and the catalytic mechanisms of Phs1 family members and 3-hydroxyacyl-ACP dehydratase (or domain) of FASs may differ greatly. Determining the essential amino acid residues and membrane topology of Phs1 is an important step in understanding its catalytic mechanism. In this study, we changed each of 22 amino acid residues conserved in the Phs1 family to Ala. We identified six important residues, two of which were essential for Phs1 function. In addition, we examined the membrane topology of Phs1 using an N-glycosylation reporter assay, and we now propose that Phs1 is a membrane-spanning protein that traverses the membrane six times, with its N terminus and C terminus facing the cytosol.
Quantitative Analysis of LCB Levels-Cells from each line tested (≈3.8 × 10^7 cells) were harvested by centrifugation and suspended in 100 µl of water. After 60 pmol of sphingosine (chain length C18) was added as an internal control, lipids were extracted from cells by successively adding and mixing 375 µl of chloroform/methanol/HCl (100:200:1, v/v), 125 µl of chloroform, and 125 µl of 1% KCl. Phases were separated by centrifugation, and the organic phase was recovered, dried, and suspended in 120 µl of ethanol by sonication and by heating for 25 min at 67 °C. The obtained lipid solution was then treated with 15 µl of OPA reagent (1 mg/ml o-phthalaldehyde and 0.2% 2-mercaptoethanol in 3% boric acid (pH 10.5)) for 1 h at room temperature. After a centrifugation at 20,000 × g for 5 min, the supernatant (10 µl) was resolved by HPLC (Agilent 1100 series; Agilent Technologies, Palo Alto, CA) on a pre-packed C18 reverse phase column (COSMOSIL 5C18-AR-II; Nakalai Tesque, Kyoto, Japan) using an isocratic eluent composition of methanol, 10 mM potassium phosphate (pH 7.2), 1 M tetrabutylammonium dihydrogen phosphate (83:16:1, v/v), at a flow rate of 1.5 ml/min at 40 °C. Lipids modified with o-phthalaldehyde were monitored at an excitation wavelength of 340 nm and an emission wavelength of 455 nm.
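For illustration only, the short Python sketch below shows the internal-standard arithmetic implied by this protocol; the peak areas are placeholder numbers rather than measured values, and a 1:1 fluorescence response of each OPA-derivatized LCB relative to the sphingosine internal standard is assumed.

```python
# Quantify LCBs from HPLC peak areas relative to the C18 sphingosine internal
# standard (60 pmol added per sample of ~3.8e7 cells). All peak areas below are
# hypothetical placeholder values.
INTERNAL_STANDARD_PMOL = 60.0
CELLS_PER_SAMPLE = 3.8e7

peak_areas = {
    "SPH18_internal_standard": 1.2e6,
    "PHS18": 3.1e6,
    "DHS18": 0.8e6,
    "DHS16": 0.3e6,
}

reference_area = peak_areas["SPH18_internal_standard"]
for lcb, area in peak_areas.items():
    if lcb == "SPH18_internal_standard":
        continue
    pmol_per_sample = area / reference_area * INTERNAL_STANDARD_PMOL
    pmol_per_1e7_cells = pmol_per_sample / (CELLS_PER_SAMPLE / 1e7)
    print(f"{lcb}: {pmol_per_sample:.1f} pmol/sample "
          f"({pmol_per_1e7_cells:.1f} pmol per 10^7 cells)")
```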
In Vivo Labeling Experiments-Prior to [14C]Ser labeling, yeast cells were grown to ≈1.25 × 10^7 cells/ml in SC medium lacking Ser and Thr, and then 1 ml of cells was labeled with 1 µCi of [14C]Ser (157 mCi/mmol; PerkinElmer Life Sciences) at 30 °C. After labeling, cells were chilled on ice, collected by centrifugation, and suspended in ethanol, water, diethyl ether, pyridine, 15 N ammonia (15:15:5:1:0.018, v/v). After a 15-min incubation at 60 °C, cell debris and extracted lipids were separated by a 2-min centrifugation at 2,000 × g and at room temperature. Radioactivity was measured using a liquid scintillation system (LSC-3600; Aloka, Tokyo, Japan), and samples containing lipids of equal radioactivity were used for further study. Alkaline treatment was performed on each lipid solution by incubating it with a 1:5 volume of 0.5 N NaOH in methanol for 30 min at 37 °C, followed by neutralizing with acetic acid. Lipids were dried and then suspended in 100 µl of water-saturated 1-butanol. To desalt the samples, 50 µl of water was added, and the solution was mixed vigorously and then separated into phases by centrifugation. Lipids in the water phase were re-extracted by adding 100 µl of water-saturated 1-butanol. Organic phases were mixed, dried, and suspended in 20 µl of chloroform/methanol/water (5:4:1, v/v). Lipids were separated by TLC on Silica Gel 60 high performance TLC plates (Merck) with chloroform, methanol, 4.2 N ammonia (9:7:2, v/v) or chloroform, methanol, 15 N ammonia (60:12:1, v/v) as the solvent system.
[3H]Inositol labeling was performed on yeast cells grown in YPD medium to ≈1.25 × 10^7 cells/ml, and then 1 ml of cells was labeled with 20 µCi of [1,2-3H]inositol (60 Ci/mmol; PerkinElmer Life Sciences) for 1 h at 30 °C. Lipid extraction and TLC separation were done as described above.
Plasmids-The plasmid pUG23, a yeast expression vector encoding a fusion protein with a C-terminal enhanced green fluorescent protein under the control of the MET15 promoter, was a gift from Dr. J. H. Hegemann (Heinrich-Heine University, Düsseldorf, Germany). The enhanced green fluorescent protein region of pUG23 was removed altogether or replaced with a 3xFLAG tag, creating the pWK151 or pAK881 plasmid, respectively.
The plasmid pSH14 (PHS1-3xFLAG) was constructed in our laboratory. The PHS1 gene was amplified from yeast genomic DNA by PCR using the primers 5′-TTTTCTAGATTTCTACAATATGTCAAAAAAACTTGC-3′ (XbaI site underlined) and 5′-TTTGCTAGCAATTAGTTTCTTCCCGAAAGAGGAT-3′ (NheI site underlined). The resulting fragment was digested with XbaI and NheI and then cloned into the XbaI-SpeI site of pAK881, producing pSH14. Point mutants of PHS1-3xFLAG were then constructed from the pSH14 plasmid by site-directed mutagenesis using a QuikChange site-directed mutagenesis kit (Stratagene, La Jolla, CA). The primers used are listed in Table 1.
Mutants with inserted N-glycosylation sites were constructed as follows. First, a DNA fragment of the SUC2 glycosylation cassette was amplified from yeast genomic DNA by PCR using the primers 5′-TTTTTGGATCCTTGACTAATTGGGAAGATCAACC-3′ (BamHI site underlined) and 5′-AAAAAGGATCCATAAGTCCAAATCGCAACGCATC-3′ (BamHI site underlined). A BamHI site was then created in the PHS1-3xFLAG gene of pSH14 at each of the desired positions using a QuikChange site-directed mutagenesis kit and the primers listed in Table 1. Finally, the BamHI-treated fragment of the region encompassing the SUC2 glycosylation cassette was inserted into the BamHI site of the respective constructs.
The pAK400 plasmid, a derivative of pRS426 (27), was constructed to produce a C-terminally 3xHA-tagged fusion protein. The plasmid pSH86 (YBR159w-3xHA) was constructed as follows. The YBR159w gene was amplified from yeast genomic DNA by PCR using the primers 5′-AACCCGGGCTTTAAACTCATTTCCAAATCTGGC-3′ (SmaI site underlined) and 5′-TTACTAGTTTCCTTTTTAACCTGTCTTGCGGC-3′ (SpeI site underlined). The resulting fragment was digested with SmaI and SpeI and then cloned into the SmaI-SpeI site of pAK400, producing pSH86.
Sucrose Gradient Fractionation-Sucrose gradient fractionation was performed as described previously (28), with minor modifications.
TABLE 1. Primers used in this study. Only sense primers are presented.
In Vitro IPC Synthase Assays-In vitro IPC synthase assays were performed essentially as described elsewhere (30) with minor modifications. Each sucrose gradient fraction (50 µl) was mixed with 50 µl of reaction mix (20 mM HEPES-NaOH (pH 7.5), 2 mM MgCl2, 2 mM MnCl2, 2 mM CHAPS, 10 µM BODIPY FL C5-CER (Invitrogen), 10 mg/ml fatty acid-free bovine serum albumin, 150 µM phosphatidylinositol (PI) liposome, 250 mM sucrose, 1× Complete™ protease inhibitor, 1 mM phenylmethylsulfonyl fluoride, and 1 mM dithiothreitol) and then incubated for 2 h at 30 °C. Lipids were extracted by successively adding and mixing 333 µl of chloroform, methanol, 1 N HCl (4:10:1, v/v), 42 µl of 1 N HCl, and 83 µl of chloroform. Phases were separated by centrifugation, and the organic phase was recovered, dried, and suspended in 20 µl of chloroform. Lipids were separated by TLC on Silica Gel 60 high performance TLC plates with chloroform, methanol, 30 mM KCl (11:9:2, v/v) as the solvent system. The amount of BODIPY-IPC was quantified using a fluoroimaging analyzer FLA-2000 (Fuji Photo Film, Tokyo, Japan).
Deglycosylation of Proteins-Endoglycosidase H (Endo H) was purchased from New England Biolabs (Beverly, MA). Deglycosylation of Phs1 carrying the inserted N-glycosylation cassette was performed using Endo H according to the manufacturer's instruction.
In vitro 3-hydroxyacyl-CoA dehydratase assays were performed by mixing purified Phs1-3xFLAG protein in reaction buffer (total volume of 50 µl; 150 mM HEPES-KOH (pH 6.8), 2 mM Mg(OAc)2, 0.5% digitonin, and 1 mM dithiothreitol) with 0.05 µCi of 3-[14C]hydroxypalmitoyl-CoA (55 mCi/mmol; American Radiolabeled Chemicals, St. Louis, MO) and then incubating the mixture at 37 °C for various times. The reactions were terminated by adding 25 µl of 75% KOH (w/v) and 50 µl of ethanol, then saponified at 70 °C for 1 h, and acidified by adding 100 µl of 5 N HCl with 50 µl of ethanol. Lipids were extracted twice, each with 700 µl of hexane, and then the extracts were pooled, dried, and suspended in 35 µl of chloroform. Lipids were separated by TLC on LK50F Silica Gel 150A TLC plates (Whatman, Kent, UK) with hexane/diethyl ether/acetic acid (30:70:1, v/v) as the solvent system.
RESULTS
Decreases in Phs1 Levels Cause a Significant Accumulation of PHS with C20 Chain Length-Previous studies found that decreases in Phs1 cause an accumulation of LCBs (18,19), yet quantitative analyses have not been performed. Therefore, we investigated this using HPLC. Because the PHS1 gene cannot be deleted owing to its essential function, we used the yeast strain TH_3237 and its derivative (referred to here as Tet-PHS1), which carries the PHS1 gene under the control of the TetO7 promoter. The Tet-PHS1 cells were unable to form colonies on YPD plates containing doxycycline (DOX), which shuts off gene expression under the TetO7 promoter (data not shown). When DOX was added to the culture medium, the growth of the Tet-PHS1 cells began to slow at 4 h (Fig. 2A). On the other hand, in the absence of DOX the growth rate of the Tet-PHS1 cells was only slightly slower than that of the wild type cells (Fig. 2A). Thus, to avoid nonspecific effects caused by growth inhibition, we studied DOX-treated Tet-PHS1 cells at earlier time points (4-6 h after adding DOX).
HPLC analysis of wild type cells cultured in YPD medium revealed the major cellular LCBs produced to be PHS 18 (PHS with C18 chain length) and DHS 18 , followed by DHS 16 (Fig. 2, B and C). In the Tet-PHS1 cells, all three of these LCBs were significantly increased compared with those in wild type cells, even in the absence of DOX (Fig. 2C). Moreover, PHS 20 , which was barely detected in wild type cells cultured in YPD medium, was found at high levels in the Tet-PHS1 cells, levels even greater than those of DHS 18 and DHS 16 (Fig. 2, B and C). When DOX was added to the medium, PHS 18 , DHS 18 , and DHS 16 increased slightly in the Tet-PHS1 cells, and the increase in PHS 20 was more prominent (Fig. 2C). These results suggest that C18-CoA, the precursor of PHS 20 , accumulates in Tet-PHS1 cells.
The accumulation of LCBs observed in the Tet-PHS1 cells not treated with DOX may have been because of weaker expression of Phs1 from the TetO7 promoter compared with the natural PHS1 promoter. We tagged the endogenous PHS1 gene with 3xFLAG and examined gene expression. We estimated that the Tet-PHS1 cells expressed ≈1/20 the amount of Phs1 protein compared with wild type cells (data not shown). These results indicate that such a decrease in Phs1 levels does affect the cellular VLCFA levels, as well as the levels of LCFA and LCB, but still supports nearly normal cell growth.
Decreases in Phs1 Levels Cause an Accumulation of CER and a Reduction in Complex Sphingolipids-To examine the effect of decreased Phs1 levels on sphingolipid metabolism, we labeled wild type and Tet-PHS1 cells with [14C]Ser in the presence of DOX. [14C]Ser was incorporated into phosphatidylserine, converted to phosphatidylethanolamine, and further to phosphatidylcholine (Fig. 3A). Alkaline treatment abolished the labeled glycerophospholipids by hydrolyzing ester linkages (Fig. 3, A and B). The amounts of labeled phosphatidylserine, phosphatidylethanolamine, and phosphatidylcholine were similar between the wild type and Tet-PHS1 cells, but sphingolipid synthesis was greatly affected in the Tet-PHS1 cells. DHS, PHS, and CER were all increased, whereas IPC, mannosylinositol phosphorylceramide (MIPC), and mannosyldiinositol phosphorylceramide (M(IP)2C) were decreased (Fig. 3, A and C). CER accumulation was the most prominent. These results indicate that CER-to-IPC conversion is largely affected by a reduction in Phs1 levels. Furthermore, when labeled sphingolipids were separated by TLC using another solvent system, it became apparent that the CER species differed between the wild type and Tet-PHS1 cells (Fig. 3B). In wild type cells the most abundant CER contains alpha-hydroxy-C26 FA and PHS (20). However, the most prominent band in the Tet-PHS1 cells migrated more slowly on the TLC, i.e. was more hydrophilic, compared with the CER band from the wild type cells. The more hydrophilic band is likely a CER with alpha-hydroxy-C16 FA and PHS, considering that another VLCFA elongation-deficient cell line, Δybr159w, is known to accumulate this CER (20). The amount of this putative CER carrying alpha-hydroxy-C16 FA and PHS was 2.3-fold higher in the Tet-PHS1 cells than the amount of CER carrying alpha-hydroxy-C26 FA and PHS found in wild type cells. Other CER species present in the Tet-PHS1 cells (Fig. 3B, upper bands) likely also correspond to those found in Δybr159w cells, including CER with nonhydroxy-C16 FA and PHS (20).
FIGURE 2 (legend, fragment). A, wild type (WT) and SAY32 (Tet-PHS1; Tet) cells were grown at 30 °C in YPD medium in the presence or absence of 10 µg/ml DOX; at the indicated time points after DOX addition, cell density (A600) was measured as a determination of growth (mean ± S.D. from three independent experiments). B and C, R1158 (wild type) and SAY33 (Tet-PHS1) cells were grown for 6 h at 30 °C in YPD medium in the presence (B and C) or absence (C) of 10 µg/ml DOX; lipids were extracted, treated with o-phthalaldehyde, and analyzed by reverse-phase HPLC, and the area of each peak representing an LCB was quantified using sphingosine with C18 chain length (SPH18) as an internal control (C; mean ± S.D. from three independent experiments).
We also labeled the wild type and Tet-PHS1 cells with [ 3 H]inositol. In the wild type cells a significant amount of PI was converted to complex sphingolipids (IPC, MIPC, and M(IP) 2 C) and PI monophosphates (phosphatidylinositol 3-phosphate and phosphatidylinositol 4-phosphate) (Fig. 4). Synthesis of these complex sphingolipids was again reduced in the Tet-PHS1 cells, and addition of DOX furthered the reduction (Fig. 4). Compared with the reduction in MIPC, reductions in IPC and M(IP) 2 C were much more pronounced. Conversely, in the Tet-PHS1 cells the lyso-PI levels were increased (Fig. 4), something often observed in sphingolipid synthesis-deficient cells (31), although the molecular mechanism of its production is not known. The amount of labeled PI was also slightly increased compared with that in wild type cells, probably because of a block of further metabolism (Fig. 4). Noticeably, the production of PI monophosphates was also decreased in the Tet-PHS1 cells (Fig. 4) (see "Discussion").
Although CER-to-IPC conversion was greatly affected in the Tet-PHS1 cells, any causative mechanism was unclear. CER is synthesized in the ER and transported to the Golgi, where it receives a phosphoinositol moiety transferred from PI by the PI:CER phosphoinositol transferase (IPC synthase) Aur1, generating IPC. Aur1 is an integral membrane protein, so if it is synthesized in the ER it must also be transported to the Golgi. One possible mechanism causing the impairment in CER-to-IPC conversion is an indirect effect by the defective VLCFA synthesis on the localization or activity of Aur1. To examine the intracellular localization of Aur1, we performed a sucrose gradient fractionation. The ER and the Golgi were nicely separated by this fractionation (Fig. 5A). However, the activity of Aur1 and its Golgi localization were indistinguishable between the wild type and Tet-PHS1 cells (Fig. 5B). Therefore, the accumulation of CER in Tet-PHS1 cells is because of something other than decreased activity or mislocalization of this enzyme (see also "Discussion").
Tyr-149 and Glu-156 Residues Are Essential for Phs1 Function-The Phs1 family is conserved among eukaryotes. To begin to identify residues in Phs1 family members essential for activity, we first compared the amino acid sequences of 31 family members from 24 organisms and found that 22 amino acid residues are conserved (supplemental Fig. S1). We changed each residue to Ala and expressed each resulting mutant in Tet-PHS1 cells as a C-terminally 3xFLAG-tagged protein. Of the mutants tested, 16 rescued the growth defect of the Tet-PHS1 cells on SC plates containing DOX, as did the wild type Phs1-3xFLAG protein (Fig. 6A). On the other hand, the Q79A, R83A, R141A, and G152A mutants supported cell growth only weakly; moreover, the Y149A and E156A mutants exhibited no growth (Fig. 6A). HPLC analysis demonstrated that the Q79A, R83A, R141A, Y149A, G152A, and E156A mutants could not suppress PHS accumulation (Fig. 6B), although the proteins were expressed normally (Fig. 6C). PHS also accumulated at low levels in the E60A, E116A, and R119A mutants compared with wild type cells (Fig. 6B). These results indicate that Gln-79, Arg-83, Arg-141, and Gly-152 are important, and Tyr-149 and Glu-156 are essential, for the function of Phs1.
Most likely, Tyr-149 and Glu-156 are directly involved in the catalytic reaction. However, one cannot exclude the possibility that mutation in these residues caused a change in overall protein structure that resulted in a loss of enzyme activity. To exclude this possibility, we also investigated the interaction of Phs1 with the 3-ketoacyl-CoA reductase Ybr159w, another component of the VLCFA biosynthetic machinery that reportedly interacts with Phs1 (18). In co-immunoprecipitation experiments, both mutants interacted with 3xHA-tagged Ybr159w, as did the wild type Phs1 protein (Fig. 6E). These results suggest that the overall protein structure of these mutant proteins was not disrupted.
Phs1 Is a Membrane Protein That Traverses the Membrane Six Times-The SOSUI program (available online) predicts that Phs1 spans the membrane six times. To examine whether Phs1 does indeed exhibit such membrane topology, we performed N-glycosylation reporter assays. Because N-glycosylation occurs only in the lumen of the ER, glycosylation of certain amino acid residues indicates that the hydrophilic region of the protein containing those residues is located in the lumen. We chose the invertase glycosylation cassette, a 53-amino acid fragment of the invertase Suc2 (Lys-81 to Tyr-133), as the glycosylation reporter, because it has been used successfully in analyzing the topology of several membrane proteins, including the Ser palmitoyltransferase Lcb1 and the CER synthases Lag1 and Lac1 (32,33). This glycosylation cassette was inserted into the predicted loop regions, e.g. between Gly-38 and Gln-39 (G38/Q), R70/S, T97/S, G132/A, and Q170/Y, as well as into the N terminus and the C terminus. When introduced into the Tet-PHS1 cells, all constructs restored the growth of the defective cells on DOX-containing plates (data not shown), indicating that each Phs1 with an insertion was functional and that the membrane topology was therefore unlikely to be perturbed. Total lysates were prepared from cells expressing each construct, incubated with or without Endo H, separated by SDS-PAGE, and examined by immunoblotting. The gel mobilities of proteins carrying insertions in the N terminus, R70/S, G132/A, and C terminus were unaffected by Endo H treatment (Fig. 7A), indicating that these proteins were unglycosylated. However, compared with these unglycosylated constructs, untreated G38/Q, T97/S, and Q170/Y migrated more slowly on SDS-PAGE, and upon incubation with Endo H, their bands shifted to the position of the unglycosylated forms (Fig. 7A). Therefore, G38/Q, T97/S, and Q170/Y were N-glycosylated. Based on these results, we illustrate a topological model for Phs1 in Fig. 7B. Our results suggest that Phs1 traverses the membrane six times and that its N terminus and C terminus are located in the cytosol. Because Phs1 carries the ER retention signal KKXX at its C terminus, and this type of signal functions on the cytosolic side of ER membrane proteins, the cytosolic orientation of the C terminus of Phs1 in our model is reasonable.
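As an editorial aside, the small Python sketch below restates the logic of this reporter assay: insertion sites whose reporter is glycosylated are assigned to the ER lumen, unglycosylated sites to the cytosol, and the alternation between the two implies the number of membrane crossings. The glycosylation calls are taken from the results above; the mapping rule itself is the standard interpretation of such assays rather than anything specific to this study.

```python
# Glycosylation calls taken from the text; the rule "glycosylated insertion ->
# ER lumen, unglycosylated -> cytosol" is the standard reading of an
# N-glycosylation reporter assay.
reporter_sites = [
    ("N-terminus", False),
    ("G38/Q", True),
    ("R70/S", False),
    ("T97/S", True),
    ("G132/A", False),
    ("Q170/Y", True),
    ("C-terminus", False),
]

for site, glycosylated in reporter_sites:
    side = "ER lumen" if glycosylated else "cytosol"
    print(f"{site:<12} -> {side}")

# Each change of side between consecutive reporter positions implies one
# membrane crossing: here 6 crossings, with both termini in the cytosol.
crossings = sum(a[1] != b[1] for a, b in zip(reporter_sites, reporter_sites[1:]))
print("inferred transmembrane segments:", crossings)
```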
DISCUSSION
Phs1 belongs to a novel 3-hydroxyacyl-CoA dehydratase protein family and is conserved among eukaryotes. Members of this family exhibit no significant sequence similarity to 3-hydroxyacyl-ACP dehydratases or similar functional domains of FASs, and information regarding the active site and catalytic mechanism has been completely lacking. In the present study we identified several amino acid residues (Gln-79, Arg-83, Arg-141, Tyr-149, Gly-152, and Glu-156) that are important for Phs1 function. In particular, Tyr-149 and Glu-156 are essential for Phs1 activity, because substitution of either residue resulted in a loss of growth restoration of Tet-PHS1 cells (Fig. 6A) and an absence of enzyme activity (Fig. 6D). We also examined the membrane topology of Phs1 and propose that it spans the membrane six times and that its N terminus and C terminus are located in the cytosol (Fig. 7B). In this topology model, the six important residues above are located within or near transmembrane regions 3 and 5. Transmembrane region 5 is particularly important because four important residues, including the two essential residues Tyr-149 and Glu-156, are located within it or nearby.
FIGURE 5. IPC synthase in Tet-PHS1 cells exhibits normal activity and localization. Total membrane extracts prepared from SAY31 (wild type; WT) and SAY32 (Tet-PHS1; Tet) cells were fractionated on a sucrose density gradient. A, fractions were analyzed by immunoblotting with antibodies against Anp1 and Sec61, specific markers for the Golgi and ER, respectively. B, fractions were subjected to an in vitro IPC synthase assay using fluorescence-labeled CER (BODIPY FL C5-CER). Lipids were extracted and separated by TLC. The amount of BODIPY-IPC was quantified using a fluoroimaging analyzer FLA-2000.
In the 3-hydroxyacyl-ACP dehydratases (or domains) of FASs, His and Asp/Glu residues are essential for catalysis, and their reaction mechanism has been proposed. For instance, in the bacterial 3-hydroxyacyl-ACP dehydratase FabA, the His-70 residue acts as a catalytic base to abstract a proton from the C-2 of 3-hydroxyacyl-ACP, and the Asp-84 residue promotes the removal of the hydroxyl group at C-3 (17). Furthermore, Cys-80, four residues before the Asp-84, is also important as it interacts with the hydroxyl group at C-3 of 3-hydroxyacyl-ACP and that in H2O, before and after the catalysis, respectively, via hydrogen bonds (17). The sole His residue in Phs1, His-196, is not conserved, and in fact an Ala mutant of this residue rescued the growth defect of the Tet-PHS1 cells, indicating that this residue is not important for enzyme activity (data not shown). Therefore, we surmised that another amino acid residue must serve as the catalytic base. The deprotonated form of Tyr is one proton-accepting amino acid residue. For example, a Tyr residue acts as part of a catalytic triad in members of the short-chain dehydrogenase/reductase family, and its deprotonated form functions as the proton acceptor (34).
FIGURE 6 (legend, fragment). A, SAY32 (Tet-PHS1) cells expressing each mutant were grown for 48 h at 30 °C on plates of SC medium lacking His and Met in the presence of 10 µg/ml DOX, and growth was determined visually (−, no growth; ±, slow growth; +, normal growth, compared with SAY32 cells expressing wild type Phs1-3xFLAG). B, cells were grown at 30 °C for 6 h in SC medium lacking His and Met but containing 10 µg/ml DOX; lipids were extracted and derivatized with o-phthalaldehyde, samples equivalent to 2.8 × 10^6 cells were analyzed by reverse-phase HPLC, and the PHS18 peak area was quantified relative to that in SAY32 cells expressing wild type Phs1-3xFLAG (mean ± S.D. from three independent experiments; *, p < 0.05; **, p < 0.01; t test). C, total lysates from B (5 µg of protein) were immunoblotted with anti-FLAG antibodies, with anti-Pgk1 immunoblotting as a loading control (Phs1-3xF, Phs1-3xFLAG). D, Phs1-3xFLAG proteins (wild type, 1 ng; Y149A and E156A, 4 ng) affinity-purified with anti-FLAG M2-agarose from SAY32 (Tet-PHS1) cells bearing pUG23 (vector), pSH14 (wild type PHS1-3xFLAG), pSH28 (Y149A), or pSH31 (E156A), or a mock enzyme solution, were incubated with [14C]3-hydroxypalmitoyl-CoA for the indicated times; lipids were then saponified, acidified, extracted, separated by TLC, and visualized by autoradiography (3-OH C16:0, 3-hydroxypalmitic acid; 2,3-C16:1, 2,3-trans-hexadecenoic acid). E, DEY113 cells bearing pSH86 (YBR159w-3xHA) were transfected with pWK151 (vector), pSH14 (wild type PHS1-3xFLAG), pSH28 (Y149A), or pSH31 (E156A) and grown for 6 h at 30 °C in SC medium lacking His, Met, and uracil; total lysates were solubilized with 1% digitonin and immunoprecipitated with anti-FLAG M2 agarose, and bound material and input fractions were immunoblotted with anti-FLAG or anti-HA antibodies (WT, wild type; IP, immunoprecipitation; IB, immunoblotting).
We found only two amino acids (Tyr-149 and Glu-156) essential for the function of Phs1 (Fig. 6A), so we hypothesize that these two residues constitute an active site that functions in the abstraction of a proton from the C-2 of 3-hydroxyacyl-CoA and in the removal of the hydroxyl group at C-3, respectively. Analogous to the Cys-80 of FabA, the Gly-152 of Phs1, which is also located four residues before the Glu-156, may interact with the hydroxyl groups at C-3 of 3-hydroxyacyl-CoA and in H2O, prior to and after the catalysis, respectively. It is possible that the Gln-79 and Arg-83 residues in transmembrane region 3 are positioned near the Tyr-149, Gly-152, and Glu-156 in transmembrane region 5 in the folded state, and that Arg-83 stabilizes the deprotonated state of Tyr-149. However, in our topology model the putative active site (comprising Tyr-149, Gly-152, and Glu-156) is located in the interior of the membrane. It is also possible that the substrate 3-hydroxyacyl-CoA is deeply embedded in the membrane during the reaction. Consistent with this notion, a previously reported biochemical study investigating the sensitivity of rat liver microsomal membranes to several proteases and observations with an anti-3-hydroxyacyl-CoA dehydratase antibody suggested that the active site of the mammalian enzyme is embedded in the microsomal membrane, in contrast to those of the condensing enzyme and 2,3-trans enoyl-CoA reductase (35). Alternatively, the prediction of the transmembrane region is incorrect, and the location of the putative active site is actually more proximal to the cytosolic surface.
To further determine the role of VLCFAs in sphingolipid metabolism, we investigated the effect of exogenous C26 FA (20 µM) on the Tet-PHS1 cells. Exogenous C26 FA had almost no effect on the growth of the cells (data not shown). LCB levels did decrease upon addition of C26 FA, but only slightly, with the amount of PHS18 in the Tet-PHS1 cells decreasing from 940 pmol/10^7 cells in the absence of C26 FA to 770 pmol/10^7 cells in its presence, still much higher than the level in wild type cells (18.7 pmol/10^7 cells). [14C]Ser labeling demonstrated no apparent change in the sphingolipid pattern after the addition of the C26 FA (data not shown). Thus, exogenous C26 FA was not able to be utilized by these cells. Several reasons for this are conceivable. The most likely possibility we considered is low efficiency of the conversion of C26 FA to C26-CoA. Indeed, cellular very long-chain fatty acyl-CoA synthetase activity in yeast cells is significantly lower than long-chain fatty acyl-CoA synthetase activity (36). Yeast does contain the very long-chain fatty acyl-CoA synthetase Fat1, which is specific for substrates with acyl chains of C20 and longer (36), as well as five other yeast acyl-CoA synthetases with other specificities (37). In contrast to exogenously added VLCFAs, endogenous very long-chain fatty acyl-CoAs are synthesized by the VLCFA elongation system, mainly from C16-CoA produced by FAS, without the release of free VLCFAs. Another possible explanation for the lack of effect of exogenous C26 FA is difficulty in importing C26 FA or in transporting it from the plasma membrane to the ER. Indeed, an absence of phenotypic reversion following treatment with exogenous VLCFA has been reported for another mutant defective in VLCFA synthesis, Δybr159w (38).
Of the steps in sphingolipid biosynthesis, the step most affected by a reduction in Phs1 levels is the CER-to-IPC conversion, although the causative mechanism is unclear. CER with a shorter chain length accumulated in the Tet-PHS1 cells (Fig. 3B). It is possible that the IPC synthase Aur1 exhibits low activity toward such short-chain CER. However, this possibility seems unlikely, because Aur1 efficiently converts the even shorter C6 7-nitrobenz-2-oxa-1,3-diazol-4-yl-CER to IPC in vivo and in vitro (30). More convincingly, the localization and activity of Aur1 were not affected in the Tet-PHS1 cells (Fig. 5B). We therefore speculate that the transport of CER and/or PI is impaired in the Tet-PHS1 cells. Both CER and PI are synthesized in the ER and delivered to the Golgi for conversion to IPC. However, their transport mechanism in yeast is not known, although in mammals the transport of CER occurs by the transfer protein CERT (39). Consistent with the theory of impaired transport, the production of both complex sphingolipids and PI monophosphates, which also requires transport from the ER (40), was affected in the Tet-PHS1 cells (Figs. 3 and 4). Moreover, in [3H]inositol labeling experiments the CER-to-IPC and MIPC-to-M(IP)2C steps, both of which require a supply of PI, were more affected than the IPC-to-MIPC step in the Tet-PHS1 cells (Fig. 4). These results suggest that PI transport is indirectly impaired by deficient VLCFA synthesis.
FIGURE 7. Phs1 is a membrane protein that traverses the membrane six times with its N terminus and C terminus exposed to the cytosol. A, total lysates were prepared from DEY102 cells bearing pSH14 (PHS1-3xFLAG without insertion), pSH69 (N terminus; N-term), pSH38 (G38/Q; insertion between Gly-38 and Gln-39), pSH39 (R70/S), pSH40 (T97/S), pSH41 (G132/A), pSH42 (Q170/Y), or pSH70 (C-term). Lysates were untreated or treated with Endo H and separated by SDS-PAGE, followed by immunoblotting with anti-FLAG antibodies. B, model for the topology of Phs1. Gray circles indicate amino acid residues conserved among 31 Phs1 homologs. Amino acid residues whose mutation resulted in weak or no growth in Fig. 6A are illustrated as black circles and squares, respectively. Arrowheads indicate the insertion sites of the glycosylation cassette for the topology assay.
Mammals have two Phs1 homologs, PTPLA and PTPLB, although their functions remain unclear. Forced expression of PTPLA or PTPLB in the Tet-PHS1 yeast cells rescued growth in the presence of DOX, indicating that both proteins possess a function identical to that of Phs1, i.e., 3-hydroxyacyl-CoA dehydratase activity. Disruption of the PTPLA gene in dogs causes myotubular (centronuclear) myopathy (23). Interestingly, mutations in the MTM1 gene encoding myotubularin also result in this disease in humans (41). Myotubularin catalyzes the dephosphorylation of phosphatidylinositol 3-phosphate and phosphatidylinositol 3,5-bisphosphate (42-44). Thus, altered metabolism of phosphoinositides may be one cause of this myopathy. It is possible that disruption of the PTPLA gene also affects phosphoinositide metabolism as described above for Phs1, leading to myotubular myopathy.
Fen1/Sur4, Ybr159w, Phs1, and Tsc13 form elongase complex(es) (18). In addition to the VLCFA elongation, this complex seems to function in or couple with other cellular processes by interacting with other proteins. For example, Tsc13 interacts with Nvj1, which is involved in the formation of nucleus-vacuole junctions (45). Nucleus-vacuole junctions are sites of piecemeal microautophagy of the nucleus, during which nonessential portions of the nucleus are pinched off into invaginations of the vacuole membrane and then degraded in the vacuole lumen (46). VLCFAs have been proposed to be required for the efficient biogenesis of the highly curved blebs and vesicles observed during the microautophagy of the nucleus by promoting the formation of highly curved membrane structures (45). In addition to Nvj1, the Fen1/Sur4, Ybr159w, Phs1, and Tsc13 complex seems to interact with numerous proteins. Comprehensive protein-protein interaction analyses, by affinity capture and a yeast two-hybrid method, determined that proteins in this complex interact with more than 100 proteins (47)(48)(49). These included proteins involved in lipid metabolism such as ergosterol synthesis (Erg3, Erg11, and Erg25), sphingolipid metabolism (Csg2, Lac1, Ypc1, Ydc1, and Sur2), and monounsaturated fatty acid synthesis (Ole1), and in protein transport from the ER to the Golgi (Emp24, Emp46, Emp47, Erp4, Erp5, Erv14, Erv29, Erv41, and Yop1). Although these interactions must be confirmed by other methods such as co-immunoprecipitation, it is possible that Fen1/Sur4, Ybr159w, Phs1, and Tsc13 might form extremely large complex(es) functioning or coupling with other cellular processes. To coordinate VLCFA synthesis and sphingolipid synthesis, it is possible that CER and/or PI transport is regulated by this hypothesized large complex. However, future studies are required to determine the details of any such complex. | 8,590.4 | 2008-04-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Performance Analysis of Wireless Information Surveillance in Machine-Type Communication at Finite Blocklength Regime
The Internet of Things (IoT) will feature pervasive sensing and control capabilities via the massive deployment of machine-type communication devices in order to greatly improve daily life. However, machine-type communications can be illegally used (e.g., by criminals or terrorists), which is difficult to monitor and thus presents new security challenges. The information exchanged in machine-type communications is usually transmitted in short packets. Thus, this paper investigates a legitimate surveillance system via proactive eavesdropping at the finite blocklength regime. Under the finite blocklength regime, we analyze the channel coding rate of the eavesdropping link and the suspicious link. We find that the legitimate monitor can still eavesdrop the information sent by the suspicious transmitter as the blocklength decreases, even when eavesdropping fails under the Shannon capacity regime. Moreover, we define a metric called the effective eavesdropping rate and study its monotonicity. From the analysis of monotonicity, the existence of a maximum effective eavesdropping rate for a moderate or even high signal-to-noise ratio (SNR) is verified. Finally, numerical results are provided and discussed. In the simulation, we also find that the maximum effective eavesdropping rate slowly increases with the blocklength.
Introduction
The vision of the Internet of Things (IoT) promises to bring wireless connectivity to anything ranging from tiny static sensors to vehicles and unmanned aerial vehicles (UAVs) [1][2][3]. Meanwhile, short packets are the typical form of traffic generated by sensors and exchanged in machine-type communications [4]. In these scenarios, the Shannon capacity, which assumes the infinite blocklength, is no longer achievable. In comparison to the Shannon capacity regime, reference [5] developed a pioneering framework and identified a tight bound of the channel coding rate at the finite blocklength regime, which presents many new research opportunities with a wide range of applications.
The IoT can offer many benefits for daily life; however, machine-type communications, such as vehicle-to-vehicle communication and UAV communication among others, can be illegally used (e.g., by criminals or terrorists), which is difficult to monitor, thus presenting new challenges with respect to public security [6]. Thus, legitimate eavesdropping by authorized parties is necessary to effectively discover and prevent misuse of the information transmitted between suspicious users. Further, proactive eavesdropping has recently attracted much research interest as an approach to improve eavesdropping performance.
As a common point, all the above studies are under the Shannon capacity regime, where the blocklength is assumed to be infinite. The Shannon capacity is not achievable when the information is transmitted in short packets. To the best of our knowledge, there is no research on legitimate proactive eavesdropping under the finite blocklength regime. Therefore, this paper analyzes the performance of a legitimate surveillance system via proactive eavesdropping at the finite blocklength regime. In the system, there is a suspicious transmitter-receiver pair, which may be, e.g., two stationary UAVs, and a legitimate monitor. The legitimate monitor operates in a full-duplex mode with simultaneous information reception and relaying. The main contributions are summarized as follows.
In this paper, under the finite blocklength regime, we analyze the channel coding rate of the eavesdropping link and the suspicious link. Meanwhile, we find that the legitimate monitor can still eavesdrop the information sent by the suspicious transmitter as the blocklength decreases, even when eavesdropping fails under the Shannon capacity regime. Moreover, we define a metric called the effective eavesdropping rate and analyze its monotonicity. From the analysis of monotonicity, the existence of a maximum effective eavesdropping rate for a moderate or even high signal-to-noise ratio (SNR) is verified. Finally, numerical results are provided and discussed. In the simulation, we also find that the maximum effective eavesdropping rate slowly increases with the blocklength, and the increment is almost negligible when the blocklength reaches a relatively large value.
The rest of this paper is organized as follows. The system model and assumptions are described in Section 2. Section 3 analyzes the performance of the legitimate surveillance system at finite blocklength. Numerical results are presented in Section 4. Finally, the paper is concluded in Section 5.
System Model and Assumptions
As shown in Figure 1, we consider a legitimate surveillance system consisting of a suspicious transmitter-receiver pair (i.e., S-D) and a full-duplex legitimate monitor E. S transmits information to D during n channel uses; in this way, we consider that each block spans n channel uses. We assume that both S and D are unaware of the presence of E and that decode-and-forward (DF) relaying is adopted by E. If E decodes the block received from S successfully, it forwards the block to D, which aims to enhance the eavesdropping of the suspicious link. S and D are each equipped with a single antenna, and E is equipped with two antennae, one for eavesdropping (receiving) and the other for relaying (transmitting). S can adaptively adjust its transmission rate. The self-interference from the relaying antenna to the eavesdropping antenna at the legitimate monitor is assumed to be perfectly cancelled by using advanced analog and digital self-interference cancellation methods [13]. DF can be assumed here as in [8,27]. In addition, E can act as a fake relay and thus obtain the channel state information and the symbol format of the suspicious link, and synchronize with S and D [19,20].
We consider a Rayleigh quasi-static block-fading channel [28], where the fading process is considered to be constant over the transmission of a block and independently and identically distributed from block to block. Let h_0, h_1 and h_2 denote the channel coefficients from the suspicious transmitter to the suspicious receiver, from the suspicious transmitter to the eavesdropping antenna of the legitimate monitor, and from the relaying antenna of the legitimate monitor to the suspicious receiver, respectively. The corresponding channel gains are defined as g_0 = |h_0|^2, g_1 = |h_1|^2 and g_2 = |h_2|^2. In addition, we assume that E perfectly knows the channel state information of all links, which can be obtained by utilizing the methods given in the literature [14,17,19,20].
Channel Coding Rate for Finite Blocklength
For a given decoding error probability ε, the channel coding rate R (in bits per channel use) with blocklength n is [28,29]

R = C − sqrt((1 − 1/(1 + γ)^2)/n) · Q^(−1)(ε) · log_2(e),   (1)

where Q^(−1)(·) is the inverse Q-function and, as usual, the Q-function is given by Q(x) = (1/sqrt(2π)) ∫_x^∞ exp(−t^2/2) dt. In addition, C = log_2(1 + γ) is the Shannon capacity as a function of the SNR γ. Note that Equation (1) is a very tight approximation when n ≥ 100, i.e., the difference from the exact value can be neglected [28,29]. Thus, we consider n ≥ 100 in this paper and use an equal sign in Equation (1). Based on the above results, R can be transformed into

R = log_2(1 + γ) − sqrt((1 − 1/(1 + γ)^2)/n) · Q^(−1)(ε) · log_2(e).   (2)

Equivalently, for a given channel coding rate R, the decoding error probability ε can be given by

ε = Q((C − R) / (sqrt((1 − 1/(1 + γ)^2)/n) · log_2(e))).   (3)
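As a worked illustration of Equations (1)-(3), the Python sketch below evaluates the channel coding rate and the decoding error probability under the normal approximation; it is an editorial aid rather than part of the original paper, and the function names are ours.

```python
import numpy as np
from scipy.stats import norm

def coding_rate(snr, n, eps):
    """Equation (1)/(2): R = C - sqrt(V/n) * Qinv(eps) * log2(e), V = 1 - (1+snr)^-2."""
    C = np.log2(1.0 + snr)
    V = 1.0 - 1.0 / (1.0 + snr) ** 2
    return C - np.sqrt(V / n) * norm.isf(eps) * np.log2(np.e)

def error_prob(snr, n, rate):
    """Equation (3): eps = Q((C - R) / (sqrt(V/n) * log2(e)))."""
    C = np.log2(1.0 + snr)
    V = 1.0 - 1.0 / (1.0 + snr) ** 2
    return norm.sf((C - rate) / (np.sqrt(V / n) * np.log2(np.e)))

# Example: SNR of 20 dB, blocklength n = 400, target error probability 1e-3.
snr = 10 ** (20 / 10)
R = coding_rate(snr, 400, 1e-3)
print(R, error_prob(snr, 400, R))   # the second value recovers ~1e-3
```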
Performance at Finite Blocklength
In this section, under the finite blocklength regime, we first analyze the performance of the legitimate surveillance system in terms of the channel coding rate of the eavesdropping link and the suspicious link in comparison with the Shannon capacity regime. Afterwards, we define a metric called the effective eavesdropping rate and analyze the monotonicity. From the analysis of monotonicity, the existence of a maximum effective eavesdropping rate for moderate or even high SNR is also verified.
Analysis of Channel Coding Rate
According to Equation (2), the channel coding rate of the eavesdropping link can be obtained as

R_E = log_2(1 + γ_E) − sqrt((1 − 1/(1 + γ_E)^2)/n) · Q^(−1)(ε_E) · log_2(e),   (4)

where γ_E = g_1·P_1/σ_E^2 is the SNR at E, P_1 is the transmit power at S, σ_E^2 is the power of noise at E, and ε_E is the decoding error probability at E. Likewise, the effective channel coding rate of the suspicious link can be obtained as

R_D = C_D − sqrt((1 − 1/(1 + γ_D)^2)/n) · Q^(−1)(ε_D) · log_2(e),   (5)

where C_D = log_2(1 + γ_D), γ_D = (g_0·P_1 + g_2·P_2)/σ_D^2 is the effective SNR at D, P_2 is the transmit power at E, σ_D^2 is the power of noise at D, and ε_D is the decoding error probability at D. E can act as a fake relay and alter the effective channel of the suspicious link from S to D [17]. Thus, we use the effective channel coding rate, which accounts for both the suspicious link and the relaying link. ε_D results from the error probabilities of the individual links, where ε_0 and ε_2 are the decoding error probabilities of the suspicious link and the relaying link, respectively.
In summary, we can obtain the relation in Equation (7). It can be known that Q(x) < 0.5 when x > 0; so, according to Equation (3), ε < 0.5. In this way, we immediately have ε_E < 0.5. Thus, we can derive ε_D < ε_E from Equation (7).
When ε_E < ε_2, we can obtain ε_D < ε_2. However, we consider ε_E ≥ ε_2 to be more reasonable. The main reasons are the following: ε_2 decreases as the transmission rate of E decreases; ε_2 decreases as the transmit power of E increases; meanwhile, as the transmit power of E increases, ε_E increases. Overall, ε_2 can be kept very small by reducing the transmission rate of E or increasing the transmit power of E.
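To make Equations (4) and (5) concrete, the sketch below computes the SNRs and coding rates of the two links; it reuses the coding_rate helper from the earlier sketch, and the channel gains and powers are placeholder values chosen for illustration rather than values taken from the paper.

```python
# Illustrative only: placeholder channel gains; powers in linear scale.
# Reuses coding_rate() from the sketch after Equation (3).
g0, g1, g2 = 0.8, 1.1, 0.9                    # S->D, S->E and E->D channel gains
P1, P2 = 10 ** (20 / 10), 10 ** (2 / 10)      # transmit powers of S and E
sigma2_E = sigma2_D = 1.0                     # noise powers (normalized)
n, eps_E, eps_D = 400, 1e-3, 1e-4

gamma_E = g1 * P1 / sigma2_E                  # SNR of the eavesdropping link
gamma_D = (g0 * P1 + g2 * P2) / sigma2_D      # effective SNR at D with relaying

R_E = coding_rate(gamma_E, n, eps_E)          # Equation (4)
R_D = coding_rate(gamma_D, n, eps_D)          # Equation (5)
print(f"R_E = {R_E:.3f}, R_D = {R_D:.3f} bits per channel use")
```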
In general, under the Shannon capacity regime, the Shannon capacity of the eavesdropping link is C_E and, accordingly, the effective Shannon capacity of the suspicious link is C_D, as in [17]. Next, we give the following proposition. Proposition 1: R_E > R_D when C_E ≥ C_D, i.e., under the finite blocklength regime, E can eavesdrop the information sent by S under the same condition as under the Shannon capacity regime.
Proof: See detailed proof of Proposition 1 in Appendix A. The corresponding simulation is shown in Figure 2.
Next, we give the following proposition, which is different from the results under the Shannon capacity regime where the legitimate monitor can eavesdrop the information sent by the suspicious transmitter only when C E ≥ C D .
Proposition 2: E can still eavesdrop the information sent by S as n decreases, even under some conditions where C_E < C_D; i.e., when n decreases, R_E ≥ R_D can still be achieved even in some conditions of C_E < C_D.
Proof: Based on Equation (A1), it is known that R_E − R_D > 0 when C_E = C_D. Further, according to Equation (A1), the value of R_E − R_D at C_E = C_D decreases with n because n appears in the denominator; therefore, this gap increases as n decreases. In this way, in some conditions of C_E < C_D, R_E ≥ R_D can still be achieved as n decreases, which is investigated by simulation in Figure 3. Thus, E can still eavesdrop the information sent by S as n decreases, even in some conditions of C_E < C_D.
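For a quick numerical feel for Proposition 2 (again reusing the coding_rate helper above; the SNR values are illustrative, not the exact operating points of Figure 3), one can tabulate the ratio R_E/R_D against the blocklength:

```python
# Reuses coding_rate() from the earlier sketch. With gamma_E slightly below
# gamma_D (so C_E < C_D), the ratio R_E / R_D can still rise above 1 once the
# blocklength n becomes small enough, illustrating Proposition 2.
gamma_D = 10 ** (20 / 10)
gamma_E = 0.98 * gamma_D
eps_E, eps_D = 1e-3, 1e-4

for n in (100, 400, 1600):
    ratio = coding_rate(gamma_E, n, eps_E) / coding_rate(gamma_D, n, eps_D)
    print(f"n = {n:5d}: R_E / R_D = {ratio:.4f}")
```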
Analysis of Effective Eavesdropping Rate
When R_E > R_D, there is always a potential means, such as increasing the relaying power of the legitimate monitor, to improve the eavesdropping rate by increasing R_D until R_E = R_D, at which point R_D reaches its optimal value. Any further increase of R_D would lead to R_E < R_D, which means the failure of eavesdropping. So, when the suspicious link is eavesdropped at the optimal eavesdropping rate, the relation R_E = R_D always holds.
Next, under the finite blocklength regime, we define a metric called the effective eavesdropping rate to analyze the system performance. Mathematically, the effective eavesdropping rate is given by

R_eff = (1 − ε_E) · R_eav,   (8)

where R_eav is the eavesdropping rate and R_eav = R_D = R_E. According to Equation (3), we can reformulate Equation (8) as a function of R_eav as

R_eff = R_eav · (1 − Q((a − R_eav)/b)),   (9)

where a = C_E = log_2(1 + γ_E) and b = sqrt((1 − 1/(1 + γ_E)^2)/n) · log_2(e). Next, we study Equation (9), for which we have the following lemma.
Lemma 1:
Under the finite blocklength regime, the effective eavesdropping rate R_eff is monotonically increasing over [0, R*_eav] and monotonically decreasing over (R*_eav, a) for moderate or even high SNR, where R*_eav is the eavesdropping rate that maximizes the effective eavesdropping rate R_eff.
Proof: See detailed proof of Lemma 1 in Appendix B.
Based on the proof of Lemma 1, we have shown that there exists a maximum effective eavesdropping rate, R*_eff, corresponding to R*_eav. Unfortunately, however, a general closed form for R*_eav cannot be derived; it is therefore investigated by simulation in Figure 4 (and numerically in the sketch below). Furthermore, we consider the optimal eavesdropping rate R^opt_eav = max(R*_eav, R_0), where R_0 is the channel coding rate of the suspicious link with no relaying power. Here, we first briefly explain this choice. We consider the eavesdropping rate R_0 ≤ R_eav < a. First, consider the case when R*_eav ≥ R_0. In this case, the legitimate monitor should use a positive relaying power to facilitate the eavesdropping, such that the effective channel coding rate R_D of the suspicious link is improved from R_0 to R*_eav; thus, we have R^opt_eav = R*_eav and the optimal effective eavesdropping rate R^opt_eff = R*_eff. Next, consider R*_eav < R_0. In this case, we have R^opt_eav = R_0, which means that no relaying is required for the legitimate monitor to obtain its optimal effective eavesdropping rate.
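The following sketch locates R*_eav numerically under the expression reconstructed in Equation (9); the helper names, the chosen SNR, and the placeholder value of R_0 are ours and are meant only to illustrate the procedure.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def effective_rate(R_eav, gamma_E, n):
    """Equation (9): R_eff = R_eav * (1 - Q((a - R_eav) / b))."""
    a = np.log2(1.0 + gamma_E)
    b = np.sqrt((1.0 - 1.0 / (1.0 + gamma_E) ** 2) / n) * np.log2(np.e)
    return R_eav * (1.0 - norm.sf((a - R_eav) / b))

gamma_E = 10 ** (11.6 / 10)          # roughly the 11.6 dB case of Figure 4
n = 400
a = np.log2(1.0 + gamma_E)

# Numerically locate R*_eav, the eavesdropping rate maximizing R_eff on [0, a).
res = minimize_scalar(lambda r: -effective_rate(r, gamma_E, n),
                      bounds=(0.0, a), method="bounded")
R_star_eav, R_star_eff = res.x, -res.fun

# The optimal operating point also accounts for R_0, the suspicious-link rate
# obtained with no relaying power (placeholder value here).
R_0 = 3.0
R_opt_eav = max(R_star_eav, R_0)
print(R_star_eav, R_star_eff, R_opt_eav)
```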
Numerical Results
Next, we present numerical results obtained by simulations for the considered legitimate surveillance system. We consider the Rayleigh quasi-static block-fading channel and set the channel coefficients h_0, h_1 and h_2 to be independent circularly symmetric complex Gaussian random variables with mean zero and variance 1. Here, the transmit powers are normalized over the receiver noise powers, so we can set the noise powers at E and D to σ_E^2 = σ_D^2 = 1. Unless otherwise stated, we set the transmit power at S to P_1 = 20 dB. We assume that the transmit power P_2 is large enough to facilitate the eavesdropping.
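A minimal Monte Carlo sketch of this setup (reusing coding_rate from the earlier snippet; the block count, power values, and the success criterion R_E ≥ R_D are our illustrative choices) is:

```python
import numpy as np

# Reuses coding_rate() from the earlier sketch. Rayleigh quasi-static fading:
# each channel coefficient is circularly symmetric complex Gaussian, CN(0, 1).
rng = np.random.default_rng(0)
N_blocks = 10_000
P1, P2 = 10 ** (20 / 10), 10 ** (2 / 10)   # noise powers normalized to 1
n, eps_E, eps_D = 400, 1e-3, 1e-4

h = (rng.standard_normal((3, N_blocks)) + 1j * rng.standard_normal((3, N_blocks))) / np.sqrt(2)
g0, g1, g2 = np.abs(h) ** 2                # per-block channel gains

gamma_E = g1 * P1
gamma_D = g0 * P1 + g2 * P2
success = coding_rate(gamma_E, n, eps_E) >= coding_rate(gamma_D, n, eps_D)
print("fraction of blocks with R_E >= R_D:", success.mean())
```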
In Figure 2, R E with C E and R D with C D are shown for given blocklength n and error probability ε. Here, the transmit power P 2 is set to be 2 dB. Without loss of generality, n is set to be 100 and 400 channel uses, ε E and ε D are set to be 10 −3 and 10 −4 , respectively. As shown in the figure, when C E ≥ C D , it is clear that R E > R D . Meanwhile, we can note that R E increases with C E , and that R D also increases with C D . For example, when n is 400 channel uses, for C E = C D = 1.63, R E − R D = 0.04, while for C E = 2.14 and C D = 2.1, R E − R D = 0.09, so R E − R D > 0 when C E ≥ C D . Thus, under the finite blocklength regime, E can eavesdrop the information sent by S the same as the condition under the Shannon capacity regime, which is in line with Proposition 1.
In Figure 3, we plot the ratio of R E and R D with n when γ E = 1.04γ D , γ E = 1.02γ D , γ E = γ D , γ E = 0.98γ D and γ E = 0.96γ D , where γ E = 0.98γ D and γ E = 0.96γ D represent some conditions of C E < C D . We set ε E and ε D to be 10 −3 and 10 −4 , respectively. As shown in the figure, we can note that when γ E ≥ γ D , R E /R D > 1 and R E /R D decreases with n. Meanwhile, in comparison to γ E = γ D , R E /R D can still be larger than or equal to 1 when γ E = 0.98γ D and γ E = 0.96γ D as shown in the figure. For example, when R E /R D = 1, the blocklengths n of the red and green curves are respectively around 1400, 400 channel uses, thus, n decreases. So even in some conditions of C E < C D , E can still eavesdrop the information sent by S as n decreases, which demonstrates proposition 2. Figure 4 shows the effective eavesdropping rate R e f f with the eavesdropping rate R eav at E given in Equation (9). Here, the results are obtained when a is 2.01 and 3.95 bits per channel use, thus, we can obtain that γ E is 4.81 dB and 11.6 dB, which are supposed to moderate SNRs. Without loss of generality, we set n to be 400 channel uses. As shown in the figure, we can note that R e f f is first monotonically increasing and then monotonically decreasing and there is a maximum value of the eavesdropping rate, R * eav , which is corresponding to the maximum value of the effective eavesdropping rate, R * e f f . For example, R * eav is around 3.7 when γ E is 11.6 dB. Moreover, we can also note that R e f f is larger when γ E is 11.6 dB compared with γ E is 4.81 dB. Thus, for a given blocklength n, R e f f increases with γ E for the same R eav . So far, the Lemma 1 is demonstrated by simulation.
In Figure 5, we plot the maximum effective eavesdropping rate R*_eff against the blocklength n. Here, corresponding to Figure 4, the results are obtained when a is 2.01 and 3.95 bits per channel use. As shown in Figure 5, we can clearly note that R*_eff increases with n. We can also note that the increments of the curves are almost negligible when n reaches a relatively large value. For example, the increment of the red curve is very small in the range of 1500 channel uses to 2000 channel uses. Moreover, it is easy to see that R*_eff increases with a; thus, R*_eff increases with γ_E.
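The trend of Figure 5 can be reproduced in outline with the sketch below (reusing effective_rate from the earlier snippet; the grid search is a simple stand-in for whatever optimization was used to produce the figure):

```python
import numpy as np

# Reuses effective_rate() from the earlier sketch: for each blocklength n,
# find the maximum of R_eff over a grid of eavesdropping rates in [0, a].
gamma_E = 10 ** (11.6 / 10)
a = np.log2(1.0 + gamma_E)
grid = np.linspace(0.0, a, 4001)

for n in (100, 200, 400, 800, 1600, 2000):
    R_star_eff = effective_rate(grid, gamma_E, n).max()
    print(f"n = {n:5d}: max R_eff = {R_star_eff:.4f}")
```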
Conclusions
In this paper, under the finite blocklength regime, we analyze the performance of a legitimate proactive eavesdropping system, which consists of a suspicious transmitter-receiver pair and a legitimate monitor. We consider that the legitimate monitor operates in a full-duplex mode with simultaneous information reception and relaying. Moreover, we analyze the channel coding rate of the eavesdropping link and the suspicious link. We find that the legitimate monitor can still eavesdrop the information sent by the suspicious transmitter as the blocklength decreases, even when eavesdropping fails under the Shannon capacity regime. Furthermore, we define a metric called the effective eavesdropping rate and analyze its monotonicity. From the analysis of monotonicity, the existence of a maximum effective eavesdropping rate for moderate or even high SNR is verified. Finally, numerical results are provided and discussed. In the simulation, we also find that the maximum effective eavesdropping rate slowly increases with the blocklength, and the increment is almost negligible when the blocklength is relatively large.
Proof of Proposition 1
First, when C_E = C_D, we have

$$R_E - R_D = \sqrt{\frac{1 - 2^{-2C_D}}{n}}\,\Big(Q^{-1}(\varepsilon_D) - Q^{-1}(\varepsilon_E)\Big)\log_2 e,$$

where it can be seen that $\sqrt{(1 - 2^{-2C_D})/n}\cdot\log_2 e > 0$. Since ε_D < ε_E and Q^{−1}(x) is a decreasing function of x, we have Q^{−1}(ε_D) > Q^{−1}(ε_E). Thus we obtain R_E > R_D when C_E = C_D.

Afterwards, Equation (1) can be approximated as

$$R \approx C - \sqrt{\frac{1 - 2^{-2C}}{n}}\,Q^{-1}(\varepsilon)\log_2 e. \tag{A2}$$

As shown in Figure A1, the approximation in Equation (A2) is very tight over the considered range of SNR. According to Equation (A2), R increases with C. Thus R_D increases with C_D; therefore, if C_E > C_D, so that C_D is smaller than under the condition C_E = C_D, then R_E is necessarily larger than R_D.

In conclusion, R_E > R_D when C_E ≥ C_D.
Proof of Lemma 1
To demonstrate that there is a value of R_eav that maximizes the effective eavesdropping rate R_eff, we next examine the monotonicity and concavity of R_eff with respect to R_eav. For this purpose, we derive the first and second derivatives of R_eff with respect to R_eav, respectively.
Based on the differentiation of a definite integral with respect to a parameter [30], the first derivative R'_eff of R_eff with respect to R_eav is given by Equation (A3); likewise, the second derivative R''_eff is obtained as Equation (A4). It is easy to note that a > 0 and b > 0, so the Q-function term satisfies 0 < Q(·) < 0.5 for a positive argument; moreover, m > 0. We then find that R''_eff(R_eav) < 0 within 0 ≤ R_eav < a, so R'_eff(R_eav) keeps decreasing in the range 0 ≤ R_eav < a. We next determine the sign of R'_eff(a). According to Equation (A3), the value of R'_eff(a) decreases as γ_E increases, and also decreases as n increases. Note that Equation (3) is only an approximation when n is large enough [29], e.g., n ≥ 100, and γ_E = −5 dB is generally regarded as a relatively low SNR. Substituting γ_E = −5 dB and n = 100 channel uses into Equation (A7), we obtain R'_eff(a) < 0. So for moderate or even high SNR, R'_eff(a) is definitely smaller than zero for a given n.
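The exact form of R_eff in the paper involves a definite integral (hence the reference to [30] above) and is not reproduced here. As a hedged stand-in, the sketch below maximizes the assumed product surrogate R_eff(R_eav) = R_eav·(1 − ε_E(R_eav)), i.e., eavesdropping rate times decoding-success probability under the normal approximation; for n = 400 it peaks near the reported R*_eav ≈ 3.7 at γ_E = 11.6 dB, reproducing the qualitative behaviour of Lemma 1:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

LOG2E = np.log2(np.e)

def decode_error(gamma, n, R):
    """Normal-approximation block error probability at rate R on AWGN."""
    C = np.log2(1.0 + gamma)
    V = (1.0 - (1.0 + gamma) ** -2.0) * LOG2E**2
    return norm.sf(np.sqrt(n / V) * (C - R))   # norm.sf is the Q-function

def eff_rate(R_eav, gamma_E, n):
    """Assumed surrogate: eavesdropping rate times decoding-success probability."""
    return R_eav * (1.0 - decode_error(gamma_E, n, R_eav))

n = 400
for gamma_E_dB in (4.81, 11.6):
    gamma_E = 10.0 ** (gamma_E_dB / 10.0)
    a = np.log2(1.0 + gamma_E)              # C_E; R_eav is swept over [0, a)
    res = minimize_scalar(lambda R: -eff_rate(R, gamma_E, n),
                          bounds=(0.0, a), method="bounded")
    print(f"gamma_E = {gamma_E_dB} dB: R*_eav ~ {res.x:.2f}, "
          f"R*_eff ~ {-res.fun:.2f} bits/channel use")
```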
Summarizing, R'_eff(R_eav) keeps decreasing within 0 ≤ R_eav < a, while R'_eff(0) > 0 and R'_eff(a) < 0 for moderate or even high SNR. So there must exist a value R*_eav such that R'_eff(R*_eav) = 0, where R*_eav is the value of R_eav that maximizes the effective eavesdropping rate R_eff. So far, Lemma 1 is proved.
| 6,967.8 | 2019-07-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Current Novel Advances in Bronchoscopy
Screening for lung cancer has changed substantially in the past decade since The National Lung Screening Trial. The resultant increased discovery of incidental pulmonary nodules has led to growth in the number of lesions requiring tissue diagnosis. Bronchoscopy is one main modality used to sample lesions, but peripheral lesions remain challenging for bronchoscopic biopsy. Alternatives have included transthoracic biopsy or operative biopsy, which are more invasive and have a higher morbidity than bronchoscopy. In hopes of developing less invasive diagnostic techniques, technologies have emerged to assist the bronchoscopist in reaching the outer edges of the lung. Navigational bronchoscopy is able to virtually map the lung and direct the biopsy needle where the scope cannot reach. Robotic bronchoscopy platforms have been developed to provide stability and smaller optics to drive deeper into the bronchial tree. While these new systems have not yet been proven to improve outcomes, they may reduce the need for invasive procedures and become a valuable part of the armamentarium for diagnosing and treating lung nodules, especially in the periphery.
INTRODUCTION
Screening for lung cancer has changed substantially in the past decade. The National Lung Screening Trial demonstrated significant utility for low dose computed tomography (CT) scans in patients with high risk profiles by increasing early detection of lung cancer with decreased mortality (1,2). With the advent of screening, 1.6 million new pulmonary nodules are detected annually, posing diagnostic dilemmas in evaluating these lesions (3). While many of these nodules are small and can be monitored with serial imaging, many require tissue for diagnosis and eventual treatment. The number of invasive diagnostic procedures has subsequently increased in kind.
Prevailing modalities for obtaining tissue diagnosis of pulmonary nodules include transthoracic image guided biopsy and bronchoscopy with endobronchial ultrasound (EBUS). Both have limitations. Transthoracic biopsy is largely useful for peripheral, small lesions, and has a higher yield than bronchoscopy, but is inadequate for sampling central lesions and mediastinal lymph nodes (4). There is a substantial risk of lung injury and iatrogenic pneumothorax, which is increased in patients with emphysematous changes (5), resulting in a reluctance to perform transthoracic biopsy for lesions close to major vascular structures or in bullous lungs. Bronchoscopy is often still required for staging in malignant lesions. EBUS remains one standard method of tissue diagnosis, as the modality allows for staging mediastinal lymph nodes and evaluating endobronchial involvement. However, it is limited to central and large tumors, with reports of low diagnostic yield for other nodules. Surgical biopsy remains the final option, especially for peripheral lesions, but is an invasive procedure. Additionally, surgical resection may require preoperative marking for small nodules or those not directly on the pleural surface.
HISTORY OF BRONCHOSCOPY
Bronchoscopy has undergone a number of iterative improvements to become a useful and versatile diagnostic tool. Direct bronchoscopy originated as a tool for the retrieval of foreign objects, and evolved from the laryngoscope used by otolaryngologists. Flexible bronchoscopy was introduced by Dr. Ikeda, a thoracic surgeon at the National Cancer Center in Japan, after applying the fiberoptic imaging used by endoscopists to a smaller channel. Biopsy forceps were easily adapted, and transbronchial fine needle aspiration for cytology quickly followed (6). Miniaturized ultrasound probes, first with a radial probe and then the convex probe, soon made bronchoscopy the standard of care in staging the mediastinum (6)(7)(8). While the advancements in bronchoscopy have allowed it to move from direct line of sight to endobronchial, and then to transbronchial biopsy, more peripheral pulmonary lesions remain a challenge.
ELECTROMAGNETIC NAVIGATIONAL BRONCHOSCOPY
Given the risks of transthoracic approaches and the invasiveness of surgical biopsy, recent advancements in image guidance have been developed to extend the bronchoscope's reach. Electromagnetic navigational bronchoscopy (ENB) relies on high resolution CT and an electromagnetic (EM) field generated around the patient's chest. CT images are reconstructed into a three-dimensional map and loaded to generate a virtual bronchoscopist's view. A steerable probe that can be sensed by the field is loaded into the tip of a flexible bronchoscope, and select known points in the tracheobronchial tree are mapped to the virtual lungs to synchronize the images to the EM field. The probe, along with an extendable working channel, can then be advanced past the scope tip into smaller bronchi and driven along the virtual bronchial map to reach the target (9). While some reports indicate ENB is safe and has allowed for better sample yield of peripheral lesions, the technique is highly operator and anatomy dependent. Upper and middle lobe lesions, large lesions >2 cm, a bronchus sign (imaging of a bronchus leading to the lesion) and concurrent use with radial EBUS have been shown to improve yield (10,11). A meta-analysis of 39 studies indicates pooled diagnostic yields around 70%, although with wide variability and a 1-2% risk of pneumothorax (10) (Table 1). A more recent pool of 16 studies demonstrates a similar combined yield of 64.9% and a sensitivity to detect malignancy of 71%, with a 3% pneumothorax rate and 1.6% tube thoracostomy rate (11). A registry data review across a variety of centers shows worse diagnostic utility with ENB (yield of 38.5%) compared to radial EBUS (yield of 57%), raising the concern that efficacy may not translate from specialized centers to the community (12). Comparison of ENB to CT guided transthoracic biopsy has indicated that the diagnostic yield of bronchoscopy is still lacking. A single center retrospective review of 285 patients undergoing ENB or CT guided biopsy demonstrated yields of 66 vs. 86%, respectively. Sufficient yield for molecular analysis was similar between the two modalities (89 vs. 82%, respectively). Complication rates were significantly higher with CT guided transthoracic biopsy than with ENB, with an increased incidence of pneumothorax (29 vs. 4%) and bleeding (17 vs. 3.3%), though thoracostomy tube placement and significant bleeding rates were similarly low (13). While ultimate interventions and major complications remained low, the higher rate of bleeding and pneumothorax requires admission and observation to ensure serious sequelae do not develop.
ENB has been applied very successfully in the operating room for locating lesions for resection, especially during robotic operations. Without the ability to palpate for masses, robotic surgeons often rely on visual cues of mass location and can be aided by tattoo. ENB can help locate peripheral nodules for indocyanine green injection for precise resection. In our experience of 93 patients undergoing segmentectomy, ENB was able to locate 86% of lesions with no ENB-related complications (14). Data on ENB are summarized in Table 1.
Technical concerns exist primarily around stability and extension of the probe/catheter complex past the bronchoscope. Catheter slippage can occur, especially when significant torque is necessary to create a stable position and during tool exchanges. Visualization at the distal subsegmental bronchi is also no longer real-time and relies on the virtual image after the probe is extended past the bronchoscope, which can make navigating sharply angulated, small bronchi difficult. Despite the technical difficulties, ENB is the most commonly used method to reach the peripheral bronchial tree for tissue sampling and remains the primary alternative to transthoracic biopsy with a more favorable risk profile, albeit with lower diagnostic yield. Tagging nodules endobronchially is also beneficial during sub-lobar resections and can help locate nodules that cannot be palpated or are obscured by lung parenchyma (14).
ROBOTIC BRONCHOSCOPY PLATFORMS
The difficulties of ENB and the suboptimal yield of traditional bronchoscopy have led to the development of robotic bronchoscopy. The robotic platform uses a similar virtual map generated from reconstructed high-resolution CT and EM field mapping, but has redesigned the bronchoscope and utilizes robotic arms to maneuver and drive it forward. The two platforms consist of largely similar equipment including a cart with robotic arms, the bronchoscope, the tower, and a controller. The Monarch TM system's bronchoscope consists of a 130° articulating sheath and an inner bronchoscope that telescopes out of the sheath and can flex 180° in any direction. All parts of the scope can be positionally parked for stability during tool exchanges and biopsy. The controller is modeled after current generation game controllers with two joysticks and minimal buttons (15). The Ion TM Endoluminal Platform uses a single bronchoscope/catheter complex and robotic arm. The scope consists of a catheter measuring 3.5 mm in outer diameter with a 2 mm working channel, and a vision probe that loads into the working channel. The catheter includes fiber optic shape sensors that provide real-time, precise location and catheter shape information throughout the navigation and biopsy process and allow the full length of the catheter to be parked in its current position for stability. The vision probe requires extraction once navigation is complete, and biopsy is done under virtual guidance. Existing technologies, including radial EBUS, fluoroscopy and navigational bronchoscopy, are integrated into both tower systems (16).
Early studies have been promising. An initial feasibility and safety study by Rojas-Solano et al. showed a good safety profile, albeit in a small cohort. Fifteen patients with peripheral lesions and a bronchus sign underwent robotic bronchoscopy with the Monarch TM system, and 93% of targets were successfully biopsied. Average tumor size was 26 mm. One patient required conversion to conventional bronchoscopy as the robotic parameters were set incorrectly. Another patient's biopsy was nondiagnostic, and the patient subsequently underwent surgical biopsy for diagnosis of malignancy. No patients suffered pneumothorax or bleeding. Early procedure times had a median of 45 min, which dropped by more than half by the end of the series (17). A multicenter prospective study of 46 patients demonstrated similar results with successful navigation and biopsy in 95.6% of patients confirmed by radial EBUS. There was one pneumothorax (4.3%) requiring a tube thoracostomy (Table 2A). Yield and diagnosis are pending (18). A recent retrospective multicenter study in 165 patients with 167 lesions showed an 88.6% navigation rate when confirmed by radial EBUS, with a conservative diagnostic yield of 69% and a maximum of 77%. Mean lesion size was 25 mm, with 71% under 30 mm and 63.5% demonstrating a pre-procedure bronchus sign. In lesions where an eccentric view on radial EBUS was seen, the diagnostic yield was 71%, higher than reported with radial EBUS alone. Complications included a 3.6% rate of pneumothorax and a 2.4% rate of bleeding, comparable to other bronchoscopy trials (19). Two prospective single-arm multicenter trials, the BENEFIT and TARGET trials, are ongoing. Preliminary data from the BENEFIT trial demonstrate 96% localization rates and similarly low complication rates. TARGET is currently enrolling (21,22). One study has published data on the Ion TM system in 29 patients with intriguing yield data and an acceptable safety profile. Average tumor size was 12 mm, with 96.6% localization and tissue sampling success. Diagnostic yield was 79.3%, and 88% of sampled lesions were malignant. A bronchus sign was present in 58.6% of all biopsied lesions. Procedure times, however, were fairly long, initially averaging 95 min before dropping to 61 min. The authors reported no complications (20). Our institution has used the Ion platform in 9 patients with a single surgeon immediately prior to resection for preoperative tattooing (Table 2B). Tattooing is done to help identify nodules for potential sub-lobar resection, as they can be difficult to visualize and cannot be felt on the robotic platform. In our series, seven patients had successful navigation and dye injection. Two were converted to ENB and successfully tattooed. Mean duration of bronchoscopy was 13 min, and none had complications related to bronchoscopy. Average length of stay was 1 day. The PRECIsE trial is a prospective single-arm multicenter trial currently enrolling for the Ion TM Endoluminal system (21,23).
DISCUSSION
Electromagnetic navigational bronchoscopy and robotic bronchoscopy have both expanded the reach of conventional bronchoscopy and EBUS. Virtual pathfinding and navigation have allowed the working channel to extend past what the camera can see and fit through. The ENB system has allowed CT imaging to not just guide operative planning, but be a real time GPS for sampling and marking peripheral lesions for diagnosis and surgical resection. While this has been an important step forward in advancing endobronchial therapies, operator dependence and technical prowess factor into the debate over ENB's overall usefulness in boosting diagnostic yield. The benefits of robotic assisted platforms largely stem from a retooling of the bronchoscope into one with precise movements, adjustable angulation, and increased stability. Reliable sampling of peripheral lesions necessitates the ability to navigate to a target and remain in stable position while instruments and needles are exchanged. Robotic assistance increases dexterity to make subtle or acute changes in navigation. The increased structural support of the sheath and scope, as well as the fiberoptic shape sensing, allows for more leverage when making complex turns, and aids in positional parking. Continuous visualization of the peripheral airways, with one platform also offering direct visualization of biopsy tools, allows for more accurate biopsy deployment. These attributes would seem to make robotic bronchoscopic navigation and biopsy safer and extend the reach compared to conventional bronchoscopy.
Data collection is still ongoing to confirm whether these technical advantages translate into an improved clinical experience for the patient. However, the reports published are promising and show good safety profiles. If improved diagnostic yield pans out, patients may be spared higher risk transthoracic biopsy and multiple staging procedures. In addition, with reliable navigation, robotic assisted platforms may be utilized for perioperative marking, whether through fiducial placement or tattoo, and obviate a separate marking procedure by CT guidance. For nonoperative patients, the robotic platform may be a stable, accurate avenue for delivering endoluminal therapies to all corners of the lung.
All reports of robotic assisted bronchoscopy are from the past couple of years, and adoption of the platform remains in its infancy. Drawbacks to the technology include cost, increased complexity in the operating room, increased procedure time, and a learning curve without a proven benefit. Further studies are needed to evaluate the efficacy of robotic bronchoscopy. All early data need to be evaluated with the knowledge that cost, procedure time, and efficacy improve with increasing experience due to the learning curve associated with any novel technology. Thoracic robotic surgery was initially considered inefficient due to cost, operating time, and lack of benefit over traditional minimally invasive platforms. However, persistence, practice and patience have demonstrated the benefit of the robotic platform for thoracic surgery. With refinement and familiarity, robotic assisted bronchoscopy may similarly become an essential step forward in the diagnosis and treatment of peripheral pulmonary nodules.
AUTHOR CONTRIBUTIONS
JJ: writing, research, and editing. SC, AK, TG, and RC: writing and editing. All authors contributed to the article and approved the submitted version. | 3,360 | 2020-11-16T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Single-Walled Carbon Nanotube Dominated Micron-Wide Stripe Patterned-Based Ferroelectric Field-Effect Transistors with HfO2 Defect Control Layer
Ferroelectric field-effect transistors (FeFETs) with a single-walled carbon nanotube (SWCNT)-dominated micron-wide stripe pattern as the channel, (Bi,Nd)4Ti3O12 films as the insulator, and HfO2 films as a defect control layer were developed and fabricated. The prepared SWCNT-FeFETs possess excellent properties such as large channel conductance, a high on/off current ratio, high channel carrier mobility, good fatigue endurance, and long data retention. Despite its small capacitance equivalent thickness, the gate insulator with the HfO2 defect control layer shows a low leakage current density of 3.1 × 10−9 A/cm2 at a gate voltage of −3 V.
Background
The ferroelectric field-effect transistor (FeFET) is a promising candidate for nonvolatile memory devices and integrated circuits because of its high speed, single-device structure, low power consumption and nondestructive read-out operation [1-3]. (Bi,Nd)4Ti3O12 (BNT) is a Pb-free ferroelectric thin film with stable chemical properties and good fatigue endurance. Thus, a FeFET using BNT as the gate dielectric would have a smaller threshold voltage, large channel conductance, and so on. Carbon nanotubes (CNTs) have been widely applied in FeFETs due to their high conductivity and large carrier mobility [4-7]. It is well known that there are no dangling bonds on the surface of ideal CNTs, which leads to a small interface reaction between the ferroelectric film and the CNTs [8,9]. However, it is very difficult to achieve single-CNT growth between the source and drain electrodes in experiments. Besides, the on/off current ratio of CNT nanowire network FeFETs is generally low because of the admixture of metallic nanotubes in the CNT network [7,10]. Song et al. proposed using multiwalled CNT micron-wide stripe patterns as the channel material of FeFETs to solve the above-mentioned problems, but the fatigue endurance and retention characteristics of such CNT FeFETs remained unclear [9]. Compared to a multiwalled CNT (MWCNT), a single-walled CNT (SWCNT) is a seamlessly wrapped single graphene sheet formed into a cylindrical tube [11]. Moreover, there are some defects (such as ion impurities, oxygen vacancies, and dislocations) which are difficult to control during the fabrication of ferroelectric thin films [12-14]. The diffusion of these defects can affect the on/off current ratio, fatigue endurance, and data retention [15,16]. Therefore, we insert an HfO2 layer into the SWCNT-FeFET, which blocks the diffusion of point defects and can also serve as a buffer layer to relieve the misfit between BNT and Si, thereby reducing the dislocation density in the BNT film. It controls the defects in the SWCNT-FeFET and thus significantly improves the on/off current ratio, fatigue characteristics, and data retention.
In this study, we fabricated regular, aligned micron-wide stripe-patterned SWCNT networks as the channel layer, BNT films as the insulator, and HfO2 films as the defect control layer to build bottom-gate FeFETs, expecting a good on/off current ratio, fatigue characteristics, and data retention. The structure of the SWCNT-FeFET and its preparation procedure are shown in Fig. 1a, b. Besides, we also fabricated a MWCNT-FeFET for comparison.
Methods
In the FeFET devices, the SWCNT micron-wide stripe pattern is used as the channel, the BNT thin film is used as the gate dielectric, HfO2 films are used as the defect control layer, and heavily doped n-type Si serves as both the substrate and the back-gate electrode of the FeFET. The resistivity of the n-type Si is 0.0015 Ω cm. The HfO2 was deposited on the Si substrate by pulsed laser deposition (PLD) using a KrF excimer laser with a wavelength of 248 nm, and its thickness is about 20 nm. The BNT film was deposited on the Si substrate by PLD as described in earlier work [17], and its thickness is about 300 nm. The pristine arc-discharged SWCNTs were purchased from the Chengdu Institute of Organic Chemistry (Chinese Academy of Sciences); the length and diameter are 10-30 μm and 0.8-1.1 nm, respectively. The purity is 85%, which signifies that SWCNTs are dominant. The SWCNT stripes were fabricated by evaporation-induced self-assembly. The concentration of the SWCNT/water dispersion was 100 mg/L, the evaporation rate was varied in the range of 9-21 μL/min, and the temperature was 80 °C. By controlling the solvent evaporation temperature, a well-defined stripe pattern was formed at the solid-liquid-vapor interface on the BNT/HfO2/Si substrate. Next, Pt source/drain electrodes were deposited on the SWCNTs/BNT by ion-beam sputtering using a mask plate. The total area of the metal mask plate is 1 cm2, and the areas of the source and drain are both 4.5 mm2. The channel length (L) and width (W) of the FeFET are 200 and 1500 μm, respectively. The fabricated SWCNT-FeFET was then post-annealed at 500 °C for 2 h to improve the contact between the source/drain electrodes and the SWCNTs. As reported, the CNT network contains both metallic and semiconducting nanotubes. The CNT network was processed by applying a large gate voltage: the metallic SWCNTs were nearly ablated by the load current while the semiconducting SWCNTs remained [18]. For comparison, a SWCNT/SiO2-FET was fabricated by the same method and under the same conditions; a MWCNT/BNT-FET was also fabricated by the method described in earlier work [9]. FeFET characteristics were measured using a Keithley 4200 parameter analyzer. The hysteresis loops and polarizations of the FeFET were measured using a RT Precision Workstation ferroelectric analyzer.

Results and Discussion

The SWCNT micron-wide stripe pattern on the substrate is shown in Fig. 2b. The sunken and gray stripes correspond to the bared BNT/HfO2/Si substrate in the spaces between the SWCNT micron-wide stripes. The concentration of the SWCNT precursor solution increases with evaporation, and the width of the graded stripes slightly increases as the SWCNT/water liquid level declines. The BNT/HfO2 films and BNT films on the Si substrate are shown in Fig. 2c, d. It can be seen that the surface of the BNT/HfO2 film is composed of many crystalline grains and pores, indicating a larger roughness than that of the BNT films. Figure 2e shows the P-V hysteresis loops of the BNT and BNT/HfO2 films, respectively. The polarizations of the hysteresis loops of the BNT/HfO2 films are larger than those of the BNT films at the same voltage. Even though the HfO2 layer shares part of the voltage across the BNT/HfO2 stack, the BNT film still shows a better polarization value than BNT grown directly on the Si substrate. This is because the BNT films grown on the HfO2 layer have a lower diffusion-defect concentration than BNT films grown directly on the Si substrate.

The saturation mobility μ_sat was extracted from the transfer characteristics using the gate-insulator capacitance per unit area C_ins = ε_0 ε_r / t_ins [19], where ε_r is the relative permittivity and t_ins is the BNT thickness. A relative dielectric constant (ε_r) of the BNT film of 350 was measured at 1 MHz with an HP4156 parameter analyzer. The μ_sat of the SWCNT/BNT/HfO2-FeFET and SWCNT/BNT-FeFET are 395 and 300 cm2/V s, respectively. Figure 5 shows the transfer characteristics; the obtained memory window of about 4.2 V is larger than previously reported values (about 1 V) with the CNT network as the channel layer [20]. The larger MW indicates good dielectric coupling in this FeFET system. From Fig. 4c, we can see that the obtained window width of the SWCNT/SiO2/HfO2-FET is about 1 V, which is mainly caused by the defect densities of the SWCNTs [21]. These results suggest that the memory window hysteresis (4.2 V) of the ferroelectric FeFET is caused by both BNT polarization and SWCNT defect densities. Figure 6a shows the leakage current-voltage characteristics of the BNT/HfO2 and BNT films. As can be seen, the leakage currents are 1.2 × 10^−9 A and 1.5 × 10^−8 A for the BNT/HfO2 and BNT films, respectively, when the voltage reaches −3 V. The leakage current-voltage characteristics of the BNT/HfO2 and BNT films were studied for comparison by fitting the I-V data. The leakage current characteristics of a Schottky contact were represented by Ln(J) = b(V + V_bi*)^{1/4} [9,22,23], and the corresponding plots for the BNT/HfO2 and BNT films in the voltage range of 0 to 3.8 V are shown in Fig. 6b. The built-in voltage V_bi* and slope b in the formula can be obtained by fitting the experimental I-V data. The calculated space-charge densities N_eff, which consist of deep trapping centers and oxygen vacancies [22], are about 2.132 × 10^17 cm^−3 and 1.438 × 10^19 cm^−3 for the BNT/HfO2 and BNT films, respectively. It is indicated that the BNT films deposited on the Si substrate are n-type semiconductors according to the formula for interface barrier heights [24]. This is consistent with the effect of HfO2 on reducing the off-current in Fig. 4a, b, because n-type BNT generates electrons that increase the off-current at negative voltage. The BNT film conduction shows a bulk-controlled mechanism, which further implies that the n-type BNT is mainly induced by conductive defects or impurities [9,22]. Figure 6c shows the fatigue endurance performance of the SWCNT/BNT/HfO2-FeFET, SWCNT/BNT-FeFET, and MWCNT/BNT-FeFET with a 100-kHz bipolar pulse over the V_GS range from −7 to 4 V. The fatigue endurance of the FeFET is exhibited in the loss of switchable polarization with repeated switching cycles. The value of the non-volatile polarization (P_nv) is obtained from the equation P_nv = P_r* − P_r^ and then normalized as P_nv/P_r0* [25], where P_r* is twice the remnant polarization of the FeFET, P_r^ is the loss of polarization after the next pulse, and P_r0* is twice the initial remnant polarization of the FeFET. A partial loss of the normalized P_nv after 10^11 read/write switching cycles is observed for the FeFETs, approximately 3, 10, and 25% for the SWCNT/BNT/HfO2-FeFET, SWCNT/BNT-FeFET, and MWCNT/BNT-FeFET, respectively. When BNT grows directly on the bottom-electrode Si, the fatigue performance of the SWCNT/BNT-FeFET is very poor because of diffusion between BNT and the Si substrate through grain boundaries [12-14]. These results suggest that the HfO2 layer effectively blocks diffusion from the Si substrate and reduces the ion impurities, which results in excellent fatigue endurance.
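The Schottky fit described above can be sketched numerically. The snippet below fits ln(J) = ln J₀ + b(V + V_bi*)^{1/4} to synthetic data; the data values and the added intercept ln J₀ are assumptions for illustration, not the measured BNT leakage:

```python
import numpy as np
from scipy.optimize import curve_fit

def schottky_lnJ(V, lnJ0, b, Vbi):
    """ln(J) = ln(J0) + b * (V + Vbi)^(1/4); lnJ0 is an added intercept."""
    return lnJ0 + b * np.power(V + Vbi, 0.25)

# Synthetic leakage data standing in for the measured BNT I-V curve.
rng = np.random.default_rng(0)
V = np.linspace(0.1, 3.8, 40)
lnJ = schottky_lnJ(V, -22.0, 2.5, 0.3) + rng.normal(0.0, 0.02, V.size)

# Bounded fit keeps V + Vbi positive during optimization.
popt, _ = curve_fit(schottky_lnJ, V, lnJ, p0=(-20.0, 1.0, 0.1),
                    bounds=([-np.inf, 0.0, 0.0], [np.inf, np.inf, 5.0]))
print("fitted lnJ0, b, V_bi* =", np.round(popt, 3))
# The fitted b and V_bi* would then feed the N_eff estimate as in [22].
```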
To assess the device reliability of the FeFET toward NVRAM applications, we examined data retention. Figure 7 shows the source-drain current retention curves for the SWCNT/BNT/HfO2-FeFET, SWCNT/BNT-FeFET, and MWCNT/BNT-FeFET at room temperature. A voltage pulse of V_GS = −4 V or V_GS = 1 V at V_DS = 1 V is applied to the gate and source-drain electrodes, switching the FeFET to the off or on state, respectively. The measured on/off-state current ratios are nearly 3 × 10^4, 7 × 10^3, and 6 × 10^2 for the SWCNT/BNT/HfO2-FeFET, SWCNT/BNT-FeFET, and MWCNT/BNT-FeFET, respectively. There is no significant loss in the on/off-state current ratio (3.2%) after a retention time of 1 × 10^6 s for the SWCNT/BNT/HfO2-FeFET. By extrapolating the curves to 10^8 s, the on/off-state current ratios are nearly 1.9 × 10^4, 3 × 10^3, and 2 × 10^2 for the SWCNT/BNT/HfO2-FeFET, SWCNT/BNT-FeFET, and MWCNT/BNT-FeFET, respectively. The on/off-state ratio of the SWCNT/BNT/HfO2-FeFET is still high enough for memory operation, demonstrating the desirable retention of the present memory device. Retention is influenced by the gate leakage current [26,27]. The long retention time indicates that the HfO2 defect control layer can effectively reduce the off-state current and gate leakage current, which stabilizes the on/off current ratio. In addition, a comparison between ferroelectric-based FETs with different CNTs is given in Table 1, suggesting that the SWCNT/BNT/HfO2-FeFET fabricated in this study provides a high on/off current ratio, good fatigue endurance, and long data retention.
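The 10^8-s extrapolation mentioned above is essentially a straight-line fit in log-log coordinates; a minimal sketch with assumed retention samples:

```python
import numpy as np

# Assumed retention samples (time [s], on/off ratio) standing in for the
# measured SWCNT/BNT/HfO2-FeFET curve.
t = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
ratio = np.array([3.0e4, 2.95e4, 2.92e4, 2.91e4, 2.90e4])

# Straight-line fit of log10(ratio) against log10(t), extrapolated to 1e8 s.
slope, intercept = np.polyfit(np.log10(t), np.log10(ratio), 1)
ratio_1e8 = 10.0 ** (slope * 8.0 + intercept)   # log10(1e8) = 8
print(f"extrapolated on/off ratio at 1e8 s: {ratio_1e8:.2e}")
```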
To further understand how the defects influence the physical characteristics of the device, the P-E hysteresis loops and I_DS-V_GS curves of the SWCNT/BNT/HfO2-FeFET and SWCNT/BNT-FeFET were simulated by considering the asymmetric charge caused by defects, using our previous models [12,28]. An asymmetric charge caused by defects is considered to simulate the P-E hysteresis loops and I_DS-V_GS curve of BNT, while a symmetric charge is considered to simulate those of BNT/HfO2. The simulation results are shown in Fig. 8a, b, and are similar to the experimental results of Figs. 2e and 5a, b, respectively. The simulation results indicate that the HfO2 layer effectively reduces the asymmetric charges caused by defects in the ferroelectric films, which in turn reduces the off-state current. Therefore, it can be concluded that the defects of the ferroelectric thin film were effectively controlled by the HfO2 defect control layer.
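The authors' simulation models [12,28] are not reproduced here. As a toy illustration of how an asymmetric defect charge skews a P-E loop, the sketch below uses a tanh hysteresis with an internal bias field E_bias; all parameter values are assumptions:

```python
import numpy as np

def pe_loop(E, Ps=20.0, Ec=100.0, delta=40.0, E_bias=0.0):
    """Toy tanh hysteresis: ascending and descending P(E) branches
    (uC/cm^2), shifted by an internal bias field E_bias (kV/cm) that
    mimics defect-induced asymmetric charge."""
    up = Ps * np.tanh((E - Ec - E_bias) / delta)    # field sweeping up
    down = Ps * np.tanh((E + Ec - E_bias) / delta)  # field sweeping down
    return up, down

E = np.linspace(-400.0, 400.0, 801)                 # includes E = 0 exactly
for label, bias in (("symmetric (BNT/HfO2-like)", 0.0),
                    ("defect-biased (BNT-like)", 60.0)):
    up, down = pe_loop(E, E_bias=bias)
    print(f"{label}: +Pr = {down[E == 0][0]:6.2f}, "
          f"-Pr = {up[E == 0][0]:6.2f} uC/cm^2")
```

With a nonzero bias the positive and negative remanent polarizations are no longer equal in magnitude, which is the loop asymmetry the defect charge introduces.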
Conclusions
In summary, the effect of HfO2 as a defect control layer on the on/off current ratio, fatigue endurance, and data retention of SWCNT/BNT-FeFETs has been investigated, in which the defects of the ferroelectric thin film are controlled by the HfO2 defect control layer. Owing to the thin HfO2 defect control layer, the fabricated SWCNT/BNT/HfO2-FeFET shows a low leakage current of 1.2 × 10^−9 A when the voltage reaches −3 V, a large on/off current ratio of 2 × 10^5, a V_th of 0.2 V, and a μ_sat of 395 cm2/V s.

Availability of data and materials

All data are fully available without restriction.
Authors' contributions QT and QW conceived the project and performed the experiments, characterization, and data analysis. YL and HY helped with the electrical performance tests and SEM analysis, respectively. All authors discussed the results and commented on the manuscript. All authors read and approved the final manuscript.
| 3,122 | 2018-04-27T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Mixed Convection Flow over an Unsteady Stretching Surface in a Porous Medium with Heat Source
This paper deals with the analysis of an unsteady mixed convection flow in a fluid-saturated porous medium adjacent to a heated/cooled semi-infinite stretching vertical sheet in the presence of a heat source. The unsteadiness in the flow is caused by the continuous stretching of the sheet and the continuous increase in the surface temperature. We present analytical and numerical solutions of the problem. The effects of the emerging parameters on the field quantities are examined and discussed.
Introduction
The study of flow and heat transfer over a continuous stretching sheet with a given temperature distribution has received much attention due to its applications in different fields of engineering and industry. The stretching and heating/cooling of the plate have a definite impact on the quality of the finished product. The modeling of the real processes is thus undertaken with the help of different stretching velocities and temperature distributions. Examples of such processes are the extrusion of polymers, aerodynamic extrusion of plastic sheets, and the condensation process of a metallic plate; cf. Altan et al. [1] and Fisher [2]. A few more examples of importance are heat-treated materials traveling between a feed roll and a wind-up roll or materials manufactured by extrusion, wire drawing, spinning of filaments, glass-fiber and paper production, cooling of metallic sheets or electronic chips, crystal growing, food processing, and so forth. A great deal of research in fluid mechanics is rightfully produced to model these problems and to provide analytical and numerical results for a better understanding of the fluid behavior and an adequate explanation of the experiments.
Sakiadis [3] was the first to present the boundary layer flow on a continuous moving surface in a viscous medium. Crane [4] was the first to obtain an analytical solution for the steady stretching of the surface for a viscous fluid. The heat transfer analysis for a stretching surface was studied by Erickson et al. [5], while heat and mass transfer for stretching surfaces was addressed by P. S. Gupta and A. S. Gupta [6]. Some of the research pertaining to steady stretching is given in numerous references [7-16]. In these discussions, steady-state stretching and heat transfer analyses have been undertaken.
In some cases the flow and heat transfer can be unsteady due to a sudden or oscillating stretching of the plates or time-varying temperature distributions. Physically, this concerns the rate of cooling in steady fabrication processes and the transient crossover to the steady state. These observations are generally investigated in the momentum and thermal boundary layers by assuming a steady part of the stretching velocity proportional to the distance from the edge and an unsteady part inversely proportional to time (highlighting the cooling process). A similarity solution of the unsteady Navier-Stokes equations, for a thin liquid film on a stretching sheet, was considered by Wang [17]. Andersson et al. [18] extended this problem to the heat transfer analysis for a power-law fluid. Unsteady flow past a wall which starts to move impulsively has been presented by Pop and Na [19]. The heat transfer characteristics of the flow problem of Wang [17] were considered by Andersson et al. [18]. The effect of the unsteadiness parameter on heat transfer and flow field over a stretching surface with and without heat generation was considered in [21, 22], respectively. The numerical solutions of the boundary layer flow and heat transfer over an unsteady stretching vertical surface were presented by Ishak et al. [23, 24]. Some more works regarding unsteady stretching are reported in [25-27]. It is sometimes physically interesting to examine the flow and thermal characteristics of viscous fluids over a stretching sheet in a porous medium. For example, in the physical process of drawing a sheet from a slit of a container, it is tacitly assumed that only the fluid adhering to the sheet is moving but the porous matrix remains fixed, to follow the usual assumption of fluid flow in a porous medium. Different models of the porous medium have been formulated, namely, the Darcy, Brinkman, Darcy-Brinkman, and Forchheimer models. However, the Darcy-Brinkman model is widely accepted as the most appropriate. Comprehensive reviews of convection through porous media are given in [28-35]. We all know that mixed convection is induced by the motion of a solid material (forced convection) and thermal buoyancy (natural convection). The buoyancy forces stemming from the heating or cooling of the continuous stretching sheets alter the flow and thermal fields and thereby the heat transfer characteristics of the manufacturing process. The combined forced and free convection in a boundary layer over continuous moving surfaces through an otherwise quiescent fluid has been investigated by many authors [36-43].
The introduction of a heat source/sink in the fluid is sometimes important because of sharp temperature differences between the solid boundaries and the ambient fluid that may influence the heat transfer analysis, as reported by Vajravelu and Hadjinicolaou [44]. These sources can in general be space and temperature dependent.
Keeping in view the importance of all that has been previously stated and the progress still needed in these areas, we address the problem of an unsteady mixed convection flow in a fluid-saturated porous medium adjacent to a heated/cooled semi-infinite stretching vertical sheet with a heat source. We present an analytical and a numerical solution to attain an appropriate degree of confidence in both solutions. This paper thus has multiple objectives: the presentation of a satisfactory analytical solution for unsteady stretching which can be used in future studies of unsteady problems, the introduction of a source/sink, and the consideration of a porous medium.
In mathematical terms, the governing coupled nonlinear differential equations are transformed into nondimensional self-similar ordinary differential equations using the appropriate similarity variables. The transformed equations are then solved analytically and numerically using the perturbation method with Padé approximation and the shooting method, respectively. Very good agreement has been observed. The effects of the emerging parameters on the field quantities are investigated with the help of graphs and physical reasoning. A comparison is made with the existing literature to support the validity of our results.
Development of the Flow Problem
Consider an unsteady laminar mixed convection flow along a vertical stretched heated/cooled semi-infinite flat sheet. The sheet is assumed impermeable and immersed in a saturated porous medium satisfying the Darcy-Brinkman model. At time t = 0, the sheet is stretched with the velocity u_w(x, t) and raised to the temperature T_w(x, t). The geometry of the problem is shown in Figure 1.
Under these assumptions, using the boundary layer and Boussinesq approximations, the unsteady two-dimensional Navier-Stokes equations and the energy equation in the presence of a heat source can be written as

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0, \tag{2.1}$$

$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = \nu\frac{\partial^2 u}{\partial y^2} - \frac{\nu}{K}u + g\beta(T - T_\infty), \tag{2.2}$$

$$\frac{\partial T}{\partial t} + u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} = \alpha_m\frac{\partial^2 T}{\partial y^2} + \frac{q}{\rho c_p}. \tag{2.3}$$

The appropriate boundary conditions of the problem are

$$u = u_w(x,t), \quad v = 0, \quad T = T_w(x,t) \;\text{ at } y = 0; \qquad u \to 0, \quad T \to T_\infty \;\text{ as } y \to \infty. \tag{2.4}$$
In the above equations u and v are the velocity components in the x and y directions, respectively, T is the fluid temperature inside the boundary layer, K is the permeability of the porous medium, t is time, ρ is the fluid density, c_p is the specific heat, and α_m and ν are the thermal diffusivity and the kinematic viscosity, respectively. Here q is the internal heat generation/absorption per unit volume, chosen such that A* and B*, the space-dependent and temperature-dependent heat generation/absorption parameters, are positive for an internal heat source and negative for an internal heat sink. We assume that the stretching velocity u_w(x, t) and the surface temperature T_w(x, t) are

$$u_w(x,t) = \frac{ax}{1 - ct}, \qquad T_w(x,t) = T_\infty + \frac{bx}{(1 - ct)^2},$$

where a > 0 and c > 0 are constants having dimension time^{-1} such that ct < 1. The constant b has dimension temperature/length, with b > 0 and b < 0 corresponding to the assisting and opposing flows, respectively, and b = 0 for the forced convection limit (absence of buoyancy force). Let us introduce the stream function ψ, the similarity variable η, and the nondimensional temperature θ as

$$\eta = \sqrt{\frac{a}{\nu(1 - ct)}}\,y, \qquad \psi = \sqrt{\frac{\nu a}{1 - ct}}\,x\,f(\eta), \qquad \theta(\eta) = \frac{T - T_\infty}{T_w - T_\infty}. \tag{2.7}$$

Substituting (2.7) into the governing equations yields the self-similar equations (2.9) and (2.10) together with the transformed boundary conditions, in which primes denote differentiation with respect to η, d = 1/D, α = c/a is the unsteadiness parameter, and Pr = ν/α_m is the Prandtl number. Further, λ is the buoyancy or mixed convection parameter defined as

$$\lambda = \frac{Gr_x}{Re_x^2}, \tag{2.11}$$

where Gr_x = gβ(T_w − T_∞)x³/ν² and Re_x = u_w x/ν are the local Grashof and Reynolds numbers, and D = Da_x Re_x, where Da_x = K/x² = K_1(1 − ct)/x² is the local Darcy number and K_1 is the initial permeability.
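As a quick numeric illustration of these similarity groups, the sketch below evaluates Gr_x, Re_x, Da_x, λ, d, and α for assumed physical values (none are taken from the paper):

```python
# Illustrative physical values (assumptions, not data from the paper).
g, beta = 9.81, 2.1e-4       # gravity [m/s^2], thermal expansion [1/K]
nu = 1.0e-6                  # kinematic viscosity [m^2/s]
a, c, b = 1.0, 0.1, 10.0     # stretching, unsteadiness, temperature constants
K1 = 1.0e-5                  # initial permeability [m^2]
x, t = 0.05, 1.0             # location [m] and time [s]

u_w = a * x / (1 - c * t)            # stretching velocity u_w(x, t)
dT = b * x / (1 - c * t) ** 2        # T_w - T_inf
Gr = g * beta * dT * x**3 / nu**2    # local Grashof number
Re = u_w * x / nu                    # local Reynolds number
Da = K1 * (1 - c * t) / x**2         # local Darcy number
D = Da * Re                          # so the permeability parameter d = 1/D
print(f"lambda = {Gr / Re**2:.4f}, d = {1 / D:.3f}, alpha = {c / a:.2f}")
```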
The physical quantities of interest, the skin friction coefficient C_f and the local Nusselt number Nu_x, are defined as

$$C_f = \frac{\tau_w}{\rho u_w^2}, \qquad Nu_x = \frac{x\,q_w}{k(T_w - T_\infty)},$$

where the skin friction τ_w and the heat transfer from the sheet q_w are given by

$$\tau_w = \mu\left(\frac{\partial u}{\partial y}\right)_{y=0}, \qquad q_w = -k\left(\frac{\partial T}{\partial y}\right)_{y=0}, \tag{2.13}$$

with μ and k being the dynamic viscosity and thermal conductivity, respectively. Using the transformation (2.7), we get

$$C_f\,Re_x^{1/2} = f''(0), \qquad \frac{Nu_x}{Re_x^{1/2}} = -\theta'(0).$$
Numerical Solution
Equations (2.9) and (2.10), with the corresponding boundary conditions, can be expressed as an equivalent first-order system in which α_1 = f''(0) and α_2 = θ'(0) are the missing initial conditions. These are determined by the shooting method in conjunction with implicit sixth-order Runge-Kutta integration. The results obtained are discussed in Section 4.
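Because Equations (2.9) and (2.10) are not reproduced above, the sketch below illustrates the shooting procedure on an assumed stand-in, the steady limit (α = 0) of this class of problems, f''' + ff'' − f'² − df' + λθ = 0 and θ'' + Pr(fθ' − f'θ) = 0; the unknown initial slopes f''(0) and θ'(0) play the roles of α₁ and α₂:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

Pr, d, lam = 0.72, 0.1, 0.1    # illustrative parameter values
eta_inf = 10.0                  # finite stand-in for eta -> infinity

def rhs(eta, y):
    # y = [f, f', f'', theta, theta'] for the assumed steady-limit system:
    # f''' + f f'' - f'^2 - d f' + lam*theta = 0,
    # theta'' + Pr (f theta' - f' theta) = 0.
    f, fp, fpp, th, thp = y
    return [fp, fpp, -f * fpp + fp**2 + d * fp - lam * th,
            thp, -Pr * (f * thp - fp * th)]

def residual(alpha):
    a1, a2 = alpha              # shooting guesses for f''(0) and theta'(0)
    sol = solve_ivp(rhs, (0.0, eta_inf), [0.0, 1.0, a1, 1.0, a2],
                    rtol=1e-8, atol=1e-10)
    return [sol.y[1, -1], sol.y[3, -1]]   # enforce f'(inf)=0, theta(inf)=0

a1, a2 = fsolve(residual, x0=[-1.0, -1.0])
print(f"f''(0) = {a1:.5f}, theta'(0) = {a2:.5f}")
```

The same structure carries over to the unsteady equations: only the right-hand side of rhs changes, while the two-parameter shooting on f''(0) and θ'(0) stays identical.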
Perturbation Solution for Small Parameter α
We assume that both the mixed convection parameter λ and the unsteadiness parameter α are small, and take λ = mε, where m = O(1) and α = ε. Equations (2.9) and (2.10) then yield the perturbed system (3.3).
Now expanding f and θ in powers of ε, the zeroth-order system (3.5)-(3.7) is obtained.
The exact solution of (3.5) is f_0(η) = (1 − e^{−cη})/c, where c = √(1 + d).
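This closed form can be checked symbolically. The snippet below verifies that f₀(η) = (1 − e^{−cη})/c with c = √(1 + d) satisfies the zeroth-order momentum equation, assumed here to have the standard porous stretching-sheet form f₀''' + f₀f₀'' − f₀'² − df₀' = 0, together with the wall condition f₀'(0) = 1:

```python
import sympy as sp

eta, d = sp.symbols("eta d", positive=True)
c = sp.sqrt(1 + d)
f0 = (1 - sp.exp(-c * eta)) / c

# Assumed zeroth-order momentum operator for the porous stretching sheet.
lhs = (sp.diff(f0, eta, 3) + f0 * sp.diff(f0, eta, 2)
       - sp.diff(f0, eta) ** 2 - d * sp.diff(f0, eta))

print(sp.simplify(lhs))                    # -> 0
print(sp.diff(f0, eta).subs(eta, 0))       # f0'(0) = 1 (wall condition)
```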
Discussion
The effects of various physical parameters on the velocity, temperature, local skin friction, and local Nusselt number are discussed. In Table 1, a comparison between the analytical and numerical results is presented, showing very good agreement. To compare our results with earlier work for the steady-state fluid flow, we take α = λ = d = A* = B* = 0 in (2.9). These results are compared with those given in [7, 9, 11, 24] in Table 2. In Tables 3 and 4, the skin friction coefficient and the Nusselt number, for various values of Pr and d, are presented and compared with [23]. The comparisons made in Tables 2-4 show a perfect match. Henceforth, the results discussed in the following paragraphs are due to the shooting method (Figures 2-7).
The variations of the skin friction coefficient and the local Nusselt number are shown in Figures 2(a) and 2(b). It is observed that there is an increase in the skin friction coefficient for an assisting buoyant flow (λ > 0) and the opposite for an opposing flow (λ < 0). This is reasonable because one would expect the velocity to increase as the buoyancy force increases and the corresponding wall shear stress to increase as well. This in turn increases the skin friction coefficient and the heat transfer rate at the surface. The unsteady effects are shown by the variation of α for fixed values of λ = 0.1, Pr = 0.72, d = 0.1, and A* = B* = 0.1; see Figures 3(a) and 3(b). It is seen that the horizontal velocity and the boundary layer thickness decrease with the increase of α, which must be the case for a decreasing wall velocity. Figures 4(a) and 4(b) present the velocity and temperature profiles for increasing values of the Prandtl number Pr. It is clearly seen that the effect of the Prandtl number is to decrease the temperature throughout the boundary layer, resulting in a decrease of the thermal boundary layer thickness. The effects of the porous medium on the flow velocity and temperature are realized through the permeability parameter d = 1/D, as shown in Figures 5(a) and 5(b). It is obvious that an increase in porosity causes greater obstruction to the fluid flow, thus reducing the velocity and decreasing the temperature. It is well known that λ = 0 corresponds to pure forced convection, and with an increase of λ the buoyancy force becomes stronger and the velocity of the fluid increases in the region near the surface of the sheet, which is evident from Figures 6(a) and 6(b). These figures also show that the fluid velocity increases while the temperature decreases with an increase of the mixed convection parameter λ. Figures 7(a) and 7(b) describe the effects of the heat source on the temperature profile. It is revealed that the temperature and the thermal boundary layer thickness increase with the increase of the parameters A* and B*. A sink naturally has the opposite effect.
Conclusions
The unsteady Darcy-Brinkman mixed convection flow in a fluid-saturated porous medium adjacent to a heated or cooled semi-infinite stretching surface in the presence of a heat source is investigated. The perturbation method with Padé approximation is used for the analytical solution and the shooting method for the numerical solution, reaching good agreement between the two. A comparison is made with earlier work to show the accuracy and reliability of our results. The effects of different parameters on the fluid flow and heat transfer characteristics are presented.
From these investigations the following conclusions are drawn.
(i) The Prandtl number Pr, the permeability parameter d, and the heat source/sink parameters A* and B* have significant effects, whereas the unsteadiness parameter α and the mixed convection parameter λ have only a slight effect on the flow and temperature fields.
(ii) The skin friction coefficient increases for an assisting buoyant flow (λ > 0) and decreases for an opposing buoyant flow (λ < 0).
(iii) The horizontal velocity and the boundary layer thickness decrease as the unsteadiness parameter α increases.
(iv) The velocity and temperature decrease throughout the boundary layer with the increase of the Prandtl number Pr.
(v) The velocity and temperature both decrease with an increase of the porosity of the medium.
(vi) The fluid velocity increases while the temperature decreases with the increase of the mixed convection parameter λ.
(vii) Temperature increases substantially with the increase of a heat source and decreases substantially for a heat sink.
All these observations are well supported by the physics and the boundary value problem at hand.
Figure 1: Physical model and coordinate system.
Figure 2: Variation of (a) the skin friction coefficient and (b) the local Nusselt number with λ for various values of the unsteadiness parameter α when Pr = 0.72, A* = B* = d = 0.1.
Figure 3: Effect of the unsteadiness parameter α for the case Pr = 0.72, λ = d = A* = B* = 0.1 on (a) the velocity distribution f'(η) and (b) the temperature distribution θ(η).
Figure 6: Effect of the mixed convection parameter λ for the case α = d = A* = B* = 0.1, Pr = 0.72 on (a) the velocity distribution f'(η) and (b) the temperature distribution θ(η).
Figure 7: (a) Effect of the space-dependent heat generation/absorption parameter A* on the temperature distribution θ(η) for the case Pr = 0.72, α = 0.3, λ = 0.1, d = 0.1, and B* = 0.05. (b) Effect of the temperature-dependent heat generation/absorption parameter B* on the temperature distribution θ(η) for the case Pr = 0.72, α = 0.3, λ = 0.1, d = 0.1, and A* = 0.05.
Table 1: Comparison between analytical and numerical results for f(η) and θ(η) when Pr = 0.72, d = A* = B* = 0.1.
Table 2: Values of −θ'(0) when α = λ = d = A* = B* = 0 and comparison with previous work.
Table 3: Values of f''(0) for various values of d and Pr when α = 0, λ = 1, A* = B* = 0.
Table 4: Values of −θ'(0) for various values of d and Pr when α = 0, λ = 1, A* = B* = 0.
| 3,679.2 | 2012-11-12T00:00:00.000 | [
"Engineering"
] |
Choose Your Own Research Data Management Guidance
The GW4 Research Data Services Group has developed a Research Data Management Triage Tool to help researchers find answers quickly to the more common research data queries, and direct them to appropriate guidance and sources of advice for more complex queries. The tool takes the form of an interactive web page that asks users questions and updates itself in response. The conversational and dynamic way the tool progresses is similar to the behaviour of text adventures, which are a genre of interactive fiction; this is one of the oldest forms of computer game and was also popular in print form in, for example, the Choose Your Own Adventure and Fighting Fantasy series of books. In fact, the tool was written using interactive fiction software. It was tested with staff and students at the four UK universities within the GW4 collaboration.
Introduction
One of the complexities of supporting researchers in managing their data is that there is rarely a straightforward answer to any given question. So much depends on the context: not just the researcher's institution but their funding source, research domain, the type of data with which they are working, their project role, their external collaborators (if applicable), contractual arrangements, and so on. When it comes to writing guidance for researchers, therefore, the language can quickly become a maze of caveats and conditional clauses. It is hard to express the necessary information in a clear and concise way, and even harder for researchers to navigate and understand it. A possible strategy for dealing with this is to provide minimal guidance and instead rely on the provision of an advisory service; in this way, the supporter can have a conversation with the researcher and, having understood the context of their research, provide them with advice tailored to suit. This quality of service is highly desirable, but there is a limit to how far it can scale. At times of peak demand, it is better if simpler queries can be dealt with through guidance, with the advisory service dealing with more complex cases.
This issue was discussed at a meeting of the GW4 Research Data Services Group. GW4 is a collaboration between the University of Bath, the University of Bristol, Cardiff University and the University of Exeter; 1 the Research Data Services Group is one of a number of groups that facilitate co-operation, co-ordination, and the sharing of good practice between the four institutions. The group felt that what was needed was a form of interactive guidance that could, to a limited extent, mimic the conversational approach outlined above, and either provide straightforward answers tailored to the context or, on reaching its own limitations, refer the user on to the most appropriate sources of advice or detailed guidance.
It occurred to the group that this more conversational and interactive approach to text is a defining feature of interactive fiction. This term refers to a form of game or story in which the player takes the role of the point-of-view character in an unfolding textual narrative, and by directing the character's actions they affect how the story develops (Montfort, 2004). Among the group there was some experience in using dedicated interactive fiction authoring tools, and so a small working group was set up to take forward the idea of using them to develop a Research Data Management Triage Tool.
Background
There is a long history of using characteristic elements of games in serious settings to encourage uptake and engagement. The most familiar examples come from the commercial sector, such as loyalty points schemes where customers accrue points that may be redeemed against goods or services, or trigger preferential treatment when they reach a certain level. There are, however, examples of these techniques being used in higher education and research.
Such examples can be put on a spectrum according to how extensively game elements have been applied. At the minimal end of the spectrum, some Citizen Science projects provide leader boards that introduce a sense of competition among contributors; SETI@Home's Top Participants list is an example of this. 2 Moving along the spectrum, online learning modules, such as those developed as part of the MANTRA course, include puzzles and quizzes to enable participants to demonstrate their understanding. 3 At the far end of the spectrum are full games whose primary purpose is something other than entertainment, known as serious games (Deterding, Dixon, Khaled & Nacke, 2011). Examples include Foldit, a game in which players compete to find the optimal way to fold a protein, and thereby predict how it would fold in reality (Cooper et al., 2010). The Grenoble Ecole de Management developed 'Game of Deans' to help teams conceive and develop ideas for HE services. Since 2014 the Jussieu Inter-University Science Library of the Sorbonne Universities has been running 'Murder Party' games that provide a more imaginative form of library induction (Swiatek, 2015).
The Triage Tool idea sits at the minimal end of this spectrum, since it is using some text adventure paradigms but without any sense of winning or losing; it is gamified guidance rather than a serious game. There is some evidence to suggest that using gamification in teaching and learning leads to improved results, with the caveat that it should be considered as an addition rather than a replacement for traditional techniques (van Meegen & Limpens, 2010). Thus the group was keen to position the Triage Tool as an additional resource for researchers, providing an alternative route to accessing information and by no means a substitute for existing websites or advice services.
Method of Interaction
Development of the tool began in earnest in late April 2016. One of the first decisions to be made was how the user should interact with the tool. In the sphere of interactive fiction, there are two main ways the player can interact with the story. In choice games, the user is asked to choose one of several options in order to proceed. This type of game was used in the Choose Your Own Adventure and Fighting Fantasy series of gamebooks. In parser games, the user interacts by typing in commands that the game engine interprets. This mechanism was used in many early computer games, such as Adventureland and the Zork series. The strengths and weaknesses of these two styles derive respectively from the fact that with choice games, all the available options are laid out explicitly on the screen, while with parser games, the options are hidden and must be guessed.
For the Triage Tool, the parser approach would allow more topics to be covered, and allow guidance to be accessed without having to navigate through menus. On the other hand, there is greater potential for frustration since the user has first to guess what topics might be covered, and second to express their query in a way the parser can understand. Parser games are also harder to write since the author must anticipate all the various commands the user might issue: not only the requested topics but all the multifarious ways in which they might be expressed.
Conversely, choice games are limited by the number of options that can reasonably appear together on a screen, and the number of selections a user would be willing to make in order to get to an answer. However, there is a much shallower learning curve to using them, since the user need only point and click in order to interact. Such games are correspondingly easier to write since the author controls the available responses and can plan the effect of each one in turn. On reflection, the group decided to use a choice-based approach. Since the tool was not intended to be a comprehensive advice service, it was felt that the ease of use and development afforded by a choice-based text would be worth the sacrifice of the potential richness of something parser based.
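To make the choice-based mechanics concrete, the following sketch models a triage flow as a small graph of prompts and labeled options, mirroring how a choice game advances one selection at a time; the node names and guidance strings are invented placeholders, not content from the actual GW4 tool:

```python
# Node names and guidance strings are invented placeholders,
# not content from the GW4 Triage Tool.
NODES = {
    "start": ("What do you need help with?",
              {"Writing a data management plan": "dmp",
               "Sharing data at project end": "share"}),
    "dmp": ("Is your project externally funded?",
            {"Yes": "funder_dmp", "No": "generic_dmp"}),
    "share": ("Does your data contain personal information?",
              {"Yes": "advice", "No": "repository"}),
    "funder_dmp": ("Check your funder's DMP template; contact the RDM "
                   "team if your funder is not listed.", {}),
    "generic_dmp": ("Use your institution's generic DMP template.", {}),
    "repository": ("Deposit in your institutional repository or a "
                   "suitable disciplinary repository.", {}),
    "advice": ("This needs tailored advice: contact your RDM service.", {}),
}

def run(node="start"):
    prompt, options = NODES[node]
    print(prompt)
    if not options:                 # leaf node: guidance reached
        return
    labels = list(options)
    for i, label in enumerate(labels, 1):
        print(f"  {i}. {label}")
    choice = labels[int(input("> ")) - 1]
    run(options[choice])

if __name__ == "__main__":
    run()
```

Because every option is enumerated in the node table, the author controls exactly which paths exist, which is the property that made the choice-based style easier to write and test than a parser.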
Development Environment
Having decided on the style of interaction, the group reviewed the various systems available for authoring such games and narrowed the field to a shortlist of two: Twine and Squiffy. Twine was first released in 2009 and has established itself as one of the most popular systems for choice-based games. 4 Squiffy was first released in 2014, and was developed to a state of relative stability over the following 17 months. 5 The choice between them was made on the basis of four criteria: collaboration, ease of installation, ease of use, and game play characteristics.
Collaboration
An important consideration was that the tool would be developed jointly by the GW4 partner institutions. The group needed a system that compiled games from source code, rather than an opaque binary file, and where changes from each partner could easily be merged into the master copy. In this respect, Squiffy had the advantage, since it compiles transparently from a source file that uses user-generated internal identifiers and a Markdown-like syntax. 6 In contrast, Twine 2 discourages direct editing of the source code; authors instead use dedicated authoring software which saves to an SGML file. While that file can be exported, shared and imported, Twine assigns sequential numeric IDs to passages; this means that if two people work on a game at once, their versions will have conflicting IDs. This makes merging the two versions non-trivial.
That being said, there is an unofficial command-line tool, Twee2, that supports a more portable version of Twine 2 code comparable to that of Squiffy. 7
Ease of installation
For the purpose of sustainability, it was also important that any of the partner institutions could compile the source code to a working Web page. On this criterion, Squiffy and Twine were equally suitable: the editing applications for both can be used online or run locally without installation. The aforementioned Twee2 variant requires a local installation of the Ruby programming language and was therefore problematic on locked-down university PCs.
Ease of use
Another factor relevant for sustainability was the learning curve for using the source code language, since the responsibility for maintaining the tool would lie with non-programmers. Here again there was little to choose between Twine and Squiffy, although Squiffy appeared to be slightly simpler at the expense of some functionality.
Game play
The game play experience provided by the two systems was very similar; indeed, there were only a couple of notable differences. In Squiffy games, progress is saved automatically in a browser cookie, so if the player leaves the page and returns later, they pick up where they left off. In Twine games, any reload of the page causes the game to return to the start, though players can manually save and resume progress.
The other main difference is that Twine allows players to undo and redo their decisions, while Squiffy does not.
On balance the group decided to use Squiffy, on the basis that it could be used without having to compromise on any of the above criteria, although an undo function could have been useful.
For the collaborative version control environment, the group looked for an external service rather than an institutional one to ensure equitable access to the code by all partners. GitLab was selected since it allowed repositories to be private initially and opened up at a later point. 8
Planning and Writing the Content
Having decided on the software to use and set up a collaboration environment, the group sketched out a structure for the Triage Tool. It had been decided at the outset that the tool would be directed at postgraduate researchers. Generally speaking, the level of research data management information required by this group is at the introductory level, and therefore requires a less discipline- or institution-specific focus. This made it easier to write the content of the tool across multiple institutions. A need had also been identified by all four partners for more guidance specifically tailored to this group, and it was anticipated the text adventure format would work well for a student audience wishing to 'explore' the topic.
The idea was to provide broad topic areas on the first screen; on selecting an area, the user would then be shown a list of questions that the tool could answer on that topic. Some questions would lead to answers or referrals to other sites, others to further questions. The group identified frequently asked questions concerning research data management and grouped them into five topical areas: Data Management Plans, storing data, organising data, documenting data, and sharing data.
Writing the tool was completed in two phases, with a review after the first phase to steer activity in the second. Two areas were selected for development in the first phase: organising data and documenting data. These were chosen as having least variation in guidance across the four institutions. Bristol developed the former and Bath developed the latter.
The initial review of the tool was conducted within the Research Data Services Group, but by those outside the working group, in July 2016. The key items of feedback were as follows:
• The usual behaviour for Squiffy was to add new text to the end of the page, resulting in a long transcript. This was felt to be messy and confusing, so it was decided to clear the screen periodically instead.
• It was felt that the level of detailed information provided by the tool should be reduced to lessen the maintenance burden.
• The level of interactivity should be increased to further differentiate the tool from existing Web guidance.
• It was felt that people should be asked for their institution and funder only at the point where the guidance diverged, rather than at the start.
Having taken this feedback on board, the existing content was revised, and the remaining sections allocated to working group members. As each section was completed from the perspective of the first member's institution, the remaining members reviewed the content and contributed their own institution-specific guidance.
A full prototype of the tool was completed in early January 2017, at which point GW4 branding was applied (see Figure 1).
The way in which the prototype tool behaves is as outlined above: the tool asks the user questions and lists possible responses, each encoded as a link. Some links lead to further screens, others replace the response with relevant information. Links are also embedded within some of the answer text; on selection, they insert more detailed information on the topic adjacent to the link, rather than at the bottom of the page.
When a user comes to guidance that varies according to their funder or institution, they are presented with a list from which to select the relevant value. The tool remembers these selections using internal variables, so that if the users navigate to a different question they do not have to choose again.
Each screen has a 'restart' link at the top. This returns the user to the first screen and clears any internal variables set. In addition, any screen that does not simply link to further screens has one or two links at the end prefaced with 'Do you have any other questions about…?' These allow the user to explore the other questions answered within the current topical area, or select a new topical area, by returning to previous screens. In contrast to the 'restart' link, no internal variables are cleared.
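The navigation model described above can be pictured with the following minimal sketch (illustrative Python only; the real tool is written in Squiffy, and the screen names, question text and guidance strings here are invented). It shows the two mechanisms the prototype relies on: remembering the user's funder or institution in internal variables so they are asked only once, and a 'restart' action that clears those variables.

```python
# Illustrative sketch of the prototype's navigation model (not the Squiffy source;
# screen names, question text and guidance strings are invented).

class TriageSession:
    """Tracks the current screen and any selections the user has already made."""

    def __init__(self):
        self.variables = {}      # e.g. {"institution": "Bath"}
        self.screen = "start"

    def remember(self, key, value):
        """Store a selection (institution, funder) so it need not be asked again."""
        self.variables[key] = value

    def guidance_for(self, key, options, ask):
        """Return guidance that varies by a remembered value, asking only if unknown."""
        if key not in self.variables:
            self.remember(key, ask())   # in the real tool, a list of links to pick from
        return options[self.variables[key]]

    def restart(self):
        """The 'restart' link: return to the first screen and clear all variables."""
        self.variables.clear()
        self.screen = "start"


session = TriageSession()
answer = session.guidance_for(
    "funder",
    {"Funder A": "Guidance text specific to Funder A.",
     "Other": "Generic guidance with a link to the institutional web pages."},
    ask=lambda: "Funder A",
)
print(answer)
session.restart()   # equivalent to following the 'restart' link
```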
User Testing
Some preliminary user testing was held in late January 2017 with staff and postgraduate research students at the University of Bath. Participants were asked to use the tool to find the answers to research data management questions; they were invited to choose their own questions but sample ones were provided as a fallback. The tester observed their progress and noted down any points at which the tool surprised, confused or frustrated the participant.
After 10 to 15 minutes using the tool, participants were asked four questions:
1. Which aspects of the tool did you like or dislike?
2. Was the tool self-explanatory? Was there anything you wish you had known at the start?
3. Is there anything it doesn't do that you would like it to do?
4. Would you use it again, or recommend it to a peer?
The results from this preliminary round of testing gave some consistent messages. On the positive side, all participants said they liked either the look and feel of the tool, or the way it gave clear and concise answers to questions. Most approved of the conversational way it led them to those answers. None found it confusing or hard to use.
On the negative side, almost all participants expressed a concern about the navigation. A few missed the links to previous screens at the bottom, and others did not realise how they differed from the 'restart' link. Many said they would prefer to see a breadcrumb trail or a 'back' or 'undo' button.
On a related point, users were sometimes surprised by the effect of some of the links. Within the same list, some links might be replaced with simple answers while others might lead off to a separate screen to give room for more complex answers. This confounded the expectations of users tackling their query in a non-linear way, that is, trying several avenues simultaneously. Several participants suggested that links to external resources should be opened in a new window, or that external links should be explicitly marked; they did approve, however, of the way the tool allowed them to resume their session when they returned to the page.
Two other common points were that the tool needed clearer links back to the institution's research data support Web guidance or email address, and that a few questions did not sit intuitively within the topical areas on the initial screen.
Further user testing is planned to confirm these messages. There will then be a further round of revision to address the issues before the tool is launched.
Discussion
One of the issues that arose during the development of the tool was maintaining differentiation between it and the guidance pages already available on the respective institutions' websites. Since the tool is providing information on a web page, rather than acting as a serious game, there is a significant overlap of mission with the guidance pages; but there is clearly no benefit in having text from the website reproduced verbatim within the tool.
The fundamental difference in approach is that the tool provides interactive filtering of the information. The user selects various options, and is presented with a clear statement of the guidance that applies to them; they never see the irrelevant options or caveats. This helps to remove confusion and doubt, though it is of course incumbent on the tool authors to ensure that users are not presented with an over-simplification. A good example of this is in the tool's answer to 'What should my Data Access Statement look like?': after selecting a sequence of options, the user is presented with a single form of words they can copy out and complete with relevant details.
More nuanced aspects of the user experience spring from this. Instead of getting to the right topic through a menu structure, the user navigates by answering the tool's questions; this gives a more conversational feel to the process, which some users may prefer. If an issue has several facets under which it might be organised - for example, disposing of sensitive non-digital data - it is possible to lead the user to it by several routes quite naturally, without having to duplicate it at several points in a static hierarchy or favour a particular decomposition of the facets.
It is also possible to provide guidance at several levels of detail: the user reads a high level summary at first, and then digs into detailed points as they need to. At a coarse level, this can resemble an accordion menu, where clicking on a heading reveals the text beneath, but one can use this feature more subtly. For example, the tool mentions encryption as a way of protecting sensitive data; someone unfamiliar with encryption can select that word to insert additional sentences explaining it, while others can read on without hindrance from unwanted exposition.
This interactive filtering allows users to be presented with highly detailed information: since they do not see the detail that does not apply to them, they cannot get lost in or distracted by it. But just because they can be presented with such detail does not necessarily mean they should. Research data management is a fast-moving area and increasing the level of detail in the tool increases the burden of keeping the information up to date. Since any efforts in this direction are committed first and foremost to institutional Web guidance, the tool tends towards providing less detail and linking back to the existing guidance where possible.
Quite apart from the character of the Triage Tool itself, the group found benefit in the process of developing it collaboratively. When providing guidance at an institutional level it is all too easy to lose sight of what is general good practice and what is driven by local policy and infrastructure provision. Developing the tool encouraged members of the group to look again at that boundary. It also provided a useful starting point for sharing expertise and analysing possible gaps in guidance at each institution.
Conclusions and Next Steps
As mentioned above, the immediate next steps for the Triage Tool are to complete more extensive user testing across all four partners and adjust the tool to address the issues raised. Once all partners are satisfied, the tool will be published online and the respective institutions' research data management Web pages will link to it. At that point, the source code for the tool will be made available from the GW4 Research Data Services Group area of GitLab. 9 For the purposes of sustainability, at least one member of staff at each institution has administrator rights over the source code repository. That member manages write access to the repository at their institution, and is able to help the other institutions restore their access should it become necessary. Each member is responsible for updating the guidance specific to their own institution as well as the generic guidance. One detail still to be determined is how the tool will be hosted, but once this is agreed, a release procedure will be put in place for compiling and publishing updates to the tool.
The Triage Tool provides a different way of accessing information, and it may not be to everyone's taste. Some people will prefer to navigate through a traditional hierarchy of pages, see the full, unfiltered information laid out for them, and find reassurance that they are not missing out on anything. However, the testing performed so far suggests that many find a clear and simple message more reassuring, and this is a strength of the Triage Tool approach. The authors believe it serves a need, particularly for those looking for a quick answer to a quick question.
Anti-inflammatory properties of mutolide isolated from the fungus Lepidosphaeria species (PM0651419)
Mutolide, an anti-inflammatory compound, was isolated from the coprophilous fungus Lepidosphaeria sp. (PM0651419). The compound mitigated LPS-induced secretion of the pro-inflammatory cytokines TNF-α and IL-6 from THP-1 cells as well as human peripheral blood mononuclear cells (hPBMCs). Mutolide also inhibited secretion of another pro-inflammatory cytokine, IL-17, from anti-hCD3/anti-hCD28 stimulated hPBMCs. NF-κB is the major transcription factor involved in the secretion of pro-inflammatory cytokines including IL-17. Mechanistic evaluations revealed that mutolide inhibited induced NF-κB activation and translocation from the cytoplasm into the nucleus. However, mutolide did not significantly affect the activity of the p38 MAPK enzyme, a serine/threonine kinase involved in cell proliferation and cytokine secretion. These results indicate that mutolide may exert its anti-inflammatory effect via NF-κB inhibition. Oral administration of mutolide at 100 mg/kg showed significant inhibition of LPS-induced release of TNF-α in Balb/c mice in an acute model of inflammation. Our results highlight the anti-inflammatory properties of mutolide and suggest that further evaluation in a chronic model of inflammation is required to confirm the potential of mutolide as a druggable candidate for the treatment of inflammatory diseases. Electronic supplementary material: The online version of this article (doi:10.1186/s40064-015-1493-6) contains supplementary material, which is available to authorized users.
Background
Inflammation is a complex response to harmful stimuli like microbial infection, endotoxin exposure, damaged cells or irritants. Lipopolysaccharide (LPS), which is produced by Gram negative bacteria, binds to the CD14/TLR4/MD2 receptor complex, especially in monocytes, dendritic cells, macrophages and B cells. This results in activation of a complex biochemical cascade that promotes the recruitment of MyD88, activation of protein kinases, recruitment of the adaptor protein TRAF6, and subsequent activation and translocation of NF-κB and AP-1 into the nucleus. NF-κB activation is mediated by phosphorylation of IκB after LPS stimulation, culminating in dissociation of the IκB complex and translocation of NF-κB into the nucleus, wherein it interacts with promoter regions of various genes encoding pro-inflammatory mediators. The activation of NF-κB results in excessive production of pro-inflammatory cytokines such as TNF-α, IL-6, and IL-1β. Several studies have implicated the role of TNF-α, IL-1β and IL-6 in the pathogenesis of a number of inflammatory diseases, such as inflammatory bowel disease (IBD), rheumatoid arthritis (RA), sepsis and mucositis. Since the discovery of the TNF-α antibody for the treatment of RA (Feldmann et al. 1996), biologists have been targeting TNF-α to reduce inflammatory reactions in the body and restore the cytokine balance. Tocilizumab, a humanized anti-IL-6 receptor antibody, has achieved a very good ACR70 response in human clinical trials for RA (Smolen and Maini 2006). Thus, anti-IL-6 treatment is also considered an important strategy for therapeutic intervention. Recent developments with the anti-IL-17 and anti-IL-23 strategies have shown clinical success in the treatment of psoriasis (Griffiths et al. 2010; Leonardi et al. 2012; Papp et al. 2012). The clinical success of these anti-cytokine strategies has increased the focus of pharmaceutical drug discovery on identifying small molecule inhibitors of cytokines such as TNF-α, IL-6, IL-17 and IL-23 (Kulkarni-Almeida et al. 2008).
Historically, the best resources for novel scaffolds have always been natural products. A number of studies have reported that natural products show anti-inflammatory activity by controlling the levels of various inflammatory cytokines or inflammatory mediators including TNF-α, IL-6, IL-1β, NF-κB, JAK, STAT, NO, iNOS, COX-1 and COX-2 (Debnath et al. 2013; Gautam and Jachak 2009). Amongst natural products, fungi are a rich source of chemical diversity (Deshmukh and Verekar 2012; Gunatilaka 2006; Kharwar et al. 2011; Newman and Cragg 2007), and their metabolites are used by the pharmaceutical industry either in the native form or as derivatives (Aly et al. 2011; Bernier et al. 2004). As only a small part of the mycota is known and most fungi produce several unknown metabolites, fungi remain one of the most promising sources of new lead compounds. Fungal metabolites are known potential anti-inflammatory agents and act on targets such as iNOS, NF-κB, AP-1, JAK, STAT, cytokines, cyclooxygenase (COX-1 and COX-2), 3β-HSD, XO and PLA2. Rutilins A and B, isolated from Hypoxylon rutilum, are inhibitors of NO production (Quang et al. 2006); gliovirin, isolated from Trichoderma harzianum, is an inhibitor of inducible TNF-α expression (Rether et al. 2007); panepoxydone, isolated from Lentinus crinitus, is an inhibitor of NF-κB activation (Erkel et al. 1996); phomol, isolated from Phomopsis sp., is an inhibitor of edema in the mouse ear assay (Weber et al. 2004); and ergoflavin, isolated from an endophytic fungus of Mimusops elengi, is an inhibitor of human TNF-α and IL-6 (Deshmukh et al. 2009), to name but a few examples.
In our ongoing pharmacological screening program on the biodiversity of fungi present in the Indian landscape using high throughput screening (HTS), we discovered a remarkable anti-inflammatory activity in extracts/fractions of an ascomycete fungus, coded as PM0651419. The 14-membered macrolide, mutolide, was isolated by bioactivity-guided isolation. This macrolide was first discovered by chemical screening of the culture broth of the fungus F-24′707y, obtained after UV mutagenesis of the wild type strain, which normally produces the spirobisnaphthalene cladospirone bisepoxide. However, the biological activity of this macrolide was not reported. We describe here the isolation of the active compound mutolide from this culture and demonstrate for the first time the anti-inflammatory properties of mutolide.
Phylogenetic tree analysis for sample PM0651419
The fungal culture PM0651419 was identified as closely related to Lepidosphaeria nicotiae by partial sequencing of the internal transcribed spacer (ITS) region with ITS primers using the polymerase chain reaction (PCR). A nucleotide-to-nucleotide BLAST query of the GenBank database (Altschul et al. 1990) recovered GQ203760.1, Lepidosphaeria nicotiae, as the closest match to the ITS rDNA of PM0651419 (92 %) (Fig. 1). Evolutionary analyses were performed using MEGA 6 (Tamura et al. 2013). The 92 % similarity score does not provide confident species-level identification in the genus Lepidosphaeria, hence the isolate was designated simply as a Lepidosphaeria sp.
Isolation and structural elucidation of the compound
The crude extract of fungus PM0651419 was subjected to vacuum liquid chromatography, and an organic fraction was generated using 100 % methanol. This organic extract was fractionated by HPLC using a reversed-phase column (RP-C18) with a water:acetonitrile (0.1 % formic acid) gradient as the mobile phase. Fractionation of PM0651419 gave 12 active fractions (1-4: moderate activity; 5-12: potent activity; retention time 0.5-6.5 min), and only four fractions (5, 7, 9 and 11) were selected for analysis by LC-MS and high-resolution MS. These four fractions/tubes (5, 7, 9 and 11) were mixtures of mutolide and minor unidentified components. All fractions were tested for their effect on induced secretion of TNF-α and IL-6, and active fractions were further purified. Further fractionation of PM0651419 tubes 5 and 7 yielded four active fractions (47 J, 47 K, 47 L and 47 P). High-resolution MS of 47 J, 47 K and 47 P showed that mutolide was present as a relatively pure fraction (95 %) (Additional file 1: Figures S1, S2, S3). Anti-inflammatory activity was further confirmed for the pure compound. The isolated quantity of pure compound was very low; hence, large-scale isolation was carried out for other biological studies. The isolated compound was characterized by spectroscopic data analysis (IR, 1H-NMR, 13C-NMR, and LC-MS). As shown in Table 1, all the values were in complete accord with those reported for mutolide. Hence the structure of the compound was assigned as mutolide (Fig. 2). Our subsequent data show the anti-inflammatory potential of mutolide.
Effect of mutolide on LPS-induced TNF-α and IL-6 secretion from THP-1 cells
THP-1 cells are frequently used as a standard system for monocytes due to their similar genetic background. In a preliminary experiment, we sought to confirm the anti-inflammatory potential of mutolide. The anti-inflammatory activity was evaluated by the ability to inhibit LPS-induced TNF-α and IL-6 secretion. THP-1 cells were stimulated with LPS after the addition of mutolide. After 24 h, the supernatant was collected and assayed for TNF-α and IL-6 levels by ELISA. Cytotoxicity was evaluated using the MTS assay. Dexamethasone was used as a positive control. At 1 μM, dexamethasone inhibited the production of IL-6 and TNF-α from THP-1 cells by 90 % and 74 %, respectively, without significant toxicity to the cells. LPS-stimulated THP-1 cells secreted 2296 and 84 pg/ml of TNF-α and IL-6, respectively. As shown in Fig. 3a, mutolide blocked the release of TNF-α and IL-6 by LPS-stimulated THP-1 cells in a dose-dependent manner, with IC50 values of 1.27 ± 0.06 and 1.07 ± 0.02 µM, respectively, without significantly affecting the viability of the cells. The IC50 for toxicity was 32.16 ± 1.53 µM.
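For readers who wish to reproduce this type of dose-response analysis, the sketch below (Python, with invented example data; the reported IC50 values above come from the actual assay readouts) fits a four-parameter logistic curve to percent-inhibition values and reads off the estimated IC50.

```python
# Sketch of an IC50 estimate from dose-response data (illustrative values only).
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Percent inhibition as a function of compound concentration (µM)."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Hypothetical concentrations (µM) and percent inhibition of TNF-alpha secretion.
conc = np.array([0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
inhibition = np.array([2.0, 8.0, 20.0, 45.0, 70.0, 88.0, 95.0, 97.0])

params, _ = curve_fit(
    four_param_logistic, conc, inhibition,
    p0=[0.0, 100.0, 1.0, 1.0],   # initial guesses: bottom, top, IC50, Hill slope
    maxfev=10000,
)
print(f"Estimated IC50: {params[2]:.2f} µM")
```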
Effect of mutolide on LPS-induced TNF-α and IL-6 secretion from human peripheral blood mononuclear cells
Further, we sought to evaluate the anti-inflammatory potential of mutolide using primary cells. For this, PBMCs from healthy donors were isolated and stimulated with LPS after adding mutolide. After 5 h of incubation, the supernatant was collected and assayed for TNF-α and IL-6 levels by ELISA. Cytotoxicity of mutolide was assessed by the MTS method. The positive control, dexamethasone at 1 µM, inhibited the secretion of IL-6 and TNF-α from hPBMCs by 71 ± 3 and 65 ± 6 %, respectively, without significant toxicity to the cells. Mean TNF-α and IL-6 levels observed from three healthy human donors in this experiment were 4055 and 350 pg/ml, respectively. In this experiment, mutolide inhibited TNF-α and IL-6 secretion with IC50 values of 1.83 ± 0.33 and 2.5 ± 0.5 µM, respectively, without affecting the viability of hPBMCs up to 100 µM (Fig. 3b).
Effect of mutolide on anti-hCD3/anti-hCD28 co-stimulated IL-17 release from human peripheral blood mononuclear cells and its effect on the proliferation of anti-hCD3/anti-hCD28 co-stimulated human peripheral blood mononuclear cells
To evaluate the effect of mutolide on another pro-inflammatory cytokine, IL-17, anti-hCD3/anti-hCD28 co-stimulated hPBMCs were used. PBMCs from healthy donors were isolated, stimulated with anti-hCD3/anti-hCD28 mAb and incubated for 48 h with mutolide. After 48 h of incubation, the supernatant was collected and assayed for IL-17 levels by the homogeneous time resolved fluorescence (HTRF) method. The anti-proliferative effect of mutolide on anti-hCD3/anti-hCD28 co-stimulated hPBMCs was measured by the incorporation of 3H-thymidine in these cells. The mean IL-17 level observed from three healthy human donors was 1906 pg/ml. Mutolide inhibited IL-17 expression with an IC50 of 0.63 ± 0.04 µM, whereas the effect on cell proliferation was observed at an IC50 of 4.36 ± 1.02 µM (Fig. 3c), suggesting that the effect on IL-17 was selective and not due to a general cessation of proliferation. Mutolide was further assessed for its effect on activation of the transcription factor RORγt, which is centrally involved in IL-17 synthesis (supplementary information). The compound did not affect RORγt activity in a transfected cell line (Additional file 1: Figure S4), suggesting that mutolide may be inhibiting pathways ubiquitously involved in cytokine secretion.
Effect of mutolide on TNF-α induced NF-κB activation in CEM-κB cells transfected with the κB element
It is well established that LPS as well as anti-CD3/CD28 signal transduction leads to activation of NF-κB and the release of cytokines such as TNF-α and IL-6 by monocytes and IL-17 by T cells. Accordingly, to decipher the pathway by which the observed inhibition of induced cytokine secretion in THP-1 cells and hPBMCs occurs, we studied the effect of mutolide on NF-κB transcription using a CEM-κB cell line transfected with the κB binding element. CEM-κB cells were treated with mutolide and subsequently stimulated with TNF-α for a period of 16 h. We observed that mutolide dose-dependently prevented activation of NF-κB (Fig. 4a).
Effect of mutolide on NF-κB activation in HeLa cells
To further elucidate the effect of mutolide on the signaling events involved in NF-κB activation, namely IκB activation followed by p65 translocation, we used TNF-α-induced HeLa cells as a tool system. HeLa cells were treated with mutolide at 10 µM followed by stimulation with TNF-α. Mutolide inhibited phospho-NF-κB translocation from the cytoplasm to the nucleus (Fig. 4b) but did not affect phospho-IκB activation (Fig. 4c).
Fig. 3 Values presented are the average of N = 3 donors. Mutolide had IC50 values of 0.63 ± 0.04 and 4.36 ± 1.02 µM for IL-17 inhibition and proliferation of stimulated hPBMCs, respectively. All data were statistically analyzed using GraphPad Prism version 5.0. Error bars represent mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001
Effect of mutolide on p38 MAPK enzyme activity
p38 MAPK, a serine/threonine kinase, is a critical enzyme in cell proliferation and the secretion of cytokines. To evaluate the mechanism by which mutolide inhibits LPS-induced TNF-α and IL-6 secretion, it was tested at 5, 25, and 50 µM for its effect on p38 MAPK enzyme activity. SB203580 was used as a positive control. At 1 µM, SB203580 showed 90 % inhibition of p38 MAPK enzyme activity. However, mutolide was not active under similar experimental conditions and did not inhibit the p38 MAPK enzyme, indicating that the effect of mutolide on cytokine expression and cell proliferation may be independent of the MAPK pathway and may instead be regulated by the action of mutolide on the NF-κB pathway (Table 2).
Effect of mutolide on in vivo production of TNF-α
To assess whether the mutolide-mediated inhibition of pro-inflammatory cytokines observed in vitro could be translated into a meaningful pharmacological effect in vivo, we used an acute model of inflammation. Rolipram, the positive control, significantly inhibited LPS-induced TNF-α production. In this study, oral administration of mutolide at 100 mg/kg significantly inhibited LPS-induced production of TNF-α in Balb/c mice (Fig. 5).
Discussion
The literature suggests that there are at least 1.5 million fungal species in nature. Several fungal metabolites are reported to be inhibitors of iNOS, NF-κB, AP-1, JAK, STAT, cytokines, cyclooxygenase (COX-1 and COX-2), 3α-HSD, XO and PLA2 (Deshmukh and Verekar 2012). In natural product-based drug discovery, small-scale fractionation followed by LC-MS-based de-replication has the advantage of screening large numbers of extract libraries in a comparatively short span of time. The de-replication program, along with high throughput screening in different pharmacological indications, reveals a great deal of valuable information. Here we demonstrate for the first time the anti-inflammatory properties of mutolide isolated from the fungus Lepidosphaeria species (PM0651419). There are some differences between mutolide and other macrolides such as azithromycin and clarithromycin.
There is concern that long-term administration of these macrolides can promote bacterial resistance. Non-antimicrobial macrolides (such as tacrolimus and pimecrolimus) are in development as potential immunomodulatory therapies. Mutolide has been reported to have weak antibacterial activity (Pettit 2011). Also, the molecular weight of mutolide is significantly lower than that of other macrolides. Considering this together with the anti-inflammatory properties reported in this manuscript, mutolide, if proven efficacious in a chronic model of inflammation, could be used to develop new scaffolds against specific anti-inflammatory targets using a modern medicinal chemistry approach.
Exposure of macrophages to bacterial endotoxin or lipopolysaccharide is well known to activate TLR4-mediated signaling cascades that initiate inflammatory gene expression events leading to inflammatory cytokine production. Since THP-1 cells are widely used to investigate the function and regulation of monocytes and macrophages (Sharif et al. 2007) and bear resemblance to primary monocytes-macrophages isolated from healthy donors, we utilized these cells for screening and bioactivity-guided isolation of mutolide from the fungus PM0651419. Our data clearly indicate that mutolide blocks TNF-α and IL-6 production from LPS-induced THP-1 cells (Fig. 3a). To further verify the anti-inflammatory potential using primary cells, mutolide was evaluated for its effect on pro-inflammatory cytokine secretion from LPS-induced human peripheral blood mononuclear cells. Binding of LPS is mediated through LPS binding protein (LBP) and CD14 receptors expressed on the cell surface of cells belonging to the macrophage lineage. LPS-induced signal transduction leads to activation of NF-κB and release of cytokines such as TNF-α and IL-6. Mutolide inhibited the secretion of TNF-α and IL-6 from LPS-induced hPBMCs in a dose-dependent manner (Fig. 3b).
To evaluate the effect of mutolide on other pro-inflammatory cytokines, we looked at IL-17 expression by anti-CD3/CD28 stimulated hPBMCs. Mutolide showed dose-dependent inhibition of IL-17 secretion (Fig. 3c), further corroborating its anti-inflammatory potential. As RORγt is the transcription factor involved in the differentiation of T cells into IL-17-secreting Th17 cells and is currently an attractive industrial target for drug development (Aggarwal and Gurney 2002), we studied the effect of mutolide on RORγt activation. However, mutolide did not mitigate RORγt activation in a reporter assay (Additional file 1: Figure S4), indicating that mutolide may not be exerting its effect on IL-17 secretion via RORγt.
IL-17, a pro-inflammatory cytokine, is found to stimulate the production of many other cytokines such as IL-6, TNF-α, IL-1β, TGF-β, G-CSF and GM-CSF, and chemokines such as IL-8, GRO-α and MCP-1, from many cell types. Several studies have shown that the IL-17 family is linked to many immune-related diseases including RA, asthma, lupus, allograft rejection and psoriasis (Aggarwal and Gurney 2002; Cho et al. 2006; Ju et al. 2008).
Data from THP-1 cells and hPBMCs clearly demonstrated that mutolide significantly inhibited LPS-induced TNF-α and IL-6 secretion. Similarly, the compound also abrogated IL-17 secretion from anti-CD3/CD28 stimulated hPBMCs. Cytokine secretion in most cell types is transcriptionally regulated by NF-κB activation. Transcriptional activation of NF-κB target genes in response to extracellular stimuli involves translocation of NF-κB from the cytoplasm to the nucleus. In the classical pathway, NF-κB protein is bound and inhibited by IκBα. Upon cell stimulation, the IKK complex is activated, which phosphorylates IκBα. Phosphorylation of IκBα leads to its ubiquitination and proteasomal degradation, thereby releasing NF-κB from IκBα. Active NF-κB is further activated by phosphorylation and translocates to the nucleus. Inside the nucleus, NF-κB, either alone or in combination with other transcription factors such as AP-1, ETS and STAT, induces target gene expression. Aberrant activation of NF-κB is observed in several conditions, most notably inflammatory tissue injury, where NF-κB controls the gene expression of a variety of pro-inflammatory mediators (Ghosh et al. 1998; Heiss et al. 2001; Tergaonkar 2006).
Fig. 5 Effect of mutolide on LPS-induced TNF-α production in Balb/c mice. Mutolide was administered orally at 50 and 100 mg/kg followed by LPS stimulation. Values presented are the average of n = 8 mice. Mutolide significantly inhibited the LPS-induced production of TNF-α in Balb/c mice at 100 mg/kg (p < 0.05 compared with the LPS control). All data were statistically analyzed using GraphPad Prism version 5.0. *p < 0.05, ***p < 0.001
Many natural products that have been shown to have anti-inflammatory properties are known to inhibit NF-κB. Hence, in order to decipher the signaling pathway through which mutolide mitigates cytokine secretion, and given the role of NF-κB signaling in cytokine secretion, mutolide was tested for its effect on NF-κB. Our data clearly indicate that mutolide blocks TNF-α-induced NF-κB expression in a CEM-κB cell line transfected with the κB element (Fig. 4a). Further, mutolide inhibits TNF-α-induced translocation of NF-κB from the cytoplasm into the nucleus (Fig. 4b) but has no significant effect on IκB activation (Fig. 4c).
In addition to NF-κB, the p38 MAPK enzyme is known to be involved in cell proliferation and cytokine secretion, and several p38 MAPK inhibitors are being developed for possible therapeutic effect on autoimmune diseases and inflammatory processes (Goldstein and Gabriel 2005). Here we showed that mutolide does not inhibit p38 MAPK enzyme activity. This indicates that mutolide may exert its inhibitory effect on LPS-induced TNF-α and IL-6 secretion as well as anti-CD3/CD28-induced IL-17 secretion via NF-κB inhibition. Overall, our data suggest that mutolide shows promising anti-inflammatory properties and that this activity may be mediated through effects on the NF-κB signaling pathway and other transcription factors activated by LPS signaling. Further detailed studies are required to elucidate mutolide's mechanism of action.
Since mutolide is a small molecule which significantly mitigates proinflammatory cytokine secretion and demonstrates inhibition of NF-κB activity, we assessed its potential in abrogating cytokine secretion in vivo. Our data demonstrates that oral administration of mutolide at 50 and 100 mg/kg inhibited LPS-induced production of TNF-α from Balb/c mice (Fig. 5). Since plasma concentrations of mutolide were not evaluated it is not possible to draw a direct comparison to its in vitro effective concentration. However, this in vivo study highlights the potential of mutolide to be effective in mitigating cytokine mediated systemic inflammatory conditions.
Several studies have shown that overexpression of cytokines plays a key role in the pathogenesis of autoimmune disease, chronic inflammatory proliferative disease, bone resorption and joint diseases (Feldmann et al. 1996; Papp et al. 2012; Smolen and Maini 2006). The central role of TNF in inflammatory disorders has been demonstrated by the ability of agents that block the action of TNF to treat a range of inflammatory conditions, including rheumatoid arthritis, ankylosing spondylitis, inflammatory bowel disease and psoriasis (Feldmann et al. 1996). Strategies targeting IL-6 and IL-6 signaling lead to effective prevention and treatment of chronic inflammatory diseases (Smolen and Maini 2006). Similarly, IL-17 is a crucial cytokine expressed by Th17 cells, triggered by elevated IL-6. Recent data from clinical trials with antibodies directed against IL-17 have shown tremendous success in the regression of inflammatory conditions such as psoriasis (Papp et al. 2012). Mutolide is a small molecule which mitigates NF-κB-driven inflammatory cytokine secretion by both APCs and T cells. NF-κB is the master transcriptional regulator which mediates the secretion of TNF-α and IL-6 in the monocyte-macrophage lineage as well as T cell-driven cytokines such as IL-17. Since this transcription factor is ubiquitous, and plays a crucial role in cell differentiation and regulation of specific cellular responses, mitigation of this regulatory factor is considered a good therapeutic option for inflammation (Barnes and Karin 1997). The present studies have clearly indicated that mutolide inhibits pro-inflammatory cytokines by blocking activation of the transcription factor NF-κB. The molecule, when administered orally, also abrogates LPS-induced cytokine secretion in vivo. Mutolide is a small molecule which exerts systemic anti-inflammatory effects and can therefore be considered a starting point for developing new scaffolds against specific anti-inflammatory targets using a modern medicinal chemistry approach.
Conclusions
In this study, we demonstrated the anti-inflammatory potential of mutolide isolated from the coprophilous fungus Lepidosphaeria sp. (PM0651419). Our data clearly indicated that mutolide mitigates secretion of pro-inflammatory cytokines TNF-α, IL-6 and IL-17 in different assays. Mechanistic evaluations indicated that mutolide may exert its anti-inflammatory effect via NF-κB inhibition. In an acute model of inflammation, oral administration of mutolide at 100 mg/kg showed significant inhibition of LPS-induced release of TNF-α from Balb/c mice.
Isolation and identification of fungus PM0651419
The culture PM0651419 was isolated from horse dung samples collected from Rajkot, India, by the method described by Krug et al., using Potato Dextrose Agar (PDA) medium supplemented with 50 mg/L of chloramphenicol (Krug 2004; Krug et al. 2004). The culture was maintained on PDA slant tubes for identification and fermentation purposes.
Large-scale production of the fungus
A loopful of the well-grown culture from a slant maintained on potato dextrose agar (PDA) was transferred to a 500 ml conical flask with 100 ml liquid medium containing soluble starch 1.5 g; soyabean meal 1.5 g; yeast extract 0.2 g; corn steep liquor 0.1 g; glucose 0.5 g; CaCO3 0.2 g; NaCl 0.5 g; glycerol 1.0 g in demineralized water at pH 5.5. This was grown on a rotary shaker at 220 rpm for 72 h at 26 ± 1 °C and was used as the seed culture. Potato dextrose broth medium (Hi Media) was used for production. The pH of the medium was adjusted to 6.5 prior to sterilization. Twenty-five 1000 ml flasks, each containing 200 ml of the above medium, were inoculated with 1 % of the seed culture and incubated on a rotary shaker at 220 rpm for 72 h at 26 ± 1 °C.
The fermentation broth was filtered through Whatman No. 1 filter paper to remove biomass. The filtrate was passed through HP20 resin (250 ml bed volume). The organic compound was eluted with 1 L MeOH. The eluate was concentrated on a rotary evaporator to remove methanol. The concentrated material was lyophilized to obtain 3.251 g of crude extract. The crude extract was suspended in 100 ml of water and partitioned with ethyl acetate. Evaporation of the ethyl acetate layer yielded a yellow semi-solid extract (2.454 g). The semi-solid extract obtained from the ethyl acetate layer was subjected to column chromatography (SiO2, 60-120 mesh; CHCl3/MeOH gradient 2-20 %). The fractions were tested for their effect on induced secretion of TNF-α and IL-6. The fractions were also monitored by thin layer chromatography (TLC). The pure compound (590 mg) was obtained from fractions eluted with 1.75 % MeOH in CHCl3. The purity of the compound was determined by HPLC, and the compound was characterized by spectroscopy.
Cell line and THP-1 assay
The human monocytic cell line THP-1 (ATCC) was maintained in RPMI-1640 (GIBCO) supplemented with 2 mM L-glutamine, 100 U/ml penicillin, 100 µg/ml streptomycin, 25 mM HEPES and 10 % fetal bovine serum (FBS). Prior to LPS stimulation, 25,000 cells per well were cultured for 24 h in the presence of 10 ng/ml of phorbol 12-myristate 13-acetate (PMA, Sigma-Aldrich). After incubation, non-adherent cells were removed by aspiration, and the adherent cells were washed with RPMI three times. Mutolide or vehicle control (0.5 % DMSO) was added to the cells and the plate was incubated for 30 min at 37 °C. Dexamethasone at 1 μM was used as a positive control to assess assay validity. These cells were then stimulated with 1 µg/ml LPS (Sigma-Aldrich) for 24 h. The supernatants were collected and stored at −80 °C until quantification of TNF-α and IL-6 was performed by ELISA using kits from BD Biosciences. Cytotoxicity was evaluated by the MTS assay (Promega) as per the manufacturer's recommendations.
Human peripheral blood mononuclear cells assay
Peripheral blood was collected from healthy human donors after informed consent and Independent Ethics Committee approval. Human peripheral blood mononuclear cells (hPBMCs) were isolated using Ficoll-Hypaque density centrifugation (1.077 g/ml; Sigma-Aldrich) and were suspended in RPMI-1640 medium containing 100 U/ml penicillin and 100 µg/ml streptomycin. For the hPBMC assay, 1 × 10⁶ cells/ml were plated in a 96-well plate and mutolide was added at eight concentrations ranging from 100 to 0.03 µM. Dexamethasone at 1 μM was used as a positive control to assess assay validity. After 30 min, these cells were stimulated with 1 µg/ml LPS for 5 h. The supernatants were collected and stored at −80 °C until quantification of TNF-α and IL-6 was performed by ELISA using kits from BD Biosciences. Cytotoxicity was evaluated by the MTS assay (Promega) as per the manufacturer's recommendations.
For evaluating the effect of mutolide on IL-17 release from hPBMCs, 96-well plates were coated with 1.5 μg/ml anti-human CD3 antibody and 35 ng/ml anti-human CD28 antibody. For this assay, 1.25 × 10⁶ cells/ml were plated onto these coated plates and mutolide was added at eight concentrations ranging from 100 to 0.03 µM. The supernatants were collected after 48 h and stored at −80 °C until quantification of IL-17 was performed by HTRF assay using a kit from Cisbio as per the manufacturer's instructions. In this assay, the anti-proliferative effect of mutolide on anti-CD3/anti-CD28 co-stimulated hPBMCs was evaluated by a thymidine uptake assay.
NF-κB transcription assay
The effect of mutolide on NF-κB binding was studied using the CEM-κB cell line. The assay was conducted as per published protocol (Dagia et al. 2010). The CEM-κB cell line was maintained in RPMI containing G418. Cells were stimulated with TNF-α and NF-κB activation was measured as a direct measure of GFP fluorescence. The cells were plated at a density of 50,000 cells/ml and treated with mutolide at various concentrations or 0.5 % DMSO. These cells were then stimulated with or without TNF-α (1 ng/ml; R&D Systems), and the expression of NF-κB was observed after 16 h. The reduction of GFP fluorescence indicates the level of inhibition of NF-κB expression in the cells in the presence of mutolide. BAY 11-7082 was used as a positive control for inhibition of NF-κB activation.
NF-κB activation in HeLa cells
The specific NF-κB activity was confirmed in HeLa cells stimulated with TNF-α. HeLa cells were seeded at a density of 10,000 cells/well in MEM containing 0.5 % FBS and incubated overnight. The next day, cells were treated with mutolide at 10 µM. After 30 min, cells were stimulated with TNF-α (25 ng/ml) for 5 min followed by fixation with 4 % paraformaldehyde for 15 min at room temperature. Cells were then washed with PBS, permeabilized using 0.5 % Triton X-100 and blocked using BSA. Cells were then stained with anti-phospho-NF-κB and anti-IκBα antibodies (Cell Signaling Technology) followed by secondary antibody and DyLight 549 containing Hoechst solution. Cells were then scanned on a high-content screening (HCS) platform. Data presented are the average of observations recorded per 1000 cells.
p38 MAPK assay
p38 kinase, ATP, mutolide and ULight-4E-BP1 peptide were diluted in kinase buffer. Mutolide was tested at 5, 25 and 50 µM. SB203580 at 1 μM was used as a positive control. 2.5 µl each of p38 MAPK, mutolide, ULight-4E-BP1 and ATP were mixed in a 384-well plate and incubated at room temperature for 90 min. The kinase reaction was stopped by adding 5 µl of 40 mM EDTA prepared in 1× detection buffer. Subsequently, Eu-anti-phospho-eIF4E-binding protein (2 nM) was added and the plate was incubated at room temperature. After 60 min, the plate was read on a Tecan microplate reader in TR-FRET mode (excitation at 620 nm and emission at 665 nm).
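How the raw readings are reduced to percent inhibition is not spelled out here, but a generic calculation of the kind commonly used for TR-FRET kinase assays is sketched below (Python, with invented signal values; only the comparison against the enzyme control and the no-enzyme blank is assumed).

```python
# Generic percent-inhibition calculation for a TR-FRET kinase assay
# (invented numbers; the study's exact data reduction is not described).

def percent_inhibition(sample: float, enzyme_control: float, blank: float) -> float:
    """Percent inhibition relative to the uninhibited enzyme control."""
    return 100.0 * (1.0 - (sample - blank) / (enzyme_control - blank))

enzyme_control = 52000.0   # signal with active p38 MAPK and no inhibitor
blank = 2500.0             # background signal without enzyme
sb203580_1uM = 7450.0      # positive control well

print(f"SB203580 (1 uM): "
      f"{percent_inhibition(sb203580_1uM, enzyme_control, blank):.0f} % inhibition")
# Mutolide wells at 5, 25 and 50 uM would give signals close to the enzyme control,
# i.e. little or no inhibition, consistent with the result reported above.
```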
Animals
Male Balb/c mice (8-10 weeks of age, weighing 18-20 g) were housed in individually ventilated cages in a temperature controlled room, with access to water and food ad libitum. Animal experiments were approved by Institutional Animal Ethics Committee of Piramal Enterprises Ltd., NCE Research Division.
In vivo LPS assay
Mutolide was orally administered to Balb/c mice at 50 and 100 mg/kg in the form of a suspension in carboxymethyl cellulose (CMC; Sigma-Aldrich). One hour later, LPS dissolved in sterile pyrogen-free normal saline was administered i.p. The negative control group received normal saline as an i.p. injection, while all other groups received LPS. Rolipram, a PDE-4 inhibitor, has been shown to significantly reduce serum TNF-α levels in the LPS-induced endotoxic shock model and was approved by the Institutional Animal Ethics Committee to be used as a control in the acute model of inflammation. Rolipram was administered at 30 mg/kg. After 2 h, blood was collected and plasma separated by centrifugation at 2000×g at room temperature and stored at −80 °C until assayed for mouse TNF-α levels by ELISA.
Statistical analysis
Statistical analysis was performed using the software package GraphPad Prism. For analyzing differences among multiple (more than two) groups, a single-factor ANOVA followed by Dunnett's multiple comparison test was used. P values <0.05 were considered statistically significant. All error bars represent the standard error of the mean.
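An equivalent analysis can be sketched in Python (hypothetical TNF-α values; the study itself used GraphPad Prism, and scipy.stats.dunnett requires SciPy 1.11 or later):

```python
# Sketch of one-way ANOVA followed by Dunnett's test against the LPS control
# (hypothetical TNF-alpha values in pg/ml; the study used GraphPad Prism).
import numpy as np
from scipy import stats

lps_control = np.array([4100, 3950, 4300, 4000, 4200, 3900, 4150, 4050])
mutolide_50 = np.array([3600, 3400, 3700, 3500, 3650, 3550, 3450, 3600])
mutolide_100 = np.array([2400, 2600, 2500, 2300, 2550, 2450, 2350, 2500])

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(lps_control, mutolide_50, mutolide_100)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's multiple comparison of each treatment against the control
# (scipy.stats.dunnett is available in SciPy >= 1.11).
result = stats.dunnett(mutolide_50, mutolide_100, control=lps_control)
for dose, p in zip(("50 mg/kg", "100 mg/kg"), result.pvalue):
    print(f"Mutolide {dose} vs LPS control: p = {p:.4f}")
```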
Extracting Pulmonary Nodules and Nodule Characteristics from Radiology Reports of Lung Cancer Screening Patients Using Transformer Models
Pulmonary nodules and nodule characteristics are important indicators of lung nodule malignancy. However, nodule information is often documented as free text in clinical narratives such as radiology reports in electronic health record systems. Natural language processing (NLP) is the key technology to extract and standardize patient information from radiology reports into structured data elements. This study aimed to develop an NLP system using state-of-the-art transformer models to extract pulmonary nodules and associated nodule characteristics from radiology reports. We identified a cohort of 3080 patients who underwent LDCT at the University of Florida health system and collected their radiology reports. We manually annotated 394 reports as the gold standard. We explored eight pretrained transformer models from three transformer architectures including bidirectional encoder representations from transformers (BERT), robustly optimized BERT approach (RoBERTa), and A Lite BERT (ALBERT), for clinical concept extraction, relation identification, and negation detection. We examined general transformer models pretrained using general English corpora, transformer models fine-tuned using a clinical corpus, and a large clinical transformer model, GatorTron, which was trained from scratch using 90 billion words of clinical text. We compared transformer models with two baseline models including a recurrent neural network implemented using bidirectional long short-term memory with a conditional random fields layer and support vector machines. RoBERTa-mimic achieved the best F1-score of 0.9279 for nodule concept and nodule characteristics extraction. ALBERT-base and GatorTron achieved the best F1-score of 0.9737 in linking nodule characteristics to pulmonary nodules. Seven out of eight transformers achieved the best F1-score of 1.0000 for negation detection. Our end-to-end system achieved an overall F1-score of 0.8869. This study demonstrated the advantage of state-of-the-art transformer models for pulmonary nodule information extraction from radiology reports. Supplementary Information The online version contains supplementary material available at 10.1007/s41666-024-00166-5.
Introduction
Lung cancer stands as the primary cause of cancer-related death in the United States (U.S.) [1]. Research from the National Lung Screening Trial (NLST) has revealed that low-dose computed tomography (LDCT) is capable of detecting lung cancer in its early stages and significantly lowering mortality rates among high-risk individuals [2]. Following the NLST study, numerous professional societies and medical associations, such as the U.S. Preventive Services Task Force (USPSTF) and the American Cancer Society, have recommended lung cancer screening (LCS) with LDCT for high-risk individuals [3,4].
Pulmonary nodules are abnormal cell growths that form lumps in the lungs. Pulmonary nodules and nodule characteristics, such as nodule size, multiplicity, and density, detected on LDCT are important indicators of nodule malignancy [3]. The nodule characteristics are critical for the diagnosis and treatment of lung cancer, as well as for conducting epidemiologic and outcome studies of lung cancer. For instance, the Lung Imaging Reporting and Data System (Lung-RADS®), a quality assurance tool developed by the American College of Radiology (ACR) for standardizing lung cancer screening reporting and recommendations, is based on pulmonary nodule characteristics. On the other hand, nodule information is often documented in free-text clinical narratives such as radiology reports in electronic health record (EHR) systems, which is not readily accessible for downstream studies such as those examining adherence to Lung-RADS recommendations for surveillance, or for applications that utilize structured data. Manually identifying pulmonary nodules and nodule characteristics from free-text reports is time-consuming and cannot scale up to large-scale studies.
In the past few years, researchers have developed several rule-based natural language processing (NLP) systems to extract pulmonary nodule and nodule characteristic concepts from radiology reports [5][6][7][8]. Although these rule-based NLP systems show good performance in capturing pulmonary nodules and the associated characteristics, they are known to have generalizability issues when applied to new datasets with different documenting patterns and styles. Researchers often have to substantially customize the rules when applying rule-based NLP systems to radiology reports from a different data source [9]. Machine learning-based NLP models have better performance and generalizability than rule-based NLP systems. In particular, recent studies have demonstrated that deep learning-based NLP approaches outperformed not only rule-based but also traditional machine learning-based NLP models [10][11][12][13]. Previous studies explored machine learning-based and deep learning-based approaches for clinical information extraction from radiology reports and showed excellent performance [14][15][16][17]. However, there is no deep learning-based NLP system for pulmonary nodule and nodule characteristic extraction from radiology reports.
This study aimed to develop an NLP system using state-of-the-art transformer models to extract pulmonary nodules and associated nodule characteristics from clinical narratives in radiology reports. Our NLP system consisted of three subtasks: (1) clinical concept extraction [18], to identify the mentions of nodules and the nodule characteristics; (2) clinical relation identification [19], to link nodule characteristics to the corresponding nodule concept; and (3) negation detection [20], to identify negated mentions of nodules (e.g., no nodules have been detected). To develop the NLP system, we established a retrospective cohort of patients that underwent lung cancer screening and collected their radiology reports and physician order notes from the University of Florida Health (UF Health) Integrated Data Repository (IDR). We systematically examined eight pretrained transformer models from three transformer architectures, including bidirectional encoder representations from transformers (BERT) [10], robustly optimized BERT approach (RoBERTa) [21], and A Lite BERT (ALBERT) [22], for clinical concept extraction, relation identification, and negation detection. We compared our transformer models with a recurrent neural network (RNN) implemented using a bidirectional long short-term memory (LSTM) architecture with a conditional random fields layer (BiLSTM-CRFs) as a baseline model for concept extraction, and a support vector machines (SVMs) model as a baseline model for relation identification.
Data Source
This study used clinical text from the University of Florida (UF) Health Integrated Data Repository (IDR), a clinical data repository that consolidates EHR data from various UF Health clinical and administrative systems. We identified 3080 patients who underwent an LCS between 2012 and 2020 using the Healthcare Common Procedure Coding System (HCPCS) code G0297. Based on this patient cohort, we pulled a total of 120,465 clinical narratives, of which 3771 were radiology reports that documented pulmonary nodule characteristics information. We recruited annotators and randomly selected 400 reports for annotation. This research was approved by the UF Institutional Review Board (IRB201901754).
Annotation
We developed initial annotation guidelines based on the nodule information defined in Lung-RADS and iteratively optimized the guidelines over multiple rounds of annotation. Two annotators (SY and TL) manually identified all pulmonary nodules and their associated characteristics [23]. The final annotation guidelines defined seven categories of nodule concepts and six types of relations between the nodule and characteristics of the nodule. For example, in the sentence "nodule located at the lower lobe," there is a nodule ('nodule') and a 'site' characteristic ('lower lobe') linked by a nodule-site relation. The relations were annotated at the document level and may cross multiple sentences. If there was a negation attached to a nodule concept, such as "No pulmonary nodule," we annotated a negation attribute on the nodule concept. While performing annotation, we excluded content from general suggestions or references to clinical guidelines that were not directly linked to a particular patient (e.g., "lung nodule follows up algorithm: < 4 mm, CT at 12 months"). We calculated the inter-annotator agreement using Cohen's kappa [24] from 40 reports annotated by both annotators. Annotation discrepancies were resolved through group discussions among the annotators, NLP experts, and physicians. After annotation, we removed duplicated notes and notes with very few concepts. We randomly divided the annotated notes into a training set and a test set according to a ratio of approximately 8:2. We trained various machine learning models using the training set and evaluated the performance using the holdout test set.
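The agreement and data-splitting steps can be sketched as follows (a minimal illustration using scikit-learn; the label arrays below are placeholders rather than the actual gold-standard annotations):

```python
# Sketch of inter-annotator agreement and the approximately 8:2 train/test split
# (placeholder labels; the real gold standard is the set of annotated reports).
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Concept-level decisions from the two annotators on the 40 overlap reports.
annotator_1 = ["nodule", "O", "size", "O", "site", "nodule", "O"]
annotator_2 = ["nodule", "O", "size", "density", "site", "nodule", "O"]
kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.3f}")

# Split the annotated reports into training and test sets (roughly 8:2).
report_ids = [f"report_{i}" for i in range(394)]
train_ids, test_ids = train_test_split(report_ids, test_size=0.2, random_state=42)
print(len(train_ids), "training reports,", len(test_ids), "test reports")
```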
NLP Methods
As shown in Fig. 1, our pulmonary nodule extraction system consists of five modules: preprocessing, concept extraction, relation identification, negation detection, and postprocessing. We employed the preprocessing pipelines established in our prior research (https://github.com/uf-hobi-informatics-lab/NLPreprocessing) [25], which integrate standard NLP procedures including tokenization, text normalization, sentence boundary detection, and data format transformation. Details for each module are provided in Supplement Appendix 1. In the concept extraction module, we adopted state-of-the-art transformer-based NLP models, including BERT, RoBERTa, and ALBERT, and compared them with BiLSTM-CRFs [12] as the baseline. In relation identification, we adopted the same transformer-based NLP architectures and compared them with SVMs as the baseline. In the negation detection module, we approached negation detection as a classification problem, where the transformer models were trained to determine whether a lung nodule mention in clinical text was negated. We explored pretrained transformers from the general English domain (e.g., BERT-base), publicly available models pretrained on PubMed and Medical Information Mart for Intensive Care III (MIMIC-III) corpora (e.g., BERT-mimic) [26], as well as a clinical-specific model pretrained on clinical narratives (e.g., GatorTron). The postprocessing module aggregates results from concept extraction, relation identification, and negation detection into a standard output format. The predicted results are first organized by document id; the concept extraction results in BIO format are then converted to BRAT format, detected relations are assigned to the entities, and finally the negation results are attached to the entities. The amalgamated results are saved into files, which allows end-to-end evaluation and result visualization via the BRAT annotation tool. Details of the concept extraction, relation identification, and negation detection modules are described in the following sections.
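As a sketch of the BIO-to-BRAT step in the postprocessing module, the following hypothetical helper collapses token-level BIO tags into typed entity spans, which is the information a BRAT stand-off annotation records:

```python
def bio_to_spans(tags):
    """Collapse a BIO tag sequence into (entity_type, start_token, end_token) spans."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((etype, start, i - 1))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue  # still inside the current entity
        else:
            if start is not None:
                spans.append((etype, start, i - 1))
            start, etype = None, None
    if start is not None:
        spans.append((etype, start, len(tags) - 1))
    return spans

tags = ["O", "B-nodule", "I-nodule", "O", "O", "B-site", "I-site"]
print(bio_to_spans(tags))  # [('nodule', 1, 2), ('site', 5, 6)]
```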
Concept Extraction Module
The goal of the concept extraction module was to extract nodule mentions and nodule characteristics. According to the ACR recommendation, radiologists are required to document the characteristics (e.g., size, shape, composition, and margin) of each nodule [3]. We adopted state-of-the-art transformer-based deep learning models for concept extraction. We used the standard beginning-inside-outside (BIO) tagging scheme to label the annotated pulmonary nodules and nodule characteristics, where "B" indicates the first token of a concept, "I" indicates tokens inside a concept, and "O" indicates tokens that do not belong to any concept. Concept extraction then amounts to classifying each token in a sentence into the predefined BIO categories. Transformer models (e.g., BERT) break words into common sub-tokens to reduce vocabulary size and avoid out-of-vocabulary problems; therefore, a special tag "X" was introduced to label non-leading sub-tokens, which differs from previous deep learning models (e.g., BiLSTM-CRFs) that rely on word-level BIO tags.
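The sub-token labeling described above can be illustrated with a short sketch; the checkpoint name is only a placeholder for whichever BERT-style model is used, and any fast tokenizer exposing word_ids() would behave the same way:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
words = ["calcified", "nodule", "measuring", "6", "mm"]
word_tags = ["B-texture", "B-nodule", "O", "B-size", "I-size"]

encoding = tokenizer(words, is_split_into_words=True)
subtoken_tags, previous_word = [], None
for word_id in encoding.word_ids():
    if word_id is None:                   # special tokens such as [CLS]/[SEP]
        subtoken_tags.append("O")
    elif word_id != previous_word:        # leading sub-token keeps the word-level tag
        subtoken_tags.append(word_tags[word_id])
    else:                                 # non-leading sub-token gets the special "X" tag
        subtoken_tags.append("X")
    previous_word = word_id

print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"]), subtoken_tags)))
```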
We used BiLSTM-CRFs as the baseline model. BiLSTM-CRFs was the most widely adopted deep learning model for concept extraction before transformer-based models emerged. The LSTM component has several "gates" to model long-distance dependencies in language. In this study, we adopted the BiLSTM-CRFs architecture of Lample et al. [12]. The model uses a word embedding layer and a character embedding layer to transform the input words and characters into vector representations, which feed two bidirectional LSTM layers. The last layer is a CRF layer that decodes the hidden states from the word-level bidirectional LSTM into BIO tags and predicts the named entities. We used word embeddings pretrained with the fastText package on deidentified clinical notes from the MIMIC II corpus. The dimension of the word embeddings was set to 100.
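A minimal word-level skeleton of this architecture is sketched below (the character-level embeddings of the original model are omitted); it assumes the third-party pytorch-crf package for the CRF layer and is not the exact configuration used in the study:

```python
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, token_ids, tags=None):
        emissions = self.proj(self.lstm(self.embed(token_ids))[0])
        if tags is not None:
            return -self.crf(emissions, tags)   # negative log-likelihood for training
        return self.crf.decode(emissions)       # best-scoring BIO tag sequence
```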
Relation Identification
We approached relation identification as a classification task: classifying a pair of concepts into predefined relation categories. In this study, a relation was defined between a nodule concept and a characteristic concept. We adopted a two-stage classification procedure developed in our previous study [27]: (1) a binary classifier to determine whether two concepts "have a relation" or "have no relation," and (2) a rule-based procedure to further categorize entity pairs that "have a relation" into the correct relation category based on the entity types. For example, if a candidate entity pair was classified as "has-relation" and one entity was a nodule and the other was a site, the rule-based procedure would classify it as a "nodule-site" relation. The rule-based procedure in stage 2 works because only one relation category is defined between any two entity types. Another challenge in relation identification was identifying the candidate pairs that could have relations. In principle, we could generate candidate pairs by enumerating all combinations of concepts, as any pair might be related. However, this would introduce too many negative samples for classification. Instead, we applied the following heuristics to reduce the number of combinations: (a) we only kept concept pairs composed of a nodule entity as the first element and a nodule characteristic entity as the second element, and (b) we defined the cross-distance of a pair as the number of sentence boundaries between the two entities (e.g., 0 for single-sentence relations and 1 for relations across two sentences) and only considered candidate pairs with a cross-distance of less than three, since 96% of the annotated relations in the training set had cross-distances of less than three. We also used a unified BERT-based classifier developed in our previous study to handle all candidate pairs with various cross-distances [28,29].
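The candidate-pair heuristic can be sketched as follows, where each entity is assumed to carry its type and the index of the sentence containing it (the field names are illustrative, not the system's actual data model):

```python
def candidate_pairs(entities, max_cross_distance=3):
    """Pair each nodule with each characteristic within the allowed cross-distance."""
    nodules = [e for e in entities if e["type"] == "nodule"]
    characteristics = [e for e in entities if e["type"] != "nodule"]
    pairs = []
    for nodule in nodules:
        for characteristic in characteristics:
            cross_distance = abs(nodule["sent_idx"] - characteristic["sent_idx"])
            if cross_distance < max_cross_distance:
                pairs.append((nodule["id"], characteristic["id"], cross_distance))
    return pairs

entities = [
    {"id": "T1", "type": "nodule", "sent_idx": 2},
    {"id": "T2", "type": "size", "sent_idx": 2},
    {"id": "T3", "type": "site", "sent_idx": 6},
]
print(candidate_pairs(entities))  # keeps (T1, T2); drops (T1, T3), whose cross-distance is 4
```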
We used SVMs as the baseline model. SVMs were widely used in various classification tasks and demonstrated good performance [30] before the emergence of deep learning models. In this study, we used the SVM implementation in the LIBSVM-3.22 package [31] and optimized the regularizer C and the tolerance of the termination criterion E. Our features included the text of the entities in candidate pairs and their n-grams (n = 2, 3), n-grams (n = 2, 3, 4, 5) of the context before and after the entities in candidate pairs, the token distance between the entities in candidate pairs, and the concept extraction tags of all tokens in the sentences where the candidate pairs are located.
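A hedged stand-in for this baseline using scikit-learn (whose SVC is backed by LIBSVM) is sketched below; feature extraction is reduced to word n-grams of the candidate-pair context, whereas the actual system also used entity tags and token distances:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical candidate-pair contexts and labels, for illustration only.
contexts = [
    "nodule measuring 6 mm in the right upper lobe",
    "nodule is unchanged since the prior examination",
]
labels = ["has_relation", "no_relation"]

model = make_pipeline(CountVectorizer(ngram_range=(2, 3)), SVC(C=1.0, tol=1e-3))
model.fit(contexts, labels)
print(model.predict(["nodule of 4 mm in the left lower lobe"]))
```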
Negation Detection
We approached negation detection as a binary classification problem: classifying observed entities into two predefined categories, "negated" and "non-negated." We performed negation detection for each 'nodule' entity and then integrated the results with the concepts and relations in the postprocessing pipeline. For each nodule entity recognized by the concept extraction module, we identified the corresponding sentence, which was fed into transformer models to generate distributed representations of the nodule entity and its context. We then added a classification layer composed of a linear layer with softmax activation to calculate a probability score. We experimented with various context window setups, comparing the use of only the sentence containing the entity with also including the sentences before and after it, and observed no improvement from adding the nearby sentences; the best performance was achieved using only the sentence containing the entity.
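A minimal sketch of this classification setup is given below; the checkpoint name is a placeholder, and the untrained classification head would of course need fine-tuning on the annotated negation labels before its scores are meaningful:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "bert-base-uncased"  # placeholder for the clinical checkpoints evaluated above
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

sentence = "No pulmonary nodule is identified."  # sentence containing the nodule entity
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print({"non-negated": probs[0, 0].item(), "negated": probs[0, 1].item()})
```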
Transformer Models
In this study, we examined eight pretrained transformer models from both the general English domain and the clinical domain. We explored three widely used transformer-based architectures: BERT, RoBERTa, and ALBERT. BERT is a multilayer bidirectional transformer-based encoder model pretrained using masked language modeling (MLM) and next-sentence prediction (NSP). RoBERTa has the same architecture as BERT; however, unlike BERT, which is pretrained using a static masking pattern generated during preprocessing, RoBERTa is pretrained using dynamic masked language modeling and optimized with different strategies, such as full-sentence training without the NSP loss, large mini-batches, and a larger byte-level BPE vocabulary. ALBERT is a simplified version of BERT. It reduces the total number of parameters by factorizing the token-embedding layer to optimize large-scale configurations and memory efficiency. ALBERT is also pretrained using MLM and optimized using a sentence-order prediction loss. We examined the base configurations of these models, i.e., BERT-base, RoBERTa-base, and ALBERT-base. We also examined their clinical versions pretrained using the MIMIC-III corpus, i.e., BERT-mimic, RoBERTa-mimic, and ALBERT-mimic, which were developed in our previous study [25]. We further explored Bio_ClinicalBERT [32] and the GatorTron model [33] for comparison. Bio_ClinicalBERT was developed using 0.5 billion words from the MIMIC-III dataset [32] with 110 million parameters. GatorTron was developed using over 90 billion words extracted from de-identified clinical notes from University of Florida (UF) Health, PubMed articles, and Wikipedia, with 345 million parameters, and has demonstrated good performance in clinical concept and relation extraction [33].
Experiment and Evaluation
The baseline BiLSTM-CRFs model was developed using TensorFlow, and the transformer-based models were developed using PyTorch in our previous work [25,28]. We chose the best model checkpoints based on the F1-scores achieved on the validation set. For the transformers, we adopted the models pretrained using the MIMIC-III corpus (BERT-mimic, RoBERTa-mimic, and ALBERT-mimic) from our previous study [25], the Bio_ClinicalBERT model developed by a previous study [32], and the GatorTron model developed in our previous study [33]. For NER, we further split the training set into a sub-training set and a validation set at a ratio of 8:2. We trained the models on the sub-training set and saved the best checkpoints based on model performance on the validation set. We adopted an early stopping strategy during training and fixed the number of training epochs and the batch size at 30 and 4, respectively, for all NER experiments. For the relation extraction and negation detection tasks, we adopted a fivefold cross-validation strategy on the training set to optimize the model hyperparameters, including the number of training epochs (in a range from 3 to 6) and the training batch size (4, 8, and 16). We kept all other hyperparameters at their defaults (e.g., learning rate of 1e-5 and random seed of 13) during the experiments. We evaluated the NLP models using strict (i.e., the beginning and end boundaries of a concept must match the gold standard annotation) micro-averaged precision, recall, and F1-score, calculated using the official evaluation script from the 2018 n2c2 challenge [34]. To approximate a non-parametric standard deviation for model performance, we adopted a bootstrapping method in which we regenerated datasets using different random seeds, repeated the same experiment 20 times, and used the resulting scores to calculate the standard deviation [35]. The best models were selected according to the cross-validation performance measured as micro-averaged strict F1-score. All experiments were conducted using five Nvidia A100 GPUs. The concepts, relations, and negations annotated by human experts were used as the gold standard for evaluation.
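The bootstrap estimate described above can be sketched as follows, with score_fn standing in for the n2c2 evaluation script applied to a resampled set of test documents:

```python
import random
import statistics

def bootstrap_std(test_docs, score_fn, repeats=20, seed=13):
    """Standard deviation of a score over resampled (with replacement) test sets."""
    scores = []
    for i in range(repeats):
        rng = random.Random(seed + i)
        resampled = [rng.choice(test_docs) for _ in test_docs]
        scores.append(score_fn(resampled))
    return statistics.stdev(scores)

# Toy example: the scorer here just averages hypothetical per-document F1-scores.
docs = [{"f1": 0.91}, {"f1": 0.95}, {"f1": 0.88}, {"f1": 0.93}]
print(bootstrap_std(docs, lambda ds: sum(d["f1"] for d in ds) / len(ds)))
```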
Annotation of Pulmonary Nodules and Nodule Characteristics
The two annotators identified a total of 2012 pulmonary nodule concepts and nodule characteristics from 394 notes, with an inter-annotator agreement of 0.925. We divided the data into a training, a validation, and a test set. Table 1 shows the distribution of concepts, relations, and negations in the training, validation, and test sets.
Extraction of Pulmonary Nodules and Nodule Characteristics
Table 2 compares the eight transformer-based NLP models with the baseline BiLSTM-CRFs model for extracting pulmonary nodules and nodule characteristics. All transformer-based NLP models outperformed the baseline model in terms of F1-score, with the RoBERTa-mimic model achieving the best F1-score of 0.9279, followed by the GatorTron model with an F1-score of 0.9274. Note that the RoBERTa-mimic model achieved a high recall (0.9747) but a relatively low precision (0.8853), while the GatorTron model's precision and recall were more balanced (0.9021 and 0.9542). Supplement Table 1 shows the detailed performance of RoBERTa-mimic for the seven categories of pulmonary nodules and nodule characteristics. Among the seven categories, RoBERTa-mimic achieved an excellent F1-score for recognizing nodule course (0.9841) and a moderate F1-score for nodule shape (0.8000).
Linking Nodule Characteristics to Nodules
Table 3 compares the eight transformer-based NLP models with the SVMs baseline for linking nodule characteristics to pulmonary nodules. All transformer-based NLP models achieved better F1-scores than the baseline. Both the ALBERT-base model and the GatorTron model achieved the best F1-score of 0.9737. Supplement Table 2 shows the detailed performance of ALBERT-base for the six relation categories used to link nodule characteristics to pulmonary nodules. Among the six categories, ALBERT-base achieved a perfect F1-score (1.0000) for linking 'shape' to nodules and an excellent F1-score (0.9500) for linking 'course' to nodules.
Negation Detection
According to the annotation, there were 162 negated and 454 non-negated nodule entities. Table 4 shows the performance of the eight transformer-based NLP models for negation detection. Two general transformer models (BERT-base and ALBERT-base) and five clinical transformer models (BERT-mimic, RoBERTa-mimic, ALBERT-mimic, Bio_ClinicalBERT, and GatorTron) achieved the best F1-score of 1.0000 on the test set. We used the RoBERTa-mimic model in the end-to-end pipeline, as it also showed the best performance in concept extraction.
The End-to-End System
We integrated the best concept extraction model (RoBERTa-mimic), the best relation identification model (ALBERT-base), and the best negation detection model (RoBERTa-mimic) into an end-to-end system. Our end-to-end NLP system for extracting pulmonary nodules and nodule characteristics achieved an F1-score of 0.8869 (precision = 0.8345 and recall = 0.9464).
Discussion and Conclusion
In this study, we developed an NLP system to extract pulmonary nodules and nodule characteristics from radiology reports. We explored eight state-of-the-art transformer models for concept extraction, relation identification, and negation detection and compared them with two baseline models, BiLSTM-CRFs and SVMs. RoBERTa-mimic achieved the best F1-score of 0.9279 for extracting nodule concepts and nodule characteristics. ALBERT-base and GatorTron achieved the best F1-score of 0.9737 for linking nodule characteristics to pulmonary nodules. Seven of the eight transformer models achieved the best F1-score of 1.0000 for negation detection.
Our end-to-end system achieved an overall F1-score of 0.8869. This study demonstrates the advantage of state-of-the-art transformer models for pulmonary nodule information extraction from radiology reports. For nodule concept and nodule characteristic extraction, the clinical transformer RoBERTa-mimic outperformed the general transformer models pretrained on general English corpora, which is consistent with the results of our previous study [25]. Among the lung nodule and characteristic concepts, RoBERTa-mimic had moderate performance for nodule shape. One potential reason is that the number of nodule shape concepts annotated in the corpus (N = 43) is lower than that of the other categories. For relation identification, the ALBERT-base model outperformed the clinical transformers pretrained on de-identified clinical notes, indicating that general English context and vocabulary may play a more important role than medical context in determining relation types. The GatorTron model also achieved the best F1-score for the relation identification task, which is consistent with previous studies [36]. Future studies are needed to further examine this finding. Seven of the eight transformer models achieved an F1-score of 1.0000 in negation detection, indicating that the negation patterns in radiology reports are very consistent.
Pulmonary nodules and nodule characteristics are key information for determining the Lung-RADS scores used to categorize radiology findings. NLP systems that can automatically extract pulmonary nodule information from radiology reports are critical for enabling medical AI systems to leverage narrative clinical text for lung cancer screening and diagnosis prediction. The NLP system developed in this study is a valuable resource for lung cancer studies that require pulmonary nodule and nodule characteristic information from radiology reports. Our NLP system uses state-of-the-art transformer models and can be adapted for the extraction of other types of nodule information, such as thyroid nodules.
This study has limitations. First, the nodule characteristics extracted by our NLP system need to be normalized. For example, nodule size can be expressed in different units of measurement, and the descriptions of shape and texture have many variations. Second, we approached the task of linking nodule characteristics to pulmonary nodules as a relation identification task; future studies should explore other potential solutions, such as adopting machine reading comprehension models to identify the characteristics using prompt-tuning algorithms [37]. Our future work includes developing NLP pipelines to normalize pulmonary nodules and nodule characteristics and exploring prompt-based solutions for linking nodule characteristics to pulmonary nodules [38].
Fig. 1 Workflow of our pulmonary nodules extraction system
Table 2
Comparison of deep learning models for extraction of pulmonary nodules and nodule characteristics. Values in bold indicate the best value per metric. Multiple bold numbers per column signify that there is no statistical difference among the top-performing values for the given metric
Table 3
Comparison of deep learning models to link pulmonary nodule characteristics to pulmonary nodules. Values in bold indicate the best value per metric. Multiple bold numbers per column signify that there is no statistical difference among the top-performing values for the given metric | 5,387.2 | 2024-05-17T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Study of the Photoelectrochemical Properties of 1D ZnO Based Nanocomposites
Exploitation of common elements as photocatalysts for the conversion of photons to electricity stimulates the development of a green energy strategy. In this paper, methods for the preparation of active coatings based on ZnO/Ag/CdS, which are used in the photocatalytic oxidation reaction, are examined. The physical and chemical properties of the resulting arrays were studied using optical spectrometry, electron microscopy, X-ray diffractometry, potentiostatic measurements, and electrochemical impedance spectroscopy. The effectiveness of the photocatalysts was assessed by their ability to liberate gas from aqueous solutions when exposed to light. The rate of degradation was indirectly measured with a conductometer.
Introduction
Among the existing approaches to converting solar energy, photoelectrochemical (PEC) hydrogen production is very promising because it allows existing difficulties, such as storage and/or transportation, to be overcome [1]. After the process of splitting water, the resulting energy can be stored in the covalent bond of hydrogen molecules, which can later be used in the combustion process to release heat, or in a hydrogen cell that produces electricity, with the formation of H₂O as a by-product.
Direct decomposition of water under terrestrial conditions by the action of sunlight does not happen because water is transparent to light at wavelengths longer than 190 nm [2]. Therefore, harnessing the PEC process using semiconductor materials and/or organic compounds is auspicious [3]. Semiconductor materials with positions of valence and conduction levels suitable for PEC reactions are important components of photoelectrodes. Today, several materials are known to have semiconducting properties that catalyze the oxidation-reduction reaction of H₂O under certain conditions. Most of these materials are expensive, requiring the inclusion of co-catalysts made of noble metals and sacrificial agents, among others. Photoelectrodes made of these substances significantly increase the cost of the resulting energy. By using common elements to produce semiconductor photoelectrodes, the cost of hydrogen extraction from water can be significantly reduced. Zinc oxide (ZnO) and cadmium sulfide (CdS) are commercially available materials with semiconductor properties when used as active photoanode layers. The relatively low temperature of synthesis of such semiconductors allows reduction of the total cost of a PEC cell and thus the creation of inexpensive hydrogen. In addition to semiconductor nanoparticles (NPs), plasmonic NPs are widely used to sensitize wide-bandgap semiconductor materials [4][5][6]. For example, Ag NPs deposited on a matrix of a wide-bandgap semiconductor material, such as ZnO, increase the response to visible light [7]. The photocatalytic materials used in the redox reactions of the water-splitting process under sunlight require matching the energy levels of the conduction and valence bands to the potentials of oxidation and reduction of water molecules, respectively [8,9].
According to the Anderson model, a type II heterojunction is formed between CdS and ZnO, although the electron affinity of CdS is higher than that of ZnO. Consequently, during the photoinduced formation of electron-hole pairs in CdS, electrons from the CdS conduction band are freely transferred to the ZnO conduction band through ballistic diffusion. The time required for the transfer of an individual electron is 18 ps, which is shorter than the exciton lifetime in CdS [10]. At the same time, deposition of silver (Ag) NPs on arrays of ZnO should lead to increased photocatalytic activity with better photocorrosion resistance [11], since Ag NPs on the surface of a semiconductor provide space for the accumulation of photogenerated electrons and increase the probability of separating electrons and holes [12,13]. Based on the above, multi-junction photoelectrodes based on ZnO, CdS, and Ag should have the following structure: a wide-bandgap ZnO semiconductor, which receives sunlight and absorbs the UV light → an Ag NP coating, which collects radiation from the visible spectrum → a thin-film CdS coating, which absorbs visible radiation and is in contact with the electrolyte solution. In the fabrication of the PEC photoanodes, this structure was achieved by the sequential deposition of ZnO, Ag, and CdS.
Additionally, the sample preparation methods and synthesis conditions should be suitable for the creation of large-area photoelectrodes. A convenient and cheap way to make semiconductor coatings involves electrochemical, dip-coating, and hydrothermal methods. In this work, the listed methods were used for the synthesis of 1D arrays of ZnO, ZnO/CdS, ZnO/Ag, and ZnO/Ag/CdS. Subsequently, the samples were used in the assembly of PEC photoanodes to test the photocatalytic properties of the arrays.
Results
The formation of arrays of ZnO nanorods (NRs) through electrochemical synthesis occurs at low concentrations of zinc precursors in the electrolyte, which stimulates the interaction of OH⁻ ions adsorbed on the (0001) surface with Zn²⁺ ions; this leads to the formation of Zn(OH)₂, with a further transition to the ZnO phase, the growth of which mainly occurs along the (002) direction.
The morphology and elemental composition of the obtained samples were investigated using a JSM-6490 LA scanning electron microscope (JEOL) with a tungsten cathode. Analysis of the surface morphology of the 1D ZnO arrays showed the formation of hexagonal rods as a result of electrochemical synthesis (Figure 1a). The crystal structure of the obtained material was analyzed using an X'pert PRO X-ray diffractometer (PANalytical) with a copper anode (Cu-Kα, 0.154 nm). The structural-phase analysis of the deposited layers revealed that the main directions of crystallite growth are (100), (101), and (002), which correspond to the wurtzite (hexagonal) phase of ZnO (Figure 1b). However, the peak related to the (002) direction has a greater intensity, which indicates that the structures are oriented perpendicularly to the substrate. In order to form ZnO nanotubes (NTs), the etching of low-dimensional ZnO NRs should proceed mainly in the direction of the (0001) plane. Selective etching of ZnO NRs to achieve a tubular morphology was confirmed in [14][15][16]. The metastability of ZnO in the (0001) plane leads to the preferential removal of the material in the (0001) plane at a speed exceeding the etching rate of the material in other planes, which results in preferential etching in the central part of the NRs. Because ZnO is soluble in an acidic medium, etching of the 1D ZnO arrays was conducted through the electrochemical method. Samples with a uniform ZnO array were used as cathodes in an electrochemical cell, the anode of which was a platinum plate, with a 1 V potential supplied and a varied reaction time. Figure 1c shows a typical micrograph of an array of ZnO NTs obtained by selective etching of electrodeposited low-dimensional ZnO rods on a transparent conductive substrate. A homogeneous, large-area array of ZnO NRs consisting of 1D structures with well-defined hexagonal cross-sections, with an average diameter of 200-300 nm, was observed. A detailed study of the micrographic images shows that the etching process begins from the center of the upper surface and advances through the inner part of the rod, which results in the formation of tubes whose sidewalls are 10-20 nm thick.
For the formation of the ZnO NRs/Ag composite material, a matrix of ZnO NRs with the surface morphology illustrated in Figure 1a was selected. After deposition of Ag NPs onto the surface of the ZnO NRs, a layer of inclusions formed on the side and end surfaces (Figure 2a). Assembling CdS layers onto ZnO NTs was conducted in the same manner as onto ZnO NRs. After 10 cycles of layering the CdS coating onto a ZnO/Ag composite with tubular morphology (Figure 4a), the inside and outside of which were covered with Ag NPs, a coating forms, in the interior and exterior, similar to that following deposition of CdS onto ZnO/Ag NRs. After increasing the number of cycles up to 20, the deposited material significantly fills the spaces inside the ZnO/Ag NTs, which results in a compaction of the film (Figure 4b). Electron transport in the ZnO NRs, ZnO NTs, and ZnO NTs/Ag/CdS arrays (Figure 5a) was examined using an impedance meter while immersing the individual photoanodes in an aqueous solution of 0.05 M Na₂SO₄. The spectra were recorded in darkness at 0.2 V with an amplitude of 5 mA and a frequency range from 500 kHz to 10 MHz. Electrochemical impedance spectroscopy was able to interpret the migration activity of charge carriers and the resistance of interphase charge transfer in the samples. As a rule, a smaller radius of the semicircle in a Nyquist plot implies a lower resistance to electron transfer, indicating faster charging and a higher separation rate of electron-hole pairs [16], which facilitates charge transfer at the solid-liquid interface [17]. The arc radius corresponding to the impedance of the ZnO NTs (Figure 5a, red curve) was the smallest, indicating a lower electrical resistance in the interface layer, which means increased charge transfer over the semiconductor surface [18]. In this case, the resistance to charge transfer in the NRs (Figure 5a, black curve) was greater in comparison with the NTs and the composites based on them. This means that the use of an array of ZnO NRs as a matrix for the manufacture of a composite material was not relevant. An increase in the resistance of the interfacial charge transfer occurs in arrays of NTs following deposition of Ag and CdS, which was predictable because of the resultant heterostructure (Figure 5a, blue curve). The EIS graphs demonstrate that the unique structure of the NTs has a decisive role in significantly improving the PEC performance. In order to study the photoresponse of the ZnO nanostructure and the ZnO/Ag and ZnO/Ag/CdS nanocomposites, a 60 W xenon lamp and a three-electrode electrochemical cell were used. The PEC cell consisted of a fabricated photoanode, a reference Ag/AgCl electrode, a platinum cathode, and an aqueous solution containing 0.05 M Na₂SO₄ as an electrolyte. When using Ag NPs deposited between n-type direct-gap semiconductors, more efficient transport of charge carriers occurs. At the same time, such an architecture was more stable under photo-excited transitions and water molecule oxidation, since photoinduced excitons rapidly separate, with charge transfer to the phase boundaries. Absorption of the visible fraction of light by the narrow-gap CdS semiconductor caused its excitation, which resulted in the generation of charge carriers. Photogenerated electrons can be quickly extracted and efficiently transferred through the type II band alignment in the NT arrays of the nanocomposites, which additionally results in the reduction of water for H₂ production.
Photoinduced holes that gather in the valence band of CdS contribute to the oxidative reaction of water. Furthermore, plasmonic Ag NPs can generate additional hot electrons and then inject them into the conduction band of the neighboring CdS, which increases the carrier density and speeds up the transfer of photoinduced electrons from CdS to ZnO. Thus, a synergistic effect is observed in the ZnO/Ag/CdS photoanode, which ensures efficient transport and use of photoinduced electrons, ultimately leading to a significant improvement in the characteristics of the PEC processes. The photoinduced current density passing through a cell made of ZnO NTs/Ag/CdS with 10 layers reaches 50-60 µA/cm², which is twice the current density when using ZnO NTs/Ag/CdS with 30 layers (Figure 5b).
Increased current density in the structures of the samples obtained is also explained by an improvement in photon harvesting associated with the expansion of the spectrum to which the active layers are sensitive. This expansion of the spectral sensitivity of the photoanode based on ZnO NTs after depositing CdS NPs to the array is confirmed by the optical absorption spectrum (Figure 5c-blue curve). Two absorption thresholds in ZnO/CdS composites are clearly seen in comparison with the single one in the ZnO nanostructure (Figure 5c-red curve). The first absorption threshold, in the UV region (370 nm), is directly related to ZnO, while the second threshold, in the visible region (470-490 nm), is a manifestation of the optical properties of CdS. The optical absorption spectrum of the ZnO NTs/Ag/CdS heterostructure (Figure 5c-black curve) shows broadening of the spectral sensitivity up to 480 nm, with the absorption threshold in this area related to the Ag NPs.
The amount of hydrogen gas (which may have included a portion of oxygen, as gas chromatography was not conducted) released after 200 min of irradiation of the three-component tubular ZnO-based nanocomposite was 2 µL, which was 1.8 times greater than the hydrogen released using a photoanode based on ZnO NRs (Figure 5d). Although photoinduced hydrogen generation using the two-component ZnO/CdS photoanode proceeds twice as fast as with the three-component ZnO/Ag/CdS photoanode, the use of the latter has an advantage in photodegradation resistance (Figure 6). As can be seen, the ZnO/Ag/CdS photoanode did not change visually (Figure 6b), while after a 200 min PEC reaction the ZnO/CdS photoanode began to pale in color (Figure 6a). The degree of degradation of the photoanodes obtained in this work was also measured by comparing the electrical conductivity of the working solutions of the PEC cell before and after the photoinduced redox reactions. The electrical conductivity of the working solutions was measured by immersing a conductometric S30 electrode in the electrolyte. It is believed that upon dissolution of the photoanode material in the electrolyte, the conductivity of the solution changes due to the appearance of additional ions. A comparison of the electrical conductivity of the working solution before and after the PEC reactions with the ZnO/Ag/CdS photoanode showed an insignificant increase from 177.5 µS/cm to 179 µS/cm, which indicates low degradation of the photoanode during a 200-min cycle. The conductivity of the working solution of the PEC reaction with the ZnO/CdS photoanode showed a dramatic increase from 178 µS/cm to 240 µS/cm. This is indirect evidence of the dissolution of the ZnO/CdS photoanode.
The improvement in stability can be explained graphically. Figure 7 is a diagram showing the levels of the valence band (VB) and conduction band (CB) relative to a normal hydrogen electrode (NHE). As can be seen from the figure, the CBs of the ZnO NTs and CdS lie above the zero level, which is sufficient for hydrogen reduction. The diagram also shows that the CB level of CdS is more negative than the CB of ZnO, which improves the transport of charges in the material and, therefore, improves the photocatalytic properties. Ag NPs interspersed between the layers of ZnO and CdS serve as an additional charge transfer bridge.
Finite-Difference Time-Domain Simulation (FDTD)
FDTD simulations (Lumerical FDTD Solutions 8.19.1584) were employed to solve Maxwell's equations and obtain theoretical values for several parameters, such as the electric field as a function of wavelength. The sample was represented as a periodic array of cylindrical structures, and symmetry was exploited to reduce simulation times. The wavelength range of interest was set to between 0.4 and 0.7 µm.
From the graphical representations given in Figure 8a,b, we can clearly see that field lines are more intense, apparent from relative numerical values on the scale bar, and the area of higher intensity is much broader in the case of ZnO NRs with Ag NPs.
Materials and Methods
During the experimental work, pure commercial reagents were used. Synthesis of nanoscale arrays of 1D ZnO structures was performed using electrochemical methods. The technique consists of the formation of thin coatings on the surface of a transparent conductive substrate (ITO glass), which served as the working electrode of a three-electrode cell. Ag/AgCl and platinum foil were utilized as the reference and counter electrodes, respectively. The capacity of the electrochemical cell was 50 mL. Before the experimental work, the ITO glass with a resistance of 8 Ω/cm² was thoroughly cleaned.
Synthesis of ZnO NRs and ZnO NTs
ZnO NRs were synthesized by applying a constant negative potential of 0.9 ± 0.05 V in an aqueous solution of 0.005 M zinc nitrate and 0.5 M KCl. Electrodeposition of the ZnO NRs was carried out in a temperature range from 50 to 80 °C for 20 min. The formation of ZnO NTs was achieved by selective etching of the ZnO NRs. The selective etching proceeded in potentiostatic mode in a three-electrode cell. The electrolyte was a solution containing 0.05 M Zn(NO₃)₂·6H₂O and 0.5 M KCl; the temperature was 70 °C, the time was 120 min, and the applied positive voltage was 1 V. After synthesis, the samples were washed in distilled water and annealed in a muffle furnace at a temperature of 500 °C for 2 h.
Synthesis of ZnO/Ag Nanocomposites
Ag NPs were deposited on the surfaces of the ZnO NRs and ZnO NTs in two steps. First, Ag NPs were formed by the hydrothermal method from an aqueous solution of 0.001 M silver nitrate and 0.038 M sodium citrate at a temperature of 97-100 °C for 2 min. As a result of mixing the reagents, the solution took on a yellow color. Then, the heated solution was placed in a thermostatically controlled oil cooler maintained at around −5 °C. The synthesis took more than 5 min, after which the solution became cloudy due to the agglomeration of Ag NPs. Second, the ZnO/Ag composite was obtained by electrodeposition of Ag NPs on the ZnO arrays [14] through the application of a negative voltage of 1 ± 0.1 V for 90 s. Finally, the samples were washed in deionized water.
Forming ZnO/Ag/CdS Core/Shell Structure
The deposition of CdS layers proceeded in accordance with the SILAR adsorption technique described in [10,15]. Glass coated with the ZnO matrix was dipped into a first beaker containing Cd²⁺ cations (5 mM Cd(NO₃)₂), which were adsorbed during a dip time of 10 s. The samples were then immersed in deionized water in a second beaker for 20 s, which was necessary to remove excess ions weakly bound to the ZnO arrays. The formation of CdS was achieved by immersing the sample into a beaker of S²⁻ anions (5 mM Na₂S) for 10 s. After that, the samples were again washed in deionized water in another beaker for 20 s. The thickness of the CdS coating deposited onto the ZnO matrix was controlled by the number of immersion cycles.
When performing SILAR, the following parameters were followed: (1) the rate of immersion of the substrate into the solution was 50 mm/min; (2) the rate of removal of the substrate from the solution was 10 mm/min; and (3) the time between vessels with different solutions was 10 s.
Conclusions
Composite photocatalytic ZnO/Ag/CdS materials were formed by a three-step synthesis, and their morphology and optical and photocatalytic properties were studied. Optimal geometric dimensions of the CdS layers on 1D ZnO structures were determined using electrochemical deposition and SILAR methods. Following the addition of Ag NPs to the ZnO/CdS nanocomposite, photocorrosion damage of the active photoanode layers during water decomposition decreased. Along with the improved corrosion properties, an increase in hydrogen evolution was observed. The enhancement of properties upon the addition of Ag NPs was also supported by simulation analysis.
"Chemistry",
"Materials Science"
] |
Spectral diagnostics of oscillation centers in crystals with hydrogen bonds
Practical application of crystals in optoelectronics and laser engineering requires the directions of optical axes and the types of oscillation centers to be known, and this is an important and necessary condition. We have studied the infrared transmittance and absorption spectra of hexagonal lithium iodate α-LiIO₃ crystals grown by the open evaporation method in H₂O and D₂O solutions and natural lamellar monoclinic crystals of phlogopite and muscovite. The band gap of the test crystals has been determined from the transmittance spectra. The absorption spectra have provided information on the activation energy and wavelength of the activation centers related to the oscillations of protons, hydroxonium ions H₃O⁺, protium H⁺, OH⁻ groups and HDO molecules. There has been a good correlation between the parameters of the infrared spectra, thermally stimulated depolarization current spectra and nuclear magnetic resonance spectra. We have analyzed the possibility of oscillation center diagnostics based on infrared spectra, which also allow determining the directions of optical axes. The experimental results confirm the possibility of using IR spectra for determining the type of oscillation centers and the presence of lattice anisotropy in the test crystals.
Introduction
An important task of modern science is to provide nondestructive quality control methods for laser and optical crystals during crystal growth and during the study of new crystalline materials. The diagnostics of these materials can be considered a nanotechnological problem, since studying the types of oscillation centers implies monitoring the translational diffusion of nanoparticles in crystal nanostructures. Earlier, the types of oscillation centers were determined from thermally stimulated depolarization current (TSDC) spectra [1]. This method, however, requires low-temperature measurements at 77-350 K, which complicates the diagnostics and requires much time. A patented method of determining optical axis positions in crystals that are known to be anisotropic [2] proved to be quite complicated. The mechanism of proton-ion conductivity and dielectric relaxation was studied in [3][4][5][6][7], and these studies showed the possibility of transport and translational diffusion of protons in crystal lattices with hydrogen bonds over a wide range of temperatures with the formation of various oscillation centers.
High-temperature superprotonic conductors based on cesium hydrosulfate CsHSO₄ crystals were studied in [5]. The authors assumed a rotation of the whole HSO₄ anion, but this is improbable from the energy viewpoint. The reorientation of the anion most likely occurs due to a tunneling transition of a proton between the oxygen ions inside the tetrahedron. Therefore, further studies were required in order to directly confirm the presence of translational diffusion and tunneling of protons along certain axes with the formation of oscillation centers. First of all, it had to be checked whether the test crystals are wide band gap ones, i.e., whether their band gap is wide enough to exclude the possibility of electron transitions to the conduction band at low temperatures. Another task was to analyze the correlation between the results obtained from infrared (IR) spectra, TSDC spectra and nuclear magnetic resonance (NMR) spectra.
The aim of this work is to analyze the possibility of spectral diagnostics of the types of oscillation centers in crystals with hydrogen bonds and to provide more accurate, streamlined and authentic methods of determining the types of oscillation centers and the directions of optical axes on the basis of IR spectral analysis.
Experimental
To provide more accurate, streamlined and authentic spectral diagnostics of the types of oscillation centers and optical axes in crystals with hydrogen bonds carefully polished crystals are placed in an IR spectrometer. Then IR transmittance and absorption spectra are recorded in order to determine the band gap of each crystal. Then the proton component of the oscillation centers is separated. For each spectral band corresponding to a specific oscillation center the activation energy, wavelength and wave number are evaluated. The magnitude of the latter parameters and their presence in a specific direction are the basis for determining the types of oscillation centers and the directions of optical axes. The abovementioned task is achieved due to the use of advanced equipment, careful preparation and polishing of the specimens and significant reduction of time required for the experiment in comparison with the method suggested earlier [1]. Furthermore this diagnostic method allows one to check whether a crystal is anisotropic.
The test crystals were optical-quality lithium iodate α-LiIO₃ crystals (hexagonal system, point symmetry group C₆) grown by the open evaporation method in H₂O and D₂O solutions and natural lamellar monoclinic crystals of phlogopite KMg₃[AlSi₃O₁₀](OH)₂ and muscovite KAl₂[AlSi₃O₁₀](OH)₂ micas (monoclinic system, point symmetry group 2/m, prismatic). The choice of these test materials was not arbitrary: all these crystals have hydrogen bonds. Lithium iodate crystals have unique optical, electrical and piezoelectric properties and are used as short-wavelength radiation frequency doubling crystals in many semiconductor lasers and in optoelectronics. Lamellar phlogopite and muscovite mica crystals are used for the production of electrically insulating materials, e.g. mica paper tape, micanite, micafolium and mica plastics, which are widely used for the fabrication of slot and turn insulation in generators and transformers as well as in microelectronics. Therefore, the study of these iodate and silicate crystals is an important and timely task, since their practical application requires the directions of optical axes and the types of oscillation centers to be known.
Lamellar α-LiIO₃ crystals were cut with a diamond disc on an Okamoto machine from the central part of the growth pyramid and cooled with glycerin. The 0.5-1 mm thick plates were manually ground on grinding glass using a suspension of grinding powders and glycerin. The plate sides were parallel to within 0.1 µm. The specimens were then polished with Goya paste. The 5-10 µm thick natural muscovite and phlogopite mica crystals were separated from a larger druse of crystals. The purity of these iodate, muscovite and phlogopite crystals was confirmed by microscopic examination and transmittance spectroscopy, which showed stably high transmittance in the 500-3000 nm region for α-LiIO₃ and in the 500-3200 nm region for muscovite and phlogopite.
The absorption coefficients were the highest in the direction of the main optical axis Z(C₆) or [0001] and the lowest in the direction of the X axis, which is perpendicular to the main optical axis. The IR transmittance spectra were taken on a UV-Vis-NIR Cary 5000 spectrophotometer (Varian, Australia). The absorption coefficient for allowed direct transitions can be expressed with the formula α = A(hν − E_g)^1/2 (1) [8, p. 307], where E_g is the band gap, hν is the photon energy and A is a coefficient which depends on the concentration and effective masses of electrons and holes (2). The magnitude of α depends linearly on the photon energy hν in a frequency region which is individual for each crystal. Extrapolation of this linear dependence to the crossing with the energy axis gives the band gap E_g. It follows from Eqs. (1) and (2) that direct transitions should not cause absorption of quanta with energies lower than the band gap. Therefore, the self-absorption edge on the long-wavelength (low-energy) side should be very sharp. Indeed, pure lithium iodate single crystals (Fig. 1) as well as phlogopite and muscovite exhibit an abrupt growth of absorption. The band gap was calculated at the self-absorption edge by linear approximation of the optical transmittance spectra. The average E_g of the α-LiIO₃ crystals was 4.37 eV along the Z axis and 4.46 eV along the X axis. The wavelength of the absorption edge for the silicates corresponded to a 4.31 eV band gap. Therefore, electron transitions from the valence band to the conduction band can be disregarded for the test silicates and lithium iodate. This confirms that tunneling and translational diffusion with the formation of oscillation centers are only possible for protons.
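A minimal numerical sketch of this extrapolation, assuming hypothetical absorption data in the linear region of the edge (the arrays below are illustrative, not the measured spectra):

```python
import numpy as np

photon_energy = np.array([4.40, 4.45, 4.50, 4.55, 4.60])  # eV, linear region of the edge
alpha = np.array([0.5, 2.1, 3.8, 5.4, 7.0])               # absorption coefficient, arb. units

slope, intercept = np.polyfit(photon_energy, alpha, 1)
E_g = -intercept / slope   # crossing of the extrapolated line with the energy axis
print(f"Estimated band gap: {E_g:.2f} eV")
```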
Results and discussion
The IR absorption spectra were taken on an IFS 66v/S Fourier spectrometer (BRUKER, Germany). Spectral bands are commonly denoted in spectroscopy by wave numbers in cm⁻¹, but this does not allow comparing IR spectra with other spectra where energy is expressed in eV. Using the Planck formula one can obtain the relationship between wave number and energy: 1 cm⁻¹ = 1.2398 × 10⁻⁴ eV. It was assumed that the absorption band near 3400 nm (wave number 2941 cm⁻¹) confirms the probability that hydrogen ions are present [9, p. 275]. This wavelength corresponds to an oscillation center energy of 0.365 eV, and this band was actually present in the IR spectra of the silicates and of the lithium iodate grown in H₂O with iodic acid HIO₃ addition (this band is absent in neutral crystals) along the sixth order axis C₆. The IR spectrum taken along the Z axis (C₆) (Fig. 2) of the crystal grown in H₂O contained bands at 0.27 eV (hydroxonium ion H₃O⁺), 0.365 eV (protium H⁺) and OH⁻ ions, while that of the crystal grown in D₂O (Fig. 3) did not contain these bands. Furthermore, the IR spectra taken along the X axis (Fig. 4) did not contain bands at energies above 0.27 eV that would be related to proton and OH⁻ ion oscillations. Thus IR spectrometry can be used for studying anisotropy, determining the directions of the main optical axes in crystals and detecting the presence of heavy water. The spike at 0.29 eV resolved in the spectra of all the crystals is an instrumental feature caused by the nitrogen used for spectrometer chamber cleaning. Free H₂O molecules, which are the basis of the hydrogen bond, absorb intensely in the IR region and produce three types of oscillations in the free state: the 1595 cm⁻¹ band corresponding to an energy of 0.20 eV (deformation oscillation δ); the 3654 cm⁻¹ band corresponding to 0.453 eV (symmetrical valence oscillation νs); and the 3756 cm⁻¹ band corresponding to 0.466 eV (asymmetrical valence oscillation νas).
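The wave number-to-energy conversion used above can be checked with a few lines, E = hcν̃ with ν̃ in cm⁻¹:

```python
h = 4.135667696e-15   # Planck constant, eV*s
c = 2.99792458e10     # speed of light, cm/s

def wavenumber_to_ev(wavenumber_cm):
    return h * c * wavenumber_cm

print(wavenumber_to_ev(1))      # ~1.2398e-4 eV, as quoted above
print(wavenumber_to_ev(2941))   # ~0.365 eV, the protium band discussed in the text
```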
In our experiments the silicate crystals only exhibited the bands at 0.20, 0.45 and 0.464-0.470 eV, and lithium iodate only had the 0.195 eV band (Figs 2-6, Tables 1, 2). To obtain information from IR spectra on the state in which bound water is present in the minerals one should study the spectral regions corresponding to OH group absorption.
It was shown in [11] that the low-energy edge of KHCO₃ is determined by the OH transitions at 0.372 and 0.186 eV. Tables 1 and 2 suggest the presence of the 0.365 eV band related to proton oscillation centers and of the 0.20-0.12 eV bands in the silicates due to OH centers. This confirms the conclusions made from the TSDC spectra [1,6].
Of interest is the 1580 cm⁻¹ band (0.195 ± 0.01 eV) in lithium iodate. The absorption coefficient of the crystals grown in D₂O is almost twice as high as that of the crystals grown in H₂O (Figs 2, 3). It was shown in [12] that a free D₂O molecule generates the 1460 cm⁻¹ spectral band (0.181 eV) in the 1550-1350 cm⁻¹ range corresponding to deformation oscillations of semi-heavy water molecules HDO. The 1580 cm⁻¹ band of the lithium iodate crystals is in the 1450-1650 cm⁻¹ range, i.e., these ranges overlap. One can therefore assume that in the lithium iodate crystals grown in D₂O this band corresponds to the oscillations of bound semi-heavy water molecules HDO. This method can thus be used for heavy water detection in the test material. In the lithium iodate crystals grown in H₂O this band corresponds to the oscillations of OH⁻ ions.
The region of H₃O⁺ deformation oscillations in the silicates and lithium iodate contains a well-resolved band at 0.14 eV [13]. The bands of H₃O⁺ valence oscillations at 0.27 eV are very wide and weak. Among the test crystals these bands were observed for the silicates at 0.25 eV (2020 cm⁻¹) and for lithium iodate at 0.27 eV (2170 cm⁻¹), but only for crystals grown in H₂O (Fig. 2 and Table 1). This band is absent in the α-LiIO₃ crystals grown in D₂O (Fig. 3). Thus few if any D₃O⁺ absorption centers form in the crystals grown in heavy water. The absorption bands at 0.40-0.45 eV were present in the IR spectra of lithium iodate and the silicates (phlogopite and muscovite, Figs 5, 6 and Table 2). This is in good agreement with the TSDC spectra of the hydroxyl ion OH⁻ (Table 3) and earlier data [10]. The bands at 0.066 eV (silicates) and 0.068 eV (lithium iodate) agree well with the activation energy of peak 1 (0.07 eV) in the TSDC spectrum (Table 3), which is generated by the relaxation of HSiO₄³⁻ anions in silicates or HIO₃ in iodates [7] resulting from a tunneling transition of a proton between the oxygen ions. All the crystals contain typical common spectral bands at ~10 and 20 µm [10,14,15]. Molecules formed by the same chemical groups, regardless of the rest of the molecule, absorb in a narrow frequency range called characteristic. Thus different silicate compounds should exhibit oscillation spectra containing similar Si-O bands, e.g. at 960 cm⁻¹ (0.12 eV) [16].
Indeed, the test lamellar silicate crystals exhibited an intense band at 0.12 eV for muscovite (wavelength ~10 µm) (Fig. 5) and at 0.118 eV for phlogopite (Fig. 6). These bands are characteristic of the strong Si-O bond. Lithium iodate also exhibits intense bands at 0.12 eV (10 µm) that are similar for the crystals grown in H₂O and D₂O, i.e., there is a strong I-O bond. The 20 µm band (0.062 eV) corresponds to the 0.066-0.068 eV bands of lithium iodate and the silicates.
A proton has no electron shell and is a singly charged particle with a small radius and a low coordination number. Therefore it can easily form protonized oscillation centers. The barrier transparency for protons can be evaluated using the standard tunneling formula, which yields a transparency of 0.0408 for a 0.12 nm wide, 0.06 eV high potential barrier [13]. Taking into account the distance between the oxygen ions in the SiO₄³⁻ anions, proton tunneling with the formation of Si-O-H oscillation centers is therefore possible, which was confirmed earlier [18]. The same is true for I-O-H oscillations in HIO₃ molecules, which generate the 970 cm⁻¹ band corresponding to 0.12 eV. However these bands in lithium iodate are weaker by almost two orders of magnitude.
The 0.41, 0.462 and 0.45 eV bands of water are attributable to antisymmetrical oscillations of OH groups. Indeed, muscovite and phlogopite exhibited well-resolved bands peaking at ~0.46 eV, in good agreement with earlier data [19,20]. The 0.40 eV band caused by OH group oscillations is also present in the IR spectra of the lithium iodate crystals grown in light water. However, this band is absent in the IR spectra of the lithium iodate crystals grown in heavy water. This confirms that protons form absorption centers while deuterons, because of their low mobility, do not. Figures 1-6 and Table 1 show that study of the type of oscillation centers also allows one to determine the directions of optical axes in crystals. (Figure 5 shows the IR absorption spectrum of muscovite crystals; the inset shows a spectrum fragment.) To grow high-quality lithium iodate laser crystals one should add iodic acid HIO₃ with pH = 1.5 to the growth solution. Iodic acid is a good donor of protons, which penetrate into the growing crystal even at very low solution acidities. In our experiments the absorption bands at 2941 cm⁻¹ (0.365 eV) caused by proton oscillations were well resolved. The presence of tunneling transitions with the formation of protonized HSiO₄³⁻ anions (silicates) and HIO₃ (iodates) is confirmed by the good agreement between the activation energies of peak 1 in the TSDC spectrum (0.07 eV) and in the IR spectrum (Si-O-H band, 0.066 eV) for the silicates, and between the TSDC spectrum and the IR spectrum (I-O-H band, 0.068 eV) for lithium iodate (Table 3). The TSDC peaks at 0.23 and 0.3 eV [1,6] caused by H₂O ion relaxation also agree well with the IR spectra. Despite the different physical origins of the TSDC and IR spectra, the agreement of the activation energies indicates that the TSDC peaks and the IR spectral bands of oscillation centers are generated by the same relaxers.
The NMR proton spectrum of the deuterated α-LiIO₃ crystal, taken on a BRUKER AVANCE III TM 300 spectrometer, contained a resolvable twin band. This suggests the presence of two types of nonequivalent protons, which may pertain to H₃O⁺ and OH⁻ ions [21]. These oscillation centers are not observed in every crystal with hydrogen bonds. For example, the NMR proton spectrum of NH₄SeO₄ crystals contains a single band [22]. Furthermore, NMR spectra allowed the translational mobility of protons to be determined as 5.1 × 10⁻⁵ m²/(V·s). This is far greater than the H₃O⁺ ion mobility in ice crystals, which is 7.5 × 10⁻⁶ m²/(V·s) according to N. Maeno [23]. This suggests a high probability of the formation of proton oscillation centers in the test crystals. The temperature dependence of the NMR proton band half-width yielded activation energies of 0.054 and 0.31 eV, which are close to the 0.066 and 0.365 eV bands in the IR spectra. This is an additional confirmation that crystals with hydrogen bonds contain oscillation centers formed by protons and proton defects.
Conclusion
The conclusions made from the IR spectroscopic studies are in good agreement with the TSDC and NMR spectra. Thus, IR spectra can be used as an independent tool for determining the directions of optical axes and the types of oscillation centers in most crystalline materials. Wide-band-gap crystals with hydrogen bonds grown in H2O and D2O solutions proved to contain protons in the mobile phase. The crystals contain absorption centers related to H+ ions; OH−, H3O+ and H2O species; Si-O-H and I-O-H groups; and semi-heavy water (HDO) molecules. Their activation energies and the directions of the main optical axes were determined. The types of oscillation centers were clarified for a number of spectral bands. The experimental results confirm the possibility of using IR spectra for determining the type of oscillation centers and the presence of lattice anisotropy in test crystals.
These IR spectral studies address the fundamental and technical problem of determining the types of oscillation centers in the design of optical and laser crystals and in the development of reliable processes and diagnostic methods for the production and operation of crystals, e.g. for laser navigation of ships, laser location, security alarms, laser welding and cutting of metals, opto- and microelectronics, etc. | 4,389 | 2019-01-06T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Concurrent Process Histories and Resource Transducers
We identify the algebraic structure of the material histories generated by concurrent processes. Specifically, we extend existing categorical theories of resource convertibility to capture concurrent interaction. Our formalism admits an intuitive graphical presentation via string diagrams for proarrow equipments. We also consider certain induced categories of resource transducers, which are of independent interest due to their unusual structure.
Introduction
Concurrent systems are abundant in computing, and indeed in the world at large. Despite the large amount of attention paid to the modelling of concurrency in recent decades (e.g., [Hoa78, Mil80, Pet66, Mil99, Abr14]), a canonical mathematical account has yet to emerge, and the basic structure of concurrent systems remains elusive.
In this paper we present a basic structure that captures what we will call the material aspect of concurrent systems: as a process unfolds in time it leaves behind a material history of effects on the world, like the way a slug moving through space leaves a trail of slime. This slime is captured in a natural way by resource theories in the sense of [CFS16], in which morphisms of symmetric monoidal categories, conveniently expressed as string diagrams, are understood as transformations of resources.
From the resource theoretic perspective, objects of a symmetric monoidal category are understood as collections of resources, with the unit object denoting the empty collection and the tensor product of two collections consisting of their combined contents. Morphisms are understood as ways to transform one collection of resources into another.
1.1. Contributions and Related Work
Related Work. Monoidal categories are ubiquitous, if often implicit, in theoretical computer science. An example from the theory of concurrency is [MM90], in which monoidal categories serve a purpose similar to their purpose here. String diagrams for monoidal categories seem to have been invented independently a number of times, but until recently were uncommon in printed material due to technical limitations. The usual reference is [JS91]. We credit the resource-theoretic interpretation of monoidal categories and their string diagrams to [CFS16]. Double categories first appear in [Ehr63]. Free double categories are considered in [DP02] and again in [FPP08]. The idea of a proarrow equipment first appears in [Woo82], albeit in a rather different form. Proarrow equipments have subsequently appeared under many names in formal category theory (see e.g., [Shu08, GP04]). String diagrams for double categories and proarrow equipments are treated precisely in [Mye16]. We have been inspired by work on message passing and behavioural types, in particular [CP09], from which we have adopted our notation for exchanges.
Contributions. The main contribution of this paper is the resource-theoretic interpretation of the free cornering and the observation that it captures the structure of concurrent process histories. Other contributions concern the categorical structure of the free cornering of a resource theory: we show that it has crossing cells and is consequently a monoidal double category in Lemma 4.5 and Lemma 4.7, argue that the vertical cells are the original monoidal category in Proposition 4.4, show that the induced monoidal category of horizontal cells can be understood as a category of resource transducers, and establish Lemma 6.2, Lemma 6.3, Observation 6.4, Lemma 6.5, Lemma 6.6, and Proposition 6.8, all of which concern the structure of this category of horizontal cells. Finally, we give an axiomatization of the category of horizontal cells in terms of equations over a monoidal signature in Section 7. The original contributions of this paper over [Nes21b] are Lemma 6.2, Lemma 6.5, Lemma 6.6, Proposition 6.8, and the axiom scheme of Section 7.
1.2. Organization and Prerequisites
Prerequisites. This paper is largely self-contained, but we assume some familiarity with category theory, in particular with monoidal categories and their string diagrams. Some good references are [Mac71, Sel10, FS19].
Organization. In Section 2 we review the resource-theoretic interpretation of symmetric monoidal categories. We continue by reviewing the theory of double categories in Section 3, specialized to the single object case. In Section 4 we recall the notion of proarrow equipment, introduce the free cornering of a resource theory, and exhibit the existence of crossing cells in the free cornering. In Section 5 we show how the free cornering of a resource theory inherits its resource-theoretic interpretation while enabling the concurrent decomposition of resource transformations. In Section 6 we consider the category of resource transducers and investigate its structure, and in Section 7 we give an axiom scheme for it. In Section 8 we conclude and consider directions for future work.
Monoidal Categories as Resource Theories
Symmetric strict monoidal categories can be understood as theories of resource transformation. Objects are interpreted as collections of resources, with A ⊗ B the collection consisting of both A and B, and I the empty collection. Arrows f : A → B are understood as ways to transform the resources of A into those of B. We call symmetric strict monoidal categories resource theories when we have this sort of interpretation in mind.
For example, let B be the free symmetric strict monoidal category with generating objects {bread, dough, water, flour, oven} and with generating arrows mix : water ⊗ flour → dough, knead : dough → dough, and bake : dough ⊗ oven → bread ⊗ oven, subject to no equations. B can be understood as a resource theory of baking bread. The arrow mix represents the process of combining water and flour to form a bread dough, knead represents kneading dough, and bake represents baking dough in an oven to obtain bread (and an oven).
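As a concrete, admittedly informal reading of B, the sketch below models collections of resources as multisets and the generating arrows as rewrites of those multisets; the representation is an illustrative choice of ours, not part of the categorical definition.

```python
from collections import Counter

def apply(rule_in, rule_out, resources):
    """Apply a generating arrow (rule_in -> rule_out) to a multiset of resources."""
    resources = Counter(resources)
    need = Counter(rule_in)
    if any(resources[r] < n for r, n in need.items()):
        raise ValueError(f"missing inputs for rule {rule_in} -> {rule_out}")
    return resources - need + Counter(rule_out)

mix   = (["water", "flour"], ["dough"])
knead = (["dough"], ["dough"])
bake  = (["dough", "oven"], ["bread", "oven"])

state = Counter(["water", "flour", "water", "flour", "oven"])
for rule in [mix, mix, knead, knead, bake, bake]:  # bake the two loaves one after the other
    state = apply(*rule, state)
print(state)  # Counter({'bread': 2, 'oven': 1})
```

The final multiset corresponds to the composite transformation discussed next, in which two loaves of bread are produced with a single oven.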
The structure of symmetric strict monoidal categories provides natural algebraic scaffolding for composite transformations. For example, consider the following arrow of B, built from the generators together with the braiding. This arrow describes the transformation of two units of dough into loaves of bread by baking them one after the other in an oven.
It is often more intuitive to write composite arrows like this as string diagrams: objects are depicted as wires, and arrows as boxes with inputs and outputs. Composition is represented by connecting output wires to input wires, and we represent the tensor product of two morphisms by placing them beside one another. Finally, the braiding is represented by crossing the wires. (Footnote: We work with strict monoidal categories for the sake of convenience and readability. We expect the present development to apply equally well to the general case, and if pressed would appeal to the coherence theorem for monoidal categories [Mac71].)
Each transformation gives a method of baking two loaves of bread.On the left, two batches of dough are mixed and kneaded before being baked one after the other.On the right, first one batch of dough is mixed, kneaded and baked and only then is the second batch mixed, kneaded, and baked.Their equality tells us that, according to B, the two procedures will have the same effect, resulting in the same bread when applied to the same ingredients with the same oven.In this way, VD forms a strict monoidal category, which we call the category of vertical cells of D. Similarly, HD is also a strict monoidal category (with collection of objects D V ) which we call the horizontal cells of D.
Cornerings and Crossings
In this section we introduce the free cornering of a resource theory, our primary technical device, and show that the free cornering contains special crossing cells with nice formal properties. We begin by recalling the notion of proarrow equipment, specialised to the case of single-object double categories. Tersely, the free cornering of a resource theory is the proarrow equipment obtained by freely adding corner cells. Explicitly, we define: Definition 4.2. Let A be a resource theory. Then the monoid A
Now the free cornering is given as follows:
Definition 4.3. Let A be a resource theory. Then the free cornering of A, written A , is the free single-object double category determined by the following data: • The horizontal edge monoid A H = (A 0 , ⊗, I) is given by the objects of A.
• The vertical edge monoid A V = A •• is the monoid of A-valued exchanges.
• The generating cells consist of corners for each object A of A as in Definition 4.1, subject to the yanking equations, along with a vertical cell f for each morphism f : A → B of A, subject to equations as in the diagrams. For a precise development of free double categories see [FPP08]. In brief: cells are formed from the generating cells by horizontal and vertical composition, subject to the axioms of a double category in addition to any generating equations. We call this the "free" cornering both because it is freely generated, and because we imagine there is an adjunction relating proarrow equipments and arbitrary double categories under which A is "free" in a more principled sense. We leave the construction of such an adjunction for future work. An important property of the free cornering is that the vertical cells are the original resource theory: Proposition 4.4. There is an isomorphism of categories V A ≅ A.
Proof.Intuitively V A ∼ = A because in a composite vertical cell every wire bent by a corner must eventually be un-bent by the matching corner, which by yanking is the identity.The only other generators are the cells f , and so any vertical cell in A can be written as g for some morphism g of A. A more rigorous treatment of corner cells can be found in [Mye16], to the same effect.
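As an informal aid to intuition (not part of the formal definition), the sketch below encodes the A-valued exchanges of the vertical edge monoid as finite words of polarized object names, with the monoid operation given by concatenation and the empty word as unit; the "send"/"receive" labels are our own stand-ins for the two polarities, read from one participant's point of view.

```python
from typing import List, Tuple

# An exchange is a finite word of polarized object names,
# e.g. [("A", "send"), ("B", "receive"), ("C", "receive")].
Exchange = List[Tuple[str, str]]

def tensor(x: Exchange, y: Exchange) -> Exchange:
    """Monoid operation on exchanges: concatenation of the two sequences."""
    return x + y

unit: Exchange = []  # the empty exchange is the monoidal unit

ex = tensor([("A", "send")], tensor([("B", "receive")], [("C", "receive")]))
print(ex)
assert tensor(ex, unit) == ex == tensor(unit, ex)                # unit laws
assert tensor(tensor(ex, ex), ex) == tensor(ex, tensor(ex, ex))  # associativity
```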
Before we properly explain our interest in A we develop a convenient bit of structure: crossing cells.For each B of A H and each X of A V we define a cell of A inductively as follows: In the case where X is A • or A • , respectively, define the crossing cell as in the diagrams below on the left and right, respectively: From this we obtain a "non-interaction" property of our crossing cells, similar to the naturality of braiding in symmetric monoidal categories: Corollary 4.6.For cells α of V A and β of H A , the following equation holds in A : These crossing cells greatly aid in the legibility of diagrams corresponding to cells in A , but also tell us something about the categorical structure of A , namely that it is a monoidal double category in the sense of [Shu10]: Lemma 4.7.If A is a symmetric strict monoidal category then A is a monoidal double category.That is, A is a pseudo-monoid object in the strict 2-category VDblCat of double categories, lax double functors, and vertical transformations.
Proof.We give the action of the tensor product on cells: This defines a pseudofunctor, with the component of the required vertical transformation given by exchanging the two middle wires as in: Notice that ⊗ is strictly associative and unital, in spite of being only pseudo-functorial.
Concurrency Through Cornering
We proceed to extend the resource-theoretic interpretation of some symmetric strict monoidal category A to its free cornering A . We interpret elements of A •• as A-valued exchanges between a left and a right participant, who give each other resources in the specified order. We understand equality of cells in A much as we understand equality of morphisms in a resource theory: two cells should be equal in case the transformations they describe would have the same effect on the resources involved. In this way, cells of A allow us to break a transformation into many concurrent parts. Note that with the crossing cells, it is possible for cells that are not immediately adjacent to exchange resources across the cells in between them. In the above example, flour is sent from the rightmost cell to the leftmost cell across the middle cell. This makes the double-categorical structure less constraining than it may seem at first. For example we might rearrange our previous example into the following horizontally composable cells of B : When composed, we obtain a similar morphism of A: It is worth mentioning that the difference between oven ⊗ flour ⊗ water and water ⊗ oven ⊗ flour is negligible since any permutation of a collection of resources is naturally isomorphic to the original collection as an object of A.
Horizontal Cells as Resource Transducers
If A is a resource theory, then the category H A of horizontal cells of the free cornering can be understood as a category of (A-valued) resource transducers. Specifically, recall our interpretation of A •• = (H A ) 0 as A-valued exchanges, in which two parties Alice and Bob must supply or retrieve the resources involved in the exchange in the order specified, with who gives whom what determined by the polarity of the resources (see Section 5). Let h : X → Y be an arrow of H A . We can understand h as a machine operated by a left and right participant, again called Alice and Bob respectively. To operate the machine, Alice must play the left hand role of the domain exchange X and Bob must play the right hand role of the codomain exchange Y . The morphism h describes the internals of the machine. For example, consider the following morphism of H A :
To operate the transducer, Alice must supply water and then receive bread, while Bob must supply flour, receive dough, and then supply bread. The effect of the machine is to mix the flour and water initially supplied into the dough Bob receives, and then to send the bread Bob supplies to Alice.
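As a sanity check on this reading, the sketch below replays the example as a sequence of boundary events and verifies that every resource the machine hands out was previously received or produced internally; the event encoding and the run_transducer helper are hypothetical illustrations of ours, not a semantics for H A .

```python
def run_transducer(events):
    """Replay boundary events, tracking the resources currently held inside the machine."""
    held = []
    for action, resource in events:
        if action == "receive":
            held.append(resource)
        elif action == "mix":            # internal step: water + flour -> dough
            held.remove("water"); held.remove("flour"); held.append("dough")
        elif action == "send":
            held.remove(resource)        # raises ValueError if the resource is not available
        else:
            raise ValueError(f"unknown action {action}")
    return held

# Alice (left boundary): supplies water, later receives bread.
# Bob (right boundary): supplies flour, receives dough, then supplies bread.
trace = [("receive", "water"),   # from Alice
         ("receive", "flour"),   # from Bob
         ("mix", None),
         ("send", "dough"),      # to Bob
         ("receive", "bread"),   # from Bob
         ("send", "bread")]      # to Alice
print(run_transducer(trace))     # [] -- nothing left inside the machine
```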
The transducer interpretation (along with our previous interpretation of the whole of A ) makes H A into a category of independent interest, and in this section we will study it. Compounding our interest is the fact that H A is rather unusual. It is of course a monoidal category (see Section 3) but fails to have most of the properties common to monoidal categories. Selinger's survey paper [Sel10] lists many such properties, for example: Definition 6.1 [Sel10]. A monoidal category is spatial in case for all objects X and arrows h : I → I we have: It is easy to see that H A has the property of being spatial: Lemma 6.2. H A is spatial.
Proof.We use the fact that every symmetric monoidal category is spatial.The proof is by induction on the type X of the wire.If X is A • we have: and so the spatial axiom holds.Similarly the spatial axiom holds if X is A • .If X is I the spatial axiom holds trivially, and the inductive case is immediate.
We note that H A has no other property found in the aforementioned survey paper.
Much of the structure that H A does have consists of isomorphisms formed of corner cells.While isomorphic objects in V A ∼ = A can be thought of as equivalent collections of resources -being freely transformable into each other -we understand isomorphic objects in H A as equivalent exchanges.For example, there are many ways for Alice to give Bob an A and a B: Simultaneously, as A ⊗ B; one after the other, as A and then B; or in the other order, as B and then A. While these are different sequences of events, they achieve the same thing, and are thus equivalent.Similarly, for Alice to give Bob an instance of I is equivalent to nobody doing anything.Formally, we have: Lemma 6.3.In H A we have for any A, B of A: (1) Proof.(1) For I ∼ = I • , consider the •-corners corresponding to I: we know that these satisfy the yanking equations: which exhibits an isomorphism I ∼ = I • .Similarly, I ∼ = I • .Thus, we see formally that exchanging nothing is the same as doing nothing.
(2) The •-corner case is the interesting one: Define the components of our isomorphism to be: and then for both of the required composites we have: This captures formally the fact that if Alice is going to give Bob an A and a B, it doesn't really matter which order she does it in.
(3) Here it is convenient to switch between depicting a single wire of sort A ⊗ B and two wires of sort A and B respectively in our string diagrams. To this end, we allow ourselves to depict the identity on A ⊗ B in multiple ways, using the notation of [CS17]. The components of our isomorphism between (A ⊗ B) • and A • ⊗ B • are then given as in the diagrams and, much as in (2), it is easy to see that the two possible composites are both identity maps. Similarly for the other polarity. This captures formally the fact that giving away a collection is the same thing as giving away its components.
For example, we should be able to compose the cells on the left and right below horizontally, since their right and left boundaries, respectively, indicate equivalent exchanges: Our lemma tells us that in cases like this there will be a mediating isomorphism, as above in the middle, making composition possible.
It is worth noting that this does not extend to a symmetry: there is a morphism in one direction, defined by corner cells, but there need not be a morphism in the other direction, and the morphism we do have is not in general invertible. In particular, H A is monoidal, but need not be symmetric.
This observation reflects formally the intuition that if I receive some resources before I am required to send any, then I can send some of the resources that I receive.However, if I must send the resources first, this is not the case.In this way, H A contains a sort of causal structure.
Next, we find that H A contains the original resource theory A as a subcategory in two different ways, one for each polarity. Each inclusion preserves the tensor product up to the isomorphisms of Lemma 6.3 and is therefore strong monoidal. Further, (−) • is faithful because A is freely generated. It is full because of the coherence theorem of [Mye16], which implies that for any horizontal cell (morphism of H A ) h : A • → B • we may yank all of the wires straight to obtain an equal morphism f • = h for some f : A → B of A. Similarly, (−) • is functorial, strong monoidal, full, and faithful.
There is also a contravariant involution (−) * : H A op → H A . As an intermediate step we define an operation on the cells of A , given inductively below. We discuss one final bit of structure in H A , concerning the following arrows: these are reminiscent of the string diagrams for rigid monoidal categories, and they make A • into the left dual of A • (and so make A • into the right dual of A • ). However, H A is neither left nor right rigid: for example A • ⊗ B • has neither a left nor right dual. It is natural to ask whether the arrows introduced above carry significant categorical structure.
We give one answer, and in doing so connect the present work to Cockett and Pastro's logic of message passing [CP09].In particular, the categorical semantics of this logic of message passing is given by linear actegories.If A is a symmetric monoidal category, a linear A-actegory is given by a linearly distributive category X (see e.g., [CS17]) together with two functors: with nine natural families of arrows subject to a large number of coherence conditions.The category H A exhibits similar, if much simpler, structure.In particular the strong monoidal functors (−) • and (−) • of Lemma 6.5 allow us to define Echoing the definition of a linear actegory, we have: 3 It is tempting to call this a contravariant monoidal involution, but in the covariant case a monoidal involution (−) ι has the property that (f ⊗ g) ι = g ι ⊗ f ι , twisting the tensor product [Egg11].We refrain from coining any new technical terms lest a "contravariant monoidal involution" turn out to be better suited to describing contravariant involutions that twist the tensor product instead of those that do not.Now, every monoidal category is a linearly distributive category (with both monoidal operations given by ⊗), and it turns out that H A forms a (somewhat degenerate) linear actegory.Of the nine natural families of arrows required by the definition, four are accounted for by the isomorphisms of Lemma 6.3, a further four become identities in our setting, and the final one is given by the d • • morphisms from Observation 6.4.The coherence conditions all hold trivially.We record: Proposition 6.8.Let A be a resource theory.Then H A is a linear actegory.This is intriguing insofar as it exhibits a formal connection between the free cornering of a resource theory and existing work on behavioural types.For example, the message-passing interpretation of classical linear logic presented by Wadler in [Wad14] corresponds to the message-passing interpretation of linear actegories in the special case of a *-autonomous category acting on itself (Example 4.2(4) of [CP09]).There may be an even stronger connection to the behavioural type interpretation of intuitionistic linear logic due to Caires and Pfenning [CP10], although here the connection to the logic of message passing is weaker (Example 4.2(1) of [CP09]).We leave the full investigation of these connections for future work.
Axioms for Resource Transducers
We have seen that the category of horizontal cells of the free cornering of a resource theory is an interesting object of study in its own right: it is a planar monoidal category that arises naturally and is different from those typically considered.In this section we give a direct presentation of H A both to deepen our understanding of its structure and to facilitate its use as an example (or counterexample) in the future.While there are many axioms, they are mostly intuitive, and are conveniently organized into pairs by the contravariant involution (−) * of Lemma 6.6.
Let A be a resource theory. Define T(A) to be the free spatial strict monoidal category with the generating objects and the generating morphisms given as in the accompanying figures. The rules for the two polarities correspond to the image of the functors from Lemma 6.5, the remaining generators and the σ rules correspond to the isomorphisms of Lemma 6.3, and the η and ε rules correspond to the morphisms considered at the end of Section 6 that lead to Proposition 6.8. Before presenting the equations for T(A) we give string-diagrammatic conventions for our generators. Proof. That M is full follows from the coherence theorem for string diagrams for proarrow equipments [Mye16]. Intuitively, every arrow of H A is either in the image of (−) • or (−) • , or is built out of corner cells and crossing cells. Every horizontal cell of H A that can be built out of only corner cells and does not decompose into multiple such cells is the image of one of the generators of T(A), and so we know that M is full. Perhaps surprising is that the horizontal cell d of Observation 6.4 also decomposes in this way. To show that M is faithful is to show that the equations of T(A) capture all equations between horizontal cells of A when taken together with the equations of a spatial strict monoidal category. Recall that all of the equations of A are generated by the yanking equations, along with any equations of A. The yanking equations are local, in that each instance of one of the yanking equations involves exactly two cells of A , so we need only consider local interactions of cells of H A in our analysis. It is relatively straightforward to verify that the defining equations of T(A) are precisely the equations that arise in this way, and so M is faithful. Finally, M is clearly identity-on-objects.
It follows that our axiomatization of H A is correct. We record: Corollary 7.2. There is an isomorphism of categories H A ≅ T(A).
Conclusions and Future Work
We have shown how to decompose the material history of a process into concurrent components by working in the free cornering of an appropriate resource theory.We have explored the structure of the free cornering in light of this interpretation and found that it is consistent with our intuition about how this sort of thing ought to work.We do not claim to have solved all problems in the modelling of concurrency, but we feel that our formalism captures the material aspect of concurrent systems very well.
We find it quite surprising that the structure required to model concurrent resource transformations is precisely the structure of a proarrow equipment.This structure is already known to be important in formal category theory, and we are appropriately intrigued by its apparent relevance to models of concurrency -a far more concrete setting than the usual context in which one encounters proarrow equipments!Further, we have considered categories of resource transducers that are induced by our construction.We have identified some structure they do and do not exhibit, and have provided a more direct axiomatization of them.We are not aware of any categories with similar structure, which we feel makes these categories of resource transducers worthy of further study, and of potential value as a counterexample.
There are of course many directions for future work.For one, it would be nice to connect the development here to the wider literature on concurrent processes.An obstacle to this is that the free cornering does not allow us to express branching or recursion, both of which feature heavily in more general theories of process communication.If we assume that our monoidal category A has binary coproducts then we may represent a limited sort of branching computation in which (A + B) • and (A + B) • represent choices to be made by the left and right participant respectively, but this is less flexible than the protocol-level choice that one finds in e.g.session types or the nondeterminism of process calculi.We speculate that this is best approached through the "situated transition systems" introduced in [Nes21a], in which the concurrent resource transformations developed in [Nes21b] (which this paper extends) are used to augment the category of spans of reflexive graphs -interpreted as open transition systems [KSW97] -to generate material history over some resource theory as transitions unfold in time.Alternatively, one might impose additional structure on the free cornering to allow nondeterministic choice and repetition.
Another direction for future work is to pursue the connection with the message passing logic of Cockett and Pastro [CP09] (established in Proposition 6.8) and the wider programme of behavioural types influenced by linear logic including [Wad14] and [CP10].Finally, the presence of proarrow equipments here is rather mysterious, and we wonder if some deeper reason for it might exist.
3.
Single-Object Double Categories In this section we set up the rest of our development by presenting the theory of singleobject double categories, being those double categories D with exactly one object.In this case D consists of a horizontal edge monoid D H = (D H , ⊗, I), a vertical edge monoid D V = (D V , ⊗, I), and a collection of cells where A, B ∈ D H and X, Y ∈ D V .Given cells α, β where the right boundary of α matches the left boundary of β we may form a cell α|β -their horizontal composite -and similarly if the bottom boundary of α matches the top boundary of β we may form α β -their vertical composite -with the boundaries of the composite cell formed from those of the component cells using ⊗.We depict horizontal and vertical composition, respectively, as in: and Horizontal and vertical composition of cells are required to be associative and unital.We omit wires of sort I in our depictions of cells, allowing us to draw horizontal and vertical identity cells, respectively, as in: and Finally, the horizontal and vertical identity cells of type I must coincide -we write this cell as I and depict it as empty space, see below on the left -and vertical and horizontal composition must satisfy the interchange law.That is, α β | γ δ = α|γ β|δ , allowing us to unambiguously interpret the diagram below on the right: Every single-object double category D defines strict monoidal categories VD and HD, consisting of the cells for which the D H and D V valued boundaries respectively are all I, as in: and That is, the collection of objects of VD is D H , composition in VD is vertical composition of cells, and the tensor product in VD is given by horizontal composition: Definition 4.1.Let D be a single-object double category.D is called a proarrow equipment in case for each A ∈ D H there are distinguished elements A • and A • of D V along with distinguished cells of D: called •-corners and •-corners respectively, which satisfy the yanking equations: objects of A, whose elements we write A • and A • .Intuitively, elements of A •• describe a sequence of resources moving between participants in the exchange, where A • denotes an instance of A moving from left to right, and A • denotes an instance of A moving from right to left (see Section 5).
in the case where X is I, define the crossing cell as in the diagram below on the left, and in the composite case define the crossing cell as in the diagram below on the right: We prove a technical lemma: Lemma 4.5. For any cell α of A we have Proof. By structural induction on cells of A . For the •-corners we have: and for the •-corners, similarly: the final base cases are the f maps. There are two inductive cases. For vertical composition, we have: Horizontal composition is similarly straightforward, and the claim follows by induction.
a left participant and a right participant giving each other resources in sequence, with A • indicating that the left participant should give the right participant an instance of A, and A • indicating the opposite.For example say the left participant is Alice and the right participant is Bob.Then we can picture the exchange A • ⊗ B • ⊗ C • as: Alice Bob Think of these exchanges as happening in order.For example the exchange pictured above demands that first Alice gives Bob an instance of A, then Bob gives Alice an instance of B, and then finally Bob gives Alice an instance of C. We interpret cells of A as concurrent transformations.Each cell describes a way to transform the collection of resources given by the top boundary into that given by the bottom boundary, via participating in A-valued exchanges along the left and right boundaries.For example, consider the following cells of B : From left to right, these describe: A procedure for transforming water into nothing by mixing it with flour obtained by exchange along the right boundary, then sending the resulting dough away along the right boundary; A procedure for transforming an oven into an oven, receiving flour along the right boundary and sending it out the left boundary, then receiving dough along the left boundary, which is baked in the oven, with the resulting bread sent out along the right boundary; Finally, a procedure for turning flour into bread by giving it away and then receiving bread along the left boundary.When we compose these concurrent transformations horizontally in the evident way, they give a transformation of resources in the usual sense, i.e., a morphism of A ∼ = V A :
Lemma 6.5.
There are strong monoidal functors (−) • : A → H A and (−) • : A op → H A defined respectively on f : A → B of A as in the diagrams. Further, each of these functors is full and faithful. Proof. (−) • is functorial and interacts with the tensor product in A as shown, and (X ⊗ Y ) * = X * ⊗ Y * . On cells of A we also define (−) * inductively: the base cases are f * = f along with the corner cells, whose images are their mirror reflections. Informally, α * is the mirror image of α. It is easy to see that we have α * * = α for any cell α of A . Thus, restricting (−) * to H A gives: Lemma 6.6. There is a contravariant involution (−) * : H A op → H A with the property that (f ⊗ g) * = f * ⊗ g * . Lemma 6.7. A • − is the parameterised left adjoint of A • −. That is, for all A ∈ A the functors A • − : H A → H A and A • − : H A → H A defined on h : X → Y as in the diagrams are such that A • − ⊣ A • −. Proof. Fix an object A ∈ A. We require natural families of morphisms η A,X : X → A • (A • X) and ε A,X : A • (A • X) → X in H A that satisfy the triangle identities. Define η A,X and ε A,X as shown; the triangle identities then hold by repeated yanking. We therefore conclude that A • − ⊣ A • −, as required.
Observation 6.4 decomposes in this way, being the image under M of the following morphism in T(A): | 7,586.6 | 2020-10-16T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Measurement of the t t-bar production cross section in the dilepton channel in pp collisions at sqrt(s) = 8 TeV
The top-antitop quark (t t-bar) production cross section is measured in proton-proton collisions at sqrt(s) = 8 TeV with the CMS experiment at the LHC, using a data sample corresponding to an integrated luminosity of 5.3 inverse femtobarns. The measurement is performed by analysing events with a pair of electrons or muons, or one electron and one muon, and at least two jets, one of which is identified as originating from hadronisation of a bottom quark. The measured cross section is 239 +/- 2 (stat.) +/- 11 (syst.) +/- 6 (lum.) pb, for an assumed top-quark mass of 172.5 GeV, in agreement with the prediction of the standard model.
Introduction
A precise measurement of the tt production cross section can be used to test the theory of quantum chromodynamics (QCD) at next-to-next-to-leading-order (NNLO) level. It can also be used in global fits of the parton distribution functions (PDF) at NNLO, and allows an estimation of α s (M Z ) as described in [1,2]. Furthermore, top-quark production is an important source of background in many searches for physics beyond the standard model (SM). A large sample of top-quark events has been collected at the Large Hadron Collider (LHC), and studies of top-quark production have been conducted in various decay channels, as well as searches for deviations from the SM predictions [3][4][5][6][7][8][9]. This paper presents a measurement of the tt production cross section, σ tt , based on the dilepton channel (e + e − , µ + µ − , and e ± µ ∓ ) in a data sample of proton-proton collisions at √ s = 8 TeV corresponding to an integrated luminosity of 5.3 fb −1 recorded by the Compact Muon Solenoid (CMS) experiment. In the SM, top quarks are predominantly produced in tt pairs via the strong interaction and decay almost exclusively to a W boson and a bottom quark. We measure the tt production cross section selecting final states that contain two leptons of opposite electric charge, momentum imbalance associated to the neutrinos from the W boson decays, and two jets of particles resulting from the hadronisation of two b quarks.
The CMS detector and simulation
The CMS detector [10] has a superconducting solenoid occupying the central region that provides an axial magnetic field of 3.8 T. The silicon pixel and strip trackers cover 0 < φ < 2π in azimuth and |η| < 2.5 in pseudorapidity, where η is defined as η = − ln[tan(θ/2)], with θ being the polar angle measured with respect to the anticlockwise-beam direction. The lead-tungstate crystal electromagnetic calorimeter and the brass/scintillator hadron calorimeter are located inside the solenoid. Muons are measured in gas-ionisation detectors embedded in the steel flux return yoke outside the solenoid. The detector is nearly hermetic, thereby providing reliable measurement of momentum imbalance in the plane transverse to the beams. A two-tier trigger system selects the most interesting pp collisions for offline analysis.
Several MC event generators are used to simulate signal and background events: MADGRAPH (v.5.1.4.8) [11], POWHEG (r1380) [12] and PYTHIA (v.6.424) [13], depending on the process considered. The MADGRAPH generator with spin correlations is used to model tt events with a top-quark mass of 172.5 GeV and is combined with PYTHIA to simulate parton showering, hadronisation, and the underlying event. The MADGRAPH generator is also used to simulate the W+jets and Drell-Yan (DY) processes. Single-top-quark events are simulated using POWHEG. Inclusive production of the WZ and ZZ diboson final states is simulated with PYTHIA. Production of WW fully leptonic final states is simulated with MADGRAPH. Decays of τ leptons are handled with TAUOLA (v.2.75) [14]. The contributions from WW, WZ and ZZ (referred to as "VV") and single-top-quark production are taken from MC simulations with appropriate next-to-leading-order (NLO) cross sections. All other backgrounds are estimated from control samples extracted from collision data. The tt production cross section amounts to σ tt = 252.9 +6.4 −8.6 (scale) ± 11.7 (PDF + α s ) pb, as calculated with the TOP++ program [15] at NNLO in perturbative QCD, including soft-gluon resummation at next-to-next-to-leading-log order [16], and assuming a top-quark mass m t = 172.5 GeV. The first uncertainty comes from the independent variation of the factorisation and renormalisation scales, µ F and µ R , while the second one is associated to variations in the PDF and α s following the PDF4LHC prescriptions [17]. Expected signal yields in figures and tables are normalised to that value unless otherwise stated.
The simulated samples include additional interactions per bunch crossing (pileup), with the distribution matching that observed in data.
Event selection
Event selection is similar to that used for the measurement of the tt dilepton cross section at √ s = 7 TeV [4].At trigger level, events are required to have two electrons, two muons, or one electron and one muon, where one of these leptons has transverse momentum p T > 17 GeV and the other has p T > 8 GeV.Events are then selected with two oppositely charged leptons reconstructed with the CMS particle-flow (PF) algorithm [18], both with p T > 20 GeV and |η| < 2.5 for electrons and |η| < 2.1 for muons.In events with more than one pair of leptons passing these selections, the pair of opposite-sign leptons with the largest value of total transverse momentum is selected.Events with τ leptons contribute to the measurement only if they decay to electrons or muons that satisfy the selection requirements.The efficiency for dilepton triggers is measured in data through triggers based on transverse momentum imbalance.The trigger efficiency is approximately 90% to 93% for the three final states.Using the measured dilepton trigger efficiency in data, the corresponding efficiencies in the simulation are corrected by p T and η multiplicative data-to-simulation scale factors (SFs), which have an average value of 0.96 and uncertainties in the range 1 to 2%.
Charged-lepton candidates from W-boson decays are usually isolated from other particles in the event. For each electron or muon candidate, a cone of ∆R < 0.3 is constructed around the track direction at the event vertex, where ∆R is defined as ∆R = √((∆η)² + (∆φ)²), and ∆η and ∆φ are the differences in pseudorapidity and azimuthal angle between any energy deposit and the axis of the lepton track. The scalar sum of the p T of all particles reconstructed with the PF algorithm, consistent with the chosen primary vertex and contained within the cone, is calculated, excluding the contribution from the lepton candidate itself. The relative isolation discriminant, I rel , is defined as the ratio of this sum to the p T of the lepton candidate. The neutral component is corrected for pileup based on the average energy density deposited by neutral particles in the event: an average transverse energy due to pileup is determined event by event and is subtracted from the transverse energy in the isolation cone. A lepton candidate is rejected if I rel > 0.15. The efficiency of the lepton selection is measured using a "tag-and-probe" method in dilepton events enriched in Z-boson candidates, as described in [4,19]. The measured values for the combined identification and isolation efficiencies are typically 96% for muons and 90% for electrons. Based on a comparison of lepton selection efficiencies in data and simulation, the event yield in simulation is corrected by p T - and η-dependent SFs, which have an average value of 0.99 and uncertainties in the range 1 to 2%, to provide consistency with data. Considering also the dilepton trigger, the combined factors have an average value of 0.96 and uncertainties around 2% for the three tt final states.
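As a rough sketch of the isolation requirement just described (under simplified assumptions of ours: a flat particle list and no pileup correction, rather than the CMS particle-flow implementation), the relative isolation could be computed as follows.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the azimuthal difference wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, particles, cone=0.3):
    """I_rel = scalar sum of particle pT inside the cone (lepton excluded) / lepton pT."""
    iso_sum = sum(p["pt"] for p in particles
                  if p is not lepton
                  and delta_r(lepton["eta"], lepton["phi"], p["eta"], p["phi"]) < cone)
    return iso_sum / lepton["pt"]

lep = {"pt": 35.0, "eta": 0.4, "phi": 1.2}
pf_candidates = [lep,
                 {"pt": 2.0, "eta": 0.5, "phi": 1.1},   # inside the cone
                 {"pt": 8.0, "eta": 2.0, "phi": -0.5}]  # outside the cone
print(relative_isolation(lep, pf_candidates))  # 2.0 / 35.0 ~ 0.057 -> passes I_rel < 0.15
```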
Dilepton candidate events with an invariant mass M ℓℓ < 20 GeV (ℓ = e or µ) are removed to suppress backgrounds from heavy-flavour resonances, as well as contributions from low-mass DY processes. Events with dilepton invariant masses within ±15 GeV of the Z mass are also rejected in the same-flavour channels.
Jets are reconstructed from the PF particle candidates using the anti-k T clustering algorithm [20] with a distance parameter of 0.5.The jet energy is corrected for pileup in a manner similar to the correction of the energy inside the lepton isolation cone.Jet energy corrections are also applied as a function of the jet p T and η [21].Events are required to have at least two reconstructed jets with p T > 30 GeV and |η| < 2.5.
The missing transverse energy, E T / , is defined as the magnitude of the momentum imbalance, which is the negative sum of the momenta of all reconstructed particles in the plane transverse to the beams.A value of E T / > 40 GeV is required in the e + e − and µ + µ − channels while no E T / requirement is imposed for the e ± µ ∓ mode, as there is very little contamination from DY events in this channel.
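For illustration, a minimal sketch of this definition is given below: the negative vector sum of the transverse momenta of all reconstructed particles is formed in the transverse plane and its magnitude is taken. The toy particle list is invented for the example.

```python
import math

def missing_transverse_energy(particles):
    """Return the magnitude of the negative vector sum of particle transverse momenta."""
    px = -sum(p["pt"] * math.cos(p["phi"]) for p in particles)
    py = -sum(p["pt"] * math.sin(p["phi"]) for p in particles)
    return math.hypot(px, py)

event = [{"pt": 40.0, "phi": 0.1},    # leading lepton
         {"pt": 25.0, "phi": 2.5},    # second lepton
         {"pt": 60.0, "phi": -2.9},   # b-tagged jet
         {"pt": 35.0, "phi": 1.8}]    # second jet
print(missing_transverse_energy(event))  # in GeV; compare with the 40 GeV requirement
```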
Since tt events contain jets from hadronisation of b quarks, requiring their presence can reduce background from events without b quarks.Jets are identified as b jets using the combined secondary vertex algorithm (CSV) [22].The operating point chosen for CSV corresponds to an identification efficiency of about 85% and a misidentification (mistag) probability of about 10% [23] for light-flavour jets (u, d, s and gluons).The selection requires the presence of at least one b jet in the event.
Figure 1 shows the p T distributions of the highest-p T lepton and jet after jet multiplicity selection, for all three final states combined.In this and the following figures the signal yields refer to an assumed top-quark mass of 172.5 GeV.The hatched regions correspond to the total statistical uncertainties in the predicted event yields.The ratio of the data to the sum of simulations and data-based predictions for the signal and backgrounds is shown in the bottom panels.A detailed description of the different background estimates is given in section 4. The multiplicities of selected jets and b jets are shown in figure 2 for the e ± µ ∓ channel, which is expected to have less background contamination.A similar level of agreement is obtained with the e + e − and µ + µ − channels.
Background determination
Backgrounds in this analysis arise from single-top-quark, DY and VV events, in which at least two prompt leptons are produced from Z or W decays.Other background sources, such as tt or W+jets events with decays into lepton+jets and where at least one jet is incorrectly reconstructed as a lepton (which mainly happens for electrons) or a lepton from the decay of bottom or charm hadrons (which mainly happens for muons), are grouped into the non-W/Z lepton category.Background yields from single-top-quark and VV events are estimated from simulation, while all other backgrounds are estimated from data.
The DY background is estimated using the "R out/in " method [3,4,24] in which the events outside of the Z mass window are obtained by normalising the event yield from simulation to the observed number of events inside the Z mass window.The data-to-simulation scale factor is found to be 1.3 ± 0.4 for the e ± µ ∓ channel.This value is compatible with 1.5 ± 0.5, which is estimated using a template fit as described in [4].For the e + e − and µ + µ − channels the factors are found to be 1.7 ± 0.5 and 1.6 ± 0.5, respectively.
Non-prompt leptons can arise from decays of mesons or heavy-flavour quarks, jet misidentification, photon conversions, or finite resolution detector effects whereas prompt leptons usually originate from decays of W or Z bosons and are isolated and well identified.Backgrounds with non-prompt leptons are estimated [25] from a control sample of collision data in which leptons are selected with relaxed identification and isolation requirements defining the loose lepton candidate, while the set of signal selection cuts described in section 3 defines the tight lepton candidate.The prompt and non-prompt lepton ratios are defined as the ratio of the number of tight candidates to the number of loose ones as measured from samples enriched in leptonic decays of Z bosons or in QCD dijet events, respectively.These ratios, parametrized as a function of p T and η of the lepton, are then used to weight the events in the loose-loose dilepton sample, to obtain the estimated contribution from the non-prompt lepton background in the signal region.The systematic uncertainty comes from the jet p T spectrum in dijet events and amounts, together with the statistical one, to 40% of the estimated yield.
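The weighting idea can be illustrated with a deliberately simplified, single-lepton version of the prompt/non-prompt ratio ("matrix") method; the analysis itself applies p_T- and η-dependent ratios to loose-loose dilepton events, and the numbers below are invented placeholders.

```python
def matrix_method(n_loose, n_tight, prompt_rate, fake_rate):
    """
    Single-lepton toy version of the prompt/non-prompt weighting idea: solve
        n_loose = N_prompt + N_fake
        n_tight = p * N_prompt + f * N_fake
    for the non-prompt contribution expected among the tight (signal-like) leptons.
    """
    n_fake = (prompt_rate * n_loose - n_tight) / (prompt_rate - fake_rate)
    return fake_rate * n_fake

# Illustrative numbers only (not from the measurement).
print(matrix_method(n_loose=1000.0, n_tight=820.0, prompt_rate=0.90, fake_rate=0.20))
```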
Sources of systematic uncertainty
Simulated events are scaled according to the lepton efficiency correction factors, which are typically close to one, measured using control samples in data, leading to a 1 to 2% uncertainty in the tt selection efficiency.
The impact of uncertainty in the jet energy scale (JES) and jet energy resolution (JER) are estimated from the change observed in the number of selected MC tt events after varying the jet momenta within the JES uncertainties [21], and in the case of JER by an η-dependent correction with an average of ±10%.For the e + e − and µ + µ − channels these uncertainties are also propagated to E T / resulting in a larger uncertainty than for the e ± µ ∓ channel.
The uncertainties on the b jet scale factors in tt signal events are approximately 2% for b jets and 10% for mistagged jets [22,23], depending on the p T of the jets.They are propagated to the tt selection efficiency in simulated events.
The uncertainty assigned to the pileup simulation amounts to 0.8%, as obtained by varying the inelastic cross section by 5%.The uncertainty in the integrated luminosity is 2.6% [26].
The systematic effects related to the missing higher-order diagrams in MADGRAPH are estimated with two different methods.The uncertainty in the signal acceptance is determined by varying the renormalisation and factorisation scales simultaneously up and down by a factor of two using MADGRAPH, and the uncertainty is taken as the maximum difference after the final event selection.The effect on the calculated tt production cross section is 2.3%, which is the value used in the analysis for this uncertainty.This estimate is cross-checked by comparing the predictions of the leading-order and NLO generators MADGRAPH and POWHEG, where both use PYTHIA for hadronisation and extra radiation.The systematic uncertainty is found to be 2.2%, comparable with the above estimate.
The matching between the matrix elements (ME) and the parton shower (PS) evolution is done by applying the MLM prescription [27].Changing the thresholds that control the matching of partons from the matrix element with those from PS by factors of 0.5 and 2.0 for one of the parameters (minimum k T measure between partons) and 0.75 and 1.5 for the other (jet matching threshold for the k T -MLM scheme) compared to the default thresholds, produces a 1.6% variation in the tt event selection efficiency.
The uncertainty arising from the hadronisation model affects mainly the JES and the fragmentation of b jets.As the b-jet efficiencies and mistagging rates are taken from data, no additional uncertainty is expected from this source.The uncertainty in the JES already contains a contribution from the uncertainty in the hadronisation.The hadronisation uncertainty is also determined by comparing samples of events generated with POWHEG where the hadronisation is modelled with PYTHIA or HERWIG, and the effect on the calculated tt cross section is 1.4%, which is well within the JES uncertainty.
Uncertainties in the selected number of single-top-quark and VV events are calculated following the same prescription as for the signal yield.In addition, an uncertainty in the cross sections for single-top-quark and VV backgrounds, taken from measurements and estimated to be approximately 20% [28-36], is added in quadrature.
Table 1 summarizes the magnitude of the systematic uncertainties on the tt production cross section measurement.
Results
The tt production cross section is measured by counting events after applying the selection criteria described in section 3. Table 2 shows the total number of events observed in data and the number of signal and background events expected from simulation or estimates from data.Table 3 lists the mean acceptance (which contains contributions from W → τν τ , with leptonic τ decays) multiplied by the selection efficiency and the branching fraction in the dilepton final state, and the measured cross section for each of the three final states, e + e − , µ + µ − , and e ± µ ∓ , which give compatible results.The e + e − and µ + µ − channels have two additional sources of uncertainty, arising from the DY background estimation and from the propagation of the JES to the E T / estimation, which limit the precision of the measurement of σ tt in those final states.
A combination of the three final states using the BLUE method [37] yields a measured cross section of σ tt = 239.0 ± 2.1 (stat.) ± 11.3 (syst.) ± 6.2 (lum.) pb for a top-quark mass of 172.5 GeV.
In the combination, the systematic uncertainties are 100% correlated across channels, except those associated to the lepton efficiencies, which have a correlation coefficient of 0.64 for e + e − with e ± µ ∓ and 0.55 for µ + µ − with e ± µ ∓ .Finally, the uncertainties associated with the databased estimates and the statistical uncertainties are taken as uncorrelated.
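For illustration, a minimal sketch of a BLUE combination of three channel measurements is given below. The weight formula is the standard one, w = C⁻¹1 / (1ᵀ C⁻¹ 1); the numerical inputs and the covariance matrix are invented placeholders rather than the values and correlations quoted above.

```python
import numpy as np

def blue_combination(measurements, covariance):
    """Best Linear Unbiased Estimate of several measurements of the same quantity."""
    x = np.asarray(measurements, dtype=float)
    cov = np.asarray(covariance, dtype=float)
    ones = np.ones_like(x)
    cinv = np.linalg.inv(cov)
    weights = cinv @ ones / (ones @ cinv @ ones)
    combined = weights @ x
    variance = weights @ cov @ weights
    return combined, np.sqrt(variance)

# Hypothetical channel results (pb) with a toy covariance matrix (pb^2).
x = [244.0, 235.0, 239.0]                       # e+e-, mu+mu-, e-mu channels
cov = [[225.0, 100.0, 90.0],
       [100.0, 196.0, 85.0],
       [ 90.0,  85.0, 144.0]]
print(blue_combination(x, cov))
```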
In this analysis the dependence of the acceptance on the top-quark mass is found to be quadratic within the present uncertainty of the top-quark mass [38]. The cross-section dependence in the range 160-185 GeV can be parametrized as a quadratic function of m t , where m t is given in GeV. Assuming a top-quark mass value of 173.2 GeV [38], a cross section value σ tt = 237.5 ± 13.1 pb is obtained.
Figure 3 shows the distributions of M , E T / and the difference of the azimuthal angle between the two selected leptons (∆φ ) and their ratios to expectations for the e ± µ ∓ channel, which dominates the combination.Figure 3: Distributions of (upper left) the dilepton invariant-mass, (upper right) the E T / , and (lower) the difference of the azimuthal angle between the two selected leptons, after the b-jet multiplicity selection and for the e ± µ ∓ channel.For the first two plots the last bin contains the overflow events.The expected distributions for tt signal, in this case, are normalised to the measured tt cross section.The hatched bands correspond to the total uncertainty in the predicted event yields for the sum of the tt and background predictions.The ratios of data to the sum of the expected yields are given at the bottom.
Summary
A measurement of the tt production cross section in proton-proton collisions at √ s = 8 TeV is presented for events containing a lepton pair (e + e − , µ + µ − , e ± µ ∓ ), at least two jets with at least one tagged as a b jet, and a large imbalance in transverse momentum in the final state. The measurement is obtained through an event-counting analysis based on a data sample corresponding to 5.3 fb −1 . The result obtained by combining the three final states is σ tt = 239 ± 2 (stat.) ± 11 (syst.) ± 6 (lum.) pb, in agreement with the prediction of the standard model for a top-quark mass of 172.5 GeV.
Table 2: Number of dilepton events after applying the event selection and requiring at least one b jet.The results are given for the individual sources of background, tt signal with a top-quark mass of 172.5 GeV and σ tt = 252.9pb, and data.The uncertainties correspond to the statistical and systematic components added in quadrature.
Figure 1 :
Figure1: The p T distributions of the highest-p T lepton (left) and jet (right) after the jet multiplicity selection, for all three final states.The expected distributions for tt signal and individual backgrounds are shown after data-based corrections are applied; the last bin contains the overflow events.The hatched bands correspond to the total statistical uncertainty in the event yields for the sum of the tt and background predictions.The ratios of data to the sum of the expected yields are given at the bottom.
Figure 2 :
Figure2: Jet multiplicity (left) in events passing the dilepton criteria, and (right) b-jet multiplicity in events passing the full event selections but before the b-jet requirement, for the e ± µ ∓ channel.In the right figure, the hatched bands show the total statistical and b-jet systematic uncertainties in the event yields for the sum of the tt and background predictions.The hatched bands in the left figure show only the total statistical uncertainty on the predicted event yields.The ratios of data to the sum of the expected yields are given at the bottom.
Table 1 :
Summary of the individual contributions to the systematic uncertainty on the σ tt measurement. The uncertainties are given in pb. The statistical uncertainty on the result is given for comparison. Columns: Source; e + e − ; µ + µ − ; e ± µ ∓ | 4,865 | 2014-02-05T00:00:00.000 | [
"Physics"
] |
Dietary Patterns Influence Target Gene Expression through Emerging Epigenetic Mechanisms in Nonalcoholic Fatty Liver Disease
Nonalcoholic fatty liver disease (NAFLD) refers to the pathologic buildup of extra fat in the form of triglycerides in liver cells without excessive alcohol intake. NAFLD has become the most common cause of chronic liver disease and is tightly associated with key aspects of metabolic disorders, including insulin resistance, obesity, diabetes, and metabolic syndrome. It is generally accepted that multiple mechanisms and pathways are involved in the pathogenesis of NAFLD. Heredity, sedentary lifestyle, a westernized high-sugar, saturated-fat diet, metabolic derangements, and gut microbiota may all interact in a genetically susceptible individual to cause disease initiation and progression. While there is an unquestionable role for gene-diet interaction in the etiopathogenesis of NAFLD, it is increasingly apparent that epigenetic processes can orchestrate many aspects of this interaction and provide additional mechanistic insight. Exciting research has demonstrated that epigenetic alterations in chromatin can influence gene expression, chiefly at the transcriptional level, in response to an unbalanced diet, and therefore predispose an individual to NAFLD. Thus, further discoveries into the molecular epigenetic mechanisms underlying the link between nutrition and aberrant hepatic gene expression can yield new insights into the pathogenesis of NAFLD, and allow innovative epigenetic-based strategies for its early prevention and targeted therapies. Herein, we outline the current knowledge of the interactive role of a high-fat, high-calorie diet and gene expression, through DNA methylation and histone modifications, in the pathogenesis of NAFLD. We also provide perspectives on the advancement of epigenomics in the field and possible shortcomings and limitations ahead.
Introduction
Nonalcoholic fatty liver disease (NAFLD) includes a spectrum of features spanning from the simple accumulation of triglycerides (TG) in hepatocytes (hepatic steatosis) to nonalcoholic steatohepatitis (NASH), which is characterized by the presence of an inflammatory infiltrate and hepatocellular injury [1], and may further evolve to cirrhosis and hepatocellular carcinoma (HCC) [2]. Based on the close association between hepatic steatosis and metabolic dysregulation, international consensus guidelines recommended the renaming of NAFLD to metabolic associated fatty liver disease (MAFLD) [3,4]. Several emerging research studies are providing support for the shift to the novel nomenclature and its criteria for diagnosis. For example, van Kleef et al. recently suggested that using the novel MAFLD criteria would help to improve the identification and treatment of fatty liver disease patients at risk for fibrosis [5]. Other investigations demonstrated the importance of the MAFLD criteria in identifying individuals with impaired liver health and increased cardiovascular risk [6][7][8]. However, the proposed terminology may change with substantial advancement of our scientific knowledge in the field. NAFLD is emerging as the most common cause of chronic liver disease, especially in countries that consume a western diet that is high in saturated fat, trans fat, and refined sugars [9]. The prevalence of NAFLD is estimated to be between 25% and 45% in the general population [10], and 70-90% among patients with metabolic comorbidities such as obesity, type 2 diabetes mellitus (T2DM) or metabolic syndrome (MetS) [11]. In fact, epidemiological and clinical studies have demonstrated that NAFLD has, in addition to intrahepatic lesions, devastating health consequences beyond the liver, and is commonly and intimately linked to metabolic disorders such as obesity, insulin resistance, T2DM [12,13], inflammation, mitochondrial damage, and oxidative stress response [14]. In addition, patients with NAFLD are at substantial risk for the development of cardiovascular diseases (CVD) [15]. Since NAFLD is recognized as the hepatic manifestation of metabolic syndrome, a recent study suggested that the inclusion of steatosis in the panel of MetS diagnostic risk factors improves the predictive power for cardiovascular risk better than the current MetS criteria [16].
The root causes of NAFLD were extensively debated during the last few years. While investigations brought forward evidence that this disorder may be caused by a plethora of modifiers including sedentary lifestyle, metabolic derangements, gut microbiota, genetic predisposition, and epigenetic factors [1], unhealthy diet remains the factor that contributes the most (Figure 1) [17]. With respect to the genetic component, studies of families and twins, as well as genome-wide association studies (GWAS), provided evidence for an element of heritability in NAFLD [18,19]. GWAS carried out mainly in adult cohorts led to the identification of various genetic variants that could potentially serve as biomarkers for the early prediction of individual risk [20]. Among these, genetic variants in patatin-like phospholipase domain-containing protein 3 (PNPLA3), transmembrane 6 superfamily member 2 (TM6SF2), and membrane-bound O-acyltransferase domain-containing 7 (MBOAT7), which are involved in lipid droplet remodeling and very-low-density lipoprotein secretion, are considered the major determinants of interindividual differences in the NAFLD trait [21,22]. However, the specificity of these variants remains unknown, and genetics alone cannot explain the large variability in the prevalence of NAFLD [23].
Environmental factors including sedentary lifestyle, overconsumption of a high-fat western-type diet (HFD), and increased intake of sweetened beverages are major risk factors for the onset and progression of NAFLD [24]. In this respect, a recent report indicated that HFD-induced maternal hypercholesterolemia predisposed offspring to NAFLD and metabolic diseases [25]. Moreover, numerous studies have established a direct link between the consumption of various nutrients and long-term liver damage from NAFLD [26,27]. In a similar way, Nobili et al. reported an association between fructose consumption and NASH in a cohort of children and adolescents with a histologically confirmed diagnosis of NAFLD [28]. Conversely, the restriction of fructose intake was associated with a reduction in hepatic fat content and de novo lipogenesis [29]. More interestingly, human data suggest that exposure to excess maternal fuels during pregnancy could prime the fetal liver for NAFLD and might drive the risk for NASH in the next generation [30]. Together, these findings imply that healthful dietary patterns and the intake of unsaturated fats are protective against NAFLD. In addition, certain dietary supplements could be useful in preventing the development and/or worsening of liver steatosis in patients with NAFLD. Nowadays, healthy dietary habits represent a key factor in enhancing health status and welfare; indeed, within the scientific community, diet supplementation is widely accepted as a useful strategy to modulate and/or optimize the biochemical and molecular pathways that orchestrate the metabolic responses to both physiological and pathological conditions [31]. Of note, since dietary habits and lifestyle play a chief role in the prevention and treatment of NAFLD in humans, the search for effective nutritional strategies to reduce the risk of liver disease is worthy of investigation [32].
Figure 1. Schematic representation of potential epigenetic and genetic events that are altered by diet, leading to aberrant expression of NAFLD-related genes. Environmental factors (E) such as nutrition, genetic factors (G), gut microbiota, and the intrauterine environment act collectively with the epigenetic landscape to induce the NAFLD phenotype and associated complications. Diet represents one of the greatest environmental determinants of an individual's health. Nutrients, metabolites, and bioactive components can reversibly alter epigenetic marks, causing alterations in the known epigenetic mechanisms: DNA methylation, histone modifications, noncoding RNA regulation, and most likely RNA epigenetics. The resulting epigenetic alterations impact the genome by affecting metabolic gene expression patterns and, accordingly, lead to metabolic diseases such as NAFLD. These events highlight the role of epigenetic alterations as an interface between E (e.g., diet/metabolism) and G (e.g., genetic variations) interactions in metabolic disorders including NAFLD.
Although genetic and environmental factors were long thought to be independently associated with metabolic disorders, substantial evidence confirms the existence of complex interactions between genetic background and environmental influences, particularly diet, that modulate an individual's risk of NAFLD development, as well as its severity and progression [33]. This is not surprising, since nutritional genomic studies revealed that nutrition is most likely the key environmental factor that exerts its impact on health outcomes by directly affecting the expression of key genes involved in major metabolic pathways. Moreover, nutrigenetics provided evidence that genetic variants can be associated with differential responses to nutrients and can affect health outcomes by relating this variation to variable disease states. However, how the bidirectional interaction between nutrition and an individual's genetic makeup impacts health status is not well understood. In this respect, progress in the field may come from the emerging knowledge of nutriepigenomics, referring to the interaction between nutrients and the genome through epigenetic mechanisms.
Epigenetics was originally defined as heritable changes in gene expression without alteration of the primary DNA sequence [34]. While the genome is identical in all cells of an organism, the epigenome contains key information specific to every cell type. Modulation of gene expression can occur through the epigenetic landscape or epigenome, a complex network of modifications including DNA methylation, histone protein posttranslational modifications, chromatin remodeling, and regulation by several classes of noncoding RNAs (ncRNAs) [4,35]. These dynamic processes may be responsible for mediating gene-gene and gene-environment interactions, which consequently induce phenotypic changes. Indeed, multiple studies suggested that epigenetic factors may contribute to the metabolic memory in liver tissue [36]. Thus, attempts were made to identify epigenetic mechanisms underlying metabolic alterations caused by diet-induced NAFLD, as these could be beneficial for disease treatment. Specifically, the effect of HFD on genes involved in hepatic fat accumulation and steatosis was shown to be mediated by epigenetic factors, which play crucial roles in the molecular initiation of liver dysfunction and NAFLD development [37,38]. Despite these encouraging data, we still do not have a firm handle on how and when epigenetic marks that occur in response to an HFD alter gene expression in NAFLD. To fill this knowledge gap, there is clearly a need for streamlined and novel investigation of the epigenetic machinery that interacts with master regulators of lipogenic and glycolytic gene expression programs. Understanding this potential interaction and the resulting pathological signals may lead to the identification of epigenetic marks that predispose an individual to NAFLD, and subsequently could allow early preventive and therapeutic strategies for those at a high risk for the disease.
Epigenetic Mechanisms Underlying the Link between Nutrition and Aberrant Gene Expression in NAFLD
As discussed above, NAFLD susceptibility and progression are likely attributable to dynamic interactions between genetic and environmental factors [18,39]. However, knowledge of the molecular mechanisms by which these factors, particularly diet, alter hepatic gene expression to trigger NAFLD remains limited. A large body of evidence strongly supports that alterations in the epigenetic landscape mediate the gene-diet interaction and play important roles in the onset of NAFLD [18]. The major elements of the human epigenome are covalent chemical changes to DNA and histones that contribute to the fine-tuned regulation of gene expression and changes in chromatin structure [4]. But the question is: how can an HFD connect metabolic information with transcriptional gene control through epigenetic marks to initiate NAFLD? Preliminary observations suggest that biochemical modifications to DNA and certain histones involve several modifying enzymes that play important roles in epigenetic gene regulation. The activity of these enzymes is sensitive to dietary factors and cofactors generated by cellular intermediary metabolism, allowing cells to adapt to changing conditions by switching specific genes on and off, thereby providing a link between diet, metabolism, and gene expression [40]. As an example, metabolites derived from various food sources can serve as substrates or cofactors for transcription factors and histone-modifying enzymes that affect chromatin compaction, leading to transcriptional regulation associated with diseases and ageing [41].
In addition to DNA methylation and histone modifications, epigenetic regulation can also occur in the form of transcriptional machinery interaction with ncRNAs, including microRNAs (miRNAs), long noncoding RNAs (lncRNAs), and circular RNAs (circRNAs). Emerging evidence suggests that there is a relationship between different ncRNAs and their roles in the regulation of gene networks involved in the development of metabolic diseases including obesity and NAFLD [18,42-45]. The best characterized category of ncRNA species is the miRNA class. Mounting evidence revealed that dysregulation of miRNA expression is associated with the molecular processes of various metabolic and pathophysiologic liver diseases, including NAFLD conditions [46,47]. Indeed, several differentially expressed miRNAs were associated with the pathogenesis of NAFLD and its subtype NASH, both in humans and in experimental models [48-50]. Moreover, plasma miRNA expression signatures could serve as a biomarker to differentiate between several types of liver injury such as simple steatosis, NASH, fibrosis and, ultimately, HCC [51,52]. Hence, the emerging field of ncRNA research is expected to significantly increase our understanding of the fundamental epigenetic mechanisms that contribute to NAFLD, with the hope of developing potential biomarkers for diagnosis, prognosis, and treatment of the disease. Next, we focus on discussing the currently available knowledge regarding the best characterized epigenetic changes, DNA methylation and histone modifications, and how these alterations contribute to the development and progression of NAFLD in response to nutritional intake. As for advancement in the understanding of the mechanistic roles of ncRNAs in NAFLD, we refer the readers to recent well-detailed reviews [18,53-55].
DNA Methylation and NAFLD
DNA methylation is one of the best characterized biological processes of the epigenome. This mechanism typically refers to the addition of a methyl group to a cytosine (C) with guanine (G) as the next nucleotide on DNA, known as a CpG site. Clusters of CpG sites, usually referred to as CpG islands, are commonly present at higher frequency in the promoter regions of genes than at other sites [34]. In the human genome, 70-80% of the 28 million CpG dinucleotides are methylated [56]. Interestingly, these dynamic CpGs cohabit with gene regulatory elements, particularly enhancers and transcription-factor-binding sites. Methylation is carried out by a family of enzymes, the DNA methyltransferases (DNMTs) [57], which use S-adenosylmethionine (SAM) generated by one-carbon metabolism. Hypermethylation of CpG islands usually results in transcriptional silencing [58], while hypomethylation of promoters may activate gene transcription. Increasing numbers of studies indicate that DNA methylation patterns are susceptible to specific changes in response to cellular and tissue microenvironments [59] and contribute to the epigenetic networks that operate to turn genes on and off in response to various signals [60]. More importantly, alterations in DNA methylation patterns can take place during aging and in pathologic states, such as metabolic diseases [61].
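To make the CpG terminology concrete, the following is a minimal illustrative sketch, assuming the commonly cited Gardiner-Garden criteria for CpG islands (length of at least 200 bp, GC content above 50%, observed/expected CpG ratio above 0.6); these criteria and the toy sequence are taken from the general literature and invented for illustration, not from this review.

# Minimal sketch: flag a DNA window as a candidate CpG island using the
# commonly cited Gardiner-Garden criteria. Illustrative only.
def is_cpg_island(seq: str) -> bool:
    seq = seq.upper()
    n = len(seq)
    if n < 200:
        return False
    g, c = seq.count("G"), seq.count("C")
    cpg = seq.count("CG")                      # observed CpG dinucleotides
    gc_content = (g + c) / n
    # observed/expected ratio: (N_CpG * L) / (N_C * N_G)
    obs_exp = (cpg * n) / (c * g) if c and g else 0.0
    return gc_content > 0.5 and obs_exp > 0.6

window = "CG" * 120                            # toy promoter-like sequence, 240 bp
print(is_cpg_island(window))                   # True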
DNA methylation is one of the keys to how environmental conditions, particularly diet and nutritional status, modulate gene expression at the transcriptional level. Because DNA methylation relies on the availability of S-adenosylmethionine, which is synthetized from several nutrients, diet is one of the strongest factors affecting DNA methylation pathways [62]. For example, a western-type diet, which is known to promote obesity, also alters DNA methylation [63], thereby changing the expression of several genes involved in lipid metabolism [64]. DNA methylation patterns induced by dietary fatty acids are specifically linked with dysfunctions in cellular lipid metabolism and fatty acid oxidation [62,65]. However, much less is known about DNA methylation in NAFLD, and much more needs to be done. With recent progress in epigenetic tools such as high-throughput sequencing and methylation arrays, attempts were made to detect methylation signals and to uncover the regulation of DNA methylation by HFD and its role in tissue-specific transcriptional control in NAFLD. In this context, relevant animal and human studies related to these aspects will be discussed next.
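As a concrete illustration of the array-based quantification mentioned above, the following is a minimal sketch assuming the standard beta-value and M-value definitions from the methylation-array literature (beta = methylated / (methylated + unmethylated + offset); M = log2(beta / (1 - beta))); the intensity values and the offset are invented for illustration.

# Minimal sketch of methylation-array quantities: the beta-value is the
# fraction of methylated signal, and the M-value is commonly preferred
# for differential-methylation testing. Numbers are invented.
import numpy as np

meth = np.array([900.0, 120.0, 700.0])      # invented methylated intensities
unmeth = np.array([100.0, 880.0, 300.0])    # invented unmethylated intensities
offset = 100.0                              # stabilizing offset, as in common pipelines

beta = meth / (meth + unmeth + offset)
m_value = np.log2(beta / (1.0 - beta))
print(beta.round(2), m_value.round(2))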
Data from animal studies: evidence of altered hepatic genomic DNA methylation in NAFLD comes from several animal studies, in particular in rodents. Maternal HFD was shown to alter littermates' DNA methylation, as well as to favor the development of NASH and hepatic fibrosis [66]. Rat offspring born to mothers fed an HFD during the pregnancy and lactation periods develop the NAFLD phenotype, as well as changes in cyclin-dependent kinase inhibitor 1A (Cdkn1a) gene expression and corresponding DNA methylation levels [67]. Methionine is an essential amino acid that plays major roles through its metabolites, which regulate a number of cellular functions. Mounting evidence from animal studies suggests that methyl-group donors including folate, betaine, and choline can alter DNA methylation patterns [68,69]. Further evidence in support of the importance of DNA methylation in NAFLD comes from studies in mice showing that dietary restriction of methyl donors or impairment of methyl-donor metabolism alters DNA methylation and promotes NAFLD and liver injury [70]. By contrast, dietary methyl-donor supplementation appears to protect rodents from high-fat/sucrose diet-induced hepatic steatosis [71]. Betaine was also found to relieve HFD-induced fatty liver in association with modification of DNA methylation [72].
In a mouse model of high-fat/sucrose-induced hepatic steatosis, supplementation with methyl donors containing folic acid, choline, betaine, and vitamin B12 improved liver steatosis by reversing the methylation status of several genes, including sterol regulatory element binding transcription factor 2 (Srebf2) [68,73]. In another animal study, folate was reported to affect the expression of genes regulating fatty acid synthesis, and folate deficiency induced TG accumulation in the liver [74]. Wang et al. [75] showed that betaine supplementation decreased DNA methylation of the microsomal triglyceride transfer protein (Mttp) gene promoter in mice and induced global methylation over the genome compared to HFD. These changes in DNA methylation induced by betaine supplementation promoted hepatic TG export and attenuated liver steatosis in mice fed an HFD. Several studies also showed that epigenetic changes can regulate the transcription factor peroxisome proliferator-activated receptor γ (PPARγ), which is known as a master regulator of lipogenic genes involved in fatty liver diseases. PPARγ overexpression in the liver induced by HFD feeding or pathophysiological stresses leads to lipid accumulation and, consequently, the development of NAFLD. Blocking Pparγ gene expression in the liver of HFD-fed mice reduced not only lipid accumulation, but also the expression of inflammatory genes, which is an indication of NASH progression [76]. In agreement with these studies, we recently reported that both HFD and palmitic acid alter global and Pparγ promoter DNA methylation, leading to a significant induction of PPARγ expression and enhanced lipid retention in the liver, which leads to NAFLD development [38].
The nuclear factor-erythroid 2-related factor-2 (NRF2) is another transcription factor known to play a pivotal role in liver diseases [77]. Resveratrol attenuated hepatic DNA methylation at the Nrf2 promoter region in mice fed an HFD, and this effect was correlated with a reduction in TG levels and in the expression of lipogenesis-related genes [78]. Thus, NRF2 signaling pathways could be a potential target for developing preventive and therapeutic strategies to reduce NAFLD. Furthermore, several potential genes coding for enzymes involved in NAFLD were also reported to be susceptible to methylation and to contribute to altered hepatic metabolism and cellular transformation. Glycine N-methyltransferase (GNMT) is the most important enzyme regulating S-adenosyl-L-methionine metabolism and is frequently decreased in liver disease, including NAFLD, cirrhosis, and HCC [79]. Likewise, Borowa-Mazgaj et al. reported that the development of NAFLD and NAFLD-derived HCC was characterized by decreased Gnmt gene expression, and that this was mediated by gradual DNA methylation of the Gnmt promoter region [80].
Based on this knowledge, animal studies are beginning to examine the therapeutic value of targeting certain pathways for the treatment of NAFLD. For example, a recent study showed that therapeutic targeting of hepatic methylation-controlled J protein (MCJ) with nanoparticle- and GalNAc-formulated siRNA efficiently prevented liver lipid accumulation and fibrosis in multiple NASH mouse models [81]. However, although studies in rodents provided crucial insights into NAFLD onset and progression, the translatability between animals and humans should be carefully considered.
Data from human studies: differential DNA methylation was also associated with the pathogenesis of NAFLD in humans. DNA methylation signatures of liver biopsies collected from patients with NAFLD revealed broad changes in the methylation profile compared to those of healthy individuals [82-87]. Growing evidence indicates that hepatic DNA methylation and insulin resistance in NAFLD patients are critical factors for the progression of the disease from simple steatosis to severe fibrotic NASH [88]. Indeed, hepatic DNMT levels were found to be increased in patients with steatohepatitis versus those with simple steatosis [86]. Hardy et al. found that plasma methylation of PPARγ correlates positively with the severity of NAFLD [89]. In a case-control study of NAFLD patients, hepatic promoter methylation of PPARγ coactivator 1-α (PGC1α) was significantly associated with NAFLD and with peripheral insulin resistance [85]. Moreover, the methylation level of Pparγ was found to be positively correlated with liver fibrosis levels in rat models as well as in NAFLD patients [90]. Thus, circulating PPARγ DNA could be used as a potential biomarker for the stratification of liver fibrosis in nonalcoholic fatty liver disease [89]. In agreement with this suggestion, another study carried out on individuals diagnosed with NAFLD indicated that DNA methylation at specific CpGs within the Pparα, Pparγ, TGFβ1, Collagen 1A1, and PDGFα genes can distinguish mild from severe NAFLD-associated fibrosis [83].
Furthermore, Ahrens and colleagues identified an association between increased methylation at a CpG site (cg11669516) in the first intron and reduced expression of the insulin-like growth factor binding protein 2 (IGFBP2) gene in NAFLD and NASH patients [82].
These results are also supported by a recent cohort study indicating that IGFBP2 levels are inversely associated with the risk of NAFLD [91]. Similarly, Fahlbusch et al. demonstrated that circulating levels of IGFBP2 are lower in patients with NAFLD and NASH, and are restored after weight loss following bariatric surgery, along with reductions in hepatic fat content [92]. A recent epigenomic study suggested that differentially methylated genes might distinguish patients with advanced NASH from those with simple steatosis [93]. There is also evidence that the mitochondrially encoded NADH dehydrogenase 6 gene (MT-ND6), which was transcriptionally silenced by promoter hypermethylation, was significantly associated with the histological severity of NAFLD [86].
Based on all these data and those summarized in Table 1, methylation status might be used as a parameter to improve the diagnosis of NAFLD and to differentiate between disease subtypes. However, the mechanisms by which HFD exerts its specific effects on epigenetic landmarks and DNA methylation, which could enhance lipid accumulation in hepatocytes and promote NAFLD, are only beginning to surface. Therefore, more systematic studies are merited to provide unequivocal findings, together with research in the field of cell-free DNA that reflects gene methylation status in the liver; this would be a potential noninvasive biomarker of liver damage, as suggested by Hardy et al. [89].
Table 1. Relevant DNA methylation alterations associated with gene expression in nonalcoholic fatty liver disease.
Gene | Phenotype | DNA methylation alteration and outcome | Refs
Srebf2 | Hepatic steatosis | Supplementation with methyl donors containing folic acid, choline, betaine, and vitamin B12 improved liver steatosis by reversing the methylation status in the promoter region of sterol regulatory element binding transcription factor 2 (Srebf2) | [68,73]
Mttp | Hepatic steatosis | Betaine supplementation decreased DNA methylation of the microsomal triglyceride transfer protein (Mttp) promoter, promoting hepatic TG export and attenuating liver steatosis in HFD-fed mice | [75]
Histone Post-Translational Modifications in NAFLD
Histone modifications were identified as another epigenetic determinant of chromatin structure and gene expression. These changes mainly include histone acetylation, methylation, phosphorylation, ribosylation, ubiquitination, and sumoylation. Among these, the acetylation/deacetylation and methylation/demethylation mechanisms were the most studied modifications over the past decade. These epigenetic processes, which occur in response to various conditions including diets, are characterized by dynamic changes at amino acid residues in the histone tails [95]. A series of enzymes including histone acetyltransferases (HATs), histone deacetylases (HDACs) [96,97], and methyltransferases are accountable for 'writing' or 'erasing' these epigenetic modifications. Alterations in the activity and/or levels of any of these enzymes may impact chromatin structure and subsequent gene expression. Moreover, abnormal histone modifications contribute to metabolic disorders and, consequently, to fatty liver disease [98]. Hence, a precise understanding of this epigenetic process may provide new perspectives for the discovery of novel epigenetic targets, which may provide important leads for the design of future functional studies and potential epigenetic-targeting drugs for NAFLD.
Histone acetylation: altered expression and activity of HAT-modifying enzymes were reported to influence gene expression profiles in NAFLD, leading to aberrant hepatic metabolism and cellular transformation [88]. More recent research revealed that dysfunction of lysine acetylation is involved in NAFLD and other metabolic diseases, including obesity, cardiovascular disease, hypertension, and T2DM [99]. For example, a study indicated that blocking the hyperacetylation of lysines 9 and 36 at histone 3 (H3K9 and H3K36) in the promoters of the SREBP1c, FASN, and ATP citrate lyase (ACLYS) genes prevented the development of NAFLD (Table 2) [100]. In addition, a genome-wide analysis of histone 3 lysine 9 acetylation (H3K9ac) in the liver of mice fed a control diet or an HFD demonstrated that approximately 17% of the differentially expressed genes were associated with changes in H3K9ac in their promoters [101]. In agreement with this, another study used HFD-fed mice to illustrate that hepatic lipid accumulation caused aberrant histone H3K4 and H3K9 trimethylation at Pparα and other genes involved in lipid metabolism, which may contribute to the pathogenesis of NAFLD [102]. Table 2. Examples of histone modifications and their association with aberrant gene expression in nonalcoholic fatty liver disease.
Little is known regarding the role of HATs in the development of NAFLD. The p300 protein, a member of the histone acetyltransferase (HAT) family, is an important element of the transcriptional machinery that contributes to the regulation of chromatin structure and transcription initiation. A previous study indicated that p300 upregulation results in NAFLD, insulin resistance, and inflammation [103]. Glucose-activated p300 acetylated Lys-672 of the carbohydrate-responsive element-binding protein (ChREBP) and increased its transcriptional activity, leading to increased hepatic lipogenesis and the development of NAFLD [88,103]. In a recent study, Chung et al. identified tannic acid as a novel histone acetyltransferase inhibitor that prevents NAFLD [100]. Therefore, suppression of hepatic p300 activity may be a useful target for the treatment of hepatic steatosis, and pharmacological p300 blockers may represent a potential option for NAFLD treatment.
Histone deacetylation: several HDACs were reported to play a role in the pathogenesis of NAFLD. Sirtuin type 1 (SIRT1), which belongs to the silent information regulator-2 family, is the most studied member of the class III histone deacetylases [106]. SIRT1 is an important regulator of lipid and carbohydrate metabolism. Through its deacetylation capacity, SIRT1 was also shown to play a role in the pathophysiology of NAFLD and metabolic diseases. In this respect, Colak et al. reported that the deacetylation activity of SIRT1 is responsible for the regulation of several proteins involved in the pathogenesis of NAFLD [107]. For instance, SIRT1 was shown to potentiate fatty acid oxidation and mitochondrial biogenesis and turnover through deacetylation of its targets such as PGC-1α [108]. In response to caloric restriction, SIRT1 activates PGC-1α by deacetylation of lysine residues, thereby enhancing mitochondrial function [109]. The deacetylation effect of SIRT1 on histones was reported to improve hepatic steatosis [110]. In support of these findings, hepatocyte-specific deletion of Sirt1 resulted in hepatic steatosis and inflammation [111], whereas both transgenic SIRT1 mice and mice overexpressing SIRT1 specifically in the liver showed lower hepatic steatosis along with better glucose tolerance [112]. Another study revealed that SIRT1 levels were significantly reduced in a rodent model of HFD-induced NAFLD [113] as well as in obese patients with severe steatosis [114]. SIRT1 transgenic mice exposed to an HFD showed a dramatic resistance to the development of HCC and to hepatocyte damage triggered by a chemical carcinogen [115]. Furthermore, Luo et al. reported that docosahexaenoic acid (DHA; C22:6 n-3) improved NAFLD by activating Sirt1 in a high-fat diet-induced NAFLD mouse model and prevented the accumulation of palmitic acid-induced lipid droplets, the decrease of fatty acid oxidation, and the reduction of SIRT1 levels in HepG2 cells [116]. Collectively, these data indicate that SIRT1 plays an important role in the epigenome and metabolome in association with NAFLD development.
Several other HDACs are known to play a crucial role in NAFLD. Histone deacetylase 3 (HDAC3) was shown to be essential for the maintenance of chromatin structure, and its liver-specific deletion caused both advanced fibrotic NAFLD and HCC [117]. A further study also demonstrated HDAC3 to be a key epigenomic coregulator in the liver, whose hepatic suppression results in remarkable steatosis [118]. Defects in the regulation of circadian clock genes by HDAC3 may lead to abnormal lipid metabolism in the liver, which may increase the risk of NAFLD [119]. HDAC8 is another histone deacetylase, commonly upregulated in dietary and genetic obesity-promoted HCC mouse models as well as in human HCC cells and tissues [120]. HDAC8 promoted insulin resistance as well as cell proliferation, while its suppression induced insulin sensitivity and inhibited tumorigenesis in HCC [120].
Histone methylation: several studies reported that NAFLD development and progression are associated with alterations in the pattern of histone methylation profiles. Histone methylation marks are responsible for the epigenetic regulation of chromatin structure through the addition or removal of methyl groups from lysine residues of histone tails [121]. Histone methylation is mediated by histone methyltransferases (HMTs) [122]. Methylation marks and the respective demethylation of lysine residues within histones H3 and H4 act as epigenetic switches that can either activate or repress gene expression [121]. Kim et al. reported that the histone H3 lysine 4 (H3K4) methyltransferase MLL4/KMT2D directs overnutrition-induced steatosis via its function as a coactivator of PPARγ2 (Table 2) [104]. Additional investigations suggested that H3K4 and H3K9 trimethylation may contribute to hepatic steatosis and disease progression [102]. Indeed, this group of researchers showed that hepatic lipid accumulation is linked with aberrant histone H3K4 and H3K9 trimethylation at PPARα and increased expression of genes involved in lipid metabolism in HFD-fed mice [114]. The methyltransferase suppressor of variegation 3-9 homologue 2 (Suv39h2) is significantly elevated in diet-induced obese mice and in NAFLD patients and represses both Sirt1 and Pparγ gene expression [94]. Another histone methyltransferase, Enhancer of Zeste Homolog 2 (EZH2), which catalyzes trimethylation of H3K27 (H3K27me3) for transcriptional repression, was shown to play a key role in liver diseases. The reduction of EZH2 expression in the liver of NAFLD rats and in fatty acid-treated hepatocytes is inversely correlated with lipid accumulation and inflammatory marker expression [123]. In agreement with this, inhibition of EZH2 recapitulated the steatosis-related phenotypes.
Histone demethylation: histone demethylation is carried out by histone demethylases (HDMs), which remove methyl groups from modified histones, thereby activating or repressing gene transcription. Several histone demethylases were identified and classified into two classes: FAD-dependent amine oxidases (LSD demethylases) and Fe(II)- and α-ketoglutarate-dependent Jumonji C (JmjC) domain-containing demethylases (JMJD demethylases) [124]. JMJD1C of the Jumonji family was identified as a critical epigenetic factor for lipogenesis. Suppression of JMJD1C in animal models can protect from dietary-induced NAFLD, while its overexpression promotes lipogenesis to increase hepatic and plasma triglyceride levels [125]. The histone H3K9 demethylase JMJD2B is a member of the JMJD2 family. JMJD2B specifically catalyzes the removal of di- and trimethylated H3K9 (H3K9me2/me3), converting both histone marks to the monomethylated state. JMJD2B was shown to play a role in the development of hepatic steatosis through upregulation of PPARγ2 and steatosis target genes including CD36 and fatty acid-binding protein [105]. Not long ago, Kim et al. provided evidence that JMJD2B induces LXRα-dependent lipogenesis by removing the repressive histone marks H3K9me2 and H3K9me3 near the LXREs of lipogenic gene promoters, leading to the development of NAFLD [126]. Moreover, an additional H3K9 HDM, Plant homeodomain finger protein 2 (Phf2), was shown to erase H3K9me2 methyl marks on the promoter of the carbohydrate-responsive element-binding protein (ChREBP), thereby protecting the liver from the pathogenesis and progression of NAFLD [127]. Together, these findings suggest that understanding the dynamic histone epigenetic changes underlying the development of NAFLD may deliver new insights into the physiopathology of the disease, enabling the development of novel therapeutic and prevention modalities.
Epigenetic Studies' Limitations
Existing preclinical and clinical studies are providing evidence that epigenetic mechanisms such as DNA methylation and histone modifications play crucial roles in several metabolic diseases, including NAFLD. In fact, epigenetic processes bridge genetic and environmental factors such as diet, which contribute to the transcriptional and posttranslational control of gene expression and consequently influence NAFLD and its more advanced clinical phenotypes. In this respect, epigenetic modifications could have future application as effective diagnostic and therapeutic tools for NAFLD. However, many aspects of NAFLD biology remain enigmatic, and the research area still has important limitations:
(i) Since epigenetics applied to metabolic diseases is a relatively new field of investigation, a significant limitation lies in the current preclinical models of NAFLD, which make it difficult to study the interplay between diet and epigenetic changes. Although current animal models are necessary systems for biologically characterizing the disease, there is no consensus on a suitable animal model that could represent both the pathophysiology and the histopathology of human NAFLD. In fact, most existing animal systems represent only a specific aspect of the disease rather than the whole spectrum. Moreover, the diets used in animal studies to induce the NAFLD phenotype do not reflect the depth of dietary variation in humans, which clearly highlights the need to move to clinical studies. There are also limitations with respect to human studies. These include difficulties in capturing with accuracy an individual's response to complex environments. For example, the response that we can measure in one set of conditions may not be apparent in another set of circumstances. In addition, epigenetic processes are complex and depend on a variety of parameters including sex, stress, genetic variants, and tissue specificity. Besides, human studies are technically difficult due to the invasive procedure required to obtain liver biopsies. Alternatives such as the development of new methods for the quantification of DNA methylation from circulating cell-free DNA isolated from patient plasma have the potential to overcome this limitation [128]. With such an approach, epigenetic biomarkers are coming close to clinical reality, as demonstrated by the example of the circulating, cell-free, DNA-based epigenetic biomarker methylated Septin9, which shows great promise as a tool to diagnose HCC in patients with cirrhosis [129].
(ii) A further difficulty is that certain dietary patterns are known to cause metabolic disorders mediated by epigenetic alterations, but in which metabolic tissue, by which mechanism, and under which physiological and pathological conditions all remain to be determined. Understanding the dynamic relationship between food consumption, epigenetic changes, and the genome may provide insight into how to target molecules involved in NAFLD, either from a nutritional perspective or from an epigenetic standpoint.
(iii) Studies on DNA methylation patterns associated with diet quality in larger, racially diverse research cohorts are lacking. In this respect, preliminary data provided key evidence that higher diet quality has a beneficial effect on lifespan, and that adopting a healthy diet is crucial for maintaining healthy aging [130]. Moreover, Ma et al. reported that whole-blood DNA methylation signatures of diet were associated with cardiovascular disease risk factors and all-cause mortality [131]. Therefore, further studies are required to measure DNA methylation accurately after exposure to various diets, as this may help in understanding individual differences in responses to diet and diet-related chronic disease.
(iv) While most of the studies discussed above proved that epigenetic alterations can influence gene expression in NAFLD when each epigenetic mechanism is considered separately, research that combines all epigenetic layers in a single studied model is missing. The identified and yet-to-be-identified epigenetic mechanisms may interact and overlap with one another and with several cellular factors, such as those involved in the transcriptional machinery, to modulate target gene expression. Other issues to take into consideration are unknown molecular pathways from other primary metabolic tissues, along with secreted molecules involved in NAFLD that are also subject to the same epigenetic changes and may affect disease outcome. All of these complicate attempts to identify the primary epigenetic trigger of aberrant gene expression and the pathological role of an epigenetic event in a single organ. Hence, future studies need to exploit the whole epigenome in greater depth and breadth than previously possible. Such an effort may permit the screening and analysis of DNA methylation, modified histone landscapes, and ncRNA regulation all in a single sample, as well as in different tissues of an organism. This approach would possibly help determine and understand interdependencies in the epigenetic landscape and its link to genome function under various inputs.
(v) Due to advances in chemogenetic RNA labeling and next-generation sequencing, several cellular RNAs were also found to dynamically and reversibly undergo different chemical modifications post-transcriptionally, a layer termed the epitranscriptome. These dynamic and reversible RNA modifications were found in multiple classes, such as mRNA, rRNA, tRNA, and noncoding RNA, with increasing evidence suggesting that they play important roles in post-transcriptional gene regulation. Indeed, N6-methyladenosine (m6A) is the most abundant internal mRNA modification in eukaryotic cells; it provides a new perspective on the regulation of gene expression and exhibits essential roles in physiological processes, including hepatic functions and various liver diseases [132]. In fact, recent studies investigated the role of m6A RNA methylation in disorders of hepatic lipid metabolism, showing that hypermethylated m6A sites in HFD-induced fatty livers are enriched for lipid-associated pathway processes, while hypomethylated m6A sites are associated with translation-associated processes [133]. Nevertheless, important m6A mechanisms remain unexplored in the context of NAFLD.
Thus, comprehensive studies demonstrating potential linkages between diet, genetics, epigenetics, and epitranscriptomic regulation of gene expression would offer new insights for understanding the different stages of NAFLD.
Conclusions
Although associations between epigenetic modifications and NAFLD have been demonstrated, it is still not clear whether epigenetic alterations lead to NAFLD or whether, rather, the onset of NAFLD triggers the different alterations of the epigenetic landscape. In this regard, further mechanistic studies are becoming a real necessity to better dissect causal from correlative relationships in the field. Additionally, more fine-tuned research needs to be carried out towards understanding how, in an individual, diet, epigenetic layers, and genetic make-up crosstalk to alter hepatic gene expression, leading to the pathogenesis of NAFLD. Finally, combining such research with the implementation of dietary interventions such as caloric restriction, the Mediterranean diet, and intermittent fasting, and with their ability to reverse the disease state, epigenetics would allow the design of a modular 'on/off' switch that controls gene expression in response to a specific diet to reverse metabolic diseases such as obesity, metabolic syndrome, and NAFLD.
Conflicts of Interest:
The authors declare no conflict of interest.
"Biology",
"Medicine"
] |
Mass difference for charged quarks from asymptotically safe quantum gravity
We propose a scenario to retrodict the top and bottom mass and the Abelian gauge coupling from first principles in a microscopic model including quantum gravity. In our approximation, antiscreening quantum-gravity fluctuations induce an asymptotically safe fixed point for the Abelian hypercharge, leading to a uniquely fixed infrared value that is observationally viable for a particular choice of microscopic gravitational parameters. The unequal quantum numbers of the top and bottom quark lead to different fixed-point values for the top and bottom Yukawa under the impact of gauge and gravity fluctuations. This results in a dynamically generated mass difference between the two quarks. For the mechanism to work quantitatively, the ratio of electric charges of bottom and top preferred by our approximation lies in close vicinity to the Standard-Model value of $Q_b/Q_t = -1/2$.
The top quark is substantially heavier than all the other quarks, with a pole mass of $M_t \approx 173$ GeV [1], significantly larger than the pole mass of the second-heaviest quark, the bottom, at $M_b \approx 4.9$ GeV [2]. In the Standard Model, neither the two values nor their difference can be derived. The masses are determined by the Yukawa couplings $y_t$, $y_b$ to the Higgs, once it acquires a vacuum expectation value. The low-energy values of $y_t$, $y_b$ are free parameters in the Standard Model, fixed by comparing to experiment. We propose a mechanism that could generate the mass difference dynamically and uniquely determine the values of both masses from first principles. The mechanism follows from microscopic physics in the ultraviolet (UV), where an interplay of quantum gravity and gauge boson dynamics generates asymptotic safety [3,4], i.e., an interacting Renormalization Group (RG) fixed point at transplanckian scales. This fixed point prevents Landau-pole-type behavior in the running couplings, rendering the Standard Model UV-complete. The fixed point determines the values of $y_t$ and $y_b$ in the UV. This mechanism combines the fixed-point scenarios explored in [5,6], see also [7], where the top pole mass and the Abelian gauge coupling are retrodicted separately. Due to the two quarks' unequal electric charges, $y_t$ and $y_b$ assume uniquely determined, different values at $M_{\rm Planck} = 10^{19}$ GeV, cf. Fig. 1. This results in a retrodiction of unequal top and bottom masses at the electroweak scale. The viability of this mechanism hinges on the quantum numbers of the top and bottom quark: in our approximation, significant deviations from the measured charge ratio are incompatible with the observed masses. We now explain the mechanism by following the RG flow from the UV fixed point through the transplanckian regime down to the electroweak scale.
For the gauge couplings of the Standard Model, $g_3$ for SU(3), $g_2$ for SU(2) and $g_Y$ for the Abelian hypercharge, the one-loop beta functions and coefficients take the form sketched below [38-40]; $Y_{t,b,Q}$ are the hypercharges of the right-handed top and bottom quark and the left-handed SU(2) quark doublet, respectively. $f_g$ encodes the quantum-gravity contribution that acts like an anomalous dimension for the gauge couplings, and we assume that additional terms are subleading. These additional contributions are proportional to the product of $g_i$ and quantum-gravity-induced higher-order couplings. The fixed-point values of the latter are of the same order as $f_g$, see the discussion in [30,35,36]. They enter $\beta_{g_i}$ through a loop diagram, leading to a suppression by $1/(16\pi^2)$ in comparison to the direct contribution in Eq. (1), see [36]. We work with the one-loop beta functions to explain the mechanism, explicitly checking that two-loop effects only lead to quantitative changes.
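The displayed equations are not reproduced in this copy. A hedged reconstruction of their structure, a one-loop term plus the gravity term acting like an anomalous dimension, assuming the textbook one-loop Standard-Model coefficients (the specific coefficient values below are assumptions from the general literature, not taken from this text), reads
\[
\beta_{g_Y} = \frac{g_Y^3}{16\pi^2}\, b_{0,Y} - f_g\, g_Y \,, \qquad
\beta_{g_2} = \frac{g_2^3}{16\pi^2}\, b_{0,2} - f_g\, g_2 \,, \qquad
\beta_{g_3} = \frac{g_3^3}{16\pi^2}\, b_{0,3} - f_g\, g_3 \,,
\]
with $b_{0,Y} = 41/6 > 0$ (to which the hypercharges $Y_{t,b,Q}$ contribute quadratically), $b_{0,2} = -19/6$ and $b_{0,3} = -7$.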
We focus on $f_g \geq 0$, as found in truncations of the functional RG flow [41,42] under the impact of asymptotically safe quantum gravity [6,7,28-31]; see [43-46] for reviews. In the asymptotically safe regime beyond the Planck scale, $f_g = \mathrm{const.}$ holds as a consequence of gravitational fixed-point scaling. For the non-Abelian gauge couplings, this reinforces the asymptotically free fixed point at $g_{3*} = 0 = g_{2*}$. For the Abelian gauge coupling, the positive one-loop coefficient, generated by screening quantum fluctuations of charged matter, and the antiscreening gravity contribution cancel at an interacting fixed point, Eq. (2) [6,7,47]. Quantum-gravity contributions to the running of the Yukawas supplement the one-loop beta functions [48]. For the quantum-gravity contribution, $f_y = \mathrm{const.}$ holds in the asymptotically safe transplanckian regime [5,33-37], generating an interacting fixed point at $y_{t,b,*} \neq 0$ through the interplay with Abelian fluctuations: at the fixed point at $g_{2*} = 0 = g_{3*}$ and $g_{Y*}$ in Eq. (2), we obtain Eq. (4). Specifying to the Standard-Model charges $Y_t = 2/3$, $Y_b = -1/3$, and $Y_Q = 1/6$ yields a fixed-point equation, Eq. (5), that is the key relation of our scenario; a reconstruction of Eqs. (2), (4) and (5) is sketched below. This relation enforces $y_{t*} \neq y_{b*}$ in the far UV because $g_{Y*} \neq 0$. The difference in fixed-point values $y_{t(b)*}$ has an intuitive physical interpretation: the interacting fixed point for the Yukawas is generated through a balance of quantum fluctuations of matter with gauge and gravity fluctuations. The two fixed-point values $y_{t(b)*}$ must be unequal, since Abelian gauge boson fluctuations couple more strongly to the top than to the bottom quark, as the top has the larger hypercharge, i.e., $|Y_t| > |Y_b|$. To compensate the combined impact of gravity and gauge boson fluctuations and generate a fixed point, the top Yukawa coupling must be larger, $y_{t*} > y_{b*}$. The beta functions in Eqs. (1) and (3) admit further fixed-point solutions, e.g., $g_{Y*} = 0$, $y_{b*} = 0$, $y_{t*} > 0$, explored in [5], cf. the light-green shaded region in Fig. 2.
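The fixed-point relations referred to as Eqs. (2), (4) and (5) are likewise missing from this copy. A hedged reconstruction, assuming the standard one-loop Yukawa beta functions already specified to the quoted Standard-Model charges (the coefficients are assumptions from the general literature), reads
\[
g_{Y*}^2 = \frac{16\pi^2 f_g}{b_{0,Y}} \,,
\]
and, at $g_{2*} = 0 = g_{3*}$, vanishing of the two Yukawa beta functions requires
\[
\frac{9}{2}\, y_{t*}^2 + \frac{3}{2}\, y_{b*}^2 - \frac{17}{12}\, g_{Y*}^2 = 16\pi^2 f_y \,, \qquad
\frac{9}{2}\, y_{b*}^2 + \frac{3}{2}\, y_{t*}^2 - \frac{5}{12}\, g_{Y*}^2 = 16\pi^2 f_y \,.
\]
Subtracting the two conditions gives the key relation
\[
y_{t*}^2 - y_{b*}^2 = \frac{g_{Y*}^2}{3} \,,
\]
consistent with the statement later in the text that $y_{t*}^2 = g_{Y*}^2/3$ on the phase-transition line where $y_{b*} = 0$.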
Here, we focus on the most predictive fixed-point solution, cf. Eqs. (2) and (4), leading to retrodictions of the top mass $M_t$, the bottom mass $M_b$ and the Abelian hypercharge coupling $g_Y$ at the electroweak scale.
RG flow at transplanckian scales: Starting from Eq. (5), the couplings deviate from their fixed-point values during the RG flow towards the infrared (IR). For real fixed-point values, Eq. (5) implies $y_{t*} > y_{b*}$, and the RG flow conserves this inequality: the ratio $y_t(k)/y_b(k)$ cannot become smaller than 1 if $y_{t*}/y_{b*} > 1$ in the UV. The flow of the ratio is governed by the difference of the two Yukawa beta functions, Eq. (6), reconstructed after the next paragraph. For $y_t(k)/y_b(k) \to 1$ from above, the beta function becomes negative due to the contribution of the Abelian gauge coupling. Hence, towards the IR, the ratio $y_t(k)/y_b(k)$ is driven away from 1 towards larger values. Once created by the fixed-point structure, the relation $y_t(k) - y_b(k) > 0$ is thus preserved down to the IR, cf. Fig. 1. Specifically, the trajectories in Fig. 1 arise as follows.
This results from the competition of the two distinct contributions in Eq. (1): the screening matter contribution, encoded in $b_{0,Y}\, g_Y^3 > 0$, drives any small deviation $g_Y(k) = g_{Y*} + \delta$ with $\delta > 0$ back to $\delta = 0$ under the RG flow to the IR. Conversely, the antiscreening gravity contribution, encoded in $-f_g\, g_Y < 0$, drives any small deviation $g_Y(k) = g_{Y*} - \delta$ with $\delta > 0$ back to $\delta = 0$. In other words, the fixed point is IR attractive, cf. the thick dashed green line in Fig. 1. This is in contrast to the behavior of the non-Abelian gauge couplings, where the gravity contribution triggers a power-law running in the transplanckian regime. Since both the gravity contribution and the matter contribution to the beta functions $\beta_{g_{2,3}}$ are antiscreening, the free fixed point is IR repulsive. Hence, deviations from it are allowed in the transplanckian regime, and $g_{2,3}$ grow under the RG flow to the IR, until they reach the experimentally determined values at IR scales. This dynamics of the gauge couplings leads to a more intricate behavior of the Yukawas: although the fixed point in Eq. (4) is IR attractive, the Yukawas run as soon as $g_{2,3}$ deviate from zero significantly, cf. Fig. 1. Their running is determined by a critical trajectory $y_{t(b)}(k) = y_{t(b)}(g_2(k), g_3(k))$ on which they exhibit a slight growth towards the IR. The non-Abelian gauge contribution to the flow of the Yukawas is negative. This counteracts the screening effect of matter fluctuations. Thus, tiny deviations $y_t(k) = y_{t*} + \delta$ with $\delta > 0$ are no longer driven back exactly to $y_{t*}$ for $g_{2,3}(k) > 0$. The critical trajectory is IR attractive, i.e., starting from their fixed-point values, the Yukawa couplings are fixed uniquely at $M_{\rm Planck}$.
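The flow of the Yukawa ratio, Eq. (6), is also not reproduced here. Under the same one-loop assumptions as above (note that the gravity terms $f_y$ cancel in the ratio), it presumably takes the form
\[
\beta_{y_t/y_b} = \frac{y_t}{y_b}\,\frac{1}{16\pi^2}\left[\,3\left(y_t^2 - y_b^2\right) - g_Y^2\,\right] ,
\]
so that for $y_t/y_b \to 1$ from above the bracket reduces to $-g_Y^2 < 0$, and the ratio grows towards the IR, as described above.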
RG flow between the Planck and the electroweak scale: At the Planck scale, quantum-gravity effects switch off dynamically, as $f_g$, $f_y$ are proportional to the Newton coupling measured in units of $k$. In asymptotic safety, it is constant at transplanckian scales, but falls off as $k^{-2}$ below $M_{\rm Planck}$, making quantum-gravity effects negligible there, cf. [8,10].
[Figure caption fragment: $y_b(k_{IR})$ at $k_{IR} = 173$ GeV as a function of the two independent quantum-gravity contributions $f_g$ and $f_y$.]
To model this behavior, we implement a sharp transition to $f_g = 0 = f_y$ for $k \leq M_{\rm Planck}$. Below $M_{\rm Planck}$, we follow the one-loop running in the Standard Model, attracted by a partial IR fixed point [57-59]. At the electroweak scale, where the Higgs acquires a vacuum expectation value, the two Yukawas determine the top and bottom mass. The inequality $y_t(k) > y_b(k)$, generated by the properties of the transplanckian regime, is preserved under the Standard-Model flow, as Eq. (6) still holds. The difference in fixed-point values between $y_t$ and $y_b$ thus generates a mass difference between $M_t$ and $M_b$. So far, we have explained how a mass difference between the two quarks could result from their unequal quantum numbers as a consequence of an asymptotically safe fixed point. We now test the quantitative viability of this mechanism in our approximation by using approximately observationally viable values. To accommodate $g_Y(k_{IR} = 173\,{\rm GeV}) = 0.358$ in accordance with observations, $f_g = 9.7 \times 10^{-3}$ is required. Together with the values $g_2(k_{IR}) = 0.64779$ and $g_3(k_{IR}) = 1.1666$, see, e.g., [49], this also fixes the running of the non-Abelian gauge couplings at all scales. Then, $f_y = 1.188 \times 10^{-4}$ is required to obtain $y_b(k = 4.2\,{\rm GeV}) = 0.024$. This translates into a bottom pole mass [2] of $M_b = 4.9$ GeV. Given this input, the mechanism presented here generates $y_t(k = 168\,{\rm GeV}) = 0.967$, corresponding to a top pole mass [2] of $M_t = 178$ GeV. All three retrodicted quantities, $M_t$, $M_b$ and $g_Y$, come out rather close to their observed values with the input of two free parameters, $f_y$ and $f_g$. The above values of $f_y$, $f_g$ lie in the vicinity of fixed-point values obtained in an approximation for quantum gravity minimally coupled to matter fields of the Standard Model [16]. A quantitatively precise calculation of $f_y$, $f_g$ is subject to future studies. These studies must include higher-order curvature operators as in [36,37] and non-minimal matter-curvature couplings as in [13,34,50] to determine the gravitational fixed-point values, which directly set $f_g$ and $f_y$. As the UV fixed point is generated from a balance of the leading quantum-gravity contribution with the one-loop matter contribution and lies at small Standard-Model couplings, its existence is expected to be stable under the extension to higher-loop orders in the Standard-Model sector. Including two-loop terms in the Standard-Model running [51-56], $f_g = 9.8 \times 10^{-3}$ yields $g_Y(k_{IR}) = 0.358$, and $f_y = 1.1266 \times 10^{-4}$ gives a bottom pole mass of $M_b = 4.9$ GeV. This retrodicts a top pole mass of $M_t = 182$ GeV. Analyzing an extended setting going beyond the third generation could provide a future test of the present model. Extending our study to the quarks of the second generation requires accounting for the CKM mixing matrix. Inspecting the beta functions for the strange and charm Yukawas under the simplifying assumption of a diagonal mixing matrix at $y_{t*}$, $y_{b*}$ and $g_{Y*}$ yields a fixed point at vanishing Yukawas for charm and strange, which is IR attractive in the strange direction and thus retrodicts $M_s/M_t \simeq 0$. Testing whether the tiny ratio $M_s/M_t \approx 5 \cdot 10^{-4}$ is compatible with our setting requires going beyond the above simplifying assumptions in more complete studies, but should provide a critical future test of the present proposal. In the charm direction, this fixed point is IR repulsive, rendering the charm Yukawa asymptotically free.
Therefore, $M_c/M_t$ is not retrodicted. Specifically, $M_c/M_t \approx 7 \cdot 10^{-3}$ can be accommodated in our setting.
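The quantitative statements above can be illustrated numerically. The following is a minimal sketch, not the authors' computation: it assumes the textbook one-loop Standard-Model beta functions and the fixed-point relations reconstructed above, imposes the fixed-point values as boundary conditions at the Planck scale, obtains $g_2$, $g_3$ there by running their quoted IR values up analytically, and integrates the subplanckian flow down to 173 GeV; all coefficients and the scipy-based setup are illustrative assumptions.

# Minimal sketch of the two-stage flow: below the Planck scale the gravity
# terms f_g, f_y are zero and the couplings run with assumed textbook
# one-loop Standard-Model coefficients (third-generation quarks only).
import numpy as np
from scipy.integrate import solve_ivp

K_IR, M_PL = 173.0, 1.0e19            # GeV
F_G, F_Y = 9.7e-3, 1.188e-4           # gravity contributions quoted in the text
B0Y, B02, B03 = 41.0 / 6.0, -19.0 / 6.0, -7.0   # assumed one-loop coefficients
LOOP = 1.0 / (16.0 * np.pi**2)

def run_gauge_up(g_ir, b0):
    """Analytic one-loop running of a gauge coupling from K_IR up to M_PL."""
    return 1.0 / np.sqrt(1.0 / g_ir**2 - 2.0 * b0 * LOOP * np.log(M_PL / K_IR))

# Transplanckian fixed-point values from the reconstructed relations above.
gY = np.sqrt(16.0 * np.pi**2 * F_G / B0Y)
ssum = (32.0 * np.pi**2 * F_Y + (11.0 / 6.0) * gY**2) / 6.0
diff = gY**2 / 3.0                     # y_t*^2 - y_b*^2 = g_Y*^2 / 3
yt, yb = np.sqrt((ssum + diff) / 2.0), np.sqrt((ssum - diff) / 2.0)

g2 = run_gauge_up(0.64779, B02)        # quoted IR values of the non-Abelian couplings
g3 = run_gauge_up(1.1666, B03)

def betas(t, c):                       # t = ln k; subplanckian flow (f_g = f_y = 0)
    gY, g2, g3, yt, yb = c
    return [
        LOOP * B0Y * gY**3,
        LOOP * B02 * g2**3,
        LOOP * B03 * g3**3,
        LOOP * yt * (4.5 * yt**2 + 1.5 * yb**2 - 8.0 * g3**2
                     - 2.25 * g2**2 - (17.0 / 12.0) * gY**2),
        LOOP * yb * (4.5 * yb**2 + 1.5 * yt**2 - 8.0 * g3**2
                     - 2.25 * g2**2 - (5.0 / 12.0) * gY**2),
    ]

sol = solve_ivp(betas, [np.log(M_PL), np.log(K_IR)], [gY, g2, g3, yt, yb],
                rtol=1e-8, atol=1e-12)
gY_ir, _, _, yt_ir, yb_ir = sol.y[:, -1]
# Illustrates the mechanism: y_t(k_IR) grows well above y_b(k_IR) > 0.
print(f"g_Y(173 GeV) ~ {gY_ir:.3f}, y_t ~ {yt_ir:.3f}, y_b ~ {yb_ir:.3f}")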
Exploring the gravitational parameter space: We now explore $f_g$ and $f_y$ away from the specific values used above. This exploits the link between electroweak and Planck-scale physics in order to constrain the microscopic gravitational parameter space by the requirement to match IR observables, in the spirit of [36]. In our approximation, the low-energy value of $g_Y$ only depends on $f_g$. Hence, lines of constant $f_g$ in Fig. 2 correspond to lines of fixed $g_Y(k_{IR} = 173\,{\rm GeV})$. In contrast, $y_{t(b)}(k_{IR})$ depend on $f_y$ as well as on $f_g$ through the gauge contributions in Eq. (3). Thus, lines of constant $y_{t(b)}(k_{IR})$ are not simply lines of constant $f_y$. Fig. 2 visualizes that the existence of an intersection area of the three approximately observationally viable contours, defined by $0 < y_b(k_{IR}) < 0.1$, $0.94 < y_t(k_{IR}) < 1$ and $0.35 < g_Y(k_{IR}) < 0.36$, is a nontrivial result. An intersection does not occur for arbitrary combinations of values. For instance, $g_Y(k_{IR}) > 0.4$ and $0.94 < y_t(k_{IR}) < 1$ are incompatible with a non-zero bottom mass in our approximation. Thus, in our approximation, values close to the observed ones appear to be singled out by asymptotic safety.
The fixed point in Eq. (5) shows that $y_{b*}^2$ depends on the difference of the squares of $y_{t*}$ and $g_{Y*}$. Accordingly, small variations of these two numbers away from $y_{t*}^2 = g_{Y*}^2/3$ result in a fast growth of the value of $y_b(k_{IR})$. Due to the different U(1) hypercharges of top and bottom, the line $M_b = M_t$ cannot be reached, and a difference $M_t - M_b > 0$ always persists. On the other hand, a very large difference, $M_t - M_b \simeq M_t$, requires a choice of the gravity parameters in a relatively small region of the gravitational parameter space, such that the system sits close to the phase-transition line to vanishing bottom mass. In our approximation, this region translates into close-to-Standard-Model values for $g_Y(k_{IR})$ and $M_t$, cf. Fig. 3. In summary, we have uncovered a non-trivial UV fixed point for the Standard-Model couplings, $g_{Y*} \neq 0$ and $y_{t(b)*} \neq 0$, induced by asymptotically safe gravity, that generically results in a mass difference between the top and bottom quark, i.e., $M_t > M_b$. This fixed point retrodicts $(g_Y(k_{IR}), M_t, M_b)$ in terms of two gravitational parameters $(f_g, f_y)$. In our study, the retrodiction is in approximate agreement with the observed IR values, cf. Fig. 2.
Three observations:
1) Universality of gravity contributions:
A key assumption of our study is the independence of the quantum-gravity contributions from internal symmetries: gravity is the only known force that couples universally to all matter fields, such that $f_g$ is independent of the gauge group. A significant violation of this universality leads to a quantitative failure of the above scenario. Specifically, let the gravitational contribution to the running of the non-Abelian gauge couplings be given by $f_g \to f_{g,\rm nA}$ in Eq. (1). The rate at which $g_{2,3}$ grow above the Planck scale is thereby increased (lowered) for $f_{g,\rm nA} > (<)\, f_g$. This affects how fast the Yukawa couplings increase in the transplanckian regime. Only $f_{g,\rm nA} \approx f_g$ results in an observationally viable range for $y_t(k_{IR})$, cf. Fig. 4. Thus, the independence of the gravitational contribution from the gauge group is suggested by the observed values of $y_b(k_{IR})$, $y_t(k_{IR})$ and $g_Y(k_{IR})$.
2) Setting the scale: A second central assumption underlying our study is that the scale at which the gravitational contributions switch off is the Planck scale. We test whether another, presently unknown, universally coupled interaction could underlie the proposed mechanism. Its scale would of course not be tied to the Planck scale. Varying the scale significantly away from $10^{19}$ GeV results in a mismatch of $M_b/M_t$ with the observed values, cf. the upper panel in Fig. 4. Given the electroweak scale, which is an input of our calculation, the Planck mass can thus be estimated by demanding that the model realizes a mass ratio in the vicinity of the observed ratio $M_b/M_t$ in our approximation.
3) Varying the charge ratio: Here, we keep the top and bottom in a doublet of SU(2). The hypercharges of the doublet, $Y_Q$, and of the singlets, $Y_{b(t)}$, are linked to the electric charges as sketched below, where the last equality ensures equal electric charges for the right- and left-handed quarks.
[Figure caption fragment: IR values of the retrodicted couplings $g_Y(k_{IR})$, $y_t(k_{IR})$ and $y_b(k_{IR})$ at $k_{IR} = 173$ GeV as a function of the two quantum-gravity contributions $f_g$ and $f_y$ at modified charge ratio, $Q_b/Q_t = -2/3$ (left-hand panel); $Q_b/Q_t = -1/3$ (right-hand panel).]
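The omitted charge relations presumably read (a reconstruction assuming the convention $Q = T_3 + Y$, which reproduces the values $Y_t = 2/3$, $Y_b = -1/3$, $Y_Q = 1/6$ quoted earlier):
\[
Y_t = Q_t \,, \qquad Y_b = Q_b \,, \qquad Y_Q = Q_t - \frac{1}{2} = Q_b + \frac{1}{2} \,,
\]
where the last equality enforces $Q_t - Q_b = 1$ within the doublet.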
quarks. It turns out that for Q b Q t < −1 2, M t M b → 0, whereas for Q b Q t > −1 2, M t M b → 1, cf. Fig. 5. The reason lies in the dynamics of the green, cyan and yellow contours in Fig. 6: An increase in Q b Q t triggers a growth in f g , since b 0,Y increases with Q b Q t . Thus, the green contour moves to the right as a function of Q b Q t . Simultaneously, the cyan and yellow contours move towards each other as y b * → y t * for Q b Q t → 1. Accordingly, the three contours single out a value of Q b Q t at which they intersect in one location in the f g , f y plane. This value agrees with the Standard Model value Q b Q t = −1 2. Conclusions: The asymptotic-safety paradigm could provide a UV completion for quantum gravity coupled to the Standard Model. At an asymptotically safe fixed point, residual interactions in the microscopic regime can imprint a nontrivial structure on the low-energy masses of the model. Thereby, observations such as M t ≫ M b could become an automatic consequence of the asymptotically safe regime. Our study hints at the potential predictive power of an asymptotically safe UV regime. The mechanism we propose here links the measured ratio of electric charges of top and bottom to their masses: If the charge ratio deviates significantly from the Standard Model value, in our setting no choice of microscopic gravitational parameters is available to correctly retrodict M t , M b and g Y . | 4,806.2 | 2018-03-11T00:00:00.000 | [
"Physics"
] |
Comparison of Analytical Approaches Predicting the Compressive Strength of Fibre Reinforced Polymers
Common analytical models to predict the unidirectional compressive strength of fibre reinforced polymers are analysed in terms of their accuracy. Several tests were performed to determine parameters for the models and the compressive strength of carbon fibre reinforced polymer (CFRP) and glass fibre reinforced polymer (GFRP). The analytical models are validated for composites with glass and carbon fibres by using the same epoxy matrix system in order to examine whether different fibre types are taken into account. The variation in fibre diameter is smaller for CFRP. The experimental results show that CFRP has about 50% higher compressive strength than GFRP. The models exhibit significantly different results. In general, the analytical models are more precise for CFRP. Only one fibre kinking model’s prediction is in good agreement with the experimental results. This is in contrast to previous findings, where a combined modes model achieves the best prediction accuracy. However, in the original form, the combined modes model is not able to predict the compressive strength for GFRP and was adapted to address this issue. The fibre volume fraction is found to determine the dominating failure mechanisms under compression and thus has a high influence on the prediction accuracy of the various models.
Introduction
Fibre reinforced polymers (FRP) are increasingly used for structural parts in many applications owing to their high density-specific strength and stiffness. However, in contrast to their excellent tensile properties, the mechanical properties under compressive loading are significantly inferior. Compressive strength is limited to approximately 70% of the tensile strength [1]. Although the macroscopic failure behaviour of FRP under compressive loading is brittle, the failure process is complex and the compressive strength is difficult to predict, often leading to high safety margins. Accurate prediction methods for an optimum design of composite parts with regard to lightweight applications are necessary, and different approaches for predicting the compressive strength have been developed.
A first model for predicting the compressive strength of composite laminates was presented by Rosen [2]. He proposed that compressive failure initiates due to fibre microbuckling and distinguished between two modes of microbuckling: in-phase microbuckling (shear mode) for higher and out-of-phase microbuckling (extension mode) for lower fibre volume fractions. The in-phase microbuckling leads to the formation of a kink band.

In the present study, the influence of fibre type with regard to diameter and mechanical properties is analysed. A lower fibre volume fraction compared to existing literature is obtained, and the influence of fibre content on the prediction accuracy is discussed as well. Applying the analytical models developed for CFRP also to GFRP is a novelty, and it is analysed whether they function as desired or whether an adaption is necessary to consider the geometry and mechanical properties of glass fibres. The results are expected to be of great interest for all those who design or research GFRP composite laminates.
Analytical Models for Compressive Strength Prediction of FRP
For predicting the compressive strength σ c with the model from Rosen [2], a fibre volume content V f high enough for the shear mode to be applicable is assumed. For this case, the UD compressive strength can be predicted with Equation (1), where G m is the shear modulus of the matrix [2].

The fibre kinking model after Argon [18] with the extension from Budiansky [19,20] predicts the compressive strength via the plastic deformation of the composite. The model states that the compressive strength is reached at the transition from elastic to plastic material behaviour. This implies that the compressive strength is dominated by the matrix properties. It is further assumed that the plastic deformation can be described by pure shear after the shear yield strength τ y is reached. For small kink-band angles β, the compressive strength can be predicted by applying Equation (2) [19,20]. In this equation, φ 0 is the initial maximum angular fibre misalignment (in radian) and γ y is the shear yield strain. For larger kink-band angles, the compressive strength can be predicted with Equation (3), where σ Ty is the plane-strain yield stress in pure transverse tension [6,19,20].

Berbinau et al. [16] developed an analytical model for compressive strength prediction of FRP based on damage initiation by fibre microbuckling. With this model, the criticality of an off-axis ply orientation is discussed [17]. The initial fibre waviness is modelled by a sine function with amplitude V 0 . From the amplitude change due to an external compressive force, represented by a new sine function with amplitude V and half wavelength λ, a differential equation for the displacement v of the deflected fibre axis is derived. Assuming linear matrix behaviour and hence a constant matrix shear modulus G m , the relationship simplifies to the more easily solvable analytical approach given by Equation (4) [16], where P is the compressive load on a fibre and G c is the composite shear modulus. E f , A f and I are the elastic modulus, cross-section area, and second moment of inertia of the fibres, respectively. The load P can be correlated to the global stress on the 0°-ply by using Equation (5) [16,17]. A graphic representation of V/V 0 plotted over the compressive stress σ 0 exhibits an asymptotic increase of the fibre amplitude. The asymptotic increase defines the failure stress of the UD FRP laminate [17,32].
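As a minimal numerical sketch of the two simplest closed-form models above, the snippet below evaluates the Rosen shear-mode estimate of Equation (1) and the small-kink-band-angle Budiansky estimate of Equation (2). The matrix shear modulus and fibre volume fraction are the values quoted later in the Materials section, while tau_y and gamma_y are placeholder inputs standing in for the tangent-construction results of Table 3, which is not reproduced here.

```python
import numpy as np

def rosen_shear_mode(G_m_MPa, V_f):
    """Equation (1): elastic in-phase microbuckling (shear mode) after Rosen."""
    return G_m_MPa / (1.0 - V_f)

def budiansky_kinking(tau_y_MPa, gamma_y, phi_0_deg):
    """Equation (2) for small kink-band angles: sigma_c = tau_y / (phi_0 + gamma_y),
    with the fibre misalignment phi_0 taken in radians."""
    return tau_y_MPa / (np.radians(phi_0_deg) + gamma_y)

# G_m = 1.0 GPa and V_f = 43.5% as in the Materials section; tau_y, gamma_y hypothetical.
print(rosen_shear_mode(G_m_MPa=1000.0, V_f=0.435))                       # ~1770 MPa
print(budiansky_kinking(tau_y_MPa=35.0, gamma_y=0.011, phi_0_deg=3.0))   # a few hundred MPa
```

The Rosen value is independent of the fibre type, which is why the same prediction of roughly 1.8 GPa is reported for both CFRP and GFRP in the Results section.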
The models from Budiansky [19,20] and Berbinau et al. [16] are compared to experimental results for CFRP by Jumahat et al. [32]. It was shown that the fibre kinking model from Budiansky overestimates the compressive strength, whereas the fibre microbuckling model from Berbinau et al. results in an underestimation of compressive strength. This is explained by the fact that the microbuckling model predicts the critical stress at which the fibres fail due to microbuckling. However, the stress at final failure of a composite laminate is dominated by both fibre microbuckling and kink-band formation [32]. Based on these two models, a combined modes model was derived by Jumahat et al. [32] that also considers the plastic deformation after fibre failure due to elastic microbuckling. The compressive stress is calculated according to Equation (6) with the predicted strength from the microbuckling model and an additional plastic kinking part described by the fibre kinking model [32]. The additional kinking stress σ kinking is calculated with Equation (7) [32]. In this model, τ y , τ ult and γ y , γ ult are the yield and ultimate shear stress and shear strain, respectively. The parameters that are regarded in the different analytical models for predicting UD compressive strength of FRP are compared in Table 1. Table 1. Comparison of input parameters for different analytical models for compressive strength prediction of FRP UD laminates.
Parameter | Rosen [2] | Budiansky [19,20] | Berbinau [16] | Jumahat [32]
Fibre misalignment φ 0 | – | X | X * | X
Fibre radius r f | – | – | X | X
Fibre Young's modulus E f | – | – | X | X
Matrix shear modulus G m | X | – | – | –
Shear yield stress τ y | – | X | – | X
Shear yield strain γ y | – | X | – | X
* φ 0 is considered, but its variation has very small influence on the prediction results.

The combined modes model considers the most input parameters and exhibits the best prediction results when compared to experimental results for CFRP [32]. However, the influence of different fibre types in these analytical models has not yet been investigated and the applicability for GFRP is not clear. A compressive test series is carried out for comparison of the models with experimental results for CFRP and GFRP. The necessary plastic shear parameters are determined in tensile tests.
Materials and Sample Preparation
Unidirectional unitex fabrics E-glass UT-E250 and carbon UT-C200 from Gurit Services AG (Zurich, Switzerland) are used as fibrous reinforcements. The E-glass fabric has a nominal areal weight of 250 g/m 2 and a fibre tex of 600 for the warp and 10 for the weft direction. The Young's modulus of the glass fibres is E GF = 80 GPa. The carbon fabric comprises 12k rovings with a fibre tex of 800 in warp and 10 in weft direction and an areal weight of 200 g/m 2 . The carbon fibres have a Young's modulus E CF = 230 GPa according to the data sheet. Since the experimental results are compared with prediction models for the compressive strength, a focus during the laminate layup design is set on a similar fibre volume content ϕ f and specimen thickness t, especially for the unidirectional specimens for compression tests. To take the difference in density and fibre diameter of glass and carbon fibres into account, it was necessary to use different areal weights for achieving similar values for ϕ f at a constant specimen thickness.
The epoxy resin Momentive Epikote RIMR 135 (Momentive Performance Materials GmbH, Leverkusen, Germany) with the amine hardener Momentive Epikure RIMH 134 is used as a matrix system for all specimens. The resin to hardener mixing ratio is 10:3 as recommended by the manufacturer. It is an infusion matrix system with an onset of the glass transition region of T g,onset = 85 °C and midpoint glass transition temperature of T g = 93 °C, according to the data sheet. The matrix Young's modulus is E m = 2.7 GPa according to the data sheet and the shear modulus is G m = 1.0 GPa (ν = 0.3).
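For reference, the quoted shear modulus presumably follows from the isotropic relation G m = E m / (2(1 + ν)) = 2.7 GPa / (2 × 1.3) ≈ 1.04 GPa, consistent with the rounded value of 1.0 GPa; this derivation is an assumption, as it is not stated in the data sheet.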
Laminate plates are produced by using vacuum assisted resin transfer moulding (VARTM). The mould is prepared with release agent before the dry fabrics are cut to size and placed inside the mould according to the stacking sequences [0] n for compression and [+45/ − 45] 2s for tensile tests (refer to Table 2). The matrix system is mixed, degassed under vacuum for 15 min and infused via a vacuum and an applied pressure of 2 bar. Curing is done for 20 h at ambient temperature and for 15 h at 80 °C in an oven, as recommended for this matrix system. This way, laminate plates with dimensions of length × width × t = 580 mm × 280 mm × 2 mm are produced. The fibre volume fraction after curing of each laminate is determined via a chemical etching process according to the standard DIN EN 2564 [34]. For the unidirectional specimens, a fibre volume content of ϕ f = 43.5% is measured for both GFRP and CFRP laminates allowing good comparability (refer to Table 2). Therefore, the influence of ϕ f on the evaluation of fibre material with regard to the applicability of the analytical models for predicting the compressive strength is eliminated. Specimens for compression and tensile tests are cut from the respective plates with a diamond saw. Specimen dimensions for compression tests are according to ASTM-D6641/ASTM-D3410 [35,36] with l c × w c = 145 mm × 25 mm. Before testing, 2 mm thick GFRP end tabs are applied on the specimens leaving a gauge length of 25 mm between the tabs. Specimen dimensions for tensile tests are set to l t × w t = 255 mm × 25 mm according to ASTM D3518 [37]. End tabs consisting of 2 mm thick GFRP/Aluminum stripes are applied on the specimens, resulting in a gauge length of 150 mm. All end tabs were applied with 2-component epoxy adhesive (UHU Endfest-300). Ten specimens of each configuration are tested in the respective test series to take statistical variations from the manufacturing process into account.
Quasi-Static Tests
Quasi-static tensile tests of the ±45° specimens are carried out according to ASTM D3518 [37] in a Zwick/Roell Z100 universal test machine (ZwickRoell GmbH, Ulm, Germany) at a cross-head speed of 2 mm/min. The load is continuously measured with a load cell with a maximum load of 100 kN. The displacement is recorded with a long-travel extensometer on the specimen surface (measuring distance 50 mm). Transverse strain is calculated with the Poisson's ratio determined previously to ν CFRP = 0.75 and ν GFRP = 0.70 for a ±45° layup of CFRP and GFRP, respectively. It has to be noted that this type of strain measurement deviates from the standard, but the calculated results are expected to be of adequate accuracy. The in-plane shear stress is calculated directly from the measured axial load with τ = F/2A [37]. The shear strain is calculated from the longitudinal and the transverse normal strain by using the measured displacement and the Poisson's ratio.
Compression tests are executed by using a Zwick-Roell Z400 testing machine with hydraulic clamps. The specimens are mounted in a hydraulic composite compression fixture (HCCF) and mainly loaded by compressive force on their end surfaces. A scheme of the test set-up is shown in Figure 1. The cross-head speed is set to 0.5 mm/min. The compressive load is measured by a 400 kN load cell, whereas displacement is measured via the traverse of the machine. The deformation and force are continuously recorded to determine elongation, compressive strength and the modulus of elasticity. Strain gauges are fixed on some specimens of each type to verify that no global bending or out-of-plane buckling occurs.
Measurement of Fibre Misalignment
Measurement of fibre radii and fibre misalignment, with the method proposed by Yurgartis [38], is carried out by using light microscopy of polished laminate cross sections in an optical microscope (Olympus BX51, Olympus, Hamburg, Germany). Fibre radii of r f ,CFRP = 2.87 µm ± 0.27 µm and r f ,GFRP = 6.04 µm ± 0.89 µm are measured for CFRP and GFRP, respectively. Since the glass fibres are larger in diameter than the carbon fibres, they are expected to be less prone to local microbuckling at small defects [29]. The higher deviation of the glass fibres from the mean diameter value makes the compressive strength of GFRP more difficult to predict.
Several micrograph sections are analysed for each material to determine the in-plane fibre misalignment φ 0,IP present in the specimens. Sections were cut under an angle of φ PC = 10° with regard to the 0°-fibre direction. For each micrograph, 200 ellipses are measured and the untransformed fibre orientation ω is determined according to Equation (8) from the ratio of the semi-minor axis d to the semi-major axis l for each fibre [38]. In this equation, ω is the angle with regard to the sectioning plane. A transformation is thus necessary to obtain the misalignment angle φ with respect to the zero degree fibre direction. The transformation is calculated with Equation (9) [38]. The angle φ PC , referenced to the 0°-direction, is not known with good accuracy, but can be determined from the distribution of the untransformed fibre orientation f(ω). Since the zero degree direction is the mean of the fibre misalignment distribution, φ PC is equal to the mean of f(ω), and the distribution of the fibre misalignment angle can be calculated as f(φ) = f(ω − ω̄), where ω̄ denotes the mean of the measured orientations [38].
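A minimal sketch of this measurement chain is given below, under the assumption that Equation (8) is the usual Yurgartis relation ω = arcsin(d/l) for the section angle of an elliptical fibre cross section; the simulated ellipse data are hypothetical and only stand in for the 200 measured fibres per micrograph.

```python
import numpy as np

def yurgartis_misalignment(d_semi_minor, l_semi_major):
    """Per-fibre orientation from the ellipse axis ratio (assumed Eq. (8):
    omega = arcsin(d/l)), then misalignment relative to the mean direction,
    which stands in for phi_PC as described in the text (Eq. (9))."""
    omega = np.degrees(np.arcsin(np.asarray(d_semi_minor) / np.asarray(l_semi_major)))
    return omega - omega.mean()

# Hypothetical ellipse measurements (micrometres) for a section cut at about 10 deg:
rng = np.random.default_rng(0)
d = rng.normal(6.0, 0.3, 200)                                   # ~ fibre radius
l = d / np.sin(np.radians(rng.normal(10.0, 1.0, 200)))          # section at 10 +/- 1 deg
phi = yurgartis_misalignment(d, l)
print(f"misalignment spread: {phi.std():.2f} deg "
      f"(90% within +/- {np.percentile(np.abs(phi), 90):.2f} deg)")
```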
The out-of-plane (OOP) fibre misalignment φ 0,OOP is determined from micrograph cut sections at the side of the specimens parallel to the fibre direction. OOP misalignment is present because of the nature of the used fabric. The maximum misalignment angles at the crossing points of the rovings with the thermoplastic binding yarns are measured by using the software of the light microscope.
Results and Discussion
Measurement of in-plane fibre misalignment results in a Gaussian distribution of the untransformed fibre orientation f(ω) and, after the transformation, of the fibre misalignment angles f(φ). The distribution range of f(φ) for GFRP is from −4° to 5°. Since 90% of all values are between −2.75° and 2.75°, a mean misalignment angle of φ 0,IP = ±3° is valid for GFRP. For CFRP, the distribution range is smaller (−2° to 3°) with 92% of all values lying between −2° and 2°. Hence, a mean misalignment angle of φ 0,IP = ±2° is present in the CFRP specimens. The median misalignment angle as well as the standard deviation of the misalignment for CFRP are smaller compared to GFRP.
The maximum local OOP fibre misalignment is measured to φ 0,OOP = 2° for GFRP and φ 0,OOP = 3° for CFRP. This type of misalignment is assumed to be more critical because the fibres are deflected with regard to the loading direction in the plane where kink-band initiation and growth occur. It is thus critical for damage initiation and growth leading to global buckling under compression loading. Figure 2 shows a representative in-plane shear stress versus shear strain curve for GFRP from tensile tests with [+45/−45] 2s specimens. In the diagram, the approach for determining the shear yield parameters for the analytical models from the stress-strain curve is presented. The approach for CFRP is similar. The fibre misalignment angle φ 0 in radian is plotted as a negative value on the x-axis. From this point, a tangent to the stress-strain curve is constructed. At the contact point of tangent and curve, the yield stress and strain for this angle can be read at the respective axis [30][31][32][33]. In the diagram, this is shown exemplarily for misalignment angles of 1° and 2°. The obtained shear yield stress τ y and shear yield strain γ y for both materials as a function of different fibre misalignment angles φ 0 are summarised in Table 3. The shear properties as determined with the tensile tests are summarised in Table 4. The shear modulus G c is determined within the linear slope of the stress-strain curve between 0.01% and 0.05% strain. The shear strength, shear modulus and maximum shear strain for CFRP are higher compared to GFRP (n = 10 specimens in each case). Since the same matrix system is used, the difference in shear properties between GFRP and CFRP can be attributed to the fibre type, with the stiffer and stronger carbon fibres accounting for the higher values.
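The tangent construction described above can be automated. For a concave shear stress-strain curve, the tangent drawn from the point (−φ 0 , 0) on the strain axis touches the curve where the secant slope τ/(γ + φ 0 ) is maximal, so a simple search over the measured points suffices. The sketch below uses a smooth saturating curve as a stand-in for the measured ±45° data; the curve parameters are illustrative, not the values of Table 3 or Table 4.

```python
import numpy as np

def shear_yield_from_tangent(gamma, tau, phi_0_deg):
    """Tangent construction from (-phi_0, 0): for a concave curve the contact point
    is where the secant slope tau / (gamma + phi_0) reaches its maximum."""
    phi_0 = np.radians(phi_0_deg)
    i = int(np.argmax(tau / (gamma + phi_0)))
    return tau[i], gamma[i]                     # tau_y, gamma_y for this angle

# Illustrative saturating curve standing in for the measured +/-45 shear response.
gamma = np.linspace(0.0, 0.05, 500)
tau = 45.0 * (1.0 - np.exp(-gamma / 0.012))     # MPa
for angle in (1.0, 2.0, 3.0):
    tau_y, gamma_y = shear_yield_from_tangent(gamma, tau, angle)
    print(f"phi_0 = {angle:.0f} deg: tau_y = {tau_y:.1f} MPa, gamma_y = {gamma_y:.4f}")
```

Note that the maximal secant slope itself equals the small-angle Budiansky estimate τ y /(γ y + φ 0 ) sketched earlier, which is why the tangent construction directly feeds Equation (2).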
Compression Tests and Comparison with Analytical Models
The UD compressive strength as experimentally determined is σ c,GFRP = 379.6 MPa ± 17.4 MPa for GFRP and σ c,CFRP = 569.1 MPa ± 24.2 MPa for CFRP (refer to Table 4). Compressive strength of CFRP is approx. 50% higher than that of GFRP. The higher strength for CFRP can be attributed to the higher strength of carbon fibres in comparison to glass fibres. Figure 3 shows photos of a representative CFRP and GFRP specimen after final failure. Final failure occurs in the form of a kink-band that is visible for both materials. The CFRP specimens exhibit a slightly higher amount of delaminations and fibre breakage next to the kink-band, which indicates the higher load at failure that results in more severe visible damage. The prediction accuracy for UD compressive strength of the analytical models is compared with the compression test results. With the determined plastic shear parameters (Table 3) and the material properties of fibre and matrix (refer to Section 3.1), the strength is calculated for different fibre misalignments φ 0 by using the various models presented briefly in Section 1.
The predicted strength values of the different models are summarised in Table 5 and compared with the experimental results. A fibre misalignment angle of 3° is selected for this comparison because it reflects the measured misalignment in the specimens well. Best prediction accuracy is achieved with the kinking model from Budiansky [19,20], especially for CFRP. The shear model from Rosen [2] (refer to Equation (1)) highly overestimates the compressive strength with a predicted strength of approximately 1785 MPa for both GFRP and CFRP in the shear mode. For the extension mode, the values are even higher. The model is not able to differentiate between different fibre types. This was expected because only the matrix shear modulus and the fibre volume fraction are used as input parameters in this approach to predict the strength of a material with a very complex damage process under the given load case; this finding is in accordance with previous results from other authors [32].
For the fibre kinking model from Budiansky [19,20], the compressive strength is calculated with Equation (2). The model predictions in comparison with the experimental results are shown in Figure 4 for GFRP and CFRP. For small misalignment angles, the model highly over-predicts the compressive strength. The predicted compressive strength of 550.7 MPa for CFRP with a fibre misalignment angle of 3° correlates well with the experimental results, with the predicted value being within the standard deviation but below the mean value. As the fibre type is regarded only indirectly via the results from the shear test in the model parameters, the difference between predicted strength and failure strength is higher for GFRP. For a misalignment angle of 4°, which is higher than the measured misalignment in the GFRP specimens, the predicted value of 378.5 MPa is within the standard deviation of the experimentally determined strength. The better agreement of predicted values with experimental results compared to the literature [16,17,32] can be explained by the fibre volume content V f , which is lower in our specimens. With a decreasing fibre volume content, plastic kinking is facilitated and the matrix properties become more and more relevant. Therefore, the fibre kinking model, which is mainly based on the matrix shear behaviour, is more accurate for lower V f . In the fibre microbuckling model from Berbinau et al. [16], the fibre type determines the fibre cross-section area A f and the second moment of inertia I in Equation (4). For the calculation of I = π · r f 4 /4 and A f = π · r f 2 , the measured mean fibre radius is used. The amplitude value of the unstressed fibre V 0 is calculated with Equation (10). It is reported that the wavelength λ equals the kink band width [39] and for λ 0 a value of λ 0 = 10 d f is reasonable [9,13,14,16,17]; thus, this approach is used here as well. A graphic representation of V/V 0 is plotted over the compressive stress, and an asymptotic increase of the fibre amplitude predicts a compressive strength of approximately 720 MPa for CFRP. Therefore, this model overestimates the compressive strength, which is critical for conservative design of composite parts and in contrast to what is previously reported for this model [32]. The predicted strength for our material is lower compared to the values predicted for CFRP with a higher volume fraction [17]. Hence, the general influence of fibre content is represented qualitatively correctly by the model, but the predicted value has a large error compared to the experiments with specimens that have a lower V f . Decreasing V f leads to a higher decrease of σ c than predicted by the model.
For GFRP, the microbuckling model in the current form is not applicable because the graphic representation of V/V 0 over the compressive stress results in an asymptotic decrease. With the slope of V/V 0 tending to zero instead of to infinity as expected, the compressive strength cannot be read at the x-axis, and an adaption of the microbuckling model is necessary for it to be applicable to GFRP. This is related to the fact that the larger fibre diameter of the glass fibres, which determines the moment of inertia I and the cross-section area A f , in combination with the lower fibre Young's modulus E f in comparison to CFRP, leads to a negative denominator in Equation (4) and thus a decrease of V/V 0 over σ c . In other words, the term x 2 = A f · G c has a higher value than the term x 1 = E f · I · (π/λ) 2 for glass fibres. Consequently, the fibre type and diameter r f are important factors and should be considered for compressive strength prediction. When writing Equation (4) with the introduced abbreviations in the form V/V 0 = (1 − P/(x 1 − x 2 )) −1 , an asymptotic increase requires x 1 > x 2 , which is not the case for GFRP. This can be avoided when the absolute value of the term in the denominator is used, leading to the adapted Equation (11). This equation with the absolute value in the denominator is applicable for both CFRP and GFRP. The graphic representation of V/V 0 versus compressive stress σ 0 for GFRP is shown in Figure 5 for both the original microbuckling model from Equation (4) in Figure 5a and for the adapted model from Equation (11) in Figure 5b. Curves are plotted for misalignment angles between 1° and 5°, but the influence of fibre misalignment on the graphic representation of V/V 0 and thus the predicted strength is negligible. With Equation (11), a compressive strength of 950 MPa is predicted for GFRP (refer to Figure 5). This is higher than the predicted strength for CFRP, which results from the increased stability of thicker fibres against microbuckling in the model but does not represent realistic behaviour. When using comparable specimen geometry, CFRP achieves higher compressive strength than GFRP, as is also the case in the experiments. Regarding the predicted strength values, the adapted microbuckling model significantly overestimates the compressive strength. The prediction error is even larger for GFRP due to the fact that a higher predicted strength coincides with lower measured strength when compared to CFRP. The suggested adaption allows application of the microbuckling model, although it was originally derived for CFRP. However, a higher predicted strength for GFRP than for CFRP is not reasonable.
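The sign argument behind the adaptation can be made concrete with a few lines of code. The sketch below evaluates the amplitude ratio for the original Equation (4) and the adapted Equation (11) using dimensionless placeholder values of x 1 , x 2 and P (the real terms require G c , λ and the load-stress relation of Equation (5), which are not fully reproduced here); it merely illustrates that with x 1 < x 2 the original form decays instead of diverging, whereas the absolute-value form restores the divergence from which a failure stress can be read.

```python
import numpy as np

def amplitude_ratio(P, x1, x2, adapted=True):
    """V/V0 as in Eq. (4) / adapted Eq. (11): x1 = E_f*I*(pi/lambda)^2, x2 = A_f*G_c.
    'adapted=True' uses the absolute value of the denominator term (x1 - x2)."""
    denom = abs(x1 - x2) if adapted else (x1 - x2)
    return 1.0 / (1.0 - P / denom)

P = np.linspace(0.0, 0.9, 4)                              # dimensionless placeholder loads
print(amplitude_ratio(P, x1=1.0, x2=0.0, adapted=False))  # x1 > x2: diverges (CFRP-like)
print(amplitude_ratio(P, x1=0.3, x2=1.3, adapted=False))  # x1 < x2: decays (GFRP-like)
print(amplitude_ratio(P, x1=0.3, x2=1.3, adapted=True))   # adapted form diverges again
```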
The microbuckling model uses the shear modulus of the composite G c for predicting the microbuckling behaviour. For thicker fibres and lower fibre volume fractions, it can be argued that the local microbuckling of a fibre depends more on the shear modulus of the surrounding matrix than on that of the composite due to the larger inter-fibre distance. This could be the case for the GFRP used in this study, which exhibits a significantly lower fibre volume fraction (V f = 43.5%) compared to the CFRP prepreg system against which the microbuckling model was verified [16,17]. When using the matrix shear modulus G m instead of G c in Equation (11), a compressive strength of 220 MPa is predicted for GFRP, which is lower than the experimentally determined strength but far more realistic. For CFRP, a compressive strength of 180 MPa is predicted when using the matrix shear modulus instead of the composite shear modulus in Equation (11). This underestimation of strength agrees better with the general behaviour of the microbuckling model as described in literature that led to the introduction of the combined modes model [32]. The combined modes model by Jumahat [32] consists of a microbuckling part and a kinking part, as described by Equation (6). For calculating the ratio of compressive strength attributed to fibre microbuckling, the adapted Equation (11) is used, so that the combined model is also applicable for GFRP. The fibre kinking ratio is calculated with Equation (7), with the parameters determined in the tensile tests for the respective material. Results from the combined modes model for different fibre misalignment angles φ 0 in comparison with the experimental results (mean value and standard deviation) are shown in Figure 6 for CFRP and in Figure 7 for GFRP. Realistic misalignment angles between 1°and 5°are chosen to analyse a certain range of fibre misalignment with the model. Figure 6. Comparison of prediction from the combined modes model from Jumahat [32] for different misalignment angles with experimental results for UD compressive strength of CFRP. The influence of using either the shear modulus of the composite G c or that of the matrix G m within the model is plotted as well. Figure 7. Comparison of prediction from the adapted combined modes model from Jumahat [32] for different misalignment angles with experimental results for UD compressive strength of GFRP. The influence of using either the shear modulus of the composite G c or that of the matrix G m within the model is plotted as well.
Since the microbuckling model already overestimates the compressive strength of the specimens, the combined modes model by Jumahat et al. [32] does so as well. The predicted UD compressive strength of 865 MPa for CFRP with an initial fibre misalignment of 3°, which is the local out-of-plane misalignment measured in our specimens, is lower compared to predicted strength values reported for prepreg-CFRP with a higher V f [32], but significantly higher than the measured specimen strength. Therefore, the model is able to represent the general influence of a lower fibre volume fraction with its prediction, but leads to an overestimation of compressive strength for lower V f . For GFRP, the combined model also predicts higher compressive strength values for the investigated range of misalignment than measured in the experiments. For a misalignment angle of 3°, the predicted value is 1056 MPa. This is unrealistically higher than the value predicted for CFRP and results from the higher microbuckling ratio compared to CFRP, because the microbuckling model predicts higher strength for GFRP.
If the matrix shear modulus G m instead of the composite modulus is used to calculate the microbuckling ratio, the prediction accuracy for GFRP is quite good. The use of the matrix shear modulus is motivated by the lower fibre volume fraction that results in more matrix dominated microbuckling of the fibres. For small misalignment angles, the predicted strength is within the standard deviation of the test results. For larger misalignment angles, the strength is slightly underestimated, which is less critical for conservative design. For CFRP, usage of G m is not meaningful because the compressive strength is highly underestimated.
When comparing the different analytical models for predicting the compressive strength of FRP (refer to Table 5), the fibre kinking model achieves the best results in comparison to the experiments. This is unexpected because it is in contrast to previous investigations [16,17,32]. Probable reasons for this deviation are the material and the manufacturing process. In the other investigations, a CFRP prepreg material with a fibre volume fraction of approximately V f = 65% was used that was autoclave cured. In our study, we used a non-crimp fabric and prepared specimens via a VARTM process, resulting in a lower fibre volume fraction (V f = 43.5%). Both the infusion process and the achieved fibre volume fraction are typical for many applications of FRP such as wind turbine blades or sporting goods and thus of relevance for an accurate prediction of compressive properties. As expected, the lower fibre volume fraction results in a lower compressive strength compared to the values in the literature [16,17,32]. The trend of decreasing strength with lower V f is represented by the kinking model and the combined modes model, although the latter highly overestimates the strength for lower V f . It can be concluded that the matrix properties become more important with decreasing fibre content and that fibre kinking is the dominant failure mechanism in that case. This is more pronounced for GFRP than for CFRP, where usage of matrix instead of composite shear properties leads to accurate prediction of compressive strength in the combined modes model considering both microbuckling and kinking.
It has to be noted that, in our experiments, the fibres exhibit circular cross sections and such a shape is used for calculation of fibre cross section area A f and moment of inertia I. However, in some composite parts, the fibre cross-section is of a kidney shape, which influences the mechanical properties and failure behaviour under compressive loading [40]. This should be considered, when applying the analytical models to predict the UD compressive strength of laminates with kidney-shaped fibres (e.g., by different equations for calculating A f and I in the microbuckling and combined modes model).
Conclusions
Existing models for predicting the UD compressive strength of FRP are compared with experimental results for CFRP and GFRP. A fibre kinking model is the most accurate and, for CFRP, its prediction lies within the standard deviation of the experiments. This is in contrast to previous investigations, where a combined modes model that considers microbuckling as well as plastic kinking achieved the best prediction accuracy. The deviation is explained by the difference in fibre volume fraction. For lower fibre volume fractions, as in this study, the matrix properties apparently play a more important role and kinking instead of microbuckling is the dominant failure mechanism. However, in the original form, the combined modes model is not able to predict the compressive strength for GFRP. The model is modified by considering the stiffness to diameter ratio of the fibres. The adapted model is applicable to GFRP, but significantly overestimates the compressive strength of both GFRP (+178%) and CFRP (+52%) for the used fibre volume fraction. The fibre properties, especially the fibre diameter and stiffness, are not adequately considered. Further adaption of the models regarding the fibre morphology and a more detailed evaluation of the influence of fibre volume fraction are still necessary to consider different fibre types such as carbon, glass or natural fibres for the various applications of FRP.
Abbreviations
The following abbreviations are used in this manuscript:
CFRP: Carbon fibre reinforced polymer
FRP: Fibre reinforced polymer
GFRP: Glass fibre reinforced polymer
IP: In-plane
OOP: Out-of-plane
UD: Unidirectional
VARTM: Vacuum assisted resin transfer moulding
"Engineering"
] |
Studies on the Thermo-Mechanical Properties of Gelatin Based Films Using 2-Hydroxyethyl Methacrylate by Gamma Radiation
Gelatin films were prepared by casting. Tensile strength (TS) and elongation at break (Eb) of the gelatin films were found to be 46 MPa and 3.5%, respectively. The effect of gamma radiation (Co-60) on the thermo-mechanical properties of the gelatin films was studied. 2-hydroxyethyl methacrylate (HEMA) was added to the gelatin during casting at varying content (10%-30% by weight) and was found to increase the TS significantly. The films were then irradiated, and a further increase of TS was found. Thermo-mechanical properties of HEMA-blended gelatin films were compared with those of the pure gelatin films. The coefficient of thermal expansion of the gelatin/HEMA films was also measured using a thermo-mechanical analyzer and found to follow the opposite trend in comparison with the glass point.
Introduction
Gelatin is a relatively low-cost protein, industrially produced all over the world, that has excellent film-forming properties. Mainly because of that, this protein is being extensively explored in edible and/or biodegradable film production and characterization studies, pure [1][2][3][4][5][6] or blended with other biopolymers [7,8]. Gelatin is a polymer produced by the partial hydrolysis of collagen derived from the skin, white connective tissues, and bones of animals. Being a protein derivative, it is used in the food, cosmetics, pharmaceutical and photographic industries for its gel-forming ability, non-toxicity and cheap production cost. In pharmaceuticals, gelatin is used as a raw material for capsule shell manufacturing for controlled drug release. Because of the various potential uses of gelatin, it has been considered worthwhile to modify gelatin to enable improved or alternative applications [9].
Gelatin is a unique polymer comprising multiple functionalities like gelling, thickening, water-binding, emulsifying, stabilizing, foaming, film-forming and fining characteristics. It forms thermoreversible gels through the formation of hydrogen-bond-stabilized triple helices when its solution is cooled. Again, on heating it melts above 40˚C. Hydrogen bond stabilization is followed by rearrangement of individual molecular chains into an ordered, helical arrangement, or collagen fold, and association of two or three ordered segments to create crystallites [10]. Besides, gelatin, similar to synthetic high polymers, shows a rather wide molecular weight distribution [11]. It is soluble in water and in aqueous solutions of polyhydric alcohols such as glycerol and propylene glycol and also hydrogen-bonding organic solvents like acetic acid, trifluoroethanol, and formamide. Gelatin is practically insoluble in less polar organic solvents such as acetone, carbon tetrachloride, dimethylformamide and most other non-polar organic solvents. There are limitless possibilities for modifying the properties of gelatin because the number of bi- and polyfunctional organic and inorganic compounds that can interact with the particular gelatin functions is very large indeed [12]. Chemical modifications, aiming at an increase in the degree of protein cross-linking, depend on the reactivity of the protein constituents, the specificity of the modifying agent [13], the amino acid composition, the reactivity of the amino acids and the tri-dimensional structure of the protein molecule. Generally, the chemical reactivity of proteins depends on the side chain, the amino acid composition and the free amino and carboxyl groups [14]. The most reactive protein groups are serine (primary-OH), hydroxiproline (secondary-OH), threonine (secondary-OH), tyrosine (phenolic-OH), aspartic acid (-COOH), glutamic acid (-COOH), lysine (-NH2) and arginine (-C(:NH).NH2) [15]. Crosslinking of gelatin macromolecules is known to increase the viscosity of gelatin solutions and the strength and melting points of gelatin gels. The cross-linking of the gelatin matrix by chemical means is used extensively in photographic products, and this so-called hardening permanently reduces the solubility of gelatin. It is important that graft gelatin copolymers retain the valuable properties of the parent gelatin, the ability to form gels and helices and the high heat resistance. The synthesis, structure, thermo-physical and physico-mechanical properties of graft gelatin copolymers have been studied in detail [16][17][18][19][20][21][22]. On heating, gelatin undergoes not only structural and mechanical but also physico-chemical transformations such as partial or complete loss of solubility in water. Stejskal et al. [23] observed that when methyl methacrylate is polymerized in aqueous medium in the presence of gelatin, gelatin graft copolymer macromolecules are formed. Little work on radiation-induced simultaneous copolymerization with acrylic monomers associated as a blend with gelatin is reported. In the present study we report on the simultaneous copolymerization of 2-hydroxyethyl methacrylate with gelatin using a blending and casting method, where simultaneous evaporation at room temperature was the driving force. Later, the films were subjected to gamma irradiation. The mechanical and thermo-mechanical properties of the films were analyzed.
Materials
Pharmaceutical grade gelatin was collected from Global Capsules of Opsonin Pharma Ltd. HEMA was supplied by E. Merk, Germany.
Preparation of Gelatin Films
Granules of gelatin (15 g) were dissolved in hot water and different percentages of HEMA (10%-30% by wt) were mixed for different formulations and heated at 60˚C for about half an hour until a viscous state was reached. Three formulations were prepared, named B1-B3; their compositions are given in Table 1. The solution was then cast onto a plastic-covered uniform surface to form a film under room conditions. It was then dried at room temperature. The dried films (about 0.30 mm thickness) were peeled off and cut into small pieces of length 70 mm and width 10 mm using conventional scissors. HEMA-blended gelatin films were subjected to irradiation with gamma radiation using a Co-60 source (25 kCi gamma beam 650 model loaded with source GBS-98, which comprises 36 double-encapsulated capsules of type C-252 loaded with Co-60 pellets). The gelatin films were subjected to irradiation with different gamma doses (50-500 krad) at a dose rate of 350 krad/hr using the Co-60 gamma source. The relative humidity was around 78% and the temperature was 32˚C. The samples were stored in a laminated polyethylene bag until testing.
Mechanical Test
Tensile properties such as tensile strength (TS) and percent elongation at break (Eb) of the cured films were measured with a Universal Testing Machine (Hounsfield Series S, UK) at a cross-head speed of 10 mm/min. A load range of 500 N and a gauge length of 20 mm were used throughout the experiment. Four different blends with different concentrations of HEMA in gelatin were analyzed using the universal testing machine. However, films with higher HEMA contents are softer and can easily absorb moisture.
Due to radiation, they also became brittle. We therefore investigated the physico-mechanical properties of the 10%, 20% and 30% HEMA containing gelatin films at 65% relative humidity at room temperature to enable identical moisture content.
Thermo-Mechanical Analysis
On-set of melting, glass point, off-set of melting and the linear coefficient of thermal expansion were measured for all the films using a thermo-mechanical analyzer (Liensis 200) with an accuracy of ±3 degrees centigrade.
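As a minimal illustration of how these quantities can be extracted from a TMA trace (the instrument software presumably performs an equivalent analysis; the window limits and the synthetic trace below are assumptions, not measured data), the linear CTE is the slope of probe displacement versus temperature divided by the initial specimen thickness, and an onset-style glass point can be taken as the intersection of straight lines fitted below and above the transition region.

```python
import numpy as np

def linear_cte(temp_C, disp_um, L0_um, t_lo, t_hi):
    """Mean linear CTE over [t_lo, t_hi]: (1/L0) * d(displacement)/dT."""
    m = (temp_C >= t_lo) & (temp_C <= t_hi)
    return np.polyfit(temp_C[m], disp_um[m], 1)[0] / L0_um      # 1/K

def glass_point(temp_C, disp_um, win_lo=(35, 60), win_hi=(95, 120)):
    """Onset-style glass point: intersection of lines fitted below/above the transition."""
    def fit(win):
        m = (temp_C >= win[0]) & (temp_C <= win[1])
        return np.polyfit(temp_C[m], disp_um[m], 1)             # slope, intercept
    (m1, b1), (m2, b2) = fit(win_lo), fit(win_hi)
    return (b2 - b1) / (m1 - m2)

# Synthetic probe trace for a ~0.30 mm thick film: contraction below ~85 C, expansion above.
T = np.linspace(30.0, 120.0, 400)
d = np.where(T < 85.0, -0.002 * (T - 30.0), -0.11 + 0.015 * (T - 85.0))   # micrometres
print(f"CTE (35-60 C): {linear_cte(T, d, L0_um=300.0, t_lo=35, t_hi=60):.2e} 1/K")
print(f"glass point:   {glass_point(T, d):.1f} C")
```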
Mechanical Properties of Irradiated and Non Irradiated Films
Tensile strength (TS) is very important in selecting diverse applications of a polymer. The TS values of the non-irradiated and irradiated films are plotted in Figure 1 against total gamma radiation dose for gelatin (G), 10% (B1), 20% (B2) and 30% (B3) HEMA blended gelatin films produced by blending. It is observed that with the loading of HEMA the TS significantly decreased, which may be because HEMA acts as a filler in the inter-molecular and inter-chain spaces, hindering the helix structure of the film. However, due to irradiation the tensile strength was improved up to a certain radiation dose, above which it decreased for almost all compositions. In the case of the pure gelatin film, the tensile strength value also increased with increasing radiation dose, attained a maximum at a 100 krad dose and then decreased with further increasing radiation doses. Higher gamma radiation doses may have caused degradation of the polymer chains so that the film became hard and brittle, whilst at lower doses cross-linking may have dominated over chain scission.
On the other hand, in the case of the gelatin/HEMA blend biofilm, the tensile strength value reaches a maximum at a dose of 250 krad and then decreases with further increasing gamma dose as well as gelatin concentration. When the gelatin/HEMA film was subjected to the radiation, the acrylic double bond from HEMA and gelatin's functional groups might have initiated the formation of a cross-linked network. So, the tensile strength value increases with radiation, but higher radiation doses might have caused chain scission due to the breaking of the polymer chains. The probable reaction is shown in Scheme 1. So, at higher radiation doses the tensile strength decreased. From Figure 1, it is clear that the tensile strength value of the gamma-treated film is higher than that of the untreated film. The highest tensile strength was found for the 30% HEMA containing gelatin film at a 250 krad dose and was 270% higher than that of the non-irradiated sample of the same composition; with respect to the pure gelatin film it was 23.3% higher.
Elongation at Break (% Eb)
The results of elongation at break (%) of the non-irradiated and irradiated films against total gamma radiation dose are plotted in Figure 2 for gelatin (G), 10% (B1), 20% (B2) and 30% (B3) HEMA containing gelatin films produced by blending. From Figure 2, it is observed that the percent elongation at break increases drastically for non-irradiated samples due to the incorporation of HEMA. It is also observed that for the non-irradiated films the highest elongation at break was 32% for the 20% HEMA containing gelatin film, which is 28.5% higher than that of the pure gelatin film. This may be caused by the plasticizing effect of HEMA on gelatin, which made the film comparatively softer. Elongation is an important mechanical property in the application of polymers. It is found that the value of elongation decreases with increasing radiation dose even though the tensile strength increased, which happened due to cross-linking. However, the elongation at break increased due to the incorporation of HEMA in comparison with that of pure gelatin. The gelatin film containing 30% HEMA at a 250 krad radiation dose, at which the highest tensile strength was found, was also found to have 5.6% elongation at break. This amount is higher than that of the pure gelatin film.
Glass Point
Graphs showing the amount of probe movement vs. temperature for glass point analysis are shown in Figures 3-6. The glass point plotted against the composition of HEMA in the gelatin films is shown in Figure 7. An almost linear trend of increasing glass point with increasing amount of HEMA blended with gelatin has been observed. The highest glass point observed was 86.4˚C for the gelatin/HEMA biofilm containing 30% HEMA. The change in molecular interaction and the different thermal response of the monomer and gelatin might have shifted the glass point of the film.
Coefficient of Thermal Expansion (CTE)
CTE is an important property as it indicates the topological and morphological changes of a material when subjected to a change in temperature. CTE for different compositions of gelatin/HEMA film is shown against temperature in Figure 8. In addition, the CTE of the control specimen and of gelatin/HEMA biofilms of different compositions at 40˚C and 80˚C have been plotted against the respective percentages of HEMA present in the films in Figure 8. A trend opposite to that of the glass point was observed, that is, with increasing amount of HEMA the CTE decreased almost linearly, and the increase in CTE at 80˚C was more rapid. Gelatin has a negative coefficient of thermal expansion while acrylic polymers have a positive CTE. So, the decreased contraction, in other words the less negative expansion, was quite expectable. The difference between the CTE at 40˚C and 80˚C was found to be lowest for the gelatin/HEMA biofilm containing 30% HEMA. Thus, the film containing 30% HEMA would undergo less physical distortion when exposed to higher temperature.
Conclusions
In this study, the physico-mechanical and thermo-mechanical properties of irradiated and non-irradiated pure gelatin and HEMA/gelatin films have been studied. Due to the incorporation of HEMA and gamma radiation, the tensile strength was found to be improved, and for the 30% HEMA/gelatin film irradiated at 250 krad it was 23% higher than that of the pure gelatin film. The elongation at break was also improved to 5.6%; for the 30% HEMA containing unirradiated film it was 32%. The thermo-mechanical properties have been drastically improved due to the HEMA content in the films. The glass point increased almost linearly with increasing HEMA content in the gelatin film. Gelatin showed contraction on heating, hence a negative thermal expansion was found. The coefficient of thermal expansion showed the opposite trend, as it was found to decrease with increasing HEMA content, and its change with temperature was comparatively smaller for films containing higher percentages of HEMA. This indicates improved thermal stability due to blending HEMA with gelatin.
Figure 8. CTE for different compositions of gelatin/HEMA film against temperature.
"Materials Science"
] |
Integrated investigation of DNA methylation, gene expression and immune cell population revealed immune cell infiltration associated with atherosclerotic plaque formation
The clinical consequences of atherosclerosis are a significant source of morbidity and mortality throughout the world, while the molecular mechanisms of the pathogenesis of atherosclerosis are largely unknown. In this study, we integrated the DNA methylation and gene expression data in atherosclerotic plaque samples to decipher the underlying association between epigenetic and transcriptional regulation. Immune cell classification was performed on the basis of the expression pattern of detected genes. Finally, we selected ten genes with dysregulated methylation and expression levels for RT-qPCR validation. The global DNA methylation profile showed obvious changes between normal aortic and atherosclerotic lesion tissues. We found that differentially methylated genes (DMGs) and differentially expressed genes (DEGs) were highly associated with atherosclerosis by being enriched in atherosclerotic plaque formation-related pathways, including cell adhesion and extracellular matrix organization. Immune cell fraction analysis revealed that a large number of immune cells, especially macrophages, activated mast cells, NK cells, and Tfh cells, were specifically enriched in the plaque. DEGs associated with immune cell fraction change showed that they were mainly related to the level of macrophages, monocytes, resting NK cells, activated CD4 memory T cells, and gamma delta T cells. These genes were highly enriched in multiple pathways of atherosclerotic plaque formation, including blood vessel remodeling, collagen fiber organization, cell adhesion, collagen catabolic process, extracellular matrix assembly, and platelet activation. We also validated the expression alteration of ten genes associated with infiltrating immune cells in atherosclerosis. In conclusion, these findings provide new evidence for understanding the mechanisms of atherosclerotic plaque formation, and provide a new and valuable research direction based on immune cell infiltration.
Introduction
Cardiovascular diseases are the most important threat tightly associated with life quality and health condition of all humans worldwide [1]. In most cases, the underlying cause of cardiovascular diseases is atherosclerosis, treated as the pathological basis of other cardiovascular diseases, including atherosclerotic cerebral infarction [2]. The pathogenesis of atherosclerosis is associated with a complex interplay of endothelial dysfunction [3], lipid accumulation [4], inflammation [5], vascular smooth muscle cell proliferation [6], matrix turnover, calcification [7], and other complex interactions representing the dynamic process from fat streaks to stable or unstable atherosclerotic plaques [8]. A cellular biology study demonstrated that atherogenic processes in multiple cell types were activated to induce atherosclerosis [9]. One of the key causes of atherosclerosis is the dysregulation of immune response and inflammation in the artery wall with the activation of T helper cells [10,11]. Extensively understanding the underlying mechanisms could greatly help researchers and medical staff overcome atherosclerosis.
In the process of atherosclerosis, the inflammatory response is accompanied by an increase of many proinflammatory factors, including MCP1, interferon-gamma (IFN-γ), IL-8, VCAM1 and TNF [10,12]. Among them, the oxidized low-density lipoprotein (ox-LDL)-induced monocyte/macrophage inflammatory response is a key event in the pathogenesis of atherosclerosis [13,14]. An important factor is apolipoprotein E (ApoE), which could be treated as a therapeutic target by promoting clearance of lipoproteins and normalization of serum cholesterol levels in mice [15]. ApoE deficiency can lead to the accumulation of sphingomyelin-rich residues and induce macrophages to accumulate more cholesterol [16]. Recent studies have reported the relationship between abnormal DNA methylation and atherosclerosis [17,18], and found that promoter methylation of the ApoE and miRNA-223 genes is significantly associated with atherosclerotic cerebral infarction (ACI) [19,20], indicating that epigenetic regulation affected by the environment plays an important role in the pathogenesis of ACI. In atherosclerosis, macrophages and monocytes are exposed to inflammatory cytokines, oxidized lipids, cholesterol, and other factors. These factors could cause specific transcription reactions and interact with each other, resulting in transcriptional and epigenetic heterogeneity of macrophages in plaques [21].
Some innate immune cells play important roles in different stages of atherosclerotic development, but macrophages are the main type of innate immune effector cells in plaques. T cells are involved in the regulation of the plaque development [10]. The process of atherosclerosis is accompanied by significant changes of the immune cell infiltration [22]. In the early stage of atherosclerosis, macrophages, T cells and dendritic cells are recruited into the adventitia and surrounding vascular system [23]; in the late stage, the inflammation of adipose tissue will continue to increase, and the content of macrophages and B cells will also further increase [24]. Blood DNA methylation biomarkers have important application value in diagnosis, prediction, prognosis and treatment. In chronic inflammatory diseases, methylation module represents an immune component, and its specific performance is related to the changes in immune cell infiltration and distribution. Immunomethylation markers can be used as biomarkers of such diseases [25].
To further study the transcription outcome of DNA methylation influence, we performed an integration analysis based on the previously reported differential DNA methylation gene data between carotid atherosclerotic plaque and normal artery (GSE46401) in patients with atherosclerosis [26], and the differential expression gene data between carotid atherosclerotic plaque and peripheral blood monocytes (PBMCs) (GSE21545) [27]. We analyzed the abnormal gene expression level and DNA methylation (DNAm) level in atherosclerotic plaque or PBMC samples, then we validated the expression changes using the RT-qPCR experiment. Finally, we further studied the correlation between DNA methylation-related differentially expressed genes and different cell type changes, which could provide a potential link between DNA methylation, gene expression, and cell types in atherosclerotic plaque.
DNA methylation (DNAm) analysis
DNA methylation microarray data was downloaded from the NCBI GEO database (GSE46401) [26]. A high-density (485,577 CpG sites) DNA methylation microarray (Infinium HumanMethylation450 BeadChip) was utilized to identify specific loci of differential DNA methylation with a set of donor-matched aortic samples, including 19 stable and advanced atherosclerotic carotid samples (carotid), 15 atherosclerotic lesion samples (A), and 15 matched normal aortic tissue samples (N). Quality control, data normalization, and statistical filtering procedures were performed according to the published paper [26]. The methylation levels of detected probes that were associated with genes were used to perform differential methylation statistical analysis between the 15 A and 15 N samples (paired t-test, Bonferroni-corrected p value < 1 × 10^-7). Genes with differentially methylated probes were used to perform functional enrichment analysis.
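A simplified sketch of this per-probe test is given below; the original study used array-specific processing pipelines, whereas this toy version only illustrates the paired test between donor-matched lesion (A) and normal (N) beta values with the fixed significance cut-off quoted above. The simulated beta-value matrix is hypothetical.

```python
import numpy as np
from scipy import stats

def differentially_methylated_probes(beta_lesion, beta_normal, p_cut=1e-7):
    """Per-probe paired t-test between donor-matched A and N samples.
    beta_* : arrays of shape (n_probes, n_pairs) with methylation beta values."""
    _, p = stats.ttest_rel(beta_lesion, beta_normal, axis=1)
    return np.where(p < p_cut)[0], p

# Toy data: 1000 probes x 15 matched pairs, with the first 10 probes hypermethylated.
rng = np.random.default_rng(1)
normal = rng.beta(2, 5, size=(1000, 15))
lesion = np.clip(normal + rng.normal(0, 0.02, size=(1000, 15)), 0, 1)
lesion[:10] = np.clip(lesion[:10] + 0.25, 0, 1)
hits, _ = differentially_methylated_probes(lesion, normal)
print(len(hits), "probes below the cut-off")
```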
Transcriptome analysis
In this project, we downloaded the transcriptome microarray datasets GSE43292 (34 atheroma plaques (ATH) and 34 macroscopically intact tissues) [28] and GSE21545 (126 carotid plaques in patients with atherosclerosis vs. 98 peripheral blood mononuclear cell (PBMC) samples, including 97 paired samples), both based on Affymetrix HG-U133 Plus 2.0 oligonucleotide arrays [27]. Gene expression profiles were obtained from the Gene Expression Omnibus database (https://www.ncbi.nlm.nih.gov/geo). Raw data processing, quality control, data normalization and filtering were done according to the published paper [27]. The microarray probes were transformed into gene symbols according to the annotation. If several probes were mapped to one gene symbol, the mean density of these probes was set as the final expression value of this gene. We also used the limma package [29] to consider the age covariate (the detailed gender of each sample was not provided in the published paper). We found that the differentially expressed genes were the same, indicating the small contribution of the age covariate. Thus, we used online GEO2R with default parameters (https://www.ncbi.nlm.nih.gov/geo/geo2r/) to compare the two groups in order to identify genes that were differentially expressed under the experimental conditions. Two thresholds, an adjusted p value < 0.05 and |log2 fold change (FC)| ≥ 1, were set as the cut-off criteria. We then analyzed the differentially expressed genes (DEGs) by principal component analysis and functional enrichment analysis.
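The threshold step can be mimicked with a few lines of code. The original comparison was run in GEO2R/limma; the sketch below is a simplified stand-in that applies a per-gene t-test with Benjamini-Hochberg adjustment (assumed here to approximate the "adjusted p value" used above) and the |log2FC| >= 1 cut-off to log2-scale expression matrices. The simulated matrices are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def simple_deg_filter(expr_plaque, expr_pbmc, fc_cut=1.0, p_cut=0.05):
    """Genes x samples matrices (log2 scale): return gene indices passing both cut-offs."""
    log2fc = expr_plaque.mean(axis=1) - expr_pbmc.mean(axis=1)
    _, p = stats.ttest_ind(expr_plaque, expr_pbmc, axis=1)
    p_adj = multipletests(p, method="fdr_bh")[1]
    return np.where((np.abs(log2fc) >= fc_cut) & (p_adj < p_cut))[0]

rng = np.random.default_rng(0)
plaque = rng.normal(8.0, 1.0, size=(5000, 126))
pbmc = rng.normal(8.0, 1.0, size=(5000, 98))
plaque[:50] += 2.0                                   # simulated up-regulation in plaque
print(len(simple_deg_filter(plaque, pbmc)), "DEGs in the toy data")
```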
In this project, we analyzed the association between the DEGs and DMGs obtained from the two studies and identified the gene expression changes related to DNA methylation. DEGs and DMGs were overlapped to identify genes co-regulated at both the DNA methylation and transcriptional levels. The DEGs were classified into two classes: DNA-methylated (with a DMG) and DNA non-methylated (without a DMG).
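The classification itself reduces to a set intersection; a sketch assuming the DEGs and DMGs are available as sets of gene symbols:

    def classify_degs(deg_symbols, dmg_symbols):
        """Split DEGs into DNA-methylated and non-methylated classes."""
        methylated = set(deg_symbols) & set(dmg_symbols)      # DEGs that are also DMGs
        non_methylated = set(deg_symbols) - set(dmg_symbols)  # DEGs without a methylation change
        return methylated, non_methylated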
Cell-type quantification
Atherosclerosis is a chronic inflammatory disease with dysregulated fractions and functions of immune cells [10], so it is important to decipher the changes in immune cell fractions in carotid plaques versus normal samples. Based on all detected genes from the GSE21545 transcriptome microarray data, the types of immune cells in each sample group were analyzed. The R package immunedeconv [30], which provides a unified interface to seven deconvolution methods, was used for estimating immune cell fractions, and the CIBERSORT method [31] was applied in this study. CIBERSORT is the most widely used deconvolution algorithm; it characterizes the cell composition of complex tissues from their gene expression profiles, and its results have been shown to correlate well with flow cytometric analysis. We also tested two other tools, ImmuCellAI [32] and EPIC [33], but they showed cell fraction bias or covered fewer cell types than CIBERSORT. With default parameters, CIBERSORT was therefore adopted to estimate immune cell fractions using the expression values of all expressed genes. A total of 22 human immune cell phenotypes can be deconvolved by CIBERSORT, including 7 T cell types [CD8 T cells, naïve CD4 T cells, resting memory CD4 T cells, activated memory CD4 T cells, T follicular helper cells, gamma delta T cells, and regulatory T cells (Tregs)]; naïve and memory B cells; plasma cells; resting and activated NK cells; monocytes; M0, M1, and M2 macrophages; resting and activated dendritic cells; resting and activated mast cells; eosinophils; and neutrophils.
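For orientation only, the core idea of reference-based deconvolution can be sketched as a constrained regression of a bulk profile against a cell-type signature matrix; the toy example below uses non-negative least squares and is not the actual CIBERSORT algorithm (which fits ν-support vector regression against the LM22 signature):

    import numpy as np
    from scipy.optimize import nnls

    def estimate_fractions(bulk_expr, signature):
        """Toy deconvolution: bulk ≈ signature @ weights, with weights >= 0.

        bulk_expr : expression vector over the signature genes for one sample
        signature : genes x cell-types matrix (e.g. 22 immune phenotypes)
        """
        weights, _ = nnls(signature, bulk_expr)
        total = weights.sum()
        return weights / total if total > 0 else weights  # fractions summing to 1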
RT-qPCR experiment
To further validate the immune cell type changes, which should be reflected by marker gene expression changes, we performed an RT-qPCR experiment to examine the deregulated gene expression levels. We extracted PBMCs from 15 atherosclerosis patients and 15 normal controls from The First Affiliated Hospital of University of Science and Technology of China and tested the expression levels of 10 selected genes. The study was approved by the ethics committee of The First Affiliated Hospital of University of Science and Technology of China (2021KY131). Informed consent was obtained from all subjects and/or their legal guardian(s), and all methods were employed in accordance with the relevant guidelines and regulations. Clinical information of these patients and volunteers is provided in Additional file 2: Table S1. We strictly followed the standard biosecurity and institutional safety procedures of our country (Biosecurity Law of the People's Republic of China). All blood samples were processed immediately after collection for the isolation of peripheral blood mononuclear cells (PBMCs). The PBMCs were extracted according to the previously described method [34] and stored at − 80 °C before RNA extraction.
First, total RNA was extracted from PBMCs using TRIzol reagent (Invitrogen) according to the manufacturer's instructions. RNA integrity was assessed by 1.5% agarose gel electrophoresis, and RNA was quantified with a spectrophotometer. Then, 10 μg of purified RNA was reverse-transcribed into complementary DNA with the PrimeScript RT reagent kit (Takara). Subsequently, qRT-PCR was conducted using TB Green Fast qPCR Mix (Takara) and specific primers (Additional file 3: Table S2) under the following amplification conditions: denaturation at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 10 s and annealing and extension at 60 °C for 30 s. Relative gene expression was determined with the 2^−ΔΔCT method and normalized against U6 RNA. The Mann-Whitney U test was used to determine expression differences between the atherosclerosis and control groups. Statistical analyses were carried out using GraphPad Prism software [35]. All P values are two-sided, and P < 0.05 was considered statistically significant.
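As a minimal sketch of the quantification step (assuming Ct arrays for the target gene and U6 in both groups; not the original analysis script):

    import numpy as np
    from scipy.stats import mannwhitneyu

    def relative_expression(ct_gene, ct_u6, ct_gene_ctrl, ct_u6_ctrl):
        """2^-ΔΔCt per sample, normalized to U6 and referenced to the control-group mean ΔCt."""
        delta_ct = np.asarray(ct_gene) - np.asarray(ct_u6)
        delta_ct_ctrl = np.asarray(ct_gene_ctrl) - np.asarray(ct_u6_ctrl)
        return 2.0 ** -(delta_ct - delta_ct_ctrl.mean())

    # two-sided group comparison of the resulting fold changes:
    # stat, p = mannwhitneyu(fold_patients, fold_controls, alternative="two-sided")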
Functional enrichment analysis
Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were identified using the KOBAS 2.0 server to obtain a comprehensive set of functional annotations for large gene lists. The hypergeometric test was used to assess the enrichment of each term, and the Benjamini-Hochberg FDR-controlling procedure was applied for multiple testing correction. Reactome pathway profiling (http://reactome.org) was also used for the functional enrichment analysis of the selected gene sets. A p value < 0.005 was set as the cutoff criterion.
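For illustration, the term-level test and FDR adjustment could be sketched as follows (assuming gene counts for the genome background, the annotation term, the selected gene list, and their overlap):

    from scipy.stats import hypergeom
    from statsmodels.stats.multitest import multipletests

    def term_enrichment_p(n_background, n_term, n_selected, n_overlap):
        """P(overlap >= observed) under the hypergeometric null for one term."""
        return hypergeom.sf(n_overlap - 1, n_background, n_term, n_selected)

    def bh_adjust(pvalues):
        """Benjamini-Hochberg FDR adjustment across all tested terms."""
        return multipletests(pvalues, method="fdr_bh")[1]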
Other statistical analysis
Principal component analysis (PCA) was performed with the R package factoextra (https://cloud.r-project.org/package=factoextra) to show the clustering of samples along the first two components for both the DNA methylation and transcriptome microarray datasets. After normalizing the density values of each gene/probe in samples, an in-house script (sogen) was used for visualization of next-generation sequence data and genomic annotations. The pheatmap package (https://cran.r-project.org/web/packages/pheatmap/index.html) in R was used to perform clustering based on Euclidean distance. Student's t-test was used for comparisons between two groups.
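An equivalent two-component projection, sketched in Python rather than the factoextra/R workflow actually used:

    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def first_two_components(matrix):
        """matrix: samples x features; returns 2-D coordinates and explained variance ratios."""
        pca = PCA(n_components=2)
        coords = pca.fit_transform(StandardScaler().fit_transform(matrix))
        return coords, pca.explained_variance_ratio_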
Analysis of the hypermethylated genes previously identified in atherosclerotic aortas and carotid plaques
To further interpret the underlying molecular mechanisms in atherosclerosis, we downloaded the DNA methylation microarray data associated with atherosclerosis [26], containing 19 stable and advanced atherosclerotic carotid samples (carotid), 15 atherosclerotic lesion samples (A), and 15 matched normal aortic tissue samples (N). In the referenced study, the 19 stable and advanced atherosclerotic carotid samples were used to validate that the differentially methylated CpGs (dmCpGs) did not reflect regional epigenetic differences or batch effects, and the results showed very high consistency (98% of dmCpGs) [26]. We therefore included the 19 carotid samples in our analysis to examine the methylation levels of the dmCpGs identified from the 15 paired samples. We then identified the differentially methylated genes (DMGs) between the 15 A and 15 N samples (Fig. 1A, Additional file 4: Table S3). After obtaining the DMGs, we performed principal component analysis (PCA) to explore the methylation pattern among the three groups (Fig. 1B). The top two components explained 41.7% of the total variation, and the first component explained 31.8%. The three groups could be separated along the first component (Fig. 1B), indicating clear differential methylation among these three groups. We then performed functional enrichment analysis for these DMGs. Gene Ontology (GO) analysis revealed that the top ten enriched biological processes (BPs) included cell adhesion, blood coagulation, axon guidance, signal transduction, and extracellular matrix organization (Fig. 1C). We extracted the detailed methylation levels of genes from the cell adhesion and blood coagulation pathways. Most of these genes showed increased methylation levels in carotid samples, and the methylation levels of these DMGs increased or decreased gradually from normal tissue to advanced atherosclerotic lesions (Fig. 1D). KEGG pathway analysis also demonstrated that the focal adhesion and ECM-receptor interaction pathways were among the most significantly enriched (Additional file 1: Fig. S1A). Reactome analysis was carried out to further explore DMG functions. Translocation of ZAP-70 to the immunological synapse, phosphorylation of CD3 and TCR zeta chains, and PD-1 signaling, all related to the immune response, were the top three enriched pathways (Additional file 1: Fig. S1B). These results suggest that ECM- and immune response-related pathways may underlie the changes in collagen and fibrin content in carotid atherosclerotic plaques.
Transcriptome analysis of deregulated gene expression in atherosclerotic carotid plaques
The DNA methylation level of CpG islands in the promoter regions of genes is tightly associated with their transcriptional level. To uncover how the DMGs were expressed between atherosclerotic carotid plaques and normal samples, we downloaded two expression profiling datasets, GSE43292 (34 atheroma plaque (ATH) and 34 macroscopically intact tissue (MIT) samples) [28] and GSE21545 (126 carotid plaques in patients with atherosclerosis vs. 98 peripheral blood mononuclear cell (PBMC) samples, including 97 paired samples) [27]. After normalizing the expression levels, the PCA results showed that the plaque samples were clearly separated from the PBMC samples along the first component (Fig. 2A), while the ATH and MIT samples were not clearly separated (Fig. 2B). We then performed differentially expressed gene (DEG) analysis for these two datasets and obtained 1551 up-regulated and 1158 down-regulated DEGs in the plaque vs. PBMC comparison, as well as 512 up-regulated and 358 down-regulated DEGs in the ATH vs. MIT comparison. Heatmap analysis of the DEGs in the plaque vs. PBMC comparison revealed a distinct expression pattern between plaque and PBMC samples (Fig. 2C), while several ATH and MIT samples were not clearly separated (Additional file 1: Fig. S2A). We then analyzed the functions of the DEGs. The down-regulated DEGs in plaque samples were mainly enriched in immune response-related terms, including innate immune response, T cell receptor signaling pathway, and immune response (Fig. 2D). The up-regulated DEGs in plaque samples were mainly enriched in ECM-related terms, including collagen catabolic process, extracellular matrix disassembly, cell adhesion, and angiogenesis (Fig. 2E). KEGG enrichment analysis for up- and down-regulated DEGs showed similar results (Additional file 1: Fig. S2B-C). Meanwhile, the functions of DEGs from the ATH vs. MIT comparison showed the reverse pattern, with immune response terms enriched in up-regulated DEGs (Fig. 2F) and ECM-related terms enriched in down-regulated DEGs (Fig. 2G). We observed that the focal adhesion and ECM-receptor interaction pathways were also enriched among the DMGs (Additional file 1: Fig. S1A). We then analyzed the expression pattern of genes from these two pathways and found that half of them were consistently elevated in plaque samples, while the other half also showed higher expression in several plaque samples (Additional file 1: Fig. S2D-E), suggesting that DNA methylation alterations could influence gene expression levels.
Analysis of the dynamics of immune cell population in atherosclerotic carotid plaques and PBMCs
Clinical samples often show more diversity than cell line samples because of their heterogeneity, with multiple cell types present. It is therefore important to decipher the main cell types, especially immune cells, and the relative changes in their percentages in atherosclerotic carotid plaque samples. We used the CIBERSORT software [31] to estimate the relative fractions of immune cells from the expression profiles of plaque tissues and PBMCs. Excluding uncharacterized cells, a total of 22 immune cell types were identified. Fraction analysis of each cell type showed a dramatic difference between plaque and PBMC samples (Fig. 3A). Macrophages and activated mast cells were dominant in plaque samples, while T cells, monocytes, and resting NK cells contributed a high fraction in PBMC samples (Fig. 3A). We also observed that gamma delta T cells showed a high fraction (> 0.1) in both plaque and PBMC samples (Fig. 3A). We then performed PCA to estimate the influence of immune cell fractions on sample distribution. The result showed that immune cell fractions were able to distinguish plaque samples from PBMC samples (Fig. 3B), confirming the distinct cell type differences between these two groups. We estimated the relative fraction difference of each cell type by calculating the log2 fold change (log2FC) and p-value in plaque samples vs. PBMCs. Although present at a low fraction, resting NK cells showed the highest absolute fold change among the PBMC-enriched cell types (Fig. 3C). Various CD4+ T cell types, including naïve, activated, and resting, as well as activated dendritic cells and monocytes, were significantly enriched in PBMC samples (Fig. 3C), consistent with the natural composition of PBMCs [36]. The three types of macrophages (M0, M1, and M2) were dominantly enriched in plaques with highly significant p values and fold changes. Other cell types also showed significant differences between plaque and PBMC samples (Fig. 3C). We then examined the detailed fractions of each immune cell type enriched in atherosclerotic plaque samples or PBMCs. Macrophages and several T cell types showed a high fraction in plaque and PBMC samples, respectively (Additional file 1: Fig. S3A-B). Apart from the cell types with fractions above 0.1, we observed that resting and activated mast cells were specifically enriched in PBMC and plaque samples, respectively (Fig. 3D). This fraction shift between PBMC and plaque samples for the same cell type in different cellular states was also observed for natural killer (NK) cells, which showed an elevated fraction of the activated state and a decreased fraction of the resting state in plaque samples (Fig. 3E). For dendritic cells (DCs), only activated DCs were enriched in PBMCs, and resting DCs showed no difference between PBMC and plaque samples (Fig. 3F). We also observed that T follicular helper (Tfh) cells showed a higher fraction in plaques (Fig. 3G), while other T cells were enriched in PBMC samples (Additional file 1: Fig. S3B). Eosinophils, which have immunomodulatory and homeostasis-promoting functions [37], showed a higher fraction in PBMCs (Fig. 3H). Other cell types, including naïve and memory B cells, neutrophils and plasma cells, showed very low fractions and small differences between plaque and PBMC samples (Additional file 1: Fig. S3C). These results demonstrate that immune cell fractions were greatly altered in atherosclerotic carotid plaques, suggesting that the cell types enriched in carotid plaques might modulate plaque progression.
Integrated analysis of deregulated DNA methylation, gene expression and immune cell population
To determine how DNA methylation influences gene expression, we performed an integration analysis between the DEGs and DMGs obtained from the two published datasets. The results showed that 224 DEGs in atherosclerotic carotid plaque samples also had DNA methylation changes in their promoter regions, accounting for 26% of all DMGs (Fig. 4A, p value = 7.73e-70, hypergeometric test). Functional analysis of the 224 overlapping genes revealed that they were highly associated with ECM organization, cell adhesion, and focal adhesion-related pathways (Additional file 1: Fig. S4A-B), suggesting that the ECM is dysregulated in plaques through modulation of the DNA methylation levels of related genes. We classified these DEGs into immune cell types according to their prior classification in the immunedeconv package [30], and then conducted a correlation analysis between the expression levels of the DEGs (with or without DMGs) and the cell population percentages in plaque and PBMC samples. The cell type populations and the numbers of their co-expressed DEGs are shown in Fig. 4B. We found that most of the non-methylated DEGs were highly correlated with macrophages, monocytes, and activated CD4+ memory T cells (Fig. 4B). Resting NK cells were also correlated with 1132 non-methylated DEGs and 47 methylated DEGs. Meanwhile, gamma delta T cells were correlated with 151 non-methylated DEGs and 4 methylated DEGs, ranking second among T cells (Fig. 4B). We then constructed the relationship between cell types and the functions of their correlated DEGs, with or without methylation changes, after classifying them into immune cell types. For DEGs without DNA methylation changes, a heatmap of the enriched GO terms showed that immune response and T cell stimulation terms were specifically and positively correlated with gamma delta T cells, whereas ECM-related terms were positively correlated with macrophages, resting NK cells, activated CD4+ memory T cells and monocytes (Fig. 4C). KEGG analysis of DEGs without DNA methylation changes showed that these highly correlated cell types had similar enriched pathways (Additional file 1: Fig. S4C). Reactome and KEGG pathway analyses of DEGs with DNA methylation changes showed that gamma delta T cells were also positively correlated with immune system-related pathways (Fig. 4D). Other immune cell types, including macrophages, resting NK cells, monocytes, and activated CD4 memory T cells, were positively correlated with ECM-related pathways (Fig. 4D and Additional file 1: Fig. S4D). We then performed a correlation analysis between cell infiltration and the expression of immune response or ECM organization genes in the datasets (absolute correlation coefficient > 0.8 and p value < 0.01, Additional file 5: Table S4). Strikingly, most of the genes from ECM organization were positively correlated with the three types of macrophages and negatively correlated with monocytes, resting NK cells, and activated CD4 memory T cells (Fig. 4E). We also checked genes from the focal adhesion pathway and found that they showed a correlation pattern similar to that of the ECM organization genes (Additional file 1: Fig. S4E). Meanwhile, genes from the immune response pathway showed the opposite correlation pattern with immune cell types compared with genes from ECM organization (Fig. 4F).
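The final filtering step can be sketched as a simple all-pairs Pearson screen (assuming per-sample expression values and estimated cell fractions are stored as dictionaries of equal-length arrays; not the original code):

    from scipy.stats import pearsonr

    def correlated_pairs(expr_by_gene, fraction_by_cell, r_cut=0.8, p_cut=0.01):
        """Return (gene, cell type, r, p) for pairs with |r| > 0.8 and p < 0.01."""
        hits = []
        for gene, expr in expr_by_gene.items():
            for cell, frac in fraction_by_cell.items():
                r, p = pearsonr(expr, frac)
                if abs(r) > r_cut and p < p_cut:
                    hits.append((gene, cell, r, p))
        return hits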
Verification of genes deregulated at both expression and DNA methylation levels in atherosclerotic clinical samples
To further validate the relationship between cell composition and gene expression in atherosclerosis, we conducted an RT-qPCR experiment for several genes. We selected ten genes that were both differentially methylated and differentially expressed in atherosclerotic carotid plaques, including COL1A1, THBS2, RGS5, PRKCB, MYH10, FGF2, WNT2B, ETS1, CD8A, and EGFR, to explore their expression changes in atherosclerosis patients. Because plaque samples could not be obtained within the available resources and time, we extracted PBMCs from 15 patients and 15 control individuals (see Methods for details). Immune cell correlation analysis revealed that these genes were associated with macrophages, monocytes, resting NK cells, activated CD4 memory T cells, and gamma delta T cells (Additional file 6: Table S5). Functional annotation showed that these genes were associated with ECM disassembly/organization (COL1A1, FGF2), cell adhesion (THBS2, MYH10), signal transduction (RGS5), blood coagulation (PRKCB), cell fate commitment (WNT2B), and immune response (ETS1). Box plot analysis of the 10 genes showed that 7 genes were down-regulated and 3 were up-regulated in atherosclerotic PBMCs (Fig. 5A-B). In this study, PBMCs were extracted from 15 atherosclerosis patients and 15 normal control individuals. Using RT-qPCR, we found that all 10 genes showed significant differences between atherosclerosis patients and normal controls (Fig. 5C-D). COL1A1, THBS2, RGS5, FGF2, WNT2B, and EGFR were down-regulated in atherosclerotic PBMCs, while PRKCB, MYH10, ETS1, and CD8A were up-regulated (Fig. 5B-C). These results revealed that, except for MYH10, the direction of change of the other nine genes was consistent between the normal PBMC vs. plaque comparison and the normal PBMC vs. atherosclerotic PBMC comparison. Note that the gene expression array experiment compared carotid plaques with PBMCs from atherosclerotic patients, whereas the RT-qPCR experiment compared PBMCs from atherosclerotic patients with those from control individuals. These nine genes showed a large expression variation in PBMCs from atherosclerotic patients, suggesting that the differentially expressed genes in PBMCs of atherosclerotic patients may play important roles in plaque formation. Our study highlights the regulatory roles of key genes associated with infiltrating immune cells in atherosclerosis.
Discussion
At the molecular level, the pathogenesis of atherosclerosis is associated with multiple factors, and transcriptional and epigenetic regulation of macrophages is a major driver of the disease [21]. In this study, we integrated DNA methylation and expression profiles from atherosclerosis patients and control individuals to decipher how DNA methylation modulates the progression of atherosclerosis by regulating the transcription of genes involved in the disease, and to investigate the composition of immune cell types in atherosclerosis. We found that the DNA methylation profile showed a distinct pattern among normal aortic, atherosclerotic aortic, and atherosclerotic carotid plaque samples. The interaction analysis of DMGs and DEGs demonstrated that hundreds of genes had expression changes that may be caused by DNA methylation regulation at the promoter region, and that these genes were tightly associated with atherosclerosis. We explored the immune cell fraction changes in atherosclerotic carotid plaque samples and PBMCs and found that plaque samples showed a distinct immune cell fraction distribution. Several activated immune cell types, including NK cells and mast cells, were specifically enriched in plaques. Further analysis revealed that these specifically enriched cell types were highly correlated with immune response- and ECM organization-related pathways associated with the formation and progression of plaques [38]. Taken together, our results highlight the important roles of DNA methylation in gene expression changes and suggest that specific immune cell types may function during atherosclerosis development and progression. DNA methylation alteration is one of the most important modes of epigenetic regulation, and several studies have demonstrated global DNA methylation changes between atherosclerosis patients and normal individuals [26,39]. A recent review suggested that targeting the epigenetic landscape of plaque macrophages could be a powerful therapeutic tool to modulate pro-atherogenic phenotypes and reduce the rate of plaque formation [21]. Correlation analysis between DNA methylation drift and histological grade showed that hypermethylation was associated with lesion progression [40]. CD14+ blood monocyte transcriptome and epigenome signatures suggest that ARID5B expression, possibly regulated by an epigenetically controlled enhancer, promotes atherosclerosis by dysregulating immunometabolism towards a chronic inflammatory phenotype [41]. DMRs in the promoter regions of BRCA1 and CRISP2 were consistently associated with subclinical atherosclerosis measures, suggesting their potential as blood surrogate markers for early risk stratification [42]. We found that the global DNA methylation profile showed a distinct pattern between atherosclerotic lesions and donor-matched normal samples. Adhesion junction and blood coagulation were the most enriched pathways among the DMGs. Cellular adhesion molecules are the dominant mediators recruiting inflammatory cells to the vascular endothelium [43], while blood coagulation is an essential determinant of the risk of atherothrombotic complications [44]. DNA methylation changes around the promoter regions of these genes could trigger subsequent transcriptional and post-transcriptional alterations. These results demonstrate that DNA methylation could regulate atherosclerosis by modulating the status of CpG islands of associated genes.
Immune cell infiltration is a prominent feature of adipose tissue inflammation, which leads to vascular remodeling and contributes to vascular disease, atherosclerosis, and plaque instability [45]. Several studies have shown that immune cell types have different DNA methylation patterns in many diseases, including multiple sclerosis [46], type 1 diabetes [47], and metastatic melanoma [48]. By classifying the expressed genes into immune cell types, we found that the fractions of immune cells changed significantly between plaque tissues and PBMCs. Higher fractions of macrophages, including M0, M1 and M2, were observed in plaque tissues. It has been demonstrated that dysregulation of macrophage phenotypes, including their transcriptional and epigenetic heterogeneity, is a major driver of atherosclerosis [21]. Our data suggest that macrophages are dysregulated in atherosclerotic plaque tissues and participate in the inflammatory progression of atherosclerosis through the accumulation of their fraction. A protein-gene-associated multi-omics model comparing low- and high-risk lesion segments revealed correlations with Arg1+ macrophage content and αSMA− PDGFRα+ fibroblast-like cell content [49]. We also found that M2 (Arg1+) macrophages showed a higher fraction in plaques (Fig. 3A), although fibroblast-like cells could not be fully identified. Gene set over-representation analysis pointed to a clear cardiovascular disease signature, including extracellular matrix synthesis and organization, and focal adhesion [49], which were also observed among the enriched pathways of our DEGs. Meanwhile, T follicular helper (Tfh) cells were also specifically enriched in plaque samples. Tfh cells have been shown to play important roles in many diseases in the decade since their identification [50]. One study demonstrated their pro-atherogenic roles in a Bcl6 mouse model and detected them in the plasma of human subjects with coronary artery disease [51]. The high fraction of Tfh cells in plaques in our study suggests their regulatory roles during plaque formation. Other T cells were mostly enriched in the PBMC samples of atherosclerosis patients (Additional file 1: Fig. S3B). We also discovered that activated mast cells and natural killer (NK) cells were enriched in plaque samples, while their resting counterparts were enriched in PBMC samples. It has been reported that mast cells accumulate in human atherosclerotic lesions [52] and can promote atherosclerosis by releasing proinflammatory cytokines [53]. NK cells can induce an immune response and participate in the pathogenesis and progression of atherosclerosis [54]. The infiltration of immune cells from the blood into the vessel wall is closely associated with the progression and prognosis of atherosclerosis [55]. The state transition of these two cell types suggests that they infiltrate into plaques and are activated there by other factors, a process that needs to be clarified by further studies. Collectively, our study demonstrates that PBMC and plaque tissues have very distinct immune cell fractions and that these population changes are phenotypes of atherosclerosis associated with complex plaques that may be related to clinical events. Recent single-cell studies have also demonstrated the variety of cell types in plaque samples of AS patients, extending our understanding of immune cell infiltration during AS development [56,57].
One limitation of this study is that the estimated immune cell fractions are restricted to a fixed set of immune cell types; other cell types could not be considered owing to technical limitations. It would therefore be very helpful to explore immune cell variation in plaques in greater depth using single-cell technology in future studies.
We then integrated the DEGs and DMGs to further identify the expression outcomes of DNA methylation of the associated genes. Of the 859 DMGs, 224 showed significant expression changes in plaque tissues. Functional analysis of these overlapping genes demonstrated that they were highly related to cell adhesion and ECM organization, suggesting that these genes participate in the progression of atherosclerosis by altering the ECM structure of plaque tissues. The connection between immune cell fractions and biological functions was analyzed on the basis of the gene expression data. Several immune cell types were found to be specifically associated with ECM-related or immune response-related pathways. Gamma delta (γδ) T cells were positively correlated with immune response pathways, while macrophages, resting NK cells, monocytes, and activated CD4 memory T cells were positively correlated with ECM-related pathways. In the Multi-Ethnic Study of Atherosclerosis (MESA), γδ T cells were associated with systolic blood pressure [58]. However, it has also been reported, based on TCRδ-knockout ApoE−/− mice, that γδ T cells do not contribute to early atherosclerotic plaque development [59]. These results indicate that the functions of γδ T cells in atherosclerosis are not fully understood and need to be deciphered in further studies. Among the immune cell types correlating with ECM-related pathways, macrophages were enriched in plaques, while resting NK cells, monocytes, and activated CD4 memory T cells were enriched in PBMC samples (Fig. 3 and Additional file 1: Fig. S3). The association between these immune cells and atherosclerosis has been discussed above. These cells show a positive correlation with genes involved in the ECM, which is composed of various macromolecules and plays important roles during the development of atherosclerotic plaques [60,61]. We then selected 10 genes that were correlated with the immune cell types shown in Fig. 4C-D to investigate their expression in PBMC samples. These genes included WNT2B, COL1A1, EGFR, CD8A, and ETS1 and have strong biological implications that can be linked to WNT and EGFR signaling (WNT2B and EGFR), collagen production (COL1A1 and ETS1) and immune cells (CD8A). The validation of these genes suggests that their expression is highly regulated in PBMCs between AS patients and normal samples. Interestingly, they all showed significant expression changes in atherosclerotic PBMC samples versus normal samples, and their direction of change between normal PBMC and plaque samples was consistent with that between normal and atherosclerotic PBMC samples. One explanation of this phenomenon is that immune cells expressing these genes in the blood infiltrate into the vessel wall and trigger the formation of plaques, resulting in a reduction of these cells in the atherosclerotic PBMC samples. Several recently published studies have characterized the profile of immune cell infiltration and the potential regulatory genes in the progression of atherosclerosis [62-64], suggesting that the genes identified in this study may also play important roles in immune cell infiltration and plaque development. Further investigations into the molecular mechanisms of these genes in atherosclerosis could greatly help us understand the pathogenesis of plaque formation.
In summary, we performed a comprehensive analysis to explore DNA methylation and the transcriptome profile changes it regulates in atherosclerotic plaques. The high correlation between DMGs and DEGs revealed their potential regulatory roles and functions in immune cell infiltration. Meanwhile, we systematically investigated the immune cell alterations in atherosclerotic plaque samples and identified several immune cell types tightly associated with plaque formation and development. Our study highlights the dysregulated methylation and expression levels of key genes associated with infiltrating immune cells in atherosclerosis, extending our understanding of immune cell infiltration and its potential underlying mechanisms during atherosclerosis pathogenesis or development. | 8,211 | 2022-05-09T00:00:00.000 | [
"Biology"
] |
Graph-Based Representation Of Syntactic Structures Of Natural Languages Based On Dependency Relations
Deep learning approaches to natural language processing based on probability distributions have achieved significant results. However, natural languages have inherent linguistic structures rather than probabilistic distributions. This paper presents a new graph-based representation of syntactic structures, called the syntactic knowledge graph, based on dependency relations. It investigates the valency theory and the markedness principle of natural languages to derive an appropriate set of dependency relations for the syntactic knowledge graph, and proposes a new set of dependency relations derived from the markers. The paper also demonstrates the representation of various linguistic structures to validate the feasibility of syntactic knowledge graphs.
Introduction
Linguistic intelligence is one of the ultimate goals of Natural Language Processing (NLP) in Artificial Intelligence (AI). For several decades, a considerable amount of research on language modeling and syntactic/semantic analysis has been carried out to understand written texts and spoken dialogs. Recently, a revolutionary approach prompted by Deep Learning (DL) has provided breakthrough insights in NLP and achieved significant advances [1,2,3,4,5]. Several innovative language models using the Attention mechanism and the Transformer, such as ELMo, BERT, and OpenAI GPT, demonstrate remarkable performance in text generation, sentiment analysis, question answering, conversational chatbots, machine translation, and many other important NLP applications. Notably, the most recent language model, GPT-3, shows impressive, human-like capability in natural language performance [3]. However, despite such noteworthy progress initiated by DL, substantive issues inherent in natural languages remain for the efficient processing and understanding of natural languages. Current NLP language models are based on a probability distribution over sequences of words [4,5]. However, language comprehension is really about conceptual interpretation, not probabilistic prediction. Natural languages are a formal production system with native syntactic/semantic structures, unlike random probabilistic structures. This implies that language models should stand on the linguistic perspectives of natural languages rather than on random stochastic events.
In general, computational linguistics has tried to develop computational models of natural language, as well as appropriate computational interpretations of natural language phenomena. For several decades, a considerable amount of research on grammar formalisms has been carried out from diverse language modeling perspectives. Most approaches have focused on computational interpretation using mechanisms such as grammar rules, feature-based unification, and logic inference [6,7]. Although computational interpretation contributes to understanding the linguistic properties of natural languages, it has not delivered the expected achievements in linguistic performance.
Nowadays, dependency relations have become a common framework for natural language analysis. Since dependency relations are explicit and efficient for syntactic/semantic analysis, this approach can improve the linguistic performance of NLP applications. However, there are many variations in dependencies, and they lack a shared consensus on the set of dependency relations. Above all, there is no definite approach to defining dependency relations. In addition, NLP approaches using dependency relations rely on a rigid dependency tree diagram to represent syntactic/semantic structures, and the dependency tree is less efficient for representing linguistic knowledge. As knowledge graphs (KGs) are used as a general model to represent domain knowledge, a dependency graph is desirable for representing the syntactic/semantic structures of natural languages [8,9]. A formal way to define dependency relations based on the universality of natural languages, together with a graph-based representation of linguistic structures, is therefore a significant issue in NLP.
This paper presents a graph-based representation of the syntactic/semantic structures of natural languages, similar to a KG. The proposed syntactic knowledge graph is based on dependency relations. The paper investigates the universal principles of natural languages to derive an appropriate set of dependency relations for the syntactic knowledge graph and proposes a new set of dependency relations based on the valency theory and the markedness principle. It also demonstrates the usability of the graph-based representation of the syntactic/semantic structures of natural languages.
The remainder of this paper is structured as follows. Section 2 reviews related work. Section 3 discusses the valency theory and the markedness principle, the theoretical foundations for deriving dependency relations from the diachronic perspectives of natural languages. Section 4 analyzes the linguistic properties of the dependency relations deduced in Section 3. Section 5 presents the construction of syntactic knowledge graphs using the derived dependency relations and markers, and demonstrates the representation of various linguistic structures to validate the feasibility of syntactic knowledge graphs. Section 6 summarizes the contributions and puts forth prospects for further work.
Related Work
The main objectives of computational linguistics are to explore the syntactic/semantic structures inherent in natural languages. For several decades, a considerable amount of research on grammar formalisms has been carried out from diverse perspectives of language modeling. Several notable grammar formalisms, such as Lexical-Functional Grammar (LFG), Categorial Grammar (CG), Generalized Phrase Structure Grammar (GPSG), and Head-driven Phrase Structure Grammar (HPSG), have been developed to describe complex linguistic structures [6,7]. These grammar formalisms employ rule-based, logic-based, or feature-based systems using unification as the underlying mechanism. In general, grammar formalisms adopt syntactic tree structures based on compositional phrase structures. Unfortunately, such approaches have not shown the expected achievements in linguistic analysis, although they provide linguistic insights into natural languages.
Recently, a revolutionary approach motivated by DL has provided breakthrough insights in NLP and achieved significant advances. Several language models, such as ELMo, BERT, and GPT-3, show remarkable performance in natural language processing [1,2,3,4,5]. The language models developed with DL demonstrate human-level language performance in conversational chatbots, sentiment analysis, machine translation, text summarization and generation, and question answering, which are deeper applications of NLP. The language prediction approach based on DL generally uses vector semantics grounded in a probability distribution over sequences of words [4,5]. Although NLP using distributed, probabilistic semantics demonstrates surprising performance and shows great promise, it still exhibits some of the issues and arguments that have long plagued DL [10]. Since the language models implemented in this approach are largely black boxes to humans, they do not reveal any substantial properties of natural languages. There is no way to gain a deeper understanding of how these language models work; in other words, there are no formal representation methods for understanding syntactic/semantic structures, only vectorized values.
Natural languages are a generative system based on unique linguistic principles that systematically compose diverse, complex structures. NLP should be able to build on the linguistic properties of natural languages to exploit and describe linguistic structures. Dependency relations have attracted considerable attention for syntactic and semantic analysis in NLP [12]. Nowadays, it is common to use dependency relations in natural language analysis, since they provide the underlying foundation for representing linguistic structures [11,12,13]. Many systems and open tools, such as Stanford CoreNLP, are widely available and provide a standard framework for developing natural language applications [11,13].
Valency Theory and Markedness Principle
Natural languages are a kind of generative system. Linguistics uses grammar rules to describe the generative capability of natural languages. However, the grammar rules that generate linguistic structures are a formal system that rests on the more fundamental valency property of natural languages. The valency values of linguistic elements play the principal role in constructing complex linguistic structures. In the realization of linguistic structures, the valency property is closely related to the markedness principle. This section describes the valency theory and its linguistic relationship with markedness.
Valency Theory for Linguistic Structures
Valency theory, derived from chemistry, is regarded as a universal property of natural language that can clarify the underlying principle of how a sentence is constructed or generated. The origins of valency theory are found in dependency grammar formalism, especially in the work of Lucien Tesnière [14,15]. Valency theory takes an approach to linguistic constructions that focuses on the syntactic and semantic valencies of verbs and, occasionally, of arguments. In the valency framework, the verb is considered the most central element of a sentence and the major determinant of its structure. Valency is the verb's ability to open up certain positions in its syntactic environment, which can be filled by obligatory or optional complements. The arguments that a verb can take are defined in terms of its valency value. A valency pattern, consisting of various types of valency values, is a model of a sentence containing a fundamental element (typically the verb) and a number of dependent elements, referred to as arguments, expressions, or complements, whose number and type are determined by the valency pattern of the verb [16,22]. The description in (1) is a typical example of a valency pattern, where SCU is a subject complement unit [17,23].
A substantial body of research has published lists of valency patterns [16,17]. Each list of valency patterns defines its own complement types, such as INF, WH-CL and V-ing, and semantic roles such as AGENT, LOCATION, and SOURCE [16]. Most valency patterns focus on depicting the dependency relations between the verb and its arguments. However, this approach to defining valency patterns neglects the original objectives of the valency concept. Valency takes the perspective of generation, whereas dependency relations serve the analysis of linguistic structures. A more critical problem is that there has been no investigation of how valency is realized as dependency relations in the surface sentence. This paper addresses the objectification of dependency relations established by valency in surface sentences.
Markedness Principle of Linguistic Functions
Valency is the universal linguistic property of combining with other elements to form phrases and sentences. The valency properties of verbs are closely related to the overall structure of a clause or sentence; in other words, the sentence complements are dependents of the main verb of a sentence or clause [18]. Valency patterns, which are directed binary relations, are materialized as dependency relations between the governor and the dependent in surface sentences. However, natural language systems need a linguistic apparatus to manifest dependencies explicitly in surface structures. The markedness principle, another important universal of natural languages, is used to specify the syntactic/semantic function of the constituents of a surface structure.
Although there are many linguistic perspectives on markedness, dependency relations are embodied in markedness. Markedness plays a role in specifying the syntactic/semantic roles of a sentence's constituents. Specifically, valency values and dependency relations are the cohesive principles for generating linguistic structures, and markedness is the apparatus that realizes the grammatical functions of dependency relations in sentences. While the conventional concepts of markedness have focused on describing the distinctive features of linguistic elements, markedness is better understood as the bearer of dependency relations. In other words, dependency relations based on valency patterns can be realized by means of markedness.
(2) a. He gave his mother's ring to the bride at the wedding. b. Einstein assumed that light travels at a constant speed to derive the relativity.
The simple sentences in (2) show that markedness plays a vital role in representing syntactic/semantic dependencies. Every constituent should have a marker that represents its dependency relation and syntactic/semantic functions. In a broad sense, two types of markedness can be recognized at the three linguistic levels of words, phrases, and clauses: explicit markers that convey syntactic/semantic functions, as in prepositional phrases, and implicit markers related to the subcategorization of the predicate.
The explicit markers serve as syntactic flags that indicate additional linguistic functions. For example, the prepositions TO and AT in (2-a) and the markers THAT and TO-inf in (2-b) are used to represent syntactic functions and semantic roles. An explicit marker acts as a binder connecting the dependent constituent to the governor in a surface structure. Explicit markers are the principal elements for constructing complex sentences by expanding the primary linguistic functions of constituents. More importantly, it should be noted that explicit markers become the governors of their dependents, since the explicit markers define the additional syntactic/semantic functions of the dependents. In most research on dependency relations, explicit markers are regarded as auxiliary dependents of their associated constituents. However, as shown in (2), the syntactic/semantic functions are decided by the explicit markers TO and AT, not by bride and wedding. The syntactic/semantic functions of the complex constituents are likewise decided by clausal complement markers such as THAT and TO-inf. Treating explicit markers as governors makes the graph-based representation of grammatical structures more consistent and more semantic.
In principle, all constituents should be accompanied by markers that expose their linguistic functions in surface structures. In agglutinative languages such as Korean, the markedness principle is strictly observed and allows free word order. However, some languages, like English, use unmarked constituents. These languages use word order as an implicit marker carrying linguistic functions. The subject, direct object, and indirect object are the specific positions with implicit markers. In general, the implicit markers are related to the subcategorization of the predicate [19].
The syntactic/semantic functions of the markers can be categorized into two types: government-dependency and attachment-restriction. The government-dependency markers, related to subcategorization, are used to construct syntactic structures, while the attachment-restriction markers represent optional modification relationships that impose additional semantic features. However, it should be noted that the semantic features of the markers depend on the semantic relationships among the governor, marker, and dependent, although some grammar systems, like case grammar, try to define the semantic features of the markers [20]. It is difficult to disclose semantic features by means of the markers alone.
Most valency pattern inventories define a large number of markers to formalize the patterns. However, these approaches do not consider the markers' function as the enabler that realizes valency patterns and dependency relations in surface structures. This paper proposes a compact set of markers, as shown in Table 1, reflecting the linguistic perspectives of valency and dependency. The subcat marker is the implicit marker for subcategorization positions. The empty marker is another implicit marker used in positions that are not subcategorized. The mark marker expresses the relationship between a marker and its associated constituent. xcomp and ccomp are the open clausal complement and clausal complement markers, respectively.
Analysis of Dependency Relations by Markedness Principle
Dependency relations play a vital role in analyzing the syntactic/semantic structures of natural languages. Many open platforms and tools based on dependency relations, such as Stanford CoreNLP, are widely available to support the efficient development of NLP applications [11,12,13]. However, the definition of dependency relations relies on linguistic intuition without a formalized basis. This section discusses the definition and properties of dependency relations based on markedness.
Definition of Dependency Relations
Most of the dependency relations used in NLP applications are broadly taken from the linguistic analysis of contemporary linguistic resources such as textbooks, social media messages, and corpora. This approach tries to extract as many dependency relations as possible so as to cover even idiosyncratic structures, and is mainly interested in finding new dependency relations and compacting similar ones. There are no definite criteria or systematic approaches to defining dependency relations, although some basic properties of dependency relations, such as uniqueness, non-crossing, and acyclicity, can be used to verify consistency [12].
Since dependency relations originate from valency patterns and are realized by markers in the surface structure, the markers are the starting point for defining dependency relations. Dependency relations can thus be defined in accordance with the syntactic/semantic functions of the markers. This paper proposes a set of dependency relations, as in Table 1. The set of dependency relations in this paper is more compact than the set used by Stanford CoreNLP, which is popular in NLP. This paper focuses on the inherent concepts of the markedness principle of natural languages and reflects the diachronic perspectives of linguistic structures. The mark relation is the dominant relation between a marker and its associated constituent; link denotes clausal dependencies; and bind denotes modification relations between governor and dependent. This paper does not consider the detailed semantic functions of link and bind, which would excessively multiply semantic dependency relations. The semantic dependencies go beyond the scope of syntactic analysis, since the markers' primary functions cannot discriminate the semantic dependencies.
4.2 Properties of Dependency Relations
Since the linguistic functions of the markers can be classified into two types, there are correspondingly two types of dependency relations. As shown in Table 1, one is the government-dependency type, usually related to the subcategorization of the predicate. This type of dependency relation represents the inherent valency capability of the predicate. The subcategorization dependency relations, such as subj, iobj and dobj, are actually placeholders defined by their relative position to the predicate, so they do not imply any particular linguistic roles of constituents; the semantic interpretation of these dependencies relies on the contextual meaning of the sentence. The other type is the attachment-restriction dependencies, used to construct complex syntactic structures or attach additional semantic senses. Even though the proposed dependencies are compact compared with other lists of dependency relations, they are based on the inherent conceptions of markedness and dependency relations. Thus, the proposed set is sufficient to implement the dependency relations related to modification structures. However, it should be noted that the proposed set is for the representation of dependency relations between constituents, not for the signature of the semantic relationships of the constituents. The semantic interpretation of dependency relations demands another level of NLP, as shown in (3). In general, representations of dependency relations, for example the dependency diagram of Stanford CoreNLP, use a unidirectional arrow from the governor to the dependent [11,13]. This representation is inadequate for implementing the inherent conceptions of dependency relations. Since there are two types of dependency relations, they should be distinguished in the representation. The attachment-restriction dependencies are optional, not required by the governor; in some sense, these dependencies are autonomous relationships initiated by the modifier. So it is appropriate to use a directed arrow from the modifier to the core constituent. In other words, the arrow of a dependency relation goes from the autonomous constituent to the target constituent.
This paper argues that the markers, whether government-dependency or attachment-restriction, play the governor's role toward their associated constituents, since the explicit markers define additional syntactic/semantic functions for them. The markers represent the associated phrasal structure when expressing syntactic/semantic functions.
Syntactic Knowledge Graphs of Natural Languages
Conventional NLP is usually based on dependency tree structures. Although syntactic dependency trees can give linguistic intuition, tree structures have limitations in supporting flexible syntactic/semantic processing. Graph-based representation is more efficient for data and knowledge modeling, as seen in NoSQL databases and KGs. A syntactic knowledge graph can be incorporated with domain KGs for knowledge processing, and the proposed dependency relations are suitable for syntactic knowledge graphs. This section demonstrates various syntactic knowledge graphs based on dependency relations. Fig. 1 is a simple syntactic knowledge graph of (4). The dependency relations are represented in dependency_relation/marker format on the edges.
5.1 Simple dependency relations
(4) Michelangelo could create a huge statue of David.
Fig.1. Syntactic Knowledge Graph with simple dependencies
The syntactic knowledge graph of Fig. 1 shows that it can localize all linguistic functions. For verbal constituents, the out-going arrows indicate dependent constituents determined by their valency value. For nominal constituents, the in-going arrows convey syntactic/semantic features. Notably, the syntactic knowledge graph shows the dependencies of each linguistic constituent, not how the constituents are composed. Fig. 2 is a typical dependency structure with the clausal complements ccomp and xcomp of (5). In the syntactic knowledge graph of Fig. 2(a), the predicate assumed dominates the marker THAT with the dependency relation dobj/ccomp, and the marker THAT governs the predicate travels. This means that the government of the predicate assumed extends to the predicate travels.
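As a minimal sketch of how such a graph could be encoded programmatically (Python with networkx), using illustrative, assumed edges for sentence (4) rather than a transcription of Fig. 1:

    import networkx as nx

    def build_skg(edges):
        """edges: iterable of (governor, dependent, relation, marker) tuples."""
        g = nx.DiGraph()
        for gov, dep, rel, marker in edges:
            g.add_edge(gov, dep, label=f"{rel}/{marker}")
        return g

    # Assumed edges for "Michelangelo could create a huge statue of David."
    skg = build_skg([
        ("create", "Michelangelo", "subj", "subcat"),
        ("create", "statue", "dobj", "subcat"),
        ("huge", "statue", "bind", "empty"),   # modifier points to its target
        ("of", "statue", "bind", "mark"),      # explicit marker OF governs its phrase
        ("of", "David", "mark", "empty"),      # marker-to-constituent edge (label assumed)
    ])
    print(list(skg.out_edges("create", data="label")))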
5.2 Dependencies with clausal complements
(5) Einstein assumed that light travels at a constant speed to derive the relativity.
The open clausal complement TO, dominating the predicate derive, is bound to the predicate assumed. Though the mandatory valency value subj of the predicate derive is unexpressed, this dependency relationship can be found in the syntactic knowledge graph. Fig. 2(b) is a dependency tree diagram of (5). There, the dependency relations of the markers TO and THAT, dominated by the predicate derive, are not explainable. The noun speed dominates constituents of different types, at, a, and constant, and the dependency between speed and at is unclear. In general, the dependency tree diagram tries to show how the constituents are composed in a sentence, i.e., the syntactic structure, rather than the dependency relations between individual constituents. Fig. 3 is another example, for (6), with an open clausal complement. Two clauses are loosely linked via the open complement marker WHEN. The dependency link/ccomp shows that the clause marked by WHEN needs the predicate take, but the predicate take does not need the clause.
5.3 Clausal dependency
(6) When I am traveling, I always take something to read in my pocket.
5.4 Long-distance dependency
Linguistic structures involving long-distance dependencies, such as topicalization, questions, or relative clauses, are a cumbersome problem for representing syntactic structure under current grammar formalisms [21]. Since a long-distance dependency occurs when the dependent constituent moves to another place or some constituent intervenes in the dependency relation, it violates the basic syntactic structure rules. Although many resolutions have been proposed, most of them rely on special mechanisms under specific grammar formalisms.
(7) This is the apple that William hit with his arrow.
Fig. 4 is a syntactic knowledge graph of (7), which contains a simple relative clause. The predicate hit in (7) has a long-distance dependency relation with the antecedent apple. In general, long-distance dependencies are represented only implicitly in syntactic representations, regardless of their structures. The important issue in long-distance dependency is a reasonable way to restore or estimate the unseen dependency relations. In Fig. 4, since the predicate hit lacks the mandatory subcategorization dependency obj/subcat, the missing dependency can be restored by graph traversal. Graph traversal is more efficient than tree traversal or other long-distance dependency algorithms.
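A minimal sketch of such a traversal (Python with networkx), under the assumption that nodes carry a category attribute and that the nearest nominal node reachable from the predicate is taken as the candidate antecedent; a real procedure would also inspect the dependency_relation/marker labels along the path:

    import networkx as nx

    def find_antecedent(skg: nx.DiGraph, predicate: str):
        """Breadth-first search from a predicate missing a mandatory obj/subcat
        dependency, returning the nearest noun node as the candidate antecedent."""
        for node in nx.bfs_tree(skg.to_undirected(as_view=True), predicate):
            if skg.nodes[node].get("category") == "noun":
                return node   # e.g. the antecedent 'apple' for 'hit' in (7)
        return None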
Conclusions
NLP is a crucial area of AI for realizing linguistic intelligence and knowledge processing. Recently, NLP based on Deep Learning has achieved breakthrough advances. However, natural languages inherently have unique linguistic structures, not probabilistic structures. NLP should be able to exploit linguistic features to achieve more intelligent performance. Nowadays, the dependency relation is the basic framework for natural language analysis, and the dependency tree diagram provides essential information for NLP.
However, several issues, such as the systematic definition and representation of dependency relations, remain to be resolved. This paper addresses the syntactic knowledge graph, a graph-based representation of the syntactic/semantic structures of natural languages based on dependency relations, similar to knowledge representation and KGs. This paper revises the concepts of valency theory and the markedness principle from the universality of natural languages. The valency value of the predicate is the underlying capability to generate sentences. In the generation of sentences, the valency pattern is expressed in the form of dependency relations. The dependency relations are embodied in terms of markers in surface structures. This paper explores the relationships among valency patterns, dependency relations and markers. The paper then proposes the markers and dependency relations in Table 1, which are used for syntactic knowledge graphs. The paper demonstrates a syntactic analysis of various linguistic structures using syntactic knowledge graphs, including clausal complements and long-distance dependencies. This validates that syntactic knowledge graphs are more feasible than dependency tree diagrams in NLP. | 5,433.4 | 2021-04-11T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
How Different Government Subsidy Objects Impact on Green Supply Chain Decision considering Consumer Group Complexity
This paper fully considers the complexity characteristics of the consumer group, such as the heterogeneity of consumer environmental preferences and consumption levels, and constructs a two-stage price decision model of a green supply chain composed of the manufacturer and retailers. Under four different scenarios, namely no government subsidies, government subsidies given to the manufacturer, government subsidies given to the green product retailer, and government subsidies given to green product consumers, the impact of government subsidies on the price decisions of green supply chain members is analyzed, and the validity of the model is verified by an example. The results show that, compared with no government subsidies, subsidies to the manufacturer reduce the wholesale and sales prices of green products, subsidies to the green product retailer lead to higher wholesale prices and lower sales prices of green products, and subsidies to green product consumers increase the wholesale and sales prices of green products. No matter which object is subsidized by the government, the wholesale price of general products does not change and the sales price decreases. Government subsidies facilitate the sales of green products, thereby expanding the market share of green products.
Introduction
With the advancement of science and technology and the development of the economy, resources on the planet are becoming increasingly scarce, and environmental pollution is further intensifying. In this context, the development of green technology and the promotion of green products have become particularly important. However, although the benefits of green development in reducing pollution are obvious, most green technologies require a large amount of up-front capital, which increases production costs and thus reduces the incentives for green production [1]. Therefore, in order to promote green development, reduce pollution, and protect the environment, it is particularly important for the government to implement green development incentives [2]. For example, in May 2012, the State Council of China announced that it would allocate 26.5 billion yuan to subsidize energy-saving appliances for one year, mainly covering five categories of household appliances: flat-panel TVs, refrigerators, air conditioners, washing machines, and water heaters [3]. In 2015, the Ministry of Finance of China issued the "Notice on the Financial Support Policy for the Promotion and Application of New Energy Vehicles in 2016-2020," which provides certain subsidies to consumers who purchase new energy vehicles. In 2018, the Ministry of Finance of China officially issued the "Notice on Adjusting and Improving the Financial Subsidy Policy for the Promotion and Application of New Energy Vehicles" and made corresponding adjustments to the subsidy policy for new energy vehicles. The implementation of these subsidy policies has greatly promoted the green development of the supply chain.
As an important factor to be considered in supply chain decision-making, government subsidies have an important impact on the operation of the supply chain. In recent years, many scholars have conducted in-depth and extensive research on the impact of government subsidies to supply chain members on green supply chain decision-making. Firstly, in terms of the impact of government subsidies to the manufacturer on green supply chain decisions, Yang and Xiao [4] constructed three game models of the green supply chain with a government-subsidized manufacturer under conditions of fuzzy uncertainty in manufacturing costs and consumer demand. Xue et al. [5] studied the impact of subsidizing the manufacturer on retail prices, energy efficiency, market demand, supply chain profits, and social welfare for energy-saving products.
The results showed that government subsidies can significantly improve social welfare levels and promote the improvement of energy-saving products. Zhuo and Wei [6] analyzed the incentive effect and the green lower limit in the case of a government-subsidized manufacturer, based on the characteristics of uncertainty in the consumer market. Zhan et al. [7] studied the decision-making issues of the manufacturer and retailer in decentralized and centralized decision-making models under the scenario of a government-subsidized manufacturer and increased consumer environmental awareness. Yu et al. [8] established an optimization model that considers green preferences and a government-subsidized manufacturer with the goal of maximizing the manufacturer's profit. Guo et al. [9] explored the impact of the government subsidizing the manufacturer on social welfare and the profits of supply chain members.
Secondly, the influence of government-subsidized consumers on green supply chain decision-making has also attracted the attention of many scholars. Chemama et al. [10] examined how governments use consumer subsidies to promote green technologies and how policy adjustments over time interact with industry production decisions. Cohen et al. [11] analyzed the government's interaction with the supplier when designing consumer subsidy policies and the impact of demand uncertainty on each participant when designing strategies. He et al. [12] explored the channel structure and pricing decisions of the manufacturer and the government's consumer subsidy policy for purchasing remanufactured products. Huang et al. [13] analyzed the fuel vehicle supply chain and the electric and fuel vehicle supply chain in a duopoly environment in which the government implements a consumer subsidy incentive plan to promote the sales of electric vehicles; the results show that, when consumers have strong bargaining power, government subsidies can increase the sales of electric vehicles more effectively. Li et al. [14] studied the strategy of government subsidies to consumers and analyzed the impact of consumption subsidies and replacement subsidies on environmentally friendly products in the dual-channel supply chain. Ma et al. [15] studied the impact of subsidized consumers on the dual-channel closed-loop supply chain; based on the introduction of a government consumption subsidy program into the dual-channel closed-loop supply chain, the decisions of the channel members before and after the implementation of the government-funded plan are analyzed.
The abovementioned studies are mostly concerned with government subsidies for a single, specific supply chain member, and there is less literature that analyzes and compares the impact of government subsidies on green supply chain member decisions across different subsidy-object scenarios. In addition, in the scenario of government subsidies to different objects, it is less common to consider the impact of consumer group complexity on the decision-making of green supply chain members. This paper mainly considers two aspects of the complexity of the consumer group: one is the heterogeneity of consumer environmental preferences, and the other is the heterogeneity of consumer consumption levels. Based on the abovementioned analysis, this paper constructs a two-stage game model of the manufacturer, the general product retailer, and the green product retailer from the perspective of the green supply chain, with the manufacturer and retailers as research objects. On the basis of fully considering the complexity characteristics of consumer groups, this paper studies the influence of government subsidies to different objects on the price decision-making of members of the green supply chain. This paper incorporates the heterogeneity of consumer environmental preferences and the heterogeneity of consumption levels into the market demand for green products, studies the impact of government subsidies on green supply chain member price decisions in four different scenarios (no government subsidies, government subsidies given to the manufacturer, government subsidies given to the green product retailer, and government subsidies given to green product consumers), and compares product price decisions, sales volumes, and profits under the different subsidy scenarios. The aim is to provide a theoretical basis for promoting the development of green supply chains.
Compared with the existing research, this paper has the following innovations and expansions.
Firstly, most of the existing studies consider the scenario of government subsidies to a single object, whereas this paper considers four different scenarios: no government subsidies, government subsidies given to the manufacturer, government subsidies given to the green product retailer, and government subsidies given to green product consumers. It separately analyzes and compares the wholesale prices, sales prices, sales volumes, and profits of the various companies, as well as the changes brought by government subsidies to the price decisions of green supply chain members, in the four different scenarios.
Secondly, for four different scenarios, this paper takes the heterogeneity of consumers' environmental preference and consumption level into consideration in the price decision of the green supply chain. In the research process, the complexity characteristics of consumer groups and their impact on the decision-making of the green supply chain are fully considered, making the model more realistic.
Problem Description and Conditional Assumptions
This paper studies a two-echelon supply chain consisting of a manufacturer, a general product retailer, and a green product retailer. The green supply chain structure is shown in Figure 1. The manufacturer, general product retailer, and green product retailer are denoted by M, R1, and R2, respectively. The manufacturer produces general products and green products, the general product retailer sells general products, and the green product retailer sells green products. Assume that the production costs of a unit general product and a unit green product are c_n and c_g, respectively. Since the production of green products requires a large amount of green technology, it is assumed that c_g > c_n.
In order to promote the development of green supply chains, s denotes the government's subsidy quota for each unit of green product. Assume that there are three ways for the government to subsidize green products: one is to subsidize the manufacturer, one is to subsidize the green product retailer, and the other is to subsidize green product consumers. Here, o denotes the scenario of no government subsidies, m denotes the scenario in which government subsidies are given to the manufacturer, r denotes the scenario in which government subsidies are given to the green product retailer, and c denotes the scenario in which government subsidies are given to green product consumers. It is assumed that the manufacturer is dominant in the market and the retailers are followers.
Assume that the wholesale prices of the unit general product and the unit green product are ω_n^i and ω_g^i, respectively, the sales prices are p_n^i and p_g^i, and the sales volumes are q_n^i and q_g^i, respectively. π_M^i, π_R1^i, and π_R2^i represent the profits of the manufacturer, general product retailer, and green product retailer, respectively, where i represents the four different subsidy scenarios, i ∈ {o, m, r, c}. The complexity of the consumer group in the market is reflected in the two aspects of environmental preference heterogeneity and consumption level heterogeneity, with θ indicating the consumer's environmental preference coefficient (θ > 1) and η indicating the consumer's consumption level coefficient (η > 1) [16]. Let V denote the product utility perceived by the consumer for a unit product; V is assumed to follow a uniform distribution on [0, 1], and the same consumer is assumed to derive the same product utility from the general product and the green product. Then ηV indicates the consumer's willingness to pay for general products, and θηV indicates the consumer's willingness to pay for green products.
According to the abovementioned assumptions, the consumer surplus from purchasing general products and green products is U_n = ηV − p_n and U_g = θηV − p_g, respectively. According to the principle of utility maximization, consumers must satisfy condition (1) to purchase general products and condition (2) to purchase green products. This paper assumes that the market has demand for both general products and green products, so conditions (1) and (2) are (p_n/η) < V < (p_g − p_n)/(η(θ − 1)) and (p_g − p_n)/(η(θ − 1)) < V < 1, respectively. The demand functions for the general product and the green product then follow as q_n = (p_g − θp_n)/(η(θ − 1)) and q_g = 1 − (p_g − p_n)/(η(θ − 1)), respectively.
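The following short sketch evaluates these demand shares numerically for arbitrary prices, assuming V ~ Uniform[0, 1] and the indifference thresholds derived above; the prices used in the example are illustrative, not values from the paper.

```python
# Demand shares implied by the uniform-V consumer choice model described above.
def demands(p_n, p_g, eta, theta):
    v_low = p_n / eta                          # below this utility, the consumer buys nothing
    v_mid = (p_g - p_n) / (eta * (theta - 1))  # indifference point between general and green
    q_n = max(0.0, min(v_mid, 1.0) - v_low)    # consumers with v_low < V < v_mid buy general
    q_g = max(0.0, 1.0 - max(v_mid, v_low))    # consumers with V above v_mid buy green
    return q_n, q_g

# Example with arbitrary illustrative prices (not equilibrium values from the paper):
print(demands(p_n=2.0, p_g=2.6, eta=4, theta=1.16))
```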
Model of No Government Subsidies.
In the absence of government subsidies, the demand functions for general products and green products are q_n^o = (p_g^o − θp_n^o)/(η(θ − 1)) (3) and q_g^o = 1 − (p_g^o − p_n^o)/(η(θ − 1)) (4). According to the supply chain structure and the demand functions of general and green products, the profit functions of the manufacturer, general product retailer, and green product retailer are π_M^o = (ω_n^o − c_n)q_n^o + (ω_g^o − c_g)q_g^o (5), π_R1^o = (p_n^o − ω_n^o)q_n^o (6), and π_R2^o = (p_g^o − ω_g^o)q_g^o (7).
Lemma 1. Equations (6) and (7) are concave functions of the variables p_n^o and p_g^o, respectively. Substituting the optimal solutions of equations (6) and (7) into equation (5), it can be concluded that equation (5) is a concave function of the variables ω_n^o and ω_g^o.
Proof. See Appendix. According to Lemma 1, the price decision, sales volume, and profit of the manufacturer, general product retailer, and green product retailer in the absence of government subsidies are shown in Table 1.
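As a cross-check, the two-stage game of the no-subsidy scenario can be solved by backward induction with a computer algebra system. The sketch below assumes the demand and profit specifications given above (manufacturer as Stackelberg leader, retailers as simultaneous followers) and derives the equilibrium wholesale prices symbolically; it illustrates the solution procedure rather than reproducing the paper's appendix proof.

```python
# Backward-induction solution of the no-subsidy two-stage game (assumed specification above).
import sympy as sp

w_n, w_g, p_n, p_g, c_n, c_g, eta, theta = sp.symbols(
    "w_n w_g p_n p_g c_n c_g eta theta", positive=True)

q_n = (p_g - theta * p_n) / (eta * (theta - 1))
q_g = 1 - (p_g - p_n) / (eta * (theta - 1))

# Stage 2: each retailer chooses its retail price given the wholesale prices.
pi_R1 = (p_n - w_n) * q_n
pi_R2 = (p_g - w_g) * q_g
stage2 = sp.solve([sp.diff(pi_R1, p_n), sp.diff(pi_R2, p_g)], [p_n, p_g], dict=True)[0]

# Stage 1: the manufacturer chooses wholesale prices anticipating the retailers' reactions.
pi_M = ((w_n - c_n) * q_n + (w_g - c_g) * q_g).subs(stage2)
stage1 = sp.solve([sp.diff(pi_M, w_n), sp.diff(pi_M, w_g)], [w_n, w_g], dict=True)[0]

# Expected to agree with the fragment of Table 3 recovered later at s = 0,
# e.g. w_n* = (eta + c_n)/2 and w_g* = (eta*theta + c_g)/2 under these assumptions.
print(sp.simplify(stage1[w_n]), sp.simplify(stage1[w_g]))
```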
To ensure that the solution obtained is an effective solution, refer to the constraint 0 ≤ q_g < q_n to obtain Theorem 1 [17].
Theorem 1. Under the condition of no government subsidy, the market competition between general products and green products is as follows:
(1) When 1 < η ≤ X, the sales volume of green products is 0. (2) When X < η < Y, both general products and green products exist in the market.
It can be seen from Theorem 1 that when the consumer's consumption level coefficient satisfies 1 < η ≤ X, the manufacturer will not produce green products, mainly because the consumption level coefficient is low and consumers are unwilling to buy high-consumption, high-environmental-value green products, which leads them to buy low-consumption, low-environmental-value general products. Therefore, the sales volume of green products is 0, and the manufacturer will no longer produce green products. When X < η < Y, the manufacturer produces both general and green products, mainly because the consumption level is higher and some consumers in the market are willing to buy high-consumption, high-environmental-value green products. Therefore, the manufacturer chooses to produce both general and green products.
Model of Government Subsidies to Green Product Manufacturer.
When the government subsidizes the manufacturer who produces green products, the profit of the manufacturer for each unit of green product is ω_g^m − c_g + s. Therefore, the profit functions of the manufacturer, general product retailer, and green product retailer are π_M^m = (ω_n^m − c_n)q_n^m + (ω_g^m − c_g + s)q_g^m (8), π_R1^m = (p_n^m − ω_n^m)q_n^m (9), and π_R2^m = (p_g^m − ω_g^m)q_g^m (10). Solving equations (8)-(10), the price decisions, sales volume, and profits of the manufacturer, general product retailer, and green product retailer in the scenario of the government subsidizing the green product manufacturer are shown in Table 2.
Theorem 2. When the government subsidizes the manufacturer of green products, the impact of the government's subsidy quota s on the wholesale price, sales price, and sales volume of green products per unit is as follows:
It can be seen from Theorem 2 that, in the scenario of the government subsidizing the manufacturer, the wholesale price and sales price of green products are negatively correlated with the quota of government subsidies, and the sales volume is positively related to the quota of government subsidies. The main reason is that, after receiving government subsidies, the manufacturer shares the subsidy with the green product retailer by lowering the wholesale price of green products; the green product retailer, after obtaining the shared subsidy, attracts consumers by reducing the sales price of green products, thereby increasing the sales volume of green products. For general products, the wholesale price does not change with the subsidy quota, while the sales price and sales volume of general products are negatively correlated with the quota of government subsidies. This is because the government subsidy to the manufacturer leads to a decline in the sales price of green products, and the general product retailer has to lower the sales price of general products to maintain its market share. However, due to the lack of financial support, the price cuts are limited and part of the market is replaced by green products.
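The comparative statics stated in Theorem 2 can be illustrated with the same backward-induction setup, now with the manufacturer earning ω_g^m − c_g + s per unit of green product. The code below only assumes the demand and profit forms used above; Theorem 2 states that the two printed derivatives should be negative.

```python
# Comparative statics of the manufacturer-subsidy scenario (assumed specification above).
import sympy as sp

w_n, w_g, p_n, p_g, c_n, c_g, eta, theta, s = sp.symbols(
    "w_n w_g p_n p_g c_n c_g eta theta s", positive=True)

q_n = (p_g - theta * p_n) / (eta * (theta - 1))
q_g = 1 - (p_g - p_n) / (eta * (theta - 1))

stage2 = sp.solve([sp.diff((p_n - w_n) * q_n, p_n),
                   sp.diff((p_g - w_g) * q_g, p_g)], [p_n, p_g], dict=True)[0]
pi_M = ((w_n - c_n) * q_n + (w_g - c_g + s) * q_g).subs(stage2)
stage1 = sp.solve([sp.diff(pi_M, w_n), sp.diff(pi_M, w_g)], [w_n, w_g], dict=True)[0]

w_g_star = stage1[w_g]
p_g_star = stage2[p_g].subs(stage1)
print(sp.simplify(sp.diff(w_g_star, s)))   # Theorem 2: wholesale price of green products falls with s
print(sp.simplify(sp.diff(p_g_star, s)))   # Theorem 2: sales price of green products falls with s
```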
Model of Government Subsidies to Green Product Retailer.
When the government subsidizes the green product retailer, the green product retailer earns a profit of p_g^r − ω_g^r + s per unit of green product sold, so the profit functions of the manufacturer, general product retailer, and green product retailer are π_M^r = (ω_n^r − c_n)q_n^r + (ω_g^r − c_g)q_g^r (11), π_R1^r = (p_n^r − ω_n^r)q_n^r (12), and π_R2^r = (p_g^r − ω_g^r + s)q_g^r (13).
Solving equations (11)-(13), the price decisions, sales volume, and profits of the manufacturer, general product retailer, and green product retailer in the scenario of government subsidies to the green product retailer are shown in Table 3.
Table 1: Price decisions, sales volume, and profit of each member when there is no government subsidy.
Theorem 3. When the government subsidizes the green product retailer, the impact of the government's subsidy quota s on the wholesale price, sales price, and sales volume of green products per unit is as follows:
Proof. See Appendix. It can be seen from Theorem 3 that, in the scenario of government subsidies to the green product retailer, the wholesale price and sales volume of green products are positively related to the quota of government subsidies, and the sales price is negatively related to the quota of government subsidies. This is because, when the government subsidizes the green product retailer, the manufacturer increases the wholesale price of green products in order to share a certain portion of the subsidy, and after the green product retailer receives the government subsidy, it shares the subsidy with consumers by lowering the sales price, so the sales volume of green products gradually improves. Therefore, the government subsidy to the green product retailer promotes an increase in the wholesale price and sales volume of green products and a reduction in the sales price; that is, the government subsidy to the green product retailer can promote the sales of green products. For general products, the impact of government subsidies on the price decisions and sales volume of general products is the same as in the scenario where the government subsidizes the manufacturer.
Model of Government Subsidies to Green Product Consumers.
When the government subsidizes green product consumers, the consumer surplus from purchasing general products and green products is U_n = ηV − p_n^c and U_g = θηV − p_g^c + s, respectively. According to the principle of maximizing utility, when the government subsidizes green product consumers, the demand functions of general products and green products are q_n^c = (p_g^c − θp_n^c − s)/(η(θ − 1)) and q_g^c = 1 − (p_g^c − p_n^c − s)/(η(θ − 1)), respectively. When the government subsidizes green product consumers, the profit functions of the manufacturer, general product retailer, and green product retailer are π_M^c = (ω_n^c − c_n)q_n^c + (ω_g^c − c_g)q_g^c (14), π_R1^c = (p_n^c − ω_n^c)q_n^c (15), and π_R2^c = (p_g^c − ω_g^c)q_g^c (16). Solving equations (14)-(16), the price decisions, sales volume, and profits of the manufacturer, general product retailer, and green product retailer in the scenario where the government subsidizes green product consumers are shown in Table 4.
Theorem 4. When the government subsidizes green product consumers, the impact of the government's subsidy quota s on the wholesale price, sales price, and sales volume of green products per unit is as follows:
Proof. See Appendix. It can be seen from Theorem 4 that, in the scenario of government subsidies to green product consumers, the wholesale price, sales price, and sales volume of green products are positively related to government subsidies, and government subsidies increase the wholesale price, sales price, and sales volume of green products. The main reason is that, when the government subsidizes green product consumers, the manufacturer and the green product retailer increase the wholesale price and sales price of green products in order to share a certain portion of the subsidy. In addition, since consumers can obtain the corresponding government subsidy when purchasing green products, the sales of green products are further promoted, so the sales volume of green products increases; that is, the government subsidy to green product consumers is also conducive to increasing the market share of green products. For general products, the impact of government subsidies on the price decisions and sales volume of general products is the same as in the scenarios where the government subsidizes the manufacturer and the green product retailer.
Table 2: Price decision, sales volume, and profit of each member when the government subsidizes the manufacturer.
Theorem 5.
The comparison of wholesale price, sales price, sales volume, and profit under the different government subsidy scenarios is as follows: According to Theorem 5, by comparison, firstly, when the government subsidizes the manufacturer, the wholesale price and sales price of the green product are the smallest. When the government subsidizes green product consumers, the wholesale price and sales price of green products are the largest. When the government subsidizes the green product retailer, the wholesale price of green products is the same as when subsidies are given to green product consumers, and the sales price of green products is the same as when subsidies are given to the manufacturer. This is mainly because, when the government subsidizes the manufacturer, the manufacturer shares the subsidy with the green product retailer by lowering the wholesale price, and the green product retailer shares the subsidy with green product consumers by lowering the sales price. When the government subsidizes green product consumers, the manufacturer and the green product retailer share the subsidy by increasing the wholesale price and the sales price, respectively. When the government subsidizes the green product retailer, the manufacturer shares the subsidy by raising the wholesale price, and the green product retailer shares the subsidy with consumers by lowering the sales price. Secondly, across the three subsidy scenarios, no matter which object the government subsidizes, the wholesale price, sales price, and sales volume of general products, the sales volume of green products, and the profits of each enterprise are the same.
Thirdly, the wholesale price of general products does not change before and after subsidies, while the sales price and sales volume are smaller than before the subsidy, indicating that government subsidies reduce the market demand for general products. In terms of corporate profits, the profit of the green product retailer is greater than before subsidies and the profit of the general product retailer is less than before subsidies; when s > max{0, T}, the profit of the manufacturer is greater than before subsidies. These results indicate that government subsidies increase the profit of the green product retailer and decrease the profit of the general product retailer, while the effect on the manufacturer's profit depends on the quota of government subsidies.
Numerical Example and Analysis
In order to verify the effectiveness of the green supply chain price decision model based on consumer group complexity under the scenarios of government subsidies to different objects, the following further analyzes and verifies the relevant conclusions by assigning values to the relevant parameters in the model. Based on the relevant parameter settings in the literature [18,19], the parameters in the examples are assigned as follows: c_n = 1.5, c_g = 2, θ = 1.16, and η = 4.
Table 4: Price decision, sales volume, and profit of each member when the government subsidizes green product consumers.
Table 3: Price decision, sales volume, and profit of each member when the government subsidizes the green product retailer; in particular, ω_n^r* = (η + c_n)/2 and ω_g^r* = (s + ηθ + c_g)/2.
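A hedged sketch for reproducing the numerical example is given below: it evaluates the manufacturer-subsidy equilibrium, derived by the same backward induction as above, at c_n = 1.5, c_g = 2, θ = 1.16, and η = 4 over a grid of subsidy quotas s in [0, 0.5]. The demand and profit forms are the assumptions stated earlier, and the printed values are illustrative rather than the paper's tabulated results.

```python
# Numerical sweep of the manufacturer-subsidy equilibrium at the example parameters.
import sympy as sp

w_n, w_g, p_n, p_g, s = sp.symbols("w_n w_g p_n p_g s", positive=True)
c_n, c_g, theta, eta = sp.Rational(3, 2), sp.Integer(2), sp.Rational(29, 25), sp.Integer(4)

q_n = (p_g - theta * p_n) / (eta * (theta - 1))
q_g = 1 - (p_g - p_n) / (eta * (theta - 1))

stage2 = sp.solve([sp.diff((p_n - w_n) * q_n, p_n),
                   sp.diff((p_g - w_g) * q_g, p_g)], [p_n, p_g], dict=True)[0]
pi_M = ((w_n - c_n) * q_n + (w_g - c_g + s) * q_g).subs(stage2)
stage1 = sp.solve([sp.diff(pi_M, w_n), sp.diff(pi_M, w_g)], [w_n, w_g], dict=True)[0]

prices = {**stage1, p_n: stage2[p_n].subs(stage1), p_g: stage2[p_g].subs(stage1)}
volumes = {"q_n": q_n.subs(prices), "q_g": q_g.subs(prices)}

for s_val in [0, 0.1, 0.2, 0.3, 0.4, 0.5]:
    point = {str(k): round(float(v.subs(s, s_val)), 3) for k, v in {**prices, **volumes}.items()}
    print(s_val, point)
```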
Analysis of the Impact of Changes in Government Subsidy Quota on Product Prices When Government Subsidizes the Manufacturer.
As can be seen from Figure 2, when the government subsidizes the manufacturer, the wholesale price and sales price of the green product decrease with the increase of the government subsidy quota. The wholesale price of general products does not change with the change of the subsidy quota; the sales price of general products decreases with the increase of government subsidies, and it can be seen from Figure 2 that, as the subsidy quota continues to increase, the sales price of general products gradually decreases, even approaching the wholesale price.
Analysis of the Impact of Changes in Government Subsidy Quota on Product Prices When Government Subsidizes Green Product Retailer.
When the government subsidizes the green product retailer, the impact of changes in the government subsidy quota on product prices is shown in Figure 3. As can be seen from Figure 3, as the subsidy quota increases, the wholesale price of green products gradually increases and the sales price gradually decreases. The wholesale price of general products does not change with the change of the subsidy quota, and the sales price of general products decreases with the increase of government subsidies. Combined with Figure 3 and the related calculations, when the government subsidy quota satisfies 0 < s < 0.098, the sales price of green products is greater than the wholesale price, and when 0.098 < s < 0.5, the sales price of green products is less than the wholesale price.
Analysis of the Impact of Changes in Government Subsidy Quota on Product Prices When Government Subsidizes Green Product Consumers.
When the government subsidizes green product consumers, the impact of changes in the government subsidy quota on product prices is shown in Figure 4. As can be seen from Figure 4, as the subsidy quota increases, the wholesale price and sales price of green products increase. The wholesale price of general products does not change with the change of the subsidy quota, and the sales price of general products decreases with the increase of government subsidies.
Analysis of the Impact of Changes in Government Subsidy Quota on Product Sales Volume.
Combined with the abovementioned analysis and Figure 5, it can be seen that, under the scenarios in which the government subsidizes the three different objects, changes in the government subsidy quota have the same effect on the sales volumes of green products and general products; that is, as can be seen from Figure 5, the sales volume of green products increases with the increase of government subsidies, and the sales volume of general products decreases with the increase of government subsidies. Figure 6(a) shows that the wholesale prices of general products are the same with or without government subsidies and are lower than those of green products, and that the wholesale price of green products is the same and largest in the two scenarios of government subsidies to the green product retailer and to green product consumers and is the smallest when government subsidies are given to the manufacturer. As can be seen from Figure 6(b), the sales price of general products is the same in the scenarios where the government subsidizes the three different objects and is smaller than the sales price when there is no government subsidy.
Comparison of Product Sales Volume under Different Government Subsidy Scenarios.
The comparison of product sales volumes under different subsidy scenarios is shown in Figure 7. As can be seen from Figure 7, when there is no government subsidy, the sales volume of general products is always greater than that of green products. The sales volume of general products is the same in the scenarios where the government subsidizes the three different objects and is smaller than when there is no government subsidy, while the sales volume of green products is the same in the three subsidy scenarios and is greater than when there is no government subsidy. Combined with Figure 7 and the related calculations, when the government subsidy quota satisfies 0 < s < 0.073, the sales volume of green products is less than that of general products; when 0.073 < s < 0.5, the sales volume of green products is greater than that of general products, indicating that a certain amount of government subsidy can promote the sales of green products and improve their market competitiveness, thereby promoting the development of green supply chains.
Comparison of Profits under Different Government Subsidy Scenarios.
It can be seen from the calculation that T = −0.886, and because s > max{0, T}, under the condition s ∈ [0, 0.5] and as shown in Figure 8(a), the manufacturer's profit increases with the increase of government subsidies; the profit of the manufacturer is the same in the scenarios where the government subsidizes the three different objects and is greater than the profit when there is no government subsidy. It can be seen from Figure 8(b) that, in the scenario of no government subsidy, the profit of the general product retailer is greater than that of the green product retailer; the profit of the general product retailer is the same in the three subsidy scenarios, and the profit of the green product retailer is the same in the three subsidy scenarios. However, as the subsidy quota continues to increase, the profit of the general product retailer decreases and the profit of the green product retailer increases. When 0.05 < s < 0.5, the profit of the green product retailer is greater than the profit of the general product retailer.
Conclusions
This paper constructs a two-stage game model composed of the manufacturer, the general product retailer, and the green product retailer. Based on the complexity characteristics of the consumer group, this paper studies the impact of government subsidies to different objects on the price decisions of members of the green supply chain, compares product price decisions, sales volumes, and profits under the different subsidy scenarios, and discusses the effectiveness of the model in combination with numerical analysis. The research results show that, firstly, compared with no government subsidies, when the government subsidizes the manufacturer, the wholesale price and sales price of green products are reduced; when the government subsidizes the green product retailer, the wholesale price of green products increases and the sales price decreases; and when the government subsidizes green product consumers, the wholesale price and sales price of green products are increased. In all three subsidy scenarios, the wholesale price of general products does not change and the sales price of general products decreases. Secondly, regardless of which object the government subsidizes, the sales volume of green products increases compared with no government subsidies and the sales volume of general products decreases, indicating that government subsidies can promote the sales of green products and suppress the sales of general products, thereby expanding the market share of green products. Thirdly, the three kinds of subsidies, to the manufacturer, to the green product retailer, and to green product consumers, have the same effect on corporate profits: compared with no government subsidies, government subsidies increase the profit of the green product retailer and reduce the profit of the general product retailer, while the impact on the manufacturer's profit is related to the quota of government subsidies.
This study proposes the following recommendations. Firstly, when formulating a green industry development strategy, the government should strengthen publicity and education on the green economy, raise consumers' awareness of environmental protection, and increase consumer consumption levels by adopting measures to improve national income. Secondly, the government should provide financial support for the development of the green supply chain through subsidy policies, encourage enterprises to carry out technological innovation, and reduce the production cost of green products, thus expanding the market share of green products. Finally, enterprises should continuously improve the level of green production technology and reduce pollution to the environment as far as possible. | 7,372.4 | 2020-05-04T00:00:00.000 | [
"Environmental Science",
"Economics",
"Business"
] |
Clinically Significant Cytochrome P450-Mediated Drug-Drug Interactions in Children Admitted to Intensive Care Units
Objectives Children admitted to intensive care units (ICUs) often require multiple medications due to the complexity and severity of their disease, which puts them at an increased risk for drug interactions. This study examined cytochrome P450-mediated drug-drug interactions (DDIs) based on the Pediatric Intensive Care (PIC) database, with the aim of analyzing the incidence of clinically significant potential drug-drug interactions (pDDIs) and exploring the occurrence of actual adverse reactions. Methods The Lexicomp database was used to screen cytochrome P450-mediated DDI pairings with good levels of reliability and clear clinical phenotypes. Patients exposed to the above drug pairs during the same period were screened in the PIC database. The incidence of clinically significant pDDIs was calculated, and the occurrence of adverse reactions was explored based on laboratory measurements. Results In total, 84 (1.21%) of 6920 children who used two or more drugs were exposed to at least one clinically significant pDDI. All pDDIs were based on CYP3A4, with nifedipine + voriconazole (39.60%) being the most common drug pair and the J02 class being the most frequently involved drug class. Based on laboratory measurements, 15 adverse reactions were identified in 12 patients. Conclusions Clinically significant cytochrome P450-mediated pDDIs existed in children admitted to ICUs, and some of the pDDIs led to adverse clinical outcomes. The use of clinical decision support systems can guide clinical medication use, and clinical monitoring of patients needs to be enhanced.
Introduction
Children admitted to intensive care units (ICUs) often suffer from severe, complex medical conditions that expose them to multiple medications [1]. Many studies have shown that simultaneous use of multiple drugs increases the risk of potential drug-drug interactions (pDDIs) [2][3][4]. Drug-drug interactions (DDIs) are common and preventable prescribing errors. According to the US Food and Drug Administration (FDA), DDIs refer to the phenomenon that the effects and duration of drugs are changed to varying degrees due to drug interactions when two or more drugs are used simultaneously or sequentially [5].
DDIs are generally classified as pharmacokinetic and pharmacodynamic interactions. Pharmacokinetic interactions can occur during the absorption, distribution, metabolism, and excretion phases, with cytochrome P450 (CYP450)-mediated interactions during the drug metabolism phase being the most common and preventable drug interactions. Current studies have shown that most DDIs have adverse effects on patient care, potentially reducing drug efficacy or enhancing drug toxicity and causing treatment failure, adverse drug events, and even death [6]. In children, hepatic drug metabolism is immature and many CYP450 enzymes are expressed at low levels, which may mean that children admitted to ICUs are more susceptible to the adverse effects of CYP-mediated DDIs [7]. There is literature on the prevalence of CYP-mediated drug interactions in elderly patients [8,9] and psychiatric patients [10,11], but there is no information on CYP-mediated pDDIs in children.
Although pDDIs are important causes of adverse drug reactions (ADRs), not all pDDIs are clinically significant, and identifying the incidence of clinically significant pDDIs is even more important for children in ICUs [12], which can help clinicians or pharmacists identify drug combinations that need to be avoided [13]. However, there are situations where certain drugs must be used together for therapeutic purposes even though they may interact with each other. Assessing the occurrence of pDDI-related adverse reactions in such cases can prompt physicians to monitor patients for serum drug concentrations and adverse reactions to avoid the adverse consequences of drug interactions whenever possible.
There have been some studies on DDIs in children in ICUs [4,12,14,15]. The incidence of pDDIs has been found to be related to the number of drugs used, and pDDIs can increase the length of stay. However, the occurrence of clinically significant CYP-mediated pDDIs in children is poorly studied, and the related adverse effects have not been investigated. Therefore, in this study, we aimed to assess the prevalence of clinically significant CYP450-mediated pDDIs in children admitted to ICUs using medication information from the Pediatric Intensive Care (PIC) database and to evaluate the incidence of actual adverse reactions based on laboratory test data. The complexity of medications in ICUs may lead to an increased incidence of pDDIs, and the unique characteristics of children may expose them to higher risks of associated adverse reactions. An increased understanding of DDIs in children in ICUs can help improve the safety of drug prescriptions and provide guidance for clinical monitoring, thus improving the care of children in ICUs.
Data Sources.
This retrospective study was conducted using patient data from the PIC database [16], which contains information on patients admitted to the Children's Hospital of Zhejiang University School of Medicine (Zhejiang, China) between 2010 and 2019. The database includes demographic information, length of hospital stay, vital sign measurements, laboratory measurements, diagnoses, medications, and survival data.
Eligibility Criteria and Study Population.
The drug information in the PIC database contains the approved drug names, the time and mode of administration, and the dose. The medication information in the database was initially cleaned to exclude the following medications: (1) topical medications such as creams and drops; (2) Chinese herbal medicines; (3) glucose injection and sodium chloride injection series. Patients aged 0-17 years who took two or more medications (after cleaning) during hospitalization were screened for further study.
Definitions of CYP-Mediated pDDIs.
The CYP-mediated pDDI pairings with good levels of reliability and clear clinical phenotypes were selected for this study. Potential pDDI pairings were identified using the information provided in the Lexicomp database (an online drug interaction checker, https://www.uptodate.com/drug-interactions), which classifies pDDIs into 6 reliability ratings, from low to high. In this study, we selected the 3 highest levels: reliability rating fair (reported in the prescribing information), reliability rating good, and reliability rating excellent. The database also gives patient management recommendations, and we selected pDDI pairings with clear clinical phenotypes (with clear clinical management recommendations) for further study (see the supplementary table).
Identification of Clinically Significant CYP-Mediated pDDIs.
In this study, a clinically significant CYP-mediated pDDI was defined as exposure to two drugs of the above pDDI pairings within the same 24 h period during hospitalization. This criterion was used to identify the occurrence of clinically significant pDDIs in all included patients.
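A hypothetical sketch of this screening step is shown below, assuming a prescriptions table with columns patient_id, drug, and start_time; the actual PIC table and column names may differ, and the pairing list shown is only an illustrative subset of the screened pDDI pairings.

```python
# Flag patients exposed to both drugs of a screened pDDI pairing within the same 24 h window.
import pandas as pd

PDDI_PAIRS = {("nifedipine", "voriconazole"), ("erythromycin", "fluconazole")}  # illustrative subset

def flag_pddis(prescriptions: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for pid, grp in prescriptions.groupby("patient_id"):
        grp = grp.sort_values("start_time")
        for drug_a, drug_b in PDDI_PAIRS:
            a = grp.loc[grp["drug"] == drug_a, "start_time"]
            b = grp.loc[grp["drug"] == drug_b, "start_time"]
            # exposure to both drugs within 24 hours of each other counts as one pDDI event
            if any(abs(ta - tb) <= pd.Timedelta(hours=24) for ta in a for tb in b):
                rows.append({"patient_id": pid, "pair": f"{drug_a} + {drug_b}"})
    return pd.DataFrame(rows)
```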
Identification of Adverse Reactions.
Criteria for identifying adverse reactions based on laboratory test results: the laboratory test result was normal at the 1st test and abnormal at the nth test, and the patient was exposed to the DDI pairing within 7 days before the abnormality [17]. The abnormal values were determined according to the reference literature or relevant treatment guidelines. The diagnostic criteria for adverse reactions based on laboratory test results used in this study are given in the supplementary methods.
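The laboratory-based rule can be expressed as a small filtering procedure; the sketch below assumes hypothetical column names (patient_id, test, value_abnormal, charttime for laboratory results; patient_id, pair, time for pDDI exposures) rather than the actual PIC schema.

```python
# Flag lab-based adverse reactions: first test normal, a later test abnormal, and exposure to a
# flagged pDDI pair within the 7 days preceding the abnormal result.
import pandas as pd

def flag_adverse_reactions(labs: pd.DataFrame, exposures: pd.DataFrame) -> pd.DataFrame:
    hits = []
    for (pid, test), grp in labs.sort_values("charttime").groupby(["patient_id", "test"]):
        if grp["value_abnormal"].iloc[0]:
            continue                           # first measurement already abnormal: rule not applicable
        abnormal = grp[grp["value_abnormal"]]
        if abnormal.empty:
            continue                           # never becomes abnormal
        t_abn = abnormal["charttime"].iloc[0]
        window = exposures[(exposures["patient_id"] == pid)
                           & (exposures["time"] >= t_abn - pd.Timedelta(days=7))
                           & (exposures["time"] <= t_abn)]
        if not window.empty:
            hits.append({"patient_id": pid, "test": test, "time": t_abn,
                         "pair": window["pair"].iloc[0]})
    return pd.DataFrame(hits)
```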
Clinical and Demographic Characteristics of Patients.
A total of 6920 patients in the PIC database used at least two drugs during their hospitalization, ranging from 2 to 104 drug types. Of these patients, 84 (1.21%) were exposed to clinically significant CYP-mediated pDDIs (Table 1), and their ages ranged from 0 to 14 years (median age of 4 years). The patients' most common diagnoses were diseases of the respiratory system (20, 23.81%), neoplasms (16, 19.05%), and certain conditions originating in the perinatal period (12, 14.29%). The length of stay ranged from 6 to 335 days, with a median of 37.5 days. During the stay, the minimum number of medication types was 23, the maximum was 103, and the median was 45.
Occurrence of Adverse Reactions Based on Laboratory Test Results.
A total of 12 (14.29%) of the 84 children had 15 adverse reactions (Table 4), of which 4 were rhabdomyolysis, 4 leukopenia, 3 neutropenia, 2 acute kidney injury, 1 myocardial injury, and 1 thrombocytopenia. The most frequently occurring DDI pairing that caused adverse reactions was nifedipine + voriconazole (10 times).
Discussion
In this study, we identified the prevalence and characteristics of CYP-mediated and clinically significant pDDIs in ICU hospitalized children from the PIC database and identified the occurrence of adverse reactions based on laboratory test results.
There have been several studies on the prevalence, common drug pairs, risk factors, and adverse outcomes of pDDIs in children in ICUs [4,12,14,15,18]. However, to the best of our knowledge, the occurrence of CYP-mediated, clinically significant pDDIs and the related adverse effects have not been studied.
Our study found that 84 (1.21%) of 6920 children who used two or more drugs were exposed to at least one clinically significant CYP-mediated pDDI. The pDDIs identified in our study involved a total of 8 pDDI pairings, with nifedipine + voriconazole (39.60%) and erythromycin + fluconazole (33.66%) being the two most common drug combinations, accounting for more than 70% of all drug pairs. In our study, we focused only on pDDIs based on CYP450 with good reliability levels and clinical significance, whereas most studies examined all types of pDDIs, which may also include pharmacodynamic interactions, other types of pharmacokinetic interactions, and interactions of unclear clinical significance. Thus, our study showed a low incidence of pDDIs, and the pDDI pairings with a high incidence found in this study, as well as the commonly used drugs, were also inconsistent with other studies. The pDDIs we identified all involved CYP3A4. Human cytochrome CYP3A4 is the most abundant hepatic and intestinal phase I enzyme, metabolizing about 50% of drugs [19]. In humans, CYP3A4 shows an age-dependent maturation pattern [20], which allows for possible differences in CYP3A4-mediated drug metabolism between children and adults.
In addition, the 10 drugs involved were categorized using ATC codes, and the most frequent category was J (anti-infectives for systemic use). This was consistent with the most frequently reported drug categories leading to ADR visits in previous studies [21]. However, the most frequent drugs in our study were voriconazole (29.70%) and erythromycin (19.80%), which may be due to their common clinical use [22] and to their being effective inhibitors of CYP3A4. The concomitant use of substrates and inhibitors of CYP3A4 may lead to higher drug concentrations, resulting in a higher risk of ADRs [23].
In our study, a total of 12 patients experienced 15 adverse reactions. Some of these adverse reactions can be explained by the abovementioned mechanisms. For example, in the FDA-approved drug label information, the reported adverse effects of nifedipine [24] include thrombocytopenia, leukopenia, and damage to the heart. Also, elevated creatine kinase has been found in patients using nifedipine, although the relationship with nifedipine treatment is uncertain. The combination of nifedipine with voriconazole, a strong inhibitor of CYP3A4, increases the blood concentration of nifedipine and may aggravate these adverse effects. In addition, voriconazole [25] also has side effects of granulocyte deficiency, thrombocytopenia, and leukopenia. The combination of the two drugs may increase the likelihood and severity of these adverse reactions. Similarly, amlodipine [26] can cause leukopenia, and fluconazole [27] can cause leukopenia and neutropenia. The ADRs produced by the combination of amlodipine with fluconazole or voriconazole can be understood in the same way.
Some adverse reactions may not be direct side effects of the drugs but rather secondary injuries. For example, acute kidney injury (AKI) may be caused by the combination of nifedipine and voriconazole through the following mechanism: owing to the effective inhibition of CYP3A4, voriconazole increases the blood concentration of nifedipine and excessively enhances its hypotensive effect. Severe hypotension may lead to inadequate renal perfusion, resulting in ischemic AKI [28]. In addition, some adverse reaction symptoms may reflect the natural history or complications of the patient's primary disease, such as systemic lupus erythematosus, which may manifest as rhabdomyolysis [29,30].
There are still some limitations to our study. First, our study excluded Chinese herbal medicines, which studies have shown are metabolized by cytochrome P450 and can be involved in interactions [31]. However, due to the complex composition of herbal medicines and the unspecified metabolizing enzymes of some components, pDDIs involving herbal medicines were not evaluated in this study. Secondly, we explored the occurrence of adverse reactions based on laboratory test results because there were no drug monitoring data or adverse reaction records in the database. However, changes in laboratory test results may be due to a variety of reasons, not all of which are caused by pDDIs. Moreover, there were many adverse reactions, such as DDI-induced tardive dyskinesia, that could not be identified from the available data.
However, our study can still provide some reference for the care of children in ICUs. We found that CYP-mediated pDDIs were still occurring in children admitted to ICUs. CYP-mediated interactions are usually measurable and, therefore, preventable. We recommend using clinical decision support systems such as Lexicomp to try to avoid combinations that would produce serious adverse effects. Sometimes the combination of these drugs may be unavoidable, so we recommend monitoring serum drug concentrations and paying attention to clinical monitoring for possible adverse reactions. We have also identified a number of drugs that are associated with pDDIs and the occurrence of adverse reactions, such as voriconazole. The risk of pDDIs and adverse reactions may be significantly reduced if these drugs are appropriately discontinued or switched to other drugs with the same pharmacological effects.
Conclusions
We explored the occurrence of clinically significant CYP-mediated pDDIs in the ICUs of a large children's hospital in China and identified adverse reactions based on laboratory test results. We recommend the use of clinical decision support systems in ICUs to improve medication safety, as well as better clinical monitoring.
Data Availability
The data used to support the findings of this study may be released upon request.
Conflicts of Interest
The authors declare no conflicts of interest for this work.
Authors' Contributions
TL, CG, and GY contributed to the study design, data analysis, and manuscript writing and revision. BH, LY, ZF, and LH contributed to data analysis and manuscript revision. ChengjG, XW, WT, and YW contributed to data extraction. All authors read and approved the final manuscript. | 3,211.6 | 2022-08-23T00:00:00.000 | [
"Medicine",
"Biology"
] |
Analysis of Corporate Governance Index using Asean Balanced Score Card and Firm Performance
Governance plays a crucial role in most activities of socio-economic life. For any organization, unit, or business, or at a higher level, a country or a community, the role of governance has become more important than ever. For businesses, with globalization taking place and markets becoming more competitive than ever, good Corporate Governance is now considered a factor of success. Good corporate governance helps a company improve its access to various sources of capital and operate more efficiently. However, in Vietnam in particular, the Corporate Governance system still needs to be enhanced and improved. Therefore, with the aim of surveying the current state of Corporate Governance practices in Vietnam and analyzing their impact on business performance, this research is conducted on a sample of 60 listed companies in the Construction and Food & Beverage industries for the years 2015, 2017 and 2018. Using the ASEAN balanced scorecard to evaluate Vietnamese Corporate Governance practices on 5 aspects, namely Rights of shareholders, Equitable treatment of shareholders, Roles of stakeholders, Disclosure and transparency, and Board duties and responsibilities, the paper finds low levels of Corporate Governance practice among companies in Vietnam. Based on this method, different relationships were found, one of which is the positive relationship between the Corporate Governance Index and Tobin's Q.
Introduction
Since people began to form groups to accomplish goals that could not be achieved individually, governance has become essential to ensure coordination between different individuals. Alongside the fast growth of businesses in both number and scale today, particularly among listed firms in Vietnam, Corporate Governance, a tool that helps separate ownership and management, is increasingly drawing the attention of numerous organizations and lawmakers concerned with businesses.
Generally, governance is the process of laying down the basic operational principles of an enterprise. The topic of Corporate Governance arises from the separation between the management and the shareholders of companies. Whether a company is public or private, Corporate Governance is always meant to protect the rights of stakeholders and related parties. In the case of public companies, there are many small shareholders whose voices tend to be limited, so a transparent system should exist to protect their rights. Moreover, as investors depend on public sources of information to understand a company, they need to be sure of the quality, accuracy and clarity of financial information. Therefore, transparency in the Corporate Governance system is a necessary requirement. This transparency increases investors' confidence in making investment decisions. As a result, businesses can attract more capital, particularly from foreign investors. Overall, Corporate Governance gives companies directions to create value for both shareholders and society in a competitive market.
In Vietnam in particular, Corporate Governance is now taken more seriously by businesses, but it is not yet in a strong position. In the ASEAN Disclosure Index 2018, conducted by FTI Consulting Group on the top 180 listed companies and ranking them on disclosure quality, Vietnam ranked last in all three categories: composite disclosure, board quality and risk disclosure. Meanwhile, regarding State enterprises, according to the Ministry of Planning and Investment, in 2017 only 265/622 enterprises (accounting for 42.6% of State enterprises) sent reports to the Ministry to publish information on the Business Portal, and in 2016 this rate was only 38.9% (D, A 2018).
With globalization taking place and markets becoming more competitive than ever, the importance of good Corporate Governance is taken seriously, and its impacts on firm performance are unavoidable and undeniable. This may be why many studies have been conducted to analyze this relationship.
Hence, this paper contributes to the topic of the connection between the Corporate Governance Index and firm performance. In particular, with the ASEAN scorecard method, the score for each category of Corporate Governance is calculated based on companies' publicly disclosed information. After that, the relationship between the Corporate Governance score and firm performance is tested and discussed. To conduct the study, a sample of 60 non-financial companies in the Food & Beverage and Construction industries listed on the Ho Chi Minh Stock Exchange (HOSE) and the Hanoi Stock Exchange (HNX) for three years (2015, 2017 and 2018) is chosen. Year 2016 is omitted, as we want to observe the changing progress more clearly.
In detail, this paper covers the following four purposes: providing basic information about Vietnamese Corporate Governance, the Corporate Governance Index and the use of the ASEAN scorecard; identifying the score for each category in the Corporate Governance mechanism of 60 listed companies in Vietnam for three years (2015, 2017 and 2018) using the ASEAN scorecard; discussing the relationship between the Corporate Governance score calculated with the ASEAN scorecard and firm performance measured through market-based and accounting-based valuation; and discussing the reasons for Vietnam's Corporate Governance scores and recommendations for improvement. This paper is divided into 5 sections. Section 1 introduces the topic of the paper. Section 2 provides a literature review of theoretical literature and empirical studies from previous work. This is followed by section 3, which presents the methodology of the research, and section 4, which presents the results. Finally, section 5 concludes with a discussion of the findings of the paper, some limitations of the research, and recommendations for improvement.
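For illustration, a hypothetical sketch of the scoring and testing pipeline is given below: it aggregates the five ASEAN-scorecard category scores into a Corporate Governance Index and regresses Tobin's Q on it. The equal category weights, the column names, and the control variables are assumptions made for the example, not the paper's exact scorecard weights or regression specification.

```python
# Hypothetical CGI aggregation and Tobin's Q regression (illustrative column names and weights).
import pandas as pd
import statsmodels.api as sm

CATEGORIES = ["rights_of_shareholders", "equitable_treatment", "role_of_stakeholders",
              "disclosure_transparency", "board_responsibilities"]

def run_analysis(df: pd.DataFrame):
    # df: one row per firm-year with the five category scores plus tobins_q, firm_size, leverage
    df = df.copy()
    df["cgi"] = df[CATEGORIES].mean(axis=1)          # assumed equal weighting of the 5 aspects
    X = sm.add_constant(df[["cgi", "firm_size", "leverage"]])
    model = sm.OLS(df["tobins_q"], X).fit()
    return model.summary()
```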
Corporate Governance in Asia
According to the book Corporate Governance in Development (2003), published by the OECD Development Centre, Corporate Governance had never been considered adequately in developing countries; it remained practically invisible in those nations until the East Asian financial crisis of 1997-1998. As noted in the study of Corporate Governance in Southeast Asia by the Philippine Institute for Development Studies, the economic downturn revealed latent problems (e.g. corruption), exacerbated others (e.g. poor resource management) and gave rise to new ones (e.g. political instability) (Eduardo & Magdalena 2009). In major Southeast Asian countries such as Malaysia, the Philippines, Indonesia and Thailand, poor investment structures, weak legal and accounting systems, faulty financial practices and questionable political interventions were described as substantial contributors to the economic decline during the crisis (Ho 2005). Such pressure provoked the desire to implement new policies to strengthen Corporate Governance, recover the economy and prevent external shocks from turning into major crises; those policies included transparency, institutional accountability and fiscal prudence (Ho 2005). Moreover, corporate restructuring was not restricted to the affected countries but was also adopted by nations that were not influenced by the crisis, such as China, Taiwan and Singapore, on the principle that prevention is better than cure. Vietnam is no exception in this turning point of reform.
However, it was not until the Vietnam Enterprise Law was published in 2005 that a significant improvement occurred in Corporate Governance in general and in shareholding companies in particular (Nguyen 2008). In terms of legal status, the advent of the stock market and the issuance of Decision 12/2007/QD-BTC on Corporate Governance Regulations applicable to companies listed on the Stock Exchange showed Vietnam's initial attention to listed companies' Corporate Governance. Since then, Vietnam has continuously improved the legal framework on Corporate Governance by issuing Circular 52/2012/TT-BTC on information disclosure on the stock market, Circular No. 121/2012/TT-BTC on Corporate Governance Regulations applicable to public companies, and Decree No. 108/2013/ND-CP on sanctions for administrative violations in the field of securities and the securities market. The introduction of these documents has helped improve public companies' Corporate Governance compliance and enforcement in Vietnam. These actions are also seen as evidence of Vietnam's determination to improve the legal framework for Corporate Governance with the aim of closing the gap with the rest of the world.
In 2008, the World Bank made a comparative study of Corporate Governance in Thailand, Vietnam, Indonesia and Malaysia for the period 2003-2006, summarizing and comparing the four countries based on the scores observed in each category. There was a total of 22 categories, and the maximum attainable score was 110. Vietnam achieved the lowest overall score (50.9), ranking below Malaysia, Thailand and Indonesia, whose scores were 77.3, 72.7 and 60 respectively. In detail, across the 22 categories, Vietnam ranked last in 19, including basic shareholder rights, shareholders' annual general meeting rights, equal treatment of shareholders, prohibition of insider trading and disclosure of interests, to name a few. Practices in categories such as objective judgment exercise, board responsibility, law compliance, fair treatment of shareholders, fair and timely dissemination, disclosure standards, disclosure of interests, insider trading prohibition and equal treatment of shareholders were hardly observed in Vietnamese companies' Corporate Governance mechanisms. The outcomes evidently show that the corporate governance system in Vietnam was still lacking and unclear compared to other countries in the region.
A report published by the ASEAN Capital Markets Forum also indicated the inferior status of Vietnamese Corporate Governance compared to five other ASEAN member countries: Indonesia, Malaysia, the Philippines, Singapore and Thailand. Although the score for Vietnamese Corporate Governance practices improved steadily during the period 2011-2015 (from 28.4 points in 2011 to 36.75 points in 2015), Vietnamese listed companies still had the lowest average governance score among the six ASEAN member countries surveyed, reflecting the limited Corporate Governance activities in Vietnam's listed companies.
Corporate Governance and firm performance in Asia
Globalization has intensified competition within countries and even across national boundaries, accompanied by a worldwide controversy over whether better-governed firms outperform others (Akshita & Shernaz 2018). This has undoubtedly urged researchers to investigate the influence of the Corporate Governance system on corporate performance.
The research published by Shafie, Kamilah & Khaw (2016) considered the relationship between Corporate Governance practices and firm performance, with evidence from the top 100 public listed companies in Malaysia. The researchers used board size and board independence as two indicators to test the hypothesized relationship between Corporate Governance and firm performance. Wen-Yen & Pong Pitch presented another paper, in 2010, studying the effect of Corporate Governance on the efficiency performance of the Thai non-life insurance industry. The research discovered negative impacts of audit committee size, diligence, voting rights, board tenure, board age and board ownership on firm performance, measured by the firm's technical, allocative, cost and revenue efficiency.
In Vietnam, the structure of Corporate Governance is still in an early period of development. However, a growing number of researchers are paying attention to this matter and have begun analyzing how better firm performance can be obtained by developing Corporate Governance. One notable paper is "The impact of Corporate Governance on firm performance: Empirical Study in Vietnam" by Vo & Nguyen (2014). Using a dataset of 177 listed companies in Vietnam for the period 2008 to 2012, the authors found some noticeable results: a positive correlation between the dual role of the CEO and firm performance, an opposite effect of board independence on firm performance, and a structural change between organizational ownership and firm performance.
Taking a sample of 30 listed companies from the VN30 Index, Dao (2018) analyzed the dataset and found a positive correlation between the number of Director Board members, an independent CEO and major shareholders and the performance of Vietnamese non-manufacturing firms. Focusing on the banking industry only, the research of Dao & Hoang (2012) on Corporate Governance and performance in Vietnamese commercial banks concluded that the number of Director Board members and the capital adequacy ratio had a strong influence on bank performance. These findings can be of great assistance in helping organizations minimize business risks. Pham (2016) examined the impact of Corporate Governance on firm performance measured by ROA and Tobin's Q; however, in the case of Tobin's Q, there was no significant correlation between managerial ownership and firm performance.
Manmeet and Madhu (2018) constructed a CGI to examine the good governance practices of Indian banks and to see whether the banks performed well accordingly. The outcome demonstrated that the CGI is significantly and positively correlated with banks' financial performance measures, namely return on assets, economic value added and Tobin's Q. Also conducting research in the Indian context, but on business firms, Akshita and Shernaz (2018) used essential parameters of Corporate Governance, such as ownership structure and board structure, to build a CGI and discover the relationship between the Corporate Governance Index and firm performance. The firm performance metrics (return on assets, earnings per share and return on net worth) proved to have a significant positive relationship with the CGI. Given this empirical evidence, firms may have good incentives to deliberately improve their Corporate Governance, as it helps to enhance their performance. Additionally, investors will likewise have a positive view of firms maintaining high governance standards, thereby lessening possible funding costs.
Another study worth mentioning is the research conducted in 2007 by Langfen et al. Taking a sample of firms listed in Taiwan, they constructed a CGI based on four dimensions of a firm's Corporate Governance structure (CEO duality, size of the board of directors, management holdings and block shareholders' holdings) to clarify the connection between ownership/leadership structures and the stock returns of these firms.
In Vietnam, to the best of the author's knowledge, few researchers have studied the association between firm performance and a CGI. In 2018, Dao & Nguyen published their study "The impact of Corporate Governance Index on the performance of listed companies VN30 Index". The index questions are based on the Thailand Corporate Governance Report (2012) and the OECD principles. The result shows that the Corporate Governance Index has a significant effect on firm performance as measured by ROA.
There has been little research using the ASEAN scorecard to measure the corporate governance standards of listed companies in Vietnam, or studying the relationship between the ASEAN governance scorecard and firm performance. The only source using the ASEAN scorecard to give an overview of Vietnamese Corporate Governance is the ASEAN Corporate Governance Scorecard Country Reports and Assessments, produced in cooperation by the ASEAN Capital Markets Forum and the Asian Development Bank from 2012 to 2015. Besides the assessment of publicly listed companies in Indonesia, Malaysia, the Philippines, Singapore and Thailand, Vietnam is also included in the group for evaluation. Scoring on two levels covering the areas of the OECD principles (rights of shareholders, equitable treatment of shareholders, role of stakeholders, disclosure and transparency, and responsibilities of the board), among the 50 Vietnamese listed companies participating in the evaluation, PetroVietnam Fertilizer and Chemicals, Ho Chi Minh City Securities and Vietnam Dairy Products Joint Stock Company were the top three companies with the highest ASEAN Corporate Governance Scorecard total scores in 2015.
Methodology
The research is designed to evaluate the Corporate Governance mechanism of 60 listed companies in two industries in Vietnam in 2015, 2017 and 2018 using the ASEAN scorecard, and to examine the relationship of their scores with firm performance.
Sample selection
A sample of 60 non-financial companies, of which 30 listed companies are in the Food & Beverage & Consumer Goods industry and another 30 belong to the Construction and Real Estate industry, listed on the HOSE and HNX stock exchanges, is chosen for the Corporate Governance assessment using the ASEAN scorecard method. These are the two biggest industries in the Vietnamese market, with market capitalizations estimated at 861,459.38 and 577,288.97 billion Vietnamese dong respectively (the financial industry is excluded). Additionally, the information needed for the evaluation is taken from three years, 2015, 2017 and 2018, making it easier to track the improvement of each company's Corporate Governance practices over time. Hence, there are 180 observations in total.
ASEAN scorecard for Vietnam Corporate Governance
According to the report prepared by a group of Association of Southeast Asian Nations Corporate Governance experts (2016), the ASEAN corporate governance scorecard, endorsed by the ASEAN Capital Markets Forum (ACMF), is a strong supporting tool to measure ASEAN corporate governance.
The ASEAN scorecard was developed based on international benchmarks and frameworks: the Organisation for Economic Co-operation and Development (OECD) Principles of Corporate Governance (2004), the International Corporate Governance Network Corporate Governance Principles, and leading practices from ASEAN and the world. The objective of developing governance standards for publicly listed companies is to give Southeast Asian nations greater international visibility for well-governed listed companies and to encourage those companies to improve their practices to the level of their global counterparts. It also complements other ACMF initiatives and promotes ASEAN as an asset class (ACMF 2017).
Two levels of scoring are designed for the ASEAN scorecard assessment. Calculation based on two scoring dimensions is more likely to reflect the actual execution of the substance of good Corporate Governance.
According to the ACMF country report (2015), Level 1 contains descriptors or components that are, fundamentally, characteristics of laws, regulations, rules and the basic expectations of the OECD principles. In detail, the scorecard covers five areas of the OECD principles at this level:
Part A: Rights of Shareholders
Part B: Equitable Treatment of Shareholders
Part C: Role of Stakeholders
Part D: Disclosure and Transparency
Part E: Responsibilities of the Board
It is essential to note that the attributes included in the two levels are not necessarily legally required; rather, they are regarded as good components contributing to a strong Corporate Governance mechanism by international standards.
The ASEAN scorecard, which can be understood as a CGI, is not survey-based: the questions are answered from information already disclosed by listed companies. In other words, the method keeps assessors away from potentially emotional or subjective answers. The sources of information needed to answer the questions in the two levels are normally annual reports, a company's charter and regulations, and annual general meeting documents. These documents are easily accessible through company websites or stock exchange information websites such as finance.vietstock.vn and cafeF.vn, two useful sources for listed firms in Vietnam.
There are about 183 questions in the ASEAN scorecard, divided into two levels. Each question corresponds to a "Yes" or "No" answer; the maximum value for each question is 1 and the minimum is 0. If the answer is "Yes", the question scores 1 point; if the answer is "No", it scores 0. A weight, defined in relation to the total Level 1 score of 100 points, is assigned to each part depending on the relative significance of the category. Level 2 includes bonus and penalty elements, designed to strengthen the ASEAN scorecard in evaluating companies' Corporate Governance in practice. Bonus questions acknowledge organizations that exceed the fundamentals of Level 1 by implementing good Corporate Governance practices. In contrast, penalty questions punish companies for poor Corporate Governance practices not captured in the Level 1 categories, for example sanctions for violating the listing rules. In detail, Level 2 comprises 12 bonus questions and 25 penalty questions, and each category is given a different total score. The penalty score is deducted from the total obtained from Level 1 and the bonus section.
Lastly, the maximum score that can be achieved across the two levels is 110 points and the minimum score is -10 points.
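As an illustration of this scoring arithmetic, the sketch below computes a company's total score from its binary Level 1 answers and Level 2 adjustments. The part weights follow the category maxima reported later in this paper (10, 15, 10, 25 and 40 points for Parts A-E); all function and variable names are hypothetical, not part of the official scorecard.

```python
# Minimal sketch of the ASEAN scorecard arithmetic; names are illustrative.
LEVEL1_WEIGHTS = {"A": 10, "B": 15, "C": 10, "D": 25, "E": 40}  # sums to 100

def total_cg_score(level1_answers, bonus_points, penalty_points):
    """level1_answers maps each part ("A".."E") to a list of 0/1 answers."""
    level1 = 0.0
    for part, answers in level1_answers.items():
        fraction_yes = sum(answers) / len(answers)     # share of "Yes" answers
        level1 += fraction_yes * LEVEL1_WEIGHTS[part]  # scale to part weight
    # Level 2: bonus added, penalty deducted (overall range -10 to 110)
    return level1 + bonus_points - penalty_points
```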
Methodology specification
Firstly, it is necessary to summarize all variables used in this paper. The variables are summarized in the table below; for example, the size of the firm is measured as log(assets) and is expected to have a positive sign. Next, the study uses OLS to regress the association between the Corporate Governance Index and firm performance:

π = β0 + β1·TOTAL + β2·SIZE + β3·LEVERAGE + ε

There are reasons to test the influence of the independent variable TOTAL on firm performance separately. The total Corporate Governance score of each company is calculated by combining points from all the other categories: Part A: Shareholders' rights, Part B: Equitable treatment of shareholders, Part C: Roles of stakeholders, Part D: Disclosure and transparency, Part E: Board duties and responsibilities, plus the Bonus and Penalty sections. This means that the variable TOTAL shares the same information as the other independent variables (PA_SHAREHOLDERS, PB_EQUITABILITY, PC_STAKEHOLDERS, PD_DISCLOSURE, PE_BOARD). Moreover, as stated in the descriptive data section below, there is a high correlation (over 0.8) between TOTAL and the other independent variables. Therefore, to avoid the problem of multicollinearity, it is better to test TOTAL separately.
The total Corporate Governance score also includes the scores calculated in the Bonus and Penalty sections. Because of the small weight of these two sections, it is unnecessary to separate them into two more independent variables; it is better to include them in the TOTAL variable to test the overall significance of the Corporate Governance Index on firm performance.
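A minimal sketch of how this regression could be run, assuming panel data stored as a flat file with one row per firm-year and illustrative column names (the paper does not specify its estimation software):

```python
import pandas as pd
import statsmodels.api as sm

# One row per firm-year (180 observations); file and column names are assumed.
df = pd.read_csv("cg_panel.csv")

# Regress each performance measure on TOTAL with the two control variables.
X = sm.add_constant(df[["TOTAL", "SIZE", "LEVERAGE"]])
for performance in ["ROA", "ROE", "TOBINS_Q"]:
    result = sm.OLS(df[performance], X).fit()
    print(result.summary())
```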
Step 2: Error Testing
Descriptive statistics
The descriptive statistics for the three years (2015, 2017 and 2018) are summarized in Table 10. Overall, the total Corporate Governance Index score of the 60 companies in the two industries, using the ASEAN scorecard, averages about 45 points out of a maximum attainable score of 110. Although this is not a high score compared to other countries', it demonstrates Vietnam's determination to improve Corporate Governance practices: according to the information collected by the ACMF on 55 listed companies in Vietnam, the average total score using the ASEAN scorecard was only 28.42 in 2012 and peaked at 36.75 points in 2015, whereas the present sample has gradually improved to over 45 points.
Similarly, the scores for the other categories, such as shareholders' rights, equitable treatment of shareholders, disclosure and transparency, and board responsibilities, also demonstrate positive improvement over the years. For example, disclosure and transparency of information (PD_DISCLOSURE) has seen a considerable change, from only 9.30 points in 2012 to 16.28 points since. However, the average score for Part C, stakeholders' roles, over the three-year period is almost the same as the score achieved in 2012, meaning that companies must reconsider their implementation of the corporate governance practices in this part to achieve a higher score in the future.
Besides, the mean figures of ROA, ROE and Tobin's Q are quite good, as they are all positive. In particular, the average Tobin's Q is 1.25, which is higher than 1. This is a good sign, as the companies generally do generate returns and have high market value.
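For readers unfamiliar with the measure, the sketch below shows one common approximation of Tobin's Q; the paper does not state its exact formula, so this definition, and the numbers used in the example, are assumptions for illustration only.

```python
def tobins_q(market_cap, total_debt, total_assets):
    """A common approximation of Tobin's Q: (market value of equity +
    book value of debt) / book value of total assets. This exact
    definition is assumed, not taken from the paper."""
    return (market_cap + total_debt) / total_assets

# Illustrative values only (billion VND), not from the paper's dataset:
print(round(tobins_q(1500.0, 900.0, 2000.0), 2))  # 1.2 (> 1)
```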
This section presents the results of the six final regression models to demonstrate the quantitative correlation between the dependent variables (ROA, ROE and Tobin's Q) and the six independent variables (PA_SHAREHOLDERS, PB_EQUITABILITY, PC_STAKEHOLDERS, PD_DISCLOSURE, PE_BOARD and TOTAL). The data are collected and calculated from 60 listed companies in the two industries for 2015, 2017 and 2018.
Rights of shareholders
First, the three-year average for the "Shareholders' rights" category (PA_SHAREHOLDERS) of companies in the Construction industry is 6.21 points out of a maximum of 10 for this part. Overall, this means the companies protect shareholders quite well.
Similar to the Construction industry, companies in the Food & Beverage industry gain an average of 6.17 points for the "Rights of shareholders" category. Notably, one company in the industry achieves an outstanding score for this part: Vinamilk. With nearly perfect scores of 9.5, 8.6 and 9.0 in 2015, 2017 and 2018 respectively, Vinamilk has shown the whole market a model of good corporate governance practices for shareholders.
Equitable treatment of shareholders
The maximum score for equitable treatment of shareholders (PB_EQUITABILITY) in the ASEAN scorecard is 15 points, and the average score for this category among the 30 Construction companies over the three years is 9.16, which is quite good.
As expected, the average score for equitable treatment of shareholders (PB_EQUITABILITY) in the Food & Beverage industry is not much different from the Construction industry, as both scores are around 9.3 out of 15 points. The similarity arises because the Food & Beverage industry encounters the same issues as the Construction industry.
Roles of stakeholders
In the Construction industry, the average score for the "Role of stakeholders" category is only 3.11, while the maximum attainable score for this part is 10 points. The highest score of 6.15 points belongs to FLC Faros, which is still comparatively low. The main reason for the low scores in this part is that Construction companies lack evidence demonstrating their responsibilities to society. In detail, in 2015, 29 companies did not have a separate section in their annual report discussing their efforts on environmental and social issues. In 2017 and 2018, Construction companies seemed to gradually take notice and become more concerned about their responsibilities to the community.
Stakeholders' rights in the Food & Beverage industry are also not clearly demonstrated by companies, so the mean score for this section is quite low at 4.69 out of 10 points. However, this is still about 1 point higher than the Construction industry. The reason for the difference is that Food & Beverage companies perform better in showing their concern for the environment and society: in 2018, only four companies did not have a separate section in their annual report discussing their efforts on environmental and social issues and promoting sustainable development. Similar to the Construction industry, almost all Food & Beverage companies, except Vinamilk, rarely disclose a policy protecting employees from retaliation when reporting unethical behavior.
Disclosure and transparency
The fact that this category includes 32 questions and accounts for 25% of the total Corporate Governance score demonstrates the importance of information transparency in Corporate Governance. Both industries gain an average score of about 16 points out of 25 in "Transparency and disclosure", which shows companies' efforts to provide their stakeholders with as much information as possible.
From the results of the ASEAN scorecard, it is observable that certain questions constantly receive "No" answers. These point to the problems in "Disclosure and transparency" that Vietnam is struggling with. First, companies do not fully inform shareholders about the name, relationship and value of each related-party transaction; no such information was found for any of the 60 companies. Moreover, about 26 Construction companies and 26 Food & Beverage companies do not disclose the direct and indirect shareholdings of substantial shareholders, and all 60 companies fail to reveal the direct and indirect shareholdings of senior management. This may leave investors and shareholders unable to accurately evaluate the present and future value of the business.
Board duties and responsibilities
There are 65 questions in this part, accounting for 40% of the total Level 1 score. The weight assigned to this category shows the significance of the board for its company's Corporate Governance.
The mean score of Construction companies in this part is only 10.01 points, about 30 points below the maximum attainable score. FLC Faros (2017) achieved the highest score among the 30 companies, with 14 points, which is still unexpectedly low. About 1 point higher than the Construction industry average, Food & Beverage companies record a mean score of 11.26 points for the whole industry. Remarkably, Vinamilk performed outstandingly, achieving the highest score among all companies with 30.77 points in this part.
Generally, companies often lose points on the questions relating to the members of the board and the audit, remuneration and nominating committees. As almost every company except Vinamilk does not establish separate committees for audit, remuneration or nomination, 59 companies cannot answer the related questions. Therefore, for each such company, the 19 questions covering information about the three committees are constantly marked with 0 points.
Discussion on relationship between Corporate Governance Index and firm performance
Firstly, in the Construction industry, the variable "Rights of shareholders" is statistically significant at the 10 percent level, displaying a negative effect on ROA. This result implies that when a company does not allow shareholders to participate effectively in its decisions and limits their rights, the company's ROA increases. Surprisingly, this finding is completely contrary to the author's prediction when building the model. In the Food & Beverage industry, there is not enough evidence to conclude any impact of "Shareholders' rights" on any firm performance measure at the 90% confidence level.
Secondly, in the Construction industry, the variable "Equitable treatment of shareholders" demonstrates a significant impact on ROA, ROE and Tobin's Q at the 1%, 5% and 10% significance levels respectively. Moreover, "Equitable treatment of shareholders" has positive impacts on ROA and ROE, which coincides completely with the paper's hypothesis and expectation. Meanwhile, the variable shows a negative correlation with Tobin's Q, the market-based performance measure of the firm. The outcome implies a relation of Tobin's Q opposite to that of ROA and ROE, reflecting the difference between the public market's perception and the reality of what a firm needs to generate returns.
Thirdly, "Transparency and disclosure" surprisingly shows no impact on the performance of companies in either industry. It was expected that the category might show some connection with firm performance, at least with Tobin's Q, since it is a market-based measure. This result is contrary to the results discovered by Anjala & Shikha (2016), whose research concerned the influence of a Corporate Governance Disclosure Index on 38 non-financial companies listed on the National Stock Exchange of India over the five-year period 2008-2012. A further conclusion that might be drawn from this result is that in a highly unstable market like Vietnam, where genuine and fake information are hard to distinguish, a higher degree of transparency and information disclosure is required before companies can gain enough investor trust to attract equity finance, which in turn would strengthen the financial performance of firms.
Fourth, the variable "Roles of stakeholders" shows no influence on any measure except Tobin's Q in the Construction industry, where it is statistically significant at the 10 percent level, displaying a positive effect on Tobin's Q. Furthermore, the finding that "Roles of stakeholders" has no relationship with ROA is also reported in the research "Analyzing the impact of the Corporate Governance Index on the performance of listed companies VN30 Index" conducted by Dao & Nguyen (2018). Their Corporate Governance Index is similarly constructed based on OECD principles, with 148 questions divided into five categories like the ASEAN Corporate Governance scorecard: shareholders' rights, equitable treatment of shareholders, roles of stakeholders, disclosure and transparency, and board duties and responsibilities.
Additionally, as can be observed from the result (p-value = 0.00005), "Board duties and responsibilities" demonstrates a positive influence only on Food & Beverage companies' Tobin's Q, at the 1% significance level. In contrast, in the Construction industry it is insignificant for companies' ROA, ROE and Tobin's Q at the 90% confidence level. This finding is unexpectedly contrary to the author's expectation, as the important role of a board within an organization is undeniable. Last but not least, although the association between Corporate Governance practices and firm performance differs between the two industries, the findings reveal one similarity: the positive influence of the total Corporate Governance score (TOTAL) on Tobin's Q. Overall, the average value of Tobin's Q in the Food & Beverage industry is 1.48, meaning that the majority of firms still have a market value higher than their book value, whereas the figure for the Construction industry is just 1.02. With similar average total scores of 45.7 (Food & Beverage) and 45.38 (Construction), companies in the Food & Beverage industry generate higher Tobin's Q values than those in the Construction industry; in other words, the total Corporate Governance Index affects companies in the Food & Beverage industry more than those in Construction. As a further implication, since Tobin's Q is valued based on market expectations and public information, the public's perception of transparency in the Food & Beverage industry appears to be much better than in Construction. It can also be understood that people place more value on public information in the Food & Beverage sector, since it deals directly with people's health.
Recommendation
Overall, Vietnam needs to keep improving the legal framework related to Corporate Governance practices in public companies in general and listed companies in particular. Based on the results of the Corporate Governance Index using the ASEAN scorecard, there are some key points that lawmakers, business owners and investors need to pay attention to in order to achieve better outcomes.
To begin with, it is important to enhance guidelines on corporate responsibilities toward stakeholders, especially communities, society and the environment, because stakeholders play significant roles in corporate business. The State needs to supplement regulations to guarantee fairness between major and small shareholders, and between domestic and foreign shareholders. For instance, information should be disclosed in both English and Vietnamese on a listed company's website; this would help ensure fairness for foreign shareholders and encourage foreign investment flows at the same time.
Moreover, it is necessary for Vietnam to learn the implementation of good Corporate Governance from other countries in the region, as they have many similarities with Vietnam. Thailand provides a good example: in the reports published by the ACMF in 2015, Thai companies accounted for 23 of the top 50 listed companies with the highest Corporate Governance scores in the Southeast Asian region. In addition, the establishment of the Institute of Directors was a turning point for Thailand in the battle to improve Corporate Governance quality; it has contributed greatly to developing professional directorial standards and providing good-practice rules for organization leaders. Learning from Thailand's success, Vietnam should establish an organization dedicated to Corporate Governance. This organization would help stakeholders such as directors, executives and investors raise their awareness and knowledge of Corporate Governance, and develop grading criteria based on international standards as well as the actual situation in Vietnam. Annual surveys could be carried out to assess aspects of Corporate Governance practices in Vietnamese listed companies.
In summary, Corporate Governance is a significant issue for all nations in this period of globalization and integration. Although Vietnam is only in the initial period of Corporate Governance implementation, it needs to learn from countries that have successfully executed Corporate Governance practices in order to achieve better performance outcomes.
"Business",
"Economics"
] |
Development of a laser-driven ultrasonic technology for characterizations of heated and aged concrete samples
ABSTRACT We have demonstrated a fully noncontact laser technology to measure the velocity of ultrasonic waves and their spectra propagated through concrete samples exposed to specified high-temperature conditions for specified durations, as models of concrete structures in a severe accident at the Fukushima Daiichi Nuclear Power Station. The velocities and spectra of the ultrasonic waves were strongly dependent on the exposure temperature: at a high-temperature condition of 400°C the velocity was 3700 m/s, whereas at room temperature it was 5000 m/s. The experimental results are almost comparable to those obtained by the contact ultrasonic technique.
Introduction
We experienced the Great East Japan Earthquake followed by the severe accident at the Fukushima Daiichi Nuclear Power Station, in which pieces of the molten nuclear fuel dropped down to the lower part of the Primary Containment Vessel (PCV) [1]. During this period, the concrete structures inside the PCV were exposed to a high-temperature environment [1,2]. The resolidified molten fuel mixed with the surrounding structures is called fuel debris, and the development of remote inspection technology for degraded concrete structures is one of the key issues for its safe and reliable recovery and decommissioning [2]. Such a remote inspection technology is useful not only for decommissioning but also for a wide range of aged infrastructure, provided the technology is simple, reliable, and cost-effective.
Laser technologies have recently become increasingly practical; one of the attractive areas is laser-based generation and detection of ultrasonic waves. Laser-produced plasma drives ultrasonic waves on the surface of a solid body [3,4]; the waves pass through the body, carrying information about its macroscopic or mechanical properties. In this case, the laser intensity on the surface was more than 10^7 W/cm², called the plasma regime [4]. Laser-based interferometry is then used to detect the ultrasonic waves driven by the laser illumination, which are composed of longitudinal and shear vibration modes, the former being faster than the latter [5]. The technology applies to the noncontact nondestructive testing of high-temperature or molten metals [6][7][8] and various concrete structures [9,10], to which contact diagnostic systems cannot easily be applied.
In this paper, we propose measuring the velocity of ultrasonic waves and their spectra propagated through concrete samples, which are functions of the mass density and the elastic modulus of a sample, in order to develop an inspection technology for judging its soundness. To support this purpose, we present an experimental characterization of laser-driven ultrasonic generation and its application to the nondestructive testing of concrete samples that had been heated and degraded. Various concrete samples were exposed to specified high-temperature conditions for specified durations, as models of the concrete that experienced the severe accident at the Fukushima Daiichi Nuclear Power Station.
The laser-driven ultrasonic generation and the detection system
Figure 1 presents a schematic view of this technology. Ultrasonic waves traverse the sample and drive a surface vibration when they reach the other end of the sample. In this configuration, a laser probe can detect the longitudinal mode of surface vibration with a velocity V. The frequency of the He-Ne laser probe is shifted by the Doppler shift Δf as follows:

Δf = 2V/λ    (1)
where λ represents the laser wavelength. The Doppler-shifted component is detected at the heterodyne detector using a Mach-Zehnder type heterodyne interferometer [11]. In this way, we obtained the time-dependent velocity and frequency of the longitudinal mode of surface vibration, and then the time-dependent displacement by integration over a specific time duration. One useful application of this technology is the characterization of mechanical properties, which depend on the velocity of longitudinal waves measured from the traversal time through an object. The maximum velocity of the longitudinal wave c is represented as:

c = √[E(1 − μ) / (ρ(1 + μ)(1 − 2μ))]    (2)

where E, μ, and ρ are Young's modulus, Poisson's ratio, and the mass density of the sample, respectively. As shown in Equation (2), the velocity of the longitudinal wave is derived from Young's modulus, Poisson's ratio, and the Lamé constants [5]. The distinguishable fastest signal thus carries macroscopic or mechanical information about the target material.
Experimental setup
Figure 2 presents the experimental setup. The ultrasonic wave was driven by a pulsed YAG laser, which can deliver energy up to 0.8 J in a 6 ns pulse at a 1.064 μm wavelength with a repetition rate of 10 Hz (Quantel Q-smart 850). The ultrasonic wave was detected by a few-mW continuous wave (CW) high-precision He-Ne laser beam at a wavelength of 633 nm, coupled with the Mach-Zehnder type interferometer and the heterodyne detection system (Polytech OFV-505 sensor head coupled with OFV-505 KA-LR controller) to measure the Doppler-shifted component. Vibrations on the sample surface shift the laser frequency, and the shifted component was detected as a beat frequency in the heterodyne detection system. The amplitude and period of the beat wave represent the velocity and frequency of the surface vibration [11]. For detection, the laser beam illuminated the opposite side of the concrete sample, as shown schematically in Figure 1, so we could measure the propagation time of the ultrasonic wave through the sample without contacting it. Table 1 lists the composition of the concrete samples prepared for this study. The mechanical strength associated with compressive stress of the concrete sample was 34.8 N/mm², similar to that used at the Fukushima Daiichi Nuclear Power Station [12]. The notation s/a* represents the volume ratio of fine aggregate to coarse aggregate. Part of the coarse aggregate passed through a sieve with a nominal dimension of 15 mm, and the rest of the coarse aggregate was reassembled (mass ratio of 50:50). The notations W, C, S, G*, AD, and AE stand for tap water, normal Portland cement, mountain sand, mountain gravel (maximum size = 25 mm), air-entraining water-reducing agent (standard 1st kind)/master poly zaglo15S, and air-entraining agent (1st kind)/MasterAir202, respectively. Concrete samples were heated and cooled at specified heating and cooling rates, with specified heating durations at the specified temperatures; these parameters are listed in Table 2. Note that the samples were heated slowly enough to keep the temperature in the samples nearly at equilibrium. The volume of the samples was measured using an electronic Vernier caliper with an accuracy of ±0.04 mm (Niigata Seiki Co. Ltd. DT-300), and the mass was measured using an electronic scale with an instrumental error of ±0.04 g (SHIMAZU UW6200H).
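As a numerical illustration of Equation (2), the sketch below evaluates the longitudinal velocity for assumed elastic constants; the values are generic for concrete and are not the measured properties of these samples.

```python
import math

def longitudinal_velocity(E, mu, rho):
    """P-wave velocity of an isotropic elastic solid, per Equation (2)."""
    return math.sqrt(E * (1 - mu) / (rho * (1 + mu) * (1 - 2 * mu)))

# Illustrative values only (not measured for these samples):
E = 40e9      # Young's modulus, Pa
mu = 0.2      # Poisson's ratio
rho = 2300.0  # mass density, kg/m^3
print(f"c = {longitudinal_velocity(E, mu, rho):.0f} m/s")  # ~4400 m/s
```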
Figure 3 presents the measured mass density as a function of the heated temperature.
The physical and chemical processes in the heated concrete samples [13] are summarized as follows. Volumes expand on heating, resulting in a decrease in mass density, and chemical changes occur. In the first temperature region, between 100°C and 300°C, dehydration of calcium silicate hydrate and decomposition of gypsum (CaSO₄·2H₂O), including water loss, occur. In the second temperature region, between 300°C and 900°C, dehydration of calcium hydroxide occurs as:

Ca(OH)₂ → CaO + H₂O

and the decomposition of calcium carbonate occurs as:

CaCO₃ → CaO + CO₂

(Figure 3 caption: The mass density of concrete samples is shown as a function of the heated temperatures. The time history of each temperature exposure is listed in Table 2. The variation between pieces caused the error. Plot points denote the mean value obtained from two or three experiments; errors were estimated based on Student's t-distribution with a confidence level of 95%. Note that the errors of the solid circles without error bars were within the circles.)
During this period, gases (H₂O, CO₂) are released from the concrete sample [13]. These physical and chemical mechanisms cause the variation of the mass density shown in Figure 3. Figure 4 presents a typical ultrasonic wave. The abscissa represents time, and the ordinate represents the velocity of the vibration given by the Doppler-shifted component, as shown in Figure 1 and Equation (1). We could identify the wavefront of the vibration at the sample surface, indicated as A in Figure 4, corresponding to the arrival time of the longitudinal ultrasonic wave (p-wave). The traversal time through the sample and the internal delay are indicated as B in Figure 4. The measured wavefront arrival time and the distance between the source of the ultrasonic waves and the detection point together give the velocity of the ultrasonic waves, as mentioned in the previous section.
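A minimal sketch of how the p-wave arrival (point A in Figure 4) can be picked from the recorded velocity trace and converted into a propagation velocity; the simple threshold picker and all names here are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def p_wave_velocity(t, v, thickness, threshold_ratio=0.1):
    """Estimate the p-wave velocity from the first arrival of the
    surface-vibration signal; assumes t = 0 is the laser shot and
    that a signal above the threshold actually exists in the trace."""
    threshold = threshold_ratio * np.max(np.abs(v))
    first = np.argmax(np.abs(v) > threshold)  # index of first sample above
    return thickness / t[first]               # velocity = distance / time
```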
Temperature dependence of velocity of the ultrasonic waves
To measure the velocity of the ultrasonic wave passing through a concrete sample, we used a retroreflective sheet to enhance the signal-to-noise ratio and obtain an accurate and stable measurement. Signals obtained with and without the retroreflective sheet are shown in Figure 5(a,b), respectively. The sheet may discourage real applications, although it provides an order-of-magnitude higher signal-to-noise ratio. For practical use of this technique, we tried higher-power lasers, for example a few-W CW laser instead of a few-mW laser; the signal level is then expected to increase by a factor of ~10³, and in that case we did not use any retroreflective sheet [14].
After this preparation, we successfully measured the traversal velocity through the concrete samples. The velocity as a function of the heated temperature listed in Table 2 is shown in Figure 6. The velocity decreased at 105°C due to water loss, and it dropped rather significantly at 400°C due to the change of chemical composition described in the previous section, which alters the mechanical properties. Figure 7 shows the ultrasonic waveforms and the corresponding spectra.
When the samples are heated at 105-400°C, the number of peaks in the spectra increases compared with that of unheated samples. We attribute this to ultrasonic waves reflecting inside the samples due to heating effects: the number of cracks in samples exposed to higher temperatures increases, and the ultrasonic waves are scattered more frequently. Although the spectra may provide useful information, they do not give clear quantitative information on the material properties under these experimental conditions; the velocity of the ultrasonic waves, however, does. We also estimated the displacement of the vibration at the detection surface of the sample as follows:

X(t) = ∫ f(t) dt

where t, X(t), and f(t) represent time, the time-dependent displacement of the vibration, and the time-dependent velocity, respectively, all of which we have already obtained. For example, the calculated amplitude of the vibration as a function of time is shown in Figure 8, where panels (a,c,e,g) show typical displacements of the vibration. Notably, low-level direct current (DC) components were found in the velocity signals, as small as 3.1-4.6% of the peak amplitude of the velocity signals. The DC components caused a linear increase in the displacement signals superposed on the vibration components. According to the observed velocity signals, the samples appeared to approach the driving laser, which is physically impossible; the DC component was therefore thought to be caused by the electronic system. We subtracted the DC components from the velocity signals, as shown in Figure 8(b,d,f,h). The magnitude of the vibrating components in the displacement signals was typically 0.02-0.04 μm under the present experimental conditions.
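A minimal sketch of this post-processing, assuming a uniformly sampled velocity trace: the DC offset is removed as a simple mean subtraction before trapezoidal integration (the paper does not state the exact numerical scheme it used).

```python
import numpy as np

def displacement(t, v):
    """Integrate a vibrometer velocity trace to displacement after
    subtracting the spurious DC component, following X(t) = ∫ f(t) dt."""
    v_ac = v - np.mean(v)                      # remove DC offset
    dt = np.diff(t)
    steps = 0.5 * (v_ac[1:] + v_ac[:-1]) * dt  # trapezoidal rule
    return np.concatenate(([0.0], np.cumsum(steps)))
```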
Dependence of the velocity on the laser-focused condition on the samples
We set a sample on a translational stage to change the sample position systematically and measured the velocity as a function of the YAG-laser-irradiated surface position while the laser focus position was fixed. Figure 9 shows the velocity as a function of the YAG laser irradiation position. Position S represents the standard irradiation, with the laser focus placed on the surface of the sample. B represents the position where air breakdown happens; there, the laser beam is focused in the air, and the measured velocity drops. Figures 10 and 11 present snapshots of the air breakdown indicated in Figure 9. Figure 10 shows a full view of the breakdown recorded using a video camera with a frame rate of 30 frames/s, and Figure 11 shows the breakdown with much higher temporal resolution, recorded by a high-speed camera (Photron FASTCAM Mini AX50) at 5000 frames/s. The emission from the breakdown region grew in time and then shortened along the laser propagation axis after reaching its maximum extent. The velocity of the ultrasonic waves was insensitive to the focal position of the YAG laser unless air breakdown happened. Figure 12 shows ultrasonic waveforms and their spectra corresponding to the data indicated in Figure 9; in Figure 12(d), laser-driven air breakdown occurred.
Dependence of the velocity of the ultrasonic waves on YAG laser pulse energy
We investigated the traversal time of the ultrasonic waves through an unheated sample as a function of YAG laser pulse energy, as shown in Figure 13. The velocity was almost insensitive to the laser pulse energy.
The signal amplitude at the wavefront indicated in Figure 4 is shown as a function of YAG laser pulse energy in Figure 14. The signal amplitude tends to be almost proportional to the laser pulse energy. A laser pulse energy of more than a few hundred mJ was considered desirable for the measurement to obtain suitable signal levels under the present experimental conditions. The errors in Figure 14 are caused by shot-to-shot variation of the YAG laser and by variation of the traversal time at different but nominally identical positions; plot points denote the mean value obtained from three to six experiments, and error bars were estimated based on Student's t-distribution with a confidence level of 95%.
Discussions
The velocity of the longitudinal wave is represented by Equation (2), as shown in the previous section. In calculating it, we should take into account the mass density correction shown in Figure 3 and the Lamé constants, which are functions of Young's modulus, Poisson's ratio, the bulk modulus, and the modulus of rigidity. When the temperature increases, the mass density becomes smaller, but by less than 6.3% compared with a sample kept at room temperature. Figure 15 shows the compressive strength measured per JIS A 1108 and the static elastic modulus obtained per JIS A 1149 from the measured compressive strengths of the concrete samples. As the heating temperature increases, the decrease of the elastic modulus is thought to be caused by the creation of numerous cracks and by chemical decomposition. According to Equation (2) and Figure 15, the decrease of the elastic modulus leads to the decrease of the velocity of the ultrasonic waves shown in Figure 6.
From this point of view, the surfaces of the samples were observed with a digital microscope (Hirox Co. Ltd. KH-8700), as shown in Figure 16(a-d). When the samples were heated at 105°C, a relatively large number of small cracks were observed on the surface, as shown in Figure 16(b), compared with the surface of an unheated sample shown in Figure 16(a). The widths of the cracks were <9.8 µm. The small cracks were presumed to form as hydrated regions shrank with dehydration. In addition, X-ray diffraction tests showed clear peaks of Ca(OH)₂ at 105°C, 200°C, and 400°C, as shown in Figure 17, indicating that almost no chemical decomposition of the cement paste occurred at or below 400°C. The peaks of Ca(OH)₂ were significantly reduced at 600°C [15], so chemical decomposition was expected to start above 400°C.
When the concrete samples were heated at 200°C, the widths of the cracks were <16.0 µm, as shown in Figure 16(c). When the concrete samples were heated
at 400°C, decomposition occurred and the aggregate expanded. Simultaneously, the cement hydrate regions shrank, and this inhomogeneity caused self-straining stress, which is thought to create more cracks and reduce the strength [16]. The widths of the cracks were <48.9 µm, as shown in Figure 16(d). The cracks are expected to be distributed almost homogeneously in the samples, because each component of the concrete is distributed homogeneously and the number and size of the cracks are increased by heating. These results show that the vibrational frequency is affected by the increase in the number of cracks in the concrete samples, as shown in Figure 7. When the samples were heated at 400°C, as shown in Figure 7(d), spectral components of the vibrational frequency above 100 kHz decreased significantly compared with those of unheated samples shown in Figure 7(a), whereas lower-frequency spectral components did not decrease significantly in the samples exposed to higher-temperature atmospheres. Meanwhile, the velocity of the ultrasonic waves became slower in the samples exposed to higher-temperature atmospheres, a tendency similar to the result obtained with the contact ultrasonic technique [15]. This result shows the applicability of the method to nondestructive testing of degraded concrete structures.
Conclusions
We proposed measuring the velocity of ultrasonic waves, which is a function of the mass density and the elastic modulus of a concrete sample, through the samples in order to develop an inspection technology for assessing their soundness. To support this proposal, we performed an experimental characterization of laser-driven ultrasonic generation and its transport through concrete samples that had been heated and degraded. For this purpose, various concrete samples were exposed to specified high-temperature conditions with defined temporal profiles, as models of the concrete that experienced the severe accident at the Fukushima Daiichi Nuclear Power Station. We successfully measured the velocities of ultrasonic waves passing through the concrete samples. The velocities and spectra of the ultrasonic waves are strongly dependent on the exposure temperature: at a high-temperature condition of 400°C the velocity was 3700 m/s, whereas at room temperature it was 5000 m/s. The measurement of ultrasonic wave velocities with the present technology provides almost the same results as the contact ultrasound technology [15]. The present technology thus shows the applicability of nondestructive testing to degraded concrete structures. It is noted that in the actual Fukushima Daiichi Nuclear Power Station, significant parts of the concrete structures have been exposed to water, where the strength of the concrete is expected to have recovered from the condition caused by the severe accident [17,18]. Further studies on a variety of experimental conditions are needed to apply this technology to the decommissioning of the Fukushima Daiichi Nuclear Power Station. The technology is also expected to contribute to the maintenance of a wide range of concrete structures.
Supplementary file
We also tested samples heated at 600°C, 700°C, and 800°C with the same experimental configuration. As described previously, when the temperature increased above 600°C, the mechanical properties changed significantly. We expected it to be very difficult to detect transmitted ultrasonic waves, because the waves are significantly attenuated by severe thermal deterioration of the concrete and the creation of cracks, as shown in Figure A1. Nevertheless, we detected vibration signals, as shown in Figure B1. The spectra are concentrated in the low-frequency region below 50-60 kHz compared with those shown in Figure 7. These signals could not be interpreted as transmitted ultrasonic waves; however, the measured signals were stable and reproducible. They came from surface vibration, probably caused by vibration of the sample body driven by the laser irradiation. Figure C1 shows the compressive strength obtained per JIS A 1108 and the static elastic modulus obtained per JIS A 1149, presented as information for readers.
"Materials Science"
] |
The Influence of Single and Double Steel Plate Hardness on Fracture Behavior after Ballistic Impact
This study aims to determine the ballistic characteristics of two steel plates with different hardness levels, tested individually and combined as a layered, non-permanent construction. Ballistic testing used 5.56 × 45 mm full-metal-jacket projectiles fired at sample plates, each 6 mm thick, at a distance of 15 m with a normal angle of attack. The ballistic tests show that both single plates can be pierced by the projectile, whereas for the layered plate the projectile penetrates only the front plate. The characteristics of the holes formed reflect the differences in plate hardness. On the rear of the plate, a bulge appears because of the impact on the front side. On the soft plate, tall petals appear around the hole on the front side, with a deformed microstructure on the crater walls, while the hard plate forms small petals on the back side and only slightly deformed crater walls. The soft plate is perforated by deformation through petaling and fragmentation mechanisms, while the hard plate is perforated through a plugging mechanism with adiabatic shear banding and cracking.
Introduction
Ballistic-resistant materials (armor) are materials that can withstand projectiles fired from guns. They are developed and used in military and civilian applications, both as main construction material and as additional surface protection [1], and metallic armor is the most mature class of armor materials, still widely used for ballistic protection today [2]. Armor materials have long been developed, and contour maps of the ballistic performance of thick metallic armor have been produced, as reported in [3]. One construction requiring ballistic-resistant material is the Armored Fighting Vehicle (AFV), whose main material is steel plate [4]. The success of AFVs is predicated on the alignment of their capabilities with mission requirements [5]; missions can include pursuit, attack, and defense. The thicker the plate used, the higher the ballistic resistance, but the heavier the vehicle becomes, which consequently reduces its efficiency and agility, as reported in [6].
Ballistic-resistant steel has been created and developed through quench-and-temper treatment, investigated in [7][8][9], and through bainitic quench-and-temper in [10,11], to improve strength and hardness. As the hardness of the steel increases, ballistic resistance increases up to a certain value, then decreases, and finally failure occurs by shearing and cracking until perforation under ballistic impact, as reported in [12]. The failure mechanisms leading to perforation of a plate include brittle fracture, radial fracture, ductile hole growth, plugging, fragmentation, and petaling, as reported in [13], and ductile hole formation, soft plugging, hard plugging, and target shatter, as reported in [14]. The ballistic resistance of steel is a complex function of mechanical properties such as yield strength, tensile strength, hardness, ductility, and Charpy toughness. No single mechanical property can be used to predict ballistic resistance; an optimum combination of strength, hardness, and toughness is essential for good ballistic performance [15]. Besides hardness, the thickness of the steel plate also affects the ballistic impact, as stated in [8,9].
In the case of projectile impact against a hard steel plate, adiabatic shear bands (ASB) will form and make the plate crack, break, and eventually become perforated, as reported in [16]. The higher the material hardness, the easier the formation of ASB. Furthermore, the ASB triggers plugging as a result of shear stress [17]. As summarized in [18], ASB are formed during dynamic deformation at a high strain rate: the heat generated by localized shear plastic deformation can hardly be dissipated due to lack of time, so the temperature rises suddenly in the local area. Heat concentrated in a narrow area and the acceleration of plastic deformation are the main factors in the establishment of the band and the resulting damage, as reported in [19]. ASB formation in high-strength steel depends on the hardness, the thickness and the percentage of the hole made in the plate, as reported in [9].
The performance of monolithic and double-layered shields against projectile impact was reported in [20] for four types of projectiles of different weight and nose shape. Finite element simulations show that the double-layer configuration improves the ballistic resistance by 8.0 %-25.0 % for the flat-nose projectile compared with a monolithic plate of the same weight, whereas the conical-nose projectile does not show significant differences. It is also reported in [21] that the best double-layered configuration uses an upper layer of high-ductility, low-strength material and a lower layer of low-ductility, high-strength material.
Ballistic experiments with blunt and ogival-nosed projectiles on double-layered steel plates of different materials have also been conducted in [22]. The results show that double-layered plates with an upper layer of high-strength, low-ductility material and a lower layer of low-strength, high-ductility material have higher ballistic limit velocities than the opposite layering order. The ballistic limit velocities for ogival-nosed projectiles are significantly smaller than those for blunt-nosed projectiles.
Studies comparing monolithic with layered plates, using several plates of the same total thickness manufactured from the same material or from different materials, have been widely reported. In layered constructions in which the plates are fixed to each other (permanent), the impact energy of the projectile proceeds directly into the next plate layer. By contrast, configurations in which the front plate is left free (non-fixed/non-permanent) have only rarely been reported.
This paper describes and analyzes the ballistic characteristics of two steel plates with different hardness levels, tested singly and combined in a layered configuration in which the front plate is free (non-fixed/non-permanent), using macro and micro observation.
Materials and methods
Sample preparation
Steel plates of 6 mm thickness with different hardness were obtained: untreated steel for the soft plate and heat-treated steel for the hard plate. Heat treatment was carried out by austenitizing at 950 °C for 21 min, followed by water-spray quenching, and then tempering at 250 °C for 21 min. Table 1 shows the chemical compositions of each steel plate and Table 2 shows the mechanical properties. Brinell hardness was measured according to ASTM E10, tensile properties according to ASTM E8, and notched-bar impact properties according to ASTM E23.
Ballistic testing
Soft and hard plates were made into panels (150 × 150 mm) for ballistic testing in the single and double configurations shown in Table 3. In the double configuration, the soft plate was arranged at the back (back plate) for ease of manufacturing and application. The front plate was not fixed/permanently attached to the back plate, being held by a loose tab system. Each panel was shot at a distance of 15 m using 5.56 × 45 mm M-193 deformed full-metal-jacket projectiles at a normal angle of attack (90° to the plate), in accordance with NIJ Standard 0108.01. The average projectile velocity was 989 m/s, measured with a Prochrono® chronograph. The witness plates were made of aluminum sheet with a thickness of 0.2 mm. Fig. 1 shows the panel arrangement and shooting positions as well as the projectiles.
Preparation for analysis
Macro observations were made with a macro camera; the micro samples were polished mechanically, etched for 7 s with a 2 % nital solution, and then observed and analyzed with an optical microscope.
Results and discussion
Macro observation
Projectiles penetrated both the single soft plate and the single hard plate (S and H configurations) in the ballistic tests, and both plates formed craters. The characteristics of the craters formed on the two plates differed because they were affected by the hardness level of each plate. Around the crater hole in the soft plate, petals formed on both sides, i.e. the front face and the back face relative to the projectile's direction. The hard plate showed no petals but formed broken lips around the crater hole on the front face and petals with cracks on the back face. Fig. 2 shows the macro observations of the single soft plate (S) and hard plate (H). The crater formed with high petals on the front face due to plastic deformation of the soft plate, as shown by the macro deformation around the petals (Fig. 2 a), front face). The high-velocity ogival/tapered-nosed projectile (Fig. 1 b)) could easily pierce the soft plate. Due to this puncture, the plate deformed around the hole and formed high petals on the front face, while small petals appeared on the rear face (Fig. 2 a), rear face) with a shape different from that on the front face. The petals on the rear face arose from fracture and deformation caused by the projectile impact.
Petals did not form on the front face of the hard plate because the plate is hard, brittle and not easily deformed, so the pointed projectile was not able to pierce it (Fig. 2 b), front face). Instead, because the plate was very brittle, the projectile impact left fracture remnants around the hole, which are called broken lips. When the piercing projectile struck the plate surface its tip became blunt, and due to the high thrust force the plate broke and was eventually penetrated. The rear face of the plate formed petals with cracks, indicating a brittle, low-ductility material (Fig. 2 b), rear face).
The cross-section of the hole formed in the soft plate is enlarged on the back face (Fig. 2 a), cross section); besides deformation, fracture also occurred while the projectile was inside the plate, and the broken material was pushed towards the back face. In the hard plate no flaking occurred, so the hole formed from the front face to the back face is relatively uniform (Fig. 2 b), cross section). Fig. 3 shows the plate fragments produced by the projectile impact and the holes formed in the witness plate in the single-plate tests. This confirms that, in addition to deformation, the soft plate suffered a large fracture together with small, scattered flake fractures (Fig. 3 a) and Fig. 3 c)). The crater was formed both by the projectile puncture and deformation and by fracture of the plate on the back face (Fig. 3 a)). The plate fragments and the projectile scattered, causing a large hole with small holes around it in the witness plate (Fig. 3 c)).
The hard plate showed a single fragment pushed towards the back (Fig. 3 b)); the fracture surface also appears relatively flat, which shows that the projectile was not able to puncture the plate but instead deformed and became blunt. The single fragment penetrated the witness plate and formed a single hole (Fig. 3 d)). Fig. 4 shows macro observations of the petals, broken lips and crater walls. The soft plate produced layered petals, resembling hot-roll forming, with residual jacket material attached to the petals (Fig. 4 a)). Besides deforming, the plate also broke from the projectile impact; the fracture was ductile, as seen along the crater wall (Fig. 4 b)). Petals were not formed on the hard plate because fracture occurred around the hole in the brittle plate, forming broken lips instead (Fig. 4 c)). A single cylindrical fragment was pushed to the back face of the plate, forming a sliding groove on the crater wall (Fig. 4 d)).
For the double plates (SS and HS configurations), the ballistic test results are shown in Fig. 5. The craters formed on the front and back sides of the front plate, for both the soft and the hard plate, are similar to those of the single plates.
In the SS configuration (Fig. 5 a)), a soft front plate with a back plate, the projectile completely perforated the front plate. The front plate formed petals on the front face and on the inner side. The residual velocity of the projectile and the plate spall was still able to push the back plate so that it deformed; this deformation caused a bulge on the rear face of the back plate. The dominant failure modes of the front plate in the SS configuration were petaling and fragmentation, while that of the back plate was bulging.
In the sample with the HS configuration (Fig. 5 b)), a hard front plate with a back plate, spall is seen on the crater lip of the front face, together with a cylindrical plug pushing against the back plate. The high-hardness, high-strength plate was able to resist the impact and break the projectile tip more effectively, although this hard plate had lower impact energy than the soft plate. Petals were not visible on the front face and only slightly visible on the inner side. The plug formed by the projectile impact was able to push the back plate slightly, and a smooth bulge appeared on the rear side of the back plate. The failure mode of the front plate in the HS configuration was plugging, with spall at the crater lips on the front face. There were differences in ballistic resistance between the SS and HS configurations. In the SS configuration, the projectile with its ogival/tapered nose (Fig. 1 b)) was able to pierce the plate, and the plate deformed so that petals formed on the front and back faces. The same behavior, petals and a smooth crater in a low-strength, high-strain steel plate produced by high-temperature tempering, was described in [9]. Petals formed on the front and rear of a mild steel plate impacted by a 7.62 mm AP projectile have also been shown through numerical simulations and experiments [23]. The low strength of the plate allowed it to be easily penetrated by the projectile despite its high toughness and impact energy; the plate was unable to withstand a high-velocity projectile with an ogival/tapered nose. The high strain capacity of the plate allowed it to deform easily, forming petals on both the front and back faces of the first-layer plate.
Furthermore, in the HS configuration, no petals were found on the front face. A hard material with high strength and low strain tends to be brittle, and the projectile could not pierce the plate. The high-velocity projectile caused fractures around the crater lip, and the broken plate then formed a cylindrical plug. The plugging mechanism occurred because the tip of the projectile was blunted by the impact; plugging failure involving shearing has been reported for hemispherical-nosed projectiles [24], and a similar plug shape was formed in 10 mm thick high-strength armor steel impacted by 7.62 mm deformable projectiles [15]. Real perforation did not occur in this configuration because the cylindrical plug was restrained by the back plate. The HS configuration (high strength, low strain in front) therefore offers significantly superior ballistic resistance compared with the SS configuration, because it is capable of breaking the tip of the projectile. Similar findings were reported in [25], where the high-strength, low-ductility material was Armox 560T as the upper layer and the low-strength, high-ductility material was Weldox 700E, tested with blunt-nosed projectiles, and in [22], where the target plates were made of 45 steel and Q235 steel and tested with blunt and ogival-nosed projectiles.
The diameter of the crater hole formed in the plates is larger than the diameter of the projectile used, because the plate undergoes plastic deformation during the high-velocity projectile impact. Table 4 shows the complete dimensions of the craters. The hole formed in the soft plate (S) has a diameter of 6.26 mm on the front side, larger than the 5.56 mm projectile diameter, which confirms the plastic deformation that occurs as the projectile passes through the soft plate. The diameter on the back side is much larger (9.46 mm) than on the front side, as shown in the cross-section of the hole (Fig. 3 a)). The greater diameter on the back side is due to fracture of the plate as the projectile passed through it. This is evidenced in the witness plate (Fig. 4 c)), which shows a main large hole with small holes around it: the large hole was produced by the projectile together with a large plate fragment, while the small holes around the main hole show that projectile or plate fragments separated and penetrated the witness plate. The failure of the soft plate that produced the crater was therefore a combination of petaling and fragmentation, as reported in [13], evidenced by the formation of petals and small fractures, or a soft plugging process as reported in [14], in which a hole with petals forms on the front side and the main fragment is then driven out by the projectile.
The crater hole diameter formed in the hard plate (H) was 6.88 mm (Table 4), much larger than the 5.56 mm projectile diameter. This is because the projectile deformed while hitting the plate, so its effective diameter was larger before perforating the plate. The diameters formed on the front side and the back side are the same (Fig. 3 b)), and no small fragments were pushed to the back, as evidenced by the single hole formed in the witness plate (Fig. 4 d)). The large impact force could not be withstood by the hard plate, so the plate failed and was perforated by a plugging mechanism as reported in [13], or hard plugging as in [14]. Plugging is the formation of a hole through a single fracture and sliding, while hard plugging is hole formation through a shifted fracture preceded by the formation of small fractures on the side face.
The character of the front plate hole in the SS configuration is similar to that of the S configuration, i.e. high petal formation with a large hole diameter, again due to deformation of the plate by the prick of the projectile's tip. Likewise, the character of the hole in the front plate of the HS configuration is the same as that of the hole in the single hard plate.
Micro observation
Fig. 6 shows the microstructure of the soft plate, with visible ferrite and pearlite. Grains of ferrite and pearlite in an area fairly remote from the crater wall appear relatively round (Fig. 6 b)), while in the area close to the crater wall they appear oval (Fig. 6 c)), elongated in line with the projectile direction. This shape arises from the deformation that occurs as the projectile penetrates the plate, and confirms that perforation of the soft plate involves plastic deformation in addition to the fracture caused by the projectile impact. Fig. 7 shows the martensitic structure of the hard plate produced by the quench and temper process. The martensitic structure in areas far from the crater wall (Fig. 7 a)) and in areas near the crater wall (Fig. 7 b)) shows no significant difference, which indicates that little or no deformation of the structure occurred in the hard plate. The hole is therefore not dominated by deformation but by fracture of the plate driven by the projectile impact. The microstructure of the hard plate is shown in Fig. 8, cracks in the hard plate in Fig. 9 and ASB formation in Fig. 10.
In other areas of the crater wall of the hard plate, ASB (Fig. 8), cracks (Fig. 9) and ASB-induced cracking (Fig. 10) are seen, as reported in [9] (Fig. 8: a) crater hole after ballistic impact, b) ASB at the hole edge curved towards the projectile direction, c) ASB appearing in the fracture). The ASB appeared due to the high strain occurring in the hard material, so the ASB did not appear clearly in the soft plate, which deforms easily.
ASB formations with a curved shape, the arch following the projectile direction, are seen in almost all the crater holes (Fig. 8 b)). ASB formation in the hard plate causes cracks to initiate along the band and produces fracture. This shows that, before fracture, the hard plate impacted in weak areas first deforms into a white band, which then cracks and breaks [17]. ASB-induced cracking is shown in Fig. 8 c) and Fig. 10.
Cracks inside the plate were also seen in the hardened steel, elongated in the direction of the projectile's travel (Fig. 9). No ASB was found around these cracks, so the cracks were not preceded by band formation as they were in areas of direct contact with the projectile. Cracks did not appear in the ductile soft plate, because the soft plate deforms easily and thereby absorbs the impact energy when struck by the projectile.
Conclusion
From the ballistic tests using 5.56 × 45 mm M-193 deformed full-metal-jacket projectiles at a distance of 15 m and a normal angle of attack on the soft plate, the hard plate and the double plate with a non-fixed/non-permanent arrangement of the front plate on the back plate, the following conclusions can be drawn.
The projectiles can penetrate both single plates, while in the double plate only the front plate is penetrated and a bulge forms on the rear plate.
On the soft plate, in both the single and double configurations, petals appear on the front side due to piercing by the projectile and plastic deformation. The perforation mechanism is a combination of fragmentation and petaling.
The bulge formed on the back plate is caused by the projectile impact energy remaining after the front plate. The bulge is larger for the configuration with the soft plate on the front side than for the configuration with the hard plate on the front side, showing that the hard plate resists the projectile better than the soft plate.
On the hard plate, the crater hole diameter is greater than the projectile diameter because the projectile deforms before penetrating the plate, and the hard plate does not undergo plastic deformation. The perforation mechanism is plugging.
On the hard plate, adiabatic shear bands and cracking appear due to the projectile impact.
Acknowledgement
This work was financed and supported by the Ministry of Research, Technology and Higher Education of the Republic of Indonesia.
"Materials Science"
] |
Crystal structure of caesium dihydrogen citrate from laboratory X-ray powder diffraction data and DFT comparison
The crystal structure of caesium dihydrogen citrate has been solved and refined using laboratory X-ray powder diffraction data, and optimized using density functional techniques.
The crystal structure of caesium dihydrogen citrate, Cs+·H2C6H5O7−, has been solved and refined using laboratory X-ray powder diffraction data, and optimized using density functional techniques. The coordination polyhedra of the nine-coordinate Cs+ cations share edges to form chains along the a-axis. These chains are linked by corners along the c-axis. The un-ionized carboxylic acid groups form two different types of hydrogen bonds; one forms a helical chain along the c-axis, and the other is discrete. The hydroxy group participates in both intra- and intermolecular hydrogen bonds.
Chemical context
In the course of a systematic study of the crystal structures of Group 1 (alkali metal) citrate salts to understand the anion's conformational flexibility, ionization, coordination tendencies, and hydrogen bonding, we have determined several new crystal structures. Most of the new structures were solved using powder diffraction data (laboratory and/or synchrotron), but single crystals were used where available. The general trends and conclusions about the 16 new compounds and 12 previously characterized structures are being reported separately (Rammohan & Kaduk, 2017a). Ten of the new structures - NaKHC6H5O7, NaK2C6H5O7, Na3C6H5O7, NaH2C6H5O7, Na2HC6H5O7, K3C6H5O7, Rb2HC6H5O7, Rb3C6H5O7(H2O), Rb3C6H5O7, and Na5H(C6H5O7)2 - have been published recently (Rammohan & Kaduk, 2016a,b,c,d,e, 2017b; Rammohan et al., 2016), and two additional structures - KH2C6H5O7 and KH2C6H5O7(H2O)2 - have been communicated to the CSD (Kaduk & Stern, 2016a,b).
Structural commentary
The asymmetric unit of the title compound is shown in Fig. 1. The root-mean-square deviation of the non-hydrogen atoms between the Rietveld-refined and DFT-optimized structures is 0.387 Å (Fig. 2). This agreement is at the upper end of the range for correct structures as discussed by van de Streek & Neumann (2014). Re-starting the Rietveld refinement from the DFT-optimized structure led to higher residuals (Rwp = 0.1287 and χ² = 26.43). Accurate determination of the positions of C and O atoms in the presence of the heavy Cs atoms using X-ray powder data might be expected to be difficult. This discussion uses the DFT-optimized structure. Most of the bond lengths, bond angles, and torsion angles fall within the normal ranges indicated by a Mercury Mogul geometry check (Macrae et al., 2008), but the torsion angles involving the central carboxylate and hydroxyl group are flagged as unusual; the central portion of the molecule is less planar than usual. In the refined structure, the O8-C1 and O10-C6 bonds, as well as the C3-C2-C1 angle, were flagged as unusual. The citrate anion occurs in the trans,trans conformation, which is one of the two low-energy conformations of an isolated citrate. The central carboxylate O10 and the terminal carboxylate O12 atoms chelate to the Cs+ cation. The Mulliken overlap populations and atomic charges indicate that the metal-oxygen bonding is ionic. The Bravais-Friedel-Donnay-Harker (Bravais, 1866; Friedel, 1907; Donnay & Harker, 1937) morphology suggests that we might expect a platy morphology for caesium dihydrogen citrate, with {020} as the principal faces. A 4th-order spherical harmonic texture model was included in the refinement. The texture index was 1.183, indicating that preferred orientation was significant for this rotated flat-plate specimen.
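The root-mean-square deviation quoted above can be illustrated with a minimal sketch, assuming the refined and DFT-optimized models have been expressed in the same Cartesian frame with the non-hydrogen atoms listed in matching order; the coordinate arrays below are placeholders, not values from this study.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two matched coordinate sets
    (shape: n_atoms x 3), in whatever length unit the inputs use."""
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    diff = a - b                                  # per-atom displacement vectors
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Hypothetical two-atom example in Angstrom
refined   = np.array([[0.00, 0.00, 0.00], [1.52, 0.10, -0.05]])
optimized = np.array([[0.05, -0.02, 0.03], [1.49, 0.18, 0.02]])
print(f"RMSD = {rmsd(refined, optimized):.3f} A")
```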
Supramolecular features
The nine-coordinate Cs+ cations (bond-valence sum 0.96) share edges to form chains along the a axis (Fig. 3). These chains are linked by corners along the c axis. The O7-H20⋯O8 hydrogen bonds (Table 1) form a helical chain along the c axis, and the O11-H21⋯O10 hydrogen bonds are discrete. The Mulliken overlap populations in these hydrogen bonds are 0.064 and 0.095 e, respectively. By the correlation in Rammohan & Kaduk (2017a), these hydrogen bonds contribute 13.8 and 16.8 kcal mol−1 to the crystal energy. The hydroxy group O13-H16 acts as a donor in two hydrogen bonds. The one to O10 is intramolecular, with a graph-set symbol S(5). The one to O9 is intermolecular, with a graph-set symbol S(7). These hydrogen bonds are weaker, contributing 11.2 and 9.1 kcal mol−1 to the crystal energy.
Figure 1
The asymmetric unit, with the atom numbering. The atoms are represented by 50% probability spheroids.
Figure 3
Crystal structure of CsH2C6H5O7, viewed down the c-axis.
Database survey
Details of the comprehensive literature search for citrate structures are presented in Rammohan & Kaduk (2017a). A reduced-cell search on the cell of caesium dihydrogen citrate in the Cambridge Structural Database (Groom et al., 2016), increasing the default tolerance from 1.5 to 2.0%, yielded 60 hits, but combining the cell search with a restriction to the elements C, H, Cs, and O yielded no hits.
Synthesis and crystallization
H3C6H5O7(H2O) (2.0766 g, 10.0 mmol) was dissolved in 10 ml deionized water. Cs2CO3 (1.6508 g, 5.0 mmol, Sigma-Aldrich) was added to the citric acid solution slowly with stirring. A white precipitate formed in about two minutes, and the colourless solution was evaporated to dryness at ambient conditions.
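As a quick arithmetic check of the 2:1 acid-to-carbonate stoichiometry quoted above, the weighed masses can be converted to moles; the short script below is only a verification aid and assumes standard molar masses for citric acid monohydrate and Cs2CO3.

```python
# Molar masses (g/mol) from standard atomic weights
M_CITRIC_MONOHYDRATE = 210.14   # H3C6H5O7·H2O
M_CS2CO3 = 325.82               # Cs2CO3

moles_acid = 2.0766 / M_CITRIC_MONOHYDRATE   # ~0.00988 mol, i.e. ~10 mmol
moles_carbonate = 1.6508 / M_CS2CO3          # ~0.00507 mol, i.e. ~5 mmol

print(f"citric acid monohydrate: {moles_acid * 1000:.2f} mmol")
print(f"Cs2CO3:                  {moles_carbonate * 1000:.2f} mmol")
print(f"acid : carbonate ratio = {moles_acid / moles_carbonate:.2f}")
```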
Refinement details
Crystal data, data collection and structure refinement details are summarized in Table 2. The powder pattern (Fig. 4) was indexed with a = … (2), b = 20.5351 (2), c = 5.1682 (5) Å, V = 927.17 (9) Å³, and Z = 4. The peak list from a Le Bail fit in GSAS was imported into Endeavour 1.7b (Putz et al., 1999) and used for structure solution. The successful solution used a citrate, a Cs atom, and two oxygen atoms from water molecules. Initial Rietveld refinements moved the oxygens close to the Cs site, so they were deleted from the refinement.
Pseudo-Voigt profile coefficients were as parameterized in Thompson et al. (1987) with profile coefficients for Simpson's rule integration of the pseudo-Voigt function according to Howard (1982). The asymmetry correction of Finger et al. (1994) was applied, and microstrain broadening by Stephens (1999). The structure was refined by the Rietveld method using GSAS/EXPGUI (Larson & Von Dreele, 2004;Toby, 2001).
All C-C and C-O bond lengths were restrained. The C-C bonds were restrained at 1.54 (1) Å, and the C3-O13 bond at 1.42 (2) Å. The C-O bonds in the carboxylate groups were restrained at 1.26 (2) Å. All angles were also restrained; the restraints were 109 (3)° for the angles around tetrahedral carbon atoms, and 120 (3)° for the angles in the planar carboxylate groups. The restraints contributed 3.0% to the final χ². The hydrogen atoms were included at fixed positions, which were recalculated during the course of the refinement using Materials Studio (Dassault Systèmes, 2014).
DFT calculations
A density functional geometry optimization (fixed experimental unit cell) was carried out using CRYSTAL09 (Dovesi et al., 2005). The basis sets for the C, H, and O atoms were those of Gatti et al. (1994), and the basis set for Cs was that of Prencipe (1990).
Figure 4
Rietveld plot for the refinement of CsH2C6H5O7. The vertical scale is not the raw counts but the counts multiplied by the least-squares weights. This plot emphasizes the fit of the weaker peaks. The red crosses represent the observed data points, and the green line is the calculated pattern. The magenta curve is the difference pattern, plotted at the same scale as the other patterns. The row of black tick marks indicates the reflection positions.
"Chemistry",
"Materials Science",
"Physics"
] |
Leveraging Renewable Energies in Distributed Private Clouds
The vast and unstoppable rise of virtualization technologies and the related hardware abstraction in the last years established the foundation for new cloud-based infrastructures and new scalable and elastic services. This new paradigm has already found its way in modern data centers and their infrastructures. A positive side effect of these technologies is the transparency of the execution of workloads in a location-independent and hardware-independent manner. For instance, due to higher utilization of underlying hardware thanks to the consolidation of virtual resources or by moving virtual resources to sites with lower energy prices or more available renewable energy resources, data centers can counteract their economic and ecological downsides resulting from their steadily increasing energy demand. This paper introduces a vector-based algorithm for the placement of virtual machines in distributed private cloud environments. After outlining the basic operation of our approach, we provide a formal definition as well as an outlook for further research.
Introduction
Cloud infrastructures and the underlying virtualization technologies are building the foundation of modern data centers. These paradigms also offer potential for reducing the energy consumption of data centers, which represents most of their ongoing operational costs. In this paper, we seize an opportunity to increase the energy efficiency of data center operation by introducing a vector-based algorithm to support virtual machine placement decisions. After a brief introduction of related work in Section 2, we outline the basic operation followed by the formal definition of our algorithm. Further, we evaluate our approach and discuss the impact of migration costs in Section 4. Finally, we give an outlook on future work in Section 5.
Related work
The placement of virtual resources in modern cloud-based environments is the subject of current research. A project called CAESARA [1] introduced an algorithm for the energy-efficient placement of virtual machines by estimating a server's energy consumption based on the characteristics of the running virtual machines. The cost of virtual machine migration operations and their energy consumption, itemized by the different types of data center equipment, is described in [2]. In [3], a utility is described that allows virtual machines to be distributed considering the migration cost, and a basic analysis of migration cost and the impact of live migration on the running application is outlined. A distributed algorithm for placing virtual machines in large cloud environments is outlined in [4]: each server knows the CPU load of the other physical servers, tries to comply with an upper and a lower threshold for the CPU load, and initiates the migration of virtual machines when these thresholds are violated. Also, the underlying mathematical challenges such as the set-partitioning [5], [6] and bin-packing [7]-[9] problems are still the subject of current scientific studies and research. There also exist vector-based approaches for VM placement, but these mostly focus on intra-data-center placement of VMs on physical machines (PM). In [10], a vector-based methodology to model VM resources and to place VMs on PMs is introduced. Another vector-based constraint programming approach is described in [11]. A routing-centric placement algorithm is introduced in [12], describing a combined optimization approach for data center traffic and VM placement. Furthermore, the communication demand is the focus of [13] for VM placement. In contrast to the listed publications, our approach also incorporates the use of renewable energies and their fluctuating characteristics concerning availability and pricing.
Inter-DC energy-aware placement of virtual machines
This section outlines the basic idea and functionality of our algorithm. In this context, the iterative approach of the algorithm causes its complexity to be relatively low compared to bin-packing or set-partitioning algorithms. Thus, each single iteration will lead to a better overall topology. The algorithm is run continuously, though in reality a delay or pause between individual runs might be reasonable, especially if a predefined threshold of migrations across multiple runs has not been reached. This limits the resources and management traffic generated by the execution of the algorithm. The algorithm consecutively considers the optimal placement for each virtual machine with respect to its network flows, corresponding relationships among them and connections to external clients.
Energy-efficient placement considering renewable energies
The algorithm used in this paper is a vector-based approach to optimize scheduling and placement decisions in private clouds. In this context, the dimensions of the used vector space specify the characteristics of virtual resources as entities in distributed data centers. To illustrate the basic operation of the algorithm, we use an example with three dimensions here. The dimensions x and y depict the geographical location of the entity, and the dimension z the availability of renewable energy sources for this data center site or, more generally speaking, location. The example shown in Figure 1 shows the data center w_dc representing a site with available wind energy and p_dc depicting a site with available photovoltaic energy. The arrows indicate the modification of the positional vectors of the data centers w_dc and p_dc over the time span from summer to autumn. Figure 2 illustrates the computation of a destination vector. Here, d_1, d_2, d_3 and d_4 represent data centers, and c_1, c_2 and c_3 clients with a uniformly distributed volume of communication. The destination vector in this example is computed for a virtual machine currently executed in data center d_4. In this case, the destination data center d_2 is chosen for the migration, since this data center location has the shortest distance to the destination vector z⃗. The algorithm can be adapted to consider the network location instead of a geographical location, e.g., by including weights regarding the latency or other quality-of-service metrics between the data center and the clients.
Definitions
Let V be a finite-dimensional vector space over ℝ with dimension v, and let P be the set of properties, with β_P : P → V the function that maps properties of P to the vector space V. Furthermore, let m be a virtual machine defined by the tuple m = (i_m, P_m), with i_m a unique identifier of the virtual machine and P_m ⊂ P the set of its associated properties. Moreover, let D = {(i_d, c_d ∈ ℕ⁺, M_d)} be the set of available data centers, with i_d the unique identifier of the data center, M_d the set of virtual machines currently executed at this data center and c_d the capacity of this data center, so that |M_d| ≤ c_d holds. The complete set M of all virtual machines is the union of the sets M_d over all data centers. Of course, each virtual machine m ∈ M is restricted to be executed at only one data center at any moment in time, i.e., the sets M_d are pairwise disjoint. The positional vector v⃗_x denotes the positional vector of the data center x_d. A further function β_M : M → V maps a virtual machine to the positional vector of the data center in which it is executed. Also, we define N as the set of network flows, N = {(s ∈ P, d ∈ P, n ∈ ℕ⁺)}, where s is a property of the communication source, d a property of a communication destination and n the metric over a defined observation time span. Furthermore, the Euclidean distance on V is used as the distance function δ: V × V → ℝ, i.e., δ(a, b) = ‖a − b‖₂. (1)
Moreover, we define the following helper functions: the function σ: M → D maps each virtual machine to the data center it is currently executed in; the function θ: M → N maps each virtual machine to the set of network flows with matching properties; the function φ: M → ℕ⁺ gives, for a virtual machine, the sum of the metrics of its associated network flows; and the function β_P : P → V maps a property to a vector. Respectively, for a property of a virtual machine, this yields the positional vector of the associated data center.
Continuous placement algorithm sequence
For the correct operation of the algorithm, at least one data center of the set D must have spare capacity available. The sequence of the algorithm is as follows:
1) We define T = M.
2) While T ≠ ∅:
a) We choose an arbitrary t ∈ T with t = (i_t, P_t) and define T = T \ {t}. Furthermore, we set t_dc = σ(t) with t_dc = (i_dc, c_dc, M_dc) and define the set D_t of candidate data centers for t. For the case D_t = {t_dc}, we can examine the next virtual machine and start over with step 2 of the algorithm.
b) Now we can compute the destination vector z⃗ for t.
c) We choose the destination data center z_dc = (i_dc, c_dc, M_dc) ∈ D_t with the shortest distance to the destination vector z⃗. For the case z_dc = t_dc, the virtual machine is already executed in the optimal data center, so we can examine the next virtual machine and start over with step 2 of the algorithm.
d) For the case |M_dc| = c_dc for the given destination data center z_dc = (i_dc, c_dc, M_dc) ∈ D, we choose the virtual machine with the lowest communication amount and move this machine to the nearest data center with available capacity: i) we determine a virtual machine x ∈ M_dc with x = (i_x, P_x) that has the lowest communication amount; iii) we define M_dc = M_dc \ {x} and M_dc_s = M_dc_s ∪ {x}, where dc_s denotes the chosen data center with spare capacity.
e) Now |M_dc| < c_dc, so we move the virtual machine t to the destination data center z_dc. Finally, we alter the sets M_t_dc = M_t_dc \ {t} and M_z_dc = M_z_dc ∪ {t}.
3) The algorithm has now examined each virtual machine of M. After a new representative set of network flows is collected, we can start over with step 1 and a new iteration.
As outlined above, the algorithm starts over after all virtual machines have been processed. Due to the online/iterative nature of this algorithm, new network flows, historical data or even changes in the availability of renewable energy sources will be taken into account. This means that this approach optimizes the overall topology continuously over time. It is obvious that the temporal resolution of the available data and its rate of change has to be considered when running the algorithm in a real private cloud environment, to limit the amount of resources necessary to run the algorithm. Also, oscillations of virtual machines between data centers (e.g., due to similar clients or properties) should be prevented, e.g., using a hysteresis based on former placements of the virtual machine.
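A minimal Python sketch of one pass of this loop for a single virtual machine is given below. It assumes that the destination vector is the flow-weighted centroid of the positional vectors of the VM's communication partners, which is one plausible reading of the (partly garbled) definitions above; the class and function names are illustrative and not part of the original formalism, and the eviction step d) for a full destination data center is omitted for brevity.

```python
import math
from dataclasses import dataclass, field

@dataclass
class DataCenter:
    ident: str
    capacity: int
    position: tuple                      # e.g. (x, y, renewable_availability)
    vms: list = field(default_factory=list)

def distance(a, b):
    """Euclidean distance between two positional vectors."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def destination_vector(flows):
    """flows: list of (partner_position, traffic_metric) pairs;
    returns the flow-weighted centroid of the partner positions."""
    total = sum(n for _, n in flows)
    dims = len(flows[0][0])
    return tuple(sum(pos[i] * n for pos, n in flows) / total for i in range(dims))

def place(vm_id, current_dc, flows, data_centers):
    """One step of the placement loop for a single VM."""
    z = destination_vector(flows)
    target = min(data_centers, key=lambda d: distance(d.position, z))
    if target is current_dc or len(target.vms) >= target.capacity:
        return current_dc                # already optimal, or eviction step omitted here
    current_dc.vms.remove(vm_id)
    target.vms.append(vm_id)
    return target
```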
Evaluation of optimal VM placement and migration costs
The algorithm to optimize the placement of virtual machines described in the previous section is based on the minimization of operational characteristics of the VMs. In the outline of the algorithm described above, the energy costs of the data centers hosting the VMs and the distance between the data centers and the clients are minimized. However, regarding the energy efficiency of the approach, the additional costs for the implementation of the algorithm have to be taken into account. While the energy consumption of the algorithm itself can be limited, e.g., by decreasing its resolution using longer intervals, the migrations resulting from the execution of the algorithm lead to additional energy costs. These migration costs can be divided into direct and indirect costs. Direct migration costs arise from the effort to carry out the migration, i.e. using compute, storage and network resources across multiple data centers and the links between them. Indirect costs are formed by the consequences of the migration and its impacts on the operational characteristics of the virtual machine. For example, such indirect costs can result from a higher latency for some clients after the migration. The algorithm presented above does not consider possible service-level requirements regarding the latency of all clients in favor of its primary goal, to benefit from the lowest energy price. Such constraints could be integrated into the optimization by using them as additional metrics of the virtual machines. This can be described as the problem of minimizing the overall energy costs C, composed of the operational costs C_op(M) of the data centers based on the properties and constraints of the virtual machines defined in P_m, the costs C_alg(M) to run the algorithm, and the direct (mig_d) and indirect (mig_i) costs for the resulting migration and placement decisions of the algorithm, C_mig(M) = C_mig,d(M) + C_mig,i(M). The minimization of C_op is the subject of several projects and related work regarding the energy efficiency and power management of data center infrastructures. As stated above, C_alg can be limited by increasing the interval between iterations of the algorithm and by its implementation. The indirect migration costs are primarily minimized by the defined constraints, while the direct migration costs are influenced by the migration technique used to transfer the virtual machines' resources between data centers. Based on related work in this area, in [2] we presented results from a simulation to leverage fluctuating renewable energies in northern, southern and central Germany. Also, we implemented an extension for OpenStack that uses migrations to enhance the energy efficiency in distributed private clouds. Besides leveraging renewable energies, the testbed also consolidated the virtual machines across different data center sites to lower the operational costs. By integrating the algorithm described in this paper into this testbed and the previously presented simulation studies, the placement of new virtual machines can also be included in the optimization. This also includes the possibility to spawn multiple instances of a service provided by a virtual machine, e.g., to address the constraints regarding the latency between the service and the clients, hence increasing the operational costs in favor of reduced indirect migration costs and a resulting lower overall cost. This can solve situations in which the algorithm would migrate a virtual machine to a new site that is too far away from some connected clients and hence would violate predefined service-level constraints (as, e.g., implemented by content delivery network and cloud providers).
Conclusions and future work
The algorithm described in this paper can be used to evaluate the use of renewable energies to enhance the overall energy efficiency across multiple data centers. Based on our previous research, we are evaluating the use of the algorithm in simulations as well as its integration into an extension for the OpenStack Nova scheduler that we developed to leverage renewable energy sources and power management in distributed private cloud infrastructures. The scheduling extension can also consider the costs for placing or migrating virtual resources across multiple public or community clouds. Such hybrid cloud environments can benefit from the energy efficiency and low energy prices in distant, supposedly public, clouds to lower the energy cost for workloads that are safe to be transferred to a third-party cloud provider. To benefit also from short-term fluctuations in the availability of renewable energy sources, e.g., on a daily basis, new live-migration and placement techniques for virtual resources have to be developed. We are currently working on container-based migration and service placement. This lightweight virtualization solution facilitates the transfer of the current state of the virtual resource and the underlying storage. Initial tests have shown that the amount of data that needs to be transferred is far less compared to full-sized virtual machines. However, the network virtualization, especially the data plane performance, and the effort to checkpoint and restore containers under load are still a challenge compared to existing and well-tested virtual machine live-migration techniques.
Figure 1. Example of the 3-dimensional vector space V.
"Computer Science"
] |
Safety helmet detection method based on semantic guidance and feature selection fusion
Safety helmet detection is a hot topic of research in the field of industrial safety for object detection technology. Existing object detection methods still face great challenges for the detection of small-scale safety helmet object. In this paper, we propose a safety helmet detection method based on the fusion of semantic guidance and feature selection. The method is able to consider the balance between detection performance and efficiency. First, a multi-scale non-local module is proposed to establish internal correlations between different scales of deep image features as well as to aggregate semantic context information to guide the information recovery of decoder network features. Then the feature selection fusion structure is proposed to adaptively select deep features and underlying key features for fusion to make up for the missing semantic and spatial detail information of the decoding network and improve the spatial location expression capability of the decoding network. Experimental analysis shows that the method in this paper has good detection performance on the expanded safety helmet wearing dataset with 5.12% improvement in mAP compared to the baseline method CenterNet, and 6.11% improvement in AP for the safety helmet object.
Introduction
In the production process, casualty accidents caused by workers failing to wear safety helmets, due to a lack of safety awareness and other reasons, are common. Considering the inefficiency of manual supervision, active research on automatic worker safety helmet detection methods in the operating environment based on object detection technology has important theoretical significance and practical application value for ensuring workers' personal safety [1] as well as achieving safe production [2, 3]. Traditional safety helmet detection methods are based on manually crafted features. Wu et al. [4] used hierarchical support vector machines for safety helmet classification. Yue et al. [5] used HOG to extract object features and constructed random fern classes in the feature domain space for safety helmet detection using random binary tests. With the rapid development of deep learning, traditional methods no longer meet the needs of today's technology, and numerous scholars currently use deep learning-based object detection techniques for safety helmet detection. Li et al. [6] proposed a method to extract safety helmet object features using lightweight networks in SSD. Zhou et al. [7] proposed adding a channel attention module to the backbone network to enhance the feature extraction capability. Cheng et al. [8] proposed the SAS-YOLOv3-Tiny safety helmet detection method for embedded devices and practical application scenarios. Gu et al. [9] proposed a helmet wearing detection method based on pose estimation, which combines the human pose to detect the helmet wearing condition. Sun et al. [10] embedded an attention mechanism in the YOLOv5 backbone, while compressing the model to accommodate real-time detection of safety helmets on mobile devices. Zhang et al. [11] designed the anchors using K-means based on the YOLOv5s method, while adding a prediction layer to improve the network… the features obtained from the backbone network and select key feature information to be fused, further helping to recover image spatial detail information.
The deep features have only a fixed receptive field, resulting in poor long-range dependence, which leads to the loss of important context information. Inspired by SPP [14] and non-local networks [15,16], this paper proposes a multi-scale non-local module, as shown in Fig. 2. The module uses pooling layers of different kernel sizes to extract semantic and spatial detail features from the input image features at different scales, while using non-local modules to establish correlations of internal features between different scales, and further uses the correlation information of the multi-scale features to generate rich semantic context information for guiding the information recovery of subsequent image features in the decoding process.
Specifically, for input features x ∈ R^(C×H×W), feature mapping is first performed using pooling kernels of different sizes k ∈ {5, 9, 13}; due to the padding operation, features m_i ∈ R^(C×H×W) that do not change the feature size and contain semantic and spatial information at different scales are obtained. Then, convolutional transformations are performed using 1 × 1 convolutional layers W_θ, W_φ and W_g to obtain θ(m) = W_θ m, φ(m) = W_φ m and g(m) = W_g m, respectively. Next, θ and φ are matrix-multiplied to obtain the similarity attention matrix A ∈ R^(N×N) with N = H×W, which is normalized using softmax to obtain Ã ∈ R^(N×N); Ã and g are matrix-multiplied again to obtain V ∈ R^(N×C), which is transformed using a 1 × 1 convolution W_z and summed element-wise with the initial input features x ∈ R^(C×H×W) to establish the correlations of internal features between the different scales.
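A rough PyTorch sketch of this step is shown below. It follows the standard non-local block with 1 × 1 convolutions W_θ, W_φ, W_g and W_z applied to a pooled feature map; the use of max pooling and the simple summation of the three pooled branches onto the input are assumptions, not details confirmed by the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Non-local attention on a (B, C, H, W) feature map; returns the attended
    features (the residual sum with the module input is done by the caller)."""
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels, kernel_size=1)
        self.phi = nn.Conv2d(channels, channels, kernel_size=1)
        self.g = nn.Conv2d(channels, channels, kernel_size=1)
        self.z = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, m):
        b, c, h, w = m.shape
        theta = self.theta(m).flatten(2).transpose(1, 2)   # (B, N, C), N = H*W
        phi = self.phi(m).flatten(2)                       # (B, C, N)
        g = self.g(m).flatten(2).transpose(1, 2)           # (B, N, C)
        attn = F.softmax(theta @ phi, dim=-1)              # (B, N, N) similarity matrix
        v = (attn @ g).transpose(1, 2).reshape(b, c, h, w)
        return self.z(v)                                   # 1x1 transform W_z

class MultiScaleNonLocal(nn.Module):
    """Pool the input with kernels {5, 9, 13} (stride 1, padding keeps the size),
    apply a non-local block at each scale and sum everything onto the input x."""
    def __init__(self, channels, kernels=(5, 9, 13)):
        super().__init__()
        self.kernels = kernels
        self.blocks = nn.ModuleList(NonLocalBlock(channels) for _ in kernels)

    def forward(self, x):
        out = x
        for k, block in zip(self.kernels, self.blocks):
            m = F.max_pool2d(x, kernel_size=k, stride=1, padding=k // 2)
            out = out + block(m)
        return out
```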
To enhance the semantic and spatial detail information of image features in the decoding process, and inspired by attention mechanisms [17][18][19], this paper proposes a feature selection fusion structure consisting of two modules, as shown in Fig. 3. In the first branch, pooled features in R^(1×H×C) are obtained using maximum pooling and average pooling. The output feature X_1 ∈ R^(C×H×W) is then obtained by applying a 7 × 7 convolution layer and a BN layer, using a Sigmoid to obtain the attention weights, weighting them onto F̃_1, and rotating the result clockwise by 90° along the H-axis. For the second branch, across C and W, the input feature F_se ∈ R^(C×H×W) is rotated 90° counterclockwise along the W-axis to obtain F̃_2 ∈ R^(H×C×W). After that, the same operation as above is performed, and the weighted feature is rotated 90° clockwise along the W-axis to obtain the output feature X_2 ∈ R^(C×H×W). The fused output is then obtained with a 1 × 1 convolution layer, as shown in Fig. 3(b). The safety helmet wearing dataset [21] is used; some images of the dataset are shown in Fig. 4. In this paper, it is divided into training and test sets according to an 8:2 ratio, with 6464 images in the training set and 1617 images in the test set.
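One of the cross-dimension branches described above can be sketched roughly as follows, assuming the "rotation" is a tensor permutation and the max- and average-pooled maps are concatenated before the 7 × 7 convolution; this mirrors triplet-attention-style designs and the exact structure in the paper may differ.

```python
import torch
import torch.nn as nn

class CrossDimAttentionBranch(nn.Module):
    """Sketch of one branch: permute the tensor so another pair of dimensions
    interacts, pool, convolve (7x7), gate with a sigmoid, and permute back."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):                              # x: (B, C, H, W)
        f = x.permute(0, 2, 1, 3)                      # "rotate" along the H-axis: (B, H, C, W)
        pooled = torch.cat([f.max(dim=1, keepdim=True).values,
                            f.mean(dim=1, keepdim=True)], dim=1)   # (B, 2, C, W)
        weights = torch.sigmoid(self.bn(self.conv(pooled)))        # (B, 1, C, W)
        return (f * weights).permute(0, 2, 1, 3)       # back to (B, C, H, W)
```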
Ablation experiments
To verify the effectiveness of the modules proposed in this paper, the input image size is set to 512 × 512, and a pre-trained ResNet-50 is used as the backbone for the ablation experiments; the results are shown in Table 1, where FSF denotes the feature selection fusion structure, NLM the non-local module, and MSNM the multi-scale non-local module proposed in this paper. mAP is calculated using an IoU threshold of 0.5.
As seen in Table 1, the addition of the feature selection fusion structure to the baseline method was able to improve the detection performance…
Fig. 6 Comparison of the detection results of the method in this paper and the baseline
The comparison between the heat maps produced by this paper's method and the baseline CenterNet is shown in Fig. 5. The comparison results with other detection methods are given in Table 2. As can be seen from Table 2, the method in this paper achieves the best performance when the input image is 512 × 512, with an mAP of 87.21% and an AP of 85.55% for the safety helmet class. For the same input size, the method in this paper improves mAP by 16.89%, 6.16% and 6.24% compared with the SSD, RFBNet and RefineDet methods, and the safety helmet AP values by 11.75%, 8.31% and 4.52%, respectively. Compared with the YOLOv3 and YOLOv4 methods with an input size of 416 × 416, the mAP of the method in this paper is improved by 1.55% and 0.66%, respectively, and the AP values for the safety helmet object are improved by 2.7% and 2.46%, respectively. In addition, compared with the FCOS, YOLOv5-m, and YOLOX-s methods with an input size of 640 × 640, the mAP values are improved by 3.63%, 1.22%, and 0.04%, respectively, while the AP values for safety helmet objects are improved by 7.82%, 1.99%, and 0.97%, respectively, further verifying the effectiveness of the method in this paper. In addition, the FPS of the method in this paper is 49.2 when the input image is 512 × 512, which is slightly lower compared with other methods, but the method has an obvious advantage in detection accuracy for safety helmet objects.
The detection results of this paper's method on the test set are shown in Fig. 7; it can be seen that the method achieves good detection results for small-scale safety helmet objects at the far end of the image.
"Engineering",
"Computer Science"
] |
A new multi-level algorithm for balanced partition problem on large scale directed graphs
Graph partition is a classical combinatorial optimization and graph theory problem with many applications, such as scientific computing, VLSI design and clustering. In this paper, we study the partition problem on large scale directed graphs under a new objective function, a new instance of the graph partition problem. We first propose the modeling of this problem, then design an algorithm based on a multi-level strategy and a recursive partition method, and finally conduct extensive simulation experiments. The experimental results verify the stability of our algorithm and show that it performs as well as METIS. In addition, our algorithm is better than METIS with respect to the unbalanced ratio.
efficient heuristic algorithm for 2-BGP with time complexity O(n² log n). Then, Fiduccia and Mattheyses [10] developed a linear-time heuristic algorithm. The spectral method [11] is also an important method to solve BGP. This method divides the given graph into two parts by using the eigenvalues and eigenvectors of its adjacency matrix or Laplacian matrix. At present, there are many graph partition algorithms based on the spectral method [12,13], which can solve 2-BGP or the general k-BGP iteratively.
On the other hand, with the increasing problem scale and improving computing power, the size of the graphs to be partitioned is becoming larger and larger, with the number of vertices reaching 100,000,000 or more. Thus, it is impractical to use the previous algorithms to solve large scale graph partition problems. Therefore, researchers proposed the multi-level method and streaming algorithms to solve this problem. The main idea of the multi-level method is to convert the original graph into a small-scale graph by multiple contractions, then divide the new graph into k parts, and finally map the partition of the contracted graph back and modify it into a partition of the original graph. The popular graph partition software and software packages METIS [14] and KaHIP [15] were designed based on this method. The main idea of a streaming algorithm is to assign each vertex of the graph to a suitable part one by one, through a specific potential function. Streaming algorithms are fast and memory-saving, which makes them very suitable for large-scale graph partition problems. The graph partition software FENNEL is based on a streaming algorithm [16].
Although a lot of theoretical results and algorithms on graph partition have been obtained, there are still some problems that have not been explored. The first problem is partitioning directed graphs. Most of the previous works are on undirected graphs, but for some practical applications, such as multi-subject coupling problems, the corresponding models should be directed graphs. Therefore, it is necessary to study partitioning on directed graphs. The second concerns the objective function. In the past, researchers often considered the vertex weights and the edge weights separately, that is, they optimized some edge-weight objective function under vertex-weight constraints. There are few works on objective functions combining the two weights together. Based on the above two points, we study the directed graph partition problem with a combined weight function.
The organization of this paper is as follows. Some basic conceptions of graph theory and the mathematical modeling of this problem will be presented in Section 2. In Section 3, we introduce the main idea and process of our algorithm. The experimental results are exhibited in Section 4. In detail, we will verify the stability of our algorithm, determine some parameters and compare our algorithm with METIS. Finally, the conclusion and future work are given in Section 5.
Basic conceptions and mathematical modeling
In this section, we will introduce some conceptions in graph theory and develop the mathematical programming for the new balanced graph partition problem.
An (undirected) graph G is an ordered pair (V(G), E(G)) consisting of a set V(G) of vertices and a set E(G) of edges. Each edge of G is an unordered pair of vertices. If an edge e joins vertices u and v, then u and v are called the ends of e. A directed graph D is an ordered pair (V(D), A(D)) consisting of a set V(D) of vertices and a set A(D) of arcs (directed edges). Each arc of D is an ordered pair of vertices. If an arc a joins vertex u to vertex v, then u is the tail of a, v is the head of a, and u and v are the ends of a. For any graph, if we regard each edge e = uv as two arcs (u, v) and (v, u), then this graph becomes a directed graph; thus, undirected graphs can be considered as a special class of directed graphs. For any vertex v in D, the notation A_D^−({v}) denotes the set of arcs whose head is v, and the notation A_D^+({v}) denotes the set of arcs whose tail is v. Furthermore, for any vertex subset X, A_D^−(X) (A_D^+(X)) is the set of arcs whose heads (tails) are in X but whose tails (heads) are not in X. A set M of independent arcs (arcs with no common ends) in a digraph D is called a matching.
Given a specific k-partition P, for any part j we define its load as a combination of the weights of the vertices in the part and the weights w(a) of its associated arcs. Let L_M^P and L_m^P be the maximum load and the minimum load among all parts of P. We then model the balanced graph partition problem as an unconstrained two-objective program, where P is the set of all k-partitions of G and ρ_P is the unbalanced ratio of the partition P.
As mentioned in Section 1, our problem differs from the one in METIS in two respects. The first is that METIS only deals with undirected graphs, whereas our problem is defined on directed graphs. The second is the different objectives. The optimization problem of METIS is $\min_P \sum_{e \in E_C} w(e)$ subject to $\sum_{v \in P_j} w(v) \le \rho \cdot \frac{1}{k}\sum_{v \in V} w(v)$ for every part $P_j$, where $E_C$ is the set of edges whose ends are in distinct parts, and $\rho \ge 1$ bounds the unbalanced ratio of the vertex weights. That is to say, the model of METIS considers vertices and edges separately, but we consider them together.
Algorithm
Since the graphs we deal with are very large (up to 100,000,000 vertices) and the number of parts is also large (up to 100,000), our algorithm combines the classical multi-level method with the recursive partition method.
Multi-level stage
Recently, the most popular method for partitioning large-scale graphs has been the multi-level method. It contains three phases: iterative contraction, initial partition and modification, and backward mapping. We introduce the details of each phase in the following.
PHASE 1: Iterative Contraction. In this phase, we construct a sequence of directed graphs $D_0, D_1, \ldots, D_m$, where $D_0$ is the original graph. To do this, we use the standard strategy for the current graph $D_i$: we compute a maximal matching $M_i$ and contract every arc of $M_i$ into a new vertex to obtain the next graph $D_{i+1}$. In detail, for any arc a = (u, v) of $M_i$, the process of contraction removes a and a′ = (v, u) and identifies u and v as a new vertex x, which is incident with those arcs (other than a and a′) that were originally incident with u or v or both. The weight of the new vertex x is the sum of the weights of vertices u and v, and the weights of any parallel arcs created by the contraction are merged accordingly. This phase ends when one of the following occurs: (i) the number of vertices of the current graph is less than ck, where k is the number of parts of the partition and c = 90 is the contraction parameter chosen by our experiments in the next section; (ii) the contraction ratio $|V(D_{i+1})|/|V(D_i)|$ is too close to 1, i.e., the contraction no longer shrinks the graph substantially. To compute the maximal matching, we use the following two random methods.
Random Maximum Weight Matching (RMWM).
This classical method is used in METIS [14] and other multi-level algorithms [15]. The process of RMWM is as follows. The vertices of the graph are visited in a random order. For a chosen vertex u, if u is already matched or its in-neighbors are all matched, we move on to the next vertex. Otherwise, u is matched with its unmatched in-neighbor v whose connecting arc has the maximum weight. When all vertices have been visited, we obtain a maximal matching.
Random Maximum Ratio Matching (RMRM). The motivation for this matching is the new objective function. The only difference between RMRM and RMWM is the way an unmatched in-neighbor is chosen to match a vertex u. Since the objective function considers the weights of vertices and arcs together, u is matched with the unmatched in-neighbor v that maximizes the ratio of arc-weight to vertex-weight, that is, the v maximizing w((v, u))/w(v).
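A minimal sketch of the two matching rules is given below, assuming an in-neighbor list plus weight dictionaries; the data layout, names and tie-breaking are our assumptions, since the text above does not fix them.

```python
import random

def random_matching(vertices, in_neighbors, arc_w, vert_w, use_ratio=False):
    """Greedy maximal matching over a random vertex order.

    use_ratio=False mimics RMWM (match on maximum arc-weight);
    use_ratio=True mimics RMRM (match on maximum arc-weight / vertex-weight).
    in_neighbors[u] lists vertices v with an arc (v, u); arc_w[(v, u)] is
    that arc's weight. This data layout is an assumption.
    """
    order = list(vertices)
    random.shuffle(order)
    matched, matching = set(), []
    for u in order:
        if u in matched:
            continue
        candidates = [v for v in in_neighbors.get(u, []) if v not in matched]
        if not candidates:
            continue
        key = (lambda v: arc_w[(v, u)] / vert_w[v]) if use_ratio \
              else (lambda v: arc_w[(v, u)])
        v = max(candidates, key=key)      # best unmatched in-neighbor
        matched.update({u, v})
        matching.append((v, u))
    return matching
```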
PHASE 2: Initial Partition and Modification.
After iterative contraction, the final graph $D_m$ has at most ck vertices, so we can quickly obtain a good initial partition by a greedy strategy. In detail, we use a best-fit-decreasing (BFD) algorithm similar to the one used for the bin-packing problem. First, we set every part $P_j = \emptyset$ for j = 1, 2, . . . , k and reorder the vertices by decreasing vertex-weight. At each stage, for the current vertex v, we compute how the load of the j-th part and of every other part i (≠ j) would change if v were placed into the j-th part, and we put v into the part for which the resulting maximum load is minimum. When all the vertices have been visited, the initial partition P is obtained.
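The following sketch illustrates the BFD placement loop. Because the paper's exact load-update formulas are not reproduced in the text above, the sketch uses the sum of vertex weights as a stand-in load, under which "minimize the resulting maximum load" reduces to placing each vertex into the currently lightest part.

```python
def bfd_initial_partition(vert_w, k):
    """Best-fit-decreasing initial partition (sketch).

    Simplification: the load of a part is taken as the sum of its
    vertex weights only; the paper's load also involves arc weights.
    """
    parts = [set() for _ in range(k)]
    loads = [0.0] * k
    for v in sorted(vert_w, key=vert_w.get, reverse=True):
        j = min(range(k), key=lambda i: loads[i])  # lightest part so far
        parts[j].add(v)
        loads[j] += vert_w[v]
    return parts, loads
```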
The aim of modification is to turn the initial partition into a local optimum. The main strategy is local search: iteratively move a vertex of the maximum-load part into another part so as to reduce the maximum load. In detail, in the current iteration we first choose a part $P_j$ with the maximum load. Then, for any vertex v in $P_j$, we calculate its in-arc weight; if we move vertex v from part $P_j$ into part $P_i$, the load of any part other than $P_i$ and $P_j$ is unchanged, while the loads $L_j$ and $L_i$ are updated accordingly. For every pair (v, $P_i$), we can therefore calculate the maximum load and the sum of loads of the swapped partition.
If some swapped partition has a maximum load less than that of the current partition, we choose the swapped partition with the minimum maximum load to replace the current one and repeat this operation. Otherwise, if some swapped partitions have a maximum load equal to that of the current partition but a smaller sum of loads, we choose the one with the minimum sum of loads to replace the current partition and repeat. Else, the current partition is a local optimum, and the modification process is finished.
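A sketch of this local-search loop is shown below. The load arithmetic itself (which follows the paper's omitted formulas) is abstracted into a caller-supplied `loads_of` function, so only the acceptance rule (strictly smaller maximum load, or equal maximum load with a smaller sum of loads) is made concrete.

```python
def modify(parts, loads_of, max_iters=10**6):
    """Local-search modification loop (control flow only).

    loads_of(parts) must return the list of part loads for a candidate
    partition; the load formulas themselves are abstracted away here.
    """
    for _ in range(max_iters):
        loads = loads_of(parts)
        best, best_key = None, (max(loads), sum(loads))
        j = loads.index(max(loads))              # a maximum-load part
        for v in list(parts[j]):
            for i in range(len(parts)):
                if i == j:
                    continue
                cand = [set(p) for p in parts]   # try moving v: P_j -> P_i
                cand[j].discard(v)
                cand[i].add(v)
                cl = loads_of(cand)
                key = (max(cl), sum(cl))
                if key < best_key:               # lexicographic: max, then sum
                    best, best_key = cand, key
        if best is None:                         # local optimum reached
            return parts
        parts = best
    return parts
```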
Recursive partition stage
As stated in the previous subsection, the iterative contraction phase ends when the number of vertices of the contracted graph $D_m$ is less than 90k, where k is the number of parts of the desired partition. This implies that if k is large, the scale of $D_m$ is also large, which can result in bad performance and a long running time. Thus, we use the recursive partition strategy to avoid this. The main idea of the recursive partition method is as follows. At the beginning, we factorize k into several small numbers, say $k = k_1 k_2 \cdots k_t$ with $k_i \le 20$. This can usually be done, because in practice k is often chosen to be a number with many factors. In the first step, we use the multi-level method to obtain a $k_1$-partition P of the original graph. Since $k_1$ is small, we can guarantee good performance and a short running time. Based on the partition P, the whole graph is decomposed into $k_1$ subgraphs, each induced by a part of P. Note that the weight of arcs in the subgraphs is the same as in the original graph, but the weight of every vertex v needs to be adjusted to account for the arcs joining v to vertices outside P[v], where P[v] is the part of P to which v belongs. The purpose of changing the vertex-weights is to ensure that the objective values of the subgraphs sum up to the objective value of the whole graph. In the second step, we divide every subgraph into $k_2$ parts, obtaining $k_1 k_2$ new subgraphs by decomposing all the old subgraphs. Hence, in the last step, we have $k_1 k_2 \cdots k_{t-1}$ subgraphs and obtain a $k_t$-partition of each of them. That is, we obtain a partition of the original graph with $k_1 k_2 \cdots k_t = k$ parts.
[Fig. 1: The unbalanced ratios of the two types of maximal matching.]
How should a recursive partition strategy be chosen? Based on our experiments in the next section, we find little difference between strategies. Thus, if k is a power of some integer b ≤ 20, that is, $k = b^t$, we divide k into b × b × · · · × b, as in the sketch below.
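A minimal sketch of this factorization step, with a greedy fallback of our own for values of k that are not pure powers:

```python
def factorize_k(k, bound=20):
    """Split k into factors <= bound for the recursive partition stage.

    Prefers writing k as a pure power b^t with b <= bound (so that
    1000 -> [10, 10, 10], as in the paper); otherwise falls back to a
    simple greedy factorization. Raises if a prime factor exceeds bound.
    """
    for b in range(bound, 1, -1):        # prefer k = b^t, largest b first
        t, m = 0, k
        while m % b == 0:
            m //= b
            t += 1
        if m == 1 and t >= 1:
            return [b] * t
    factors, m = [], k                   # greedy fallback
    for b in range(bound, 1, -1):
        while m % b == 0:
            factors.append(b)
            m //= b
    if m != 1:
        raise ValueError("k has a prime factor larger than the bound")
    return factors

print(factorize_k(1000))  # [10, 10, 10]
```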
Experimental results
In this section, our experiments are mainly divided into two parts: algorithm design and comparison with other algorithms. In the algorithm-design part, we test the performance of the two random matching methods, verify the stability of the random method, and determine the contraction parameter c and the recursive partition strategy. In the comparison part, we compare our algorithm with the k-way partition algorithm in METIS on unbalanced ratio, maximum load and running time to evaluate its performance.
The directed graphs used in the experiments consist of two classes, theoretical and practical models. We use the grid graph as the representative of the theoretical model, which can also be regarded as the inner dual graph of a square grid in the plane. We consider grid graphs of three sizes, namely Grid-1 with 1,000,000 vertices and 3,996,000 arcs, Grid-2 with 10,890,000 vertices and 43,546,800 arcs, and Grid-3 with 100,000,000 vertices and 399,600,000 arcs, each of which has random vertex-weights of 120-150; these graphs and the practical models are summarized in Table 1. All the experiments were performed on a Dell T7610 graphics workstation with an Intel Xeon 2.6 GHz CPU (6 cores) and 1866 MHz DDR3 32 GB memory.
[Fig. 2: The ratio of the max-load of RMRM to that of RMWM; bars above the baseline indicate that RMRM performs worse than RMWM.]
Matching comparison
The aim of this subsection is to test the performance of the two matching contraction methods, RMWM and RMRM, described in Subsec. 3.1. We run the experiment on five graphs: Grid-1, Grid-2, MDual, FEM-1 and FEM-3. The results can be seen in Table 2 and Figs. 1 and 2. Figure 1 illustrates that the unbalanced ratios of RMWM are better than those of RMRM, except for the maximum unbalanced ratio of the 100-partition on MDual. Figure 2 shows that, in terms of max-load, the performance of RMWM is also better than that of RMRM, although the gap is very small and the maximum ratio is less than 1.012. Hence, we use RMWM in the following.
Stability verification
In this subsection, we test the stability of the algorithm, that is, we determine whether randomness brings a large deviation to the output. The same graphs with the same numbers of parts are used in repeated runs of the experiment. We compare the experimental results from three aspects: unbalanced ratio, max-load and running time. The details can be seen in Table 3.
From Fig. 3, we can see that the gap between the best and the worst result is very small and does not exceed 0.70%. Furthermore, the unbalanced ratio in every test case is quite small, less than 2.00%, except for the worst result of the 10000-partition on FEM-3. Figure 4 illustrates the max-load and the running time, where the baseline is the average value. For each example, the worst max-load is almost equal to the best one; the difference in running time is also very small, with a maximum ratio of about 1.10. Hence, the randomness of our algorithm does not bring much deviation, and the algorithm is very stable.
[Fig. 4: The ratios of the best and worst max-load and running time to the corresponding average results; the baseline is the average value.]
Determining parameters
In our algorithm, there is one parameter and one strategy to be determined. First, we determine the contraction parameter c mentioned in Subsec. 3.1. Figure 5 shows the unbalanced ratios for different values of c. Figures 6 and 7 exhibit the ratios of the max-load and running time for other parameter values to those for c = 90, respectively. From these figures, we can see that the unbalanced ratio basically decreases as the contraction parameter increases; on the contrary, the max-load and the running time usually rise with the parameter. Overall, good performance occurs when the parameter is 70, 90 or 110. Thus, we choose c = 90.
For the recursive partition strategy, by factorizing the number k in different ways and running the corresponding experiments, we find little difference between the results. The deviations of the unbalanced ratio and of the ratio of max-load are at most 0.5% and 0.2%, respectively. Hence, we choose the simplest strategy, that is, writing k as a power of some integer b ≤ 20. For example, if k = 1000, our algorithm runs in three stages, and each stage performs a 10-partition.
Comparison with METIS
In this subsection, we compare the performance of our algorithm (Graph Partition) with the k-way partition in METIS by carrying out experiments on the 11 graphs of Table 1. Since METIS can only deal with undirected graphs, we transform each directed graph in Table 1 into an undirected graph by merging each pair of opposite arcs (u, v) and (v, u) into a single undirected edge uv with a combined weight, as sketched below. Then, the resulting undirected graphs are partitioned by the k-way partition. Finally, we calculate the unbalanced ratio and max-load of each graph with respect to the partition. The experimental results can be seen in Table 4, and the comparison can be seen in the following figures. Note that since the graph Grid-3 is huge (100,000,000 vertices and 399,600,000 arcs), METIS does not return a feasible result for it. Figure 8 illustrates the unbalanced ratios of the partition results of the two algorithms. From the figure, we can see that for each graph the unbalanced ratio for a small number of parts is better than that for a large number of parts, which is a very natural phenomenon. Most of the unbalanced ratios of our algorithm are less than 2%, and most of the results of METIS are between 6% and 9%. Clearly, our algorithm is better than METIS on unbalanced ratio. All the unbalanced ratios of the graph Copter are worse, the reason being that the average degree of Copter is much larger than that of the other graphs. Figures 9 and 10 show the ratios of the max-load and running time of our algorithm to those of METIS. Figure 9 illustrates that most of the max-load ratios are between 0.94 and 1.06, which implies that there is little difference between the two algorithms in terms of maximum load. Moreover, we can see that the ratio increases with the number of parts, mainly because we do not use multi-level modification in the back-mapping phase; this is also a key direction for our future work. From Fig. 10, we can see that for small k our algorithm often runs longer than METIS; conversely, for large k our algorithm often runs in less time than METIS. This difference is related to the number of iterations and the average number of vertices in each part.
[Fig. 9: The ratios of the max-load of our algorithm to that of METIS.]
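A minimal sketch of the directed-to-undirected conversion mentioned above; summing the weights of the two opposite arcs is our assumption, since the exact formula is lost from the text.

```python
def to_undirected(arcs):
    """Merge each pair of opposite arcs (u, v) and (v, u) into one
    undirected edge. The edge weight here is the sum of the two arc
    weights (an assumed conversion; the paper's formula is not shown)."""
    edges = {}
    for (u, v, w) in arcs:
        key = (min(u, v), max(u, v))          # canonical undirected key
        edges[key] = edges.get(key, 0.0) + w
    return [(u, v, w) for (u, v), w in edges.items()]

print(to_undirected([(1, 2, 5.0), (2, 1, 3.0)]))  # [(1, 2, 8.0)]
```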
Conclusions and future work
In this paper, we consider the balanced partition problem on large-scale directed graphs. First, we present a new mathematical model with new objective functions for this problem. Then, we combine the multi-level strategy and the recursive partition method to design an algorithm to solve it. Finally, through a large number of experiments, we determine the parameters, verify the stability of the algorithm, and compare it with the k-way partition in METIS in three aspects: unbalanced ratio, maximum load and running time. The experimental results show that, compared with METIS, our algorithm is better in unbalanced ratio and of the same quality in maximum load. Furthermore, our algorithm can deal with graphs of huge scale for which METIS cannot return a feasible result.
There are two possible directions for future work. The first is adding modification in the back-mapping phase, that is, mapping the partition of $D_m$ back to that of $D_0$ level by level and modifying the partition at each level into a local optimum. The second is to ensure the connectivity of each part. Furthermore, finding a new, efficient graph contraction method is also meaningful work.
"Computer Science",
"Mathematics"
] |
Evaluation of Virulence Factors In vitro, Resistance to Osmotic Stress and Antifungal Susceptibility of Candida tropicalis Isolated from the Coastal Environment of Northeast Brazil
Several studies have addressed the human health risks associated with the recreational use of beaches contaminated with domestic sewage. These wastes contain various micro-organisms, including Candida tropicalis. In this context, the objective of this study was to characterize C. tropicalis isolates from the sandy beach of Ponta Negra, Natal, Rio Grande do Norte, Brazil, regarding the expression of in vitro virulence factors, adaptation to osmotic stress and susceptibility to antifungal drugs. We analyzed 62 environmental isolates and observed great variation among them for the various virulence factors evaluated. In general, environmental isolates were more adherent to human buccal epithelial cells (HBEC) than the C. tropicalis ATCC13803 reference strain, and they also showed increased biofilm production. Most of the isolates presented wrinkled phenotypes on Spider medium (34 isolates, 54.8%). The majority of the isolates also showed higher proteinase production than the control strains, but low phospholipase activity. In addition, 35 isolates (56.4%) had high hemolytic activity (hemolysis index > 0.55). With regard to C. tropicalis resistance to osmotic stress, 85.4% of the isolates were able to grow in a liquid medium containing 15% sodium chloride. The strains were highly resistant to the azoles tested (fluconazole, voriconazole and itraconazole): fifteen strains (24.2%) were resistant to all three azoles. Some strains were also resistant to amphotericin B (14 isolates; 22.6%), while all of them were susceptible to the echinocandins tested, except for a single strain with intermediate susceptibility to micafungin. Our results demonstrate that C. tropicalis isolated from the sand can fully express virulence attributes and shows a high persistence capacity in the coastal environment, in addition to showing high minimal inhibitory concentrations to several antifungal drugs used in current clinical practice, demonstrating that environmental isolates may have pathogenic potential.
INTRODUCTION
The quality of the water and sand of recreational beaches is directly linked to sanitation conditions, but water resources are frequent targets of clandestine sewage discharges (dos Fernandes Vieira et al., 2013). Therefore, the pollution of beaches is often associated with poor sanitation, where untreated sewage is dumped into the sea and may contaminate the sand through the ebb and flow of tides. This can be an important risk factor of microbiological contamination for beachgoers. In tropical regions, the climatic conditions of heat and high humidity, together with contact with sand and animals and poor hygiene, may be related to the high incidence of superficial mycoses in recent years (Pelegrini et al., 2009).
The microbiological quality of beaches has long been evaluated only by fecal coliform bacteria found in seawater, whereas the impact that fungi may cause during environmental contamination has always been neglected (Maier et al., 2003). Candida tropicalis is a commensal yeast of the gastrointestinal tract of birds such as seagulls and terns, as well as fishes (Buck et al., 1977). C. tropicalis has also been isolated from polluted wastewater (Phaff et al., 1960), sandy beaches and coastal waters of Miami. In addition, this yeast belongs to the normal human microbiota, and has been isolated from both superficial and systemic infections (Basu et al., 2003).
C. tropicalis has been considered the second most frequently isolated species from episodes of candidemia in several Latin American multicenter studies, and its importance as an etiologic agent of candidemia in Northern Hemisphere countries has increased recently (Godoy et al., 2003), but the expression of virulence attributes may vary among different isolates. Adhesion to host cells is considered the first step necessary for the establishment of infection, being mediated by proteins and polysaccharides found on the cell wall of different strains of each Candida species (Cannon and Chaffin, 2001). According to the most recent data in the literature, C. tropicalis has been described as more adherent to epithelial cells than other non-Candida albicans Candida (NCAC) species (Lyon and de Resende, 2006; Biasoli et al., 2010).
Another important virulence factor of this species is the ability to form hyphae; this morphological transition is directly associated with pathogenicity (Thompson et al., 2011). C. tropicalis can also form biofilms, which Bizerra et al. (2008) defined as a well-developed, dense network of yeast cells and filamentous forms. Studies by several authors have reported increased biofilm production in clinical isolates of C. tropicalis (Paiva et al., 2012; Pannanusorn et al., 2013; Udayalaxmi et al., 2014).
Abbreviations: ATCC, American Type Culture Collection; C. albicans, Candida albicans; C. tropicalis, Candida tropicalis; CLSI, Clinical and Laboratory Standards Institute; MALDI-TOF MS, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry; OD, optical density.
A previous study suggested the existence of a family of secreted aspartic proteinases encoded by SAPT genes in the genome of C. tropicalis; however, only one enzyme, Sapt1p, has been purified from culture supernatants and biochemically characterized (Zaugg et al., 2001). C. tropicalis is also able to produce considerable amounts of phospholipase, which catalyzes the hydrolysis of phospholipids in host cell membranes (Mutlu Sariguzel et al., 2015). Hemolysins are another group of enzymes involved in Candida spp. virulence. Hemolytic activity contributes significantly to the pathogenesis of disseminated candidiasis, especially by facilitating hyphal penetration (Luo et al., 2004; Tsang et al., 2007), because hemolytic factors cause the release of hemoglobin from the host's erythrocytes for later use as an iron source (Giolo and Svidzinski, 2010).
Several virulence attributes are expressed, or have their expression modulated, in response to stress conditions imposed by the environment (Brown et al., 2014). C. tropicalis is able to grow at sodium chloride concentrations of 10-15% and above, which explains why this species is often isolated from saline environments (Butinar et al., 2005). Halotolerance allows the prolonged survival of C. tropicalis in the maritime ecosystem.
Resistance of clinical isolates of C. tropicalis to the azoles has been extensively reported (Santhanam et al., 2013; Guinea et al., 2014; Liu et al., 2014). However, there are fewer studies on the resistance of this species to other antifungal drugs, such as amphotericin B. C. tropicalis resistance to echinocandins has also been described, but it is currently of low significance because of the high efficacy of these drugs and their recent adoption (Garcia-Effron et al., 2008; Eschenauer et al., 2014).
Despite the large number of investigations performed in different parts of the world on the microbiological aspects of coastal environments, and the growing interest of society in environmental issues, there are no current studies investigating the ability of environmental strains of C. tropicalis to express different virulence factors in vitro or their susceptibility to antifungal drugs. Therefore, the present study aimed to characterize isolates of C. tropicalis obtained from the sand of Ponta Negra Beach, Natal, Rio Grande do Norte state, Brazil, with regard to adhesion to human buccal epithelial cells, proteinase and phospholipase activity, biofilm formation, production of hemolysins and hypha formation. In addition, we investigated the susceptibility of these isolates to high salt concentrations and to the following antifungal compounds: fluconazole, voriconazole, itraconazole, amphotericin B, caspofungin, micafungin and anidulafungin.
Strains and Culture Conditions
We evaluated a total of 62 C. tropicalis isolates obtained from the sand of Ponta Negra beach, Rio Grande do Norte state, Brazil, belonging to the culture collection of the Medical and Molecular Mycology Laboratory, Department of Clinical and Toxicological Analyses, Federal University of Rio Grande do Norte. Of note, strain collections were conducted in different periods: two in the summer (March 2012 and 2013) and a single one in the winter season (July 2012), at six different points of the beach. The isolates were stored at −80 °C in YPD liquid medium (dextrose 20 g/L, peptone 20 g/L, yeast extract 10 g/L) containing 20% glycerol. The 2 mL cryotubes (Cralplast) were thawed on ice, and 100 µL of the cell suspension of each strain was added to 5 mL of YPD liquid medium and incubated in a shaker (Tecnal, TE-420, São Paulo, Brazil) at 35 °C for 48 h for reactivation and verification of viability. Subsequently, 100 µL of each cell suspension was spread on the surface of Sabouraud Dextrose Agar (SDA; Oxoid, UK) containing 300 µg/mL of chloramphenicol (Parke-Davis) using a Drigalski spatula. The plates were incubated at 37 °C for 48 h. Yeast colonies were plated on CHROMagar Candida (CHROMagar Microbiology, Paris, France) to check for purity and to screen for colonies of different colors. Species identification was based on the characteristics of the cells observed microscopically after cultivation on corn meal agar containing Tween 80, as well as on classical methodology (Yarrow, 1998) and the ID32C System (bioMérieux, Marcy l'Étoile, France), whenever necessary. Of note, a control strain of C. tropicalis was used as the reference strain of the species for all the virulence attributes tested in vitro. In addition, two control strains of C. albicans (SC5314 and ATCC90028) were included in all the in vitro virulence experiments, because this species is still considered the most virulent species of the genus Candida (Moran et al., 2002). In addition, we randomly selected (blinded screening) 5 isolates of C. tropicalis obtained from patients with candidemia for comparisons.
MALDI TOF MS Identification
Candida tropicalis isolates were seeded on the surface of SDA supplemented with chloramphenicol (0.05 mg/mL) at 35 °C for 24 h. Proteins were extracted with formic acid according to an adapted protocol (Santos et al., 2011; Oliveira et al., 2015). Six hundred microliters of yeast cells at a concentration of 10⁶ cells/mL were combined with 7 µL of 70% formic acid in a 1.5 mL microcentrifuge tube. The suspension was vortexed for 20 s and immediately transferred to a reading plate (Bruker Daltonics, USA). After evaporation, 0.5 µL of a matrix solution (10 mg/mL α-cyano-4-hydroxycinnamic acid in ethanol:water:acetonitrile [1:1:1]; Sigma, USA) with 0.03% trifluoroacetic acid was added and gently mixed. The crystallization step occurred at room temperature, and the isolates were analyzed in triplicate. Protein readings were performed with a Microflex LT mass spectrometer using the FlexControl 3.0 tool (Bruker Daltonics, USA). For the acquisition of protein profiles, we considered a mass range of 2,000 to 20,000 Da obtained in linear mode with 40 nitrogen laser shots at variable speed rates of up to 60 Hz per well. Six ribosomal proteins of Escherichia coli were used for external calibration of the protein masses analyzed, as follows: 4365.30, 5096.80, 5381.40, 6255.40, 7274.50 and 10300.10 Da. Profile generation was performed using the Biotyper 3.0 and Biotyper Real Time Classification software (Bruker Daltonik GmbH).
Inoculum Standardization for Candida tropicalis Virulence Factors Evaluated In vitro
For all the virulence factors evaluated in vitro, the samples were initially grown in NGY medium (Difco Neopeptone 1 g/L, dextrose 4 g/L, Difco yeast extract 1 g/L). C. tropicalis cells were incubated for 18-24 h in a rotatory shaker (Tecnal, TE-420, São Paulo, Brazil) at 30 °C and 200 rpm. This culture medium produces an inoculum size of about 2 × 10⁸ cells/mL. Cultures were measured spectrophotometrically at a wavelength of 600 nm, with values ranging from 0.8 to 1.2 (Biochrom Libra S32). Subsequently, C. tropicalis cells were diluted to obtain the specific inoculum needed for each virulence attribute evaluated in vitro (Chaves et al., 2007).
Candida tropicalis Adherence to Human Buccal Epithelial Cells (HBEC)
Candida tropicalis cells were grown overnight to stationary phase in NGY (0.1% Neopeptone [Difco], 0.4% glucose and 0.1% yeast extract [Difco]) at 30 °C and were mixed with human buccal epithelial cells (HBEC) from healthy volunteers at a ratio of 10 yeast cells per HBEC. The mixtures were incubated at 37 °C for 1 h with shaking; the cells were then vortexed, formalin-fixed and transferred to a microscope slide. The number of C. tropicalis cells adhering to 150 HBEC was determined with the operator blinded to the nature of the material on the slide. Tests were done in triplicate (Bates et al., 2006).
Candida tropicalis Biofilm Formation
Biofilm formation assays were performed according to Jin et al. (2003), as adapted by Melo et al. (2007). First, 100 µL aliquots of a standardized cell suspension (10⁷ cells/mL) were transferred to flat-bottom 96-well microtiter plates and incubated for 1.5 h at 37 °C in a shaker at 75 rpm. As controls, eight wells of each microtiter plate were handled in an identical fashion, except that no Candida suspensions were added. Following the adhesion phase, the cell suspensions were aspirated and each well was washed twice with 150 µL of PBS to remove loosely adherent cells. A total of 100 µL of YNB medium (Yeast Nitrogen Base, Difco™) with 50 mM glucose (monohydrated D-glucose, analytical grade, Cinética) was added to each of the washed wells and incubated at 37 °C in a shaker at 75 rpm. Biofilms were allowed to develop for 66 h and were quantified by the crystal violet assay. Briefly, the biofilm-coated wells of the microtiter plates were washed twice with 150 µL of PBS and then air-dried for 45 min. Subsequently, each of the washed wells was stained with 110 µL of 0.4% aqueous crystal violet solution for 45 min. Afterward, each well was washed four times with 350 µL of sterile distilled water and immediately destained with 200 µL of 95% ethanol. After 45 min, 100 µL of the destaining solution was transferred to a new well, and the amount of crystal violet stain in this solution was measured with a microtiter plate reader (SpectraMAX 340 Tunable Microplate Reader; Molecular Devices Ltda.) at 570 nm. The absorbance values of the controls were subtracted from the values of the test wells to minimize background interference. Biofilm production was interpreted according to the criteria described by Stepanovic et al. (2007).
Morphogenesis of Candida tropicalis on Solid Media
For induction of hypha formation on solid media, the cells were grown in NGY, centrifuged at 3,000 g at room temperature and resuspended in dH₂O, followed by three washing steps. The inoculum size was 1 × 10⁹ cells/mL. From this suspension, 5 µL was spotted on the surface of Spider medium (nutrient agar 10 g, mannitol 10 g, KH₂PO₄ 2 g, agar 14.5 g, distilled water 1000 mL) (Liu et al., 1994) and of YPD medium containing 20% fetal bovine serum (FBS; Sigma) (Silva-Rocha et al., 2015). The plates were incubated at 30 °C for 7 days for subsequent observation of the macromorphological aspects of the colonies. The assay was performed in triplicate. Colonies were considered fluffy if filaments could be visually observed, including at the edges of the colonies, whereas wrinkled colonies showed wrinkles but no macroscopically observable filamentation. Smooth colonies macroscopically lacked any kind of wrinkles or filamentation. Colony micromorphology was also observed by optical microscopy. The reference strain Candida parapsilosis ATCC22019 was used as a negative control for true hypha formation.
Candida tropicalis Proteinase Production
Proteinase activity was determined by the method of Macdonald and Odds (1980). Fifty-microliter samples from NGY cultures were grown in 5 mL of YCB + BSA medium (11.7 g/L Yeast Carbon Base [Difco]; 10 g/L glucose; 5 g/L bovine serum albumin, fraction V [Sigma-Aldrich]) in a rotatory shaker at 30 °C and 200 rpm for 72 h. Proteolytic activity was determined in triplicate by measuring the increase in trichloroacetic acid-soluble products absorbing at 280 nm after 1 h of incubation of the culture supernatant with the BSA substrate at 37 °C. Specific activity was expressed as OD₂₈₀/OD₆₀₀ of the culture. OD readings equal to or below 0.02 were considered below the limit of detection of the technique and were recorded as negative.
Candida tropicalis Hemolysin Production
In order to evaluate hemolysin production, we followed the methodology proposed by Luo et al. (2001) with some adaptations. C. tropicalis cells were initially cultured on SDA at 35 °C for 18 h. Strains were then grown overnight in NGY broth. Ten microliters of each cell culture were seeded in triplicate on the surface of SDA containing 7% fresh sheep blood (Ebe-Farma) and 3% glucose, in Petri dishes of 155 mm diameter. The plates were incubated for 48 h at 37 °C in an atmosphere with 5% CO₂. After the incubation period, the presence of a clear halo around the inoculum indicated positive hemolysis. The diameters of the colonies and of the zones of hemolysis were measured in order to obtain the hemolysis index (HI) for each strain. The HI was determined by dividing the colony diameter by the precipitation zone plus the colony diameter, which allowed the classification of isolates into high, moderate and low producers, according to Linares et al. (2007). As a positive control we used a beta-hemolytic strain of Streptococcus pyogenes (group A). The reference strain Candida parapsilosis ATCC22019 was used as a negative control (Luo et al., 2001).
Candida tropicalis Phospholipase Production
For detection of phospholipase activity, the method of Price et al. (1982) was used. Overnight NGY cultures were diluted and standardized to a concentration of 2 × 10⁵ cells/mL, and the cell suspension was inoculated in triplicate on the surface of phospholipase agar (10 g peptone, 40 g dextrose, 16 g agar and 80 mL egg yolk emulsion [Fluka] added to 1000 mL of distilled water). The plates were incubated at 30 °C for 72 h. After the incubation period, the diameters of the colonies and of the halo formed around them were measured. The Pz (phospholipase zone) was determined by dividing the colony diameter by the precipitation zone plus the colony diameter. The isolates were classified as follows, according to the tertile distribution: Pz = 1, negative phospholipase activity; 0.82 ≤ Pz ≤ 0.88, weak; 0.75 ≤ Pz ≤ 0.81, moderate; 0.67 ≤ Pz ≤ 0.74, strong phospholipase producers.
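Both HI and Pz are the same simple ratio of colony diameter to colony diameter plus zone diameter. A minimal Python sketch of the computation and of the tertile classification stated above (the function names are ours):

```python
def activity_index(colony_d, zone_d):
    """Index used for both HI and Pz: colony diameter divided by the
    colony diameter plus the precipitation/hemolysis zone diameter."""
    return colony_d / (colony_d + zone_d)

def classify_pz(pz):
    """Tertile classification of phospholipase activity from the text."""
    if pz == 1:
        return "negative"
    if 0.82 <= pz <= 0.88:
        return "weak"
    if 0.75 <= pz <= 0.81:
        return "moderate"
    if 0.67 <= pz <= 0.74:
        return "strong"
    return "unclassified"

pz = activity_index(6.0, 2.0)   # 0.75
print(classify_pz(pz))          # "moderate"
```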
Sensitivity of Candida tropicalis to Osmotic Stress in Sodium Chloride
The method of Chaves and da was used, with some modifications, to determine the sensitivity of C. tropicalis to NaCl. Ten-microliter volumes of NGY-grown yeast cells were transferred to 100 µL of Sabouraud Dextrose broth with added NaCl (0.03-30%) in 96-well microtiter plates (TPP, 92096) and incubated at 35 °C for 48 h. Growth was determined visually through the turbidity perceptible within each well.
Antifungal Susceptibility Testing
The inocula of all strains tested were obtained from 2 h of cultivation on SDA at 35 °C, and an initial suspension was prepared with 90% transmittance determined spectrophotometrically at 530 nm. Then, two serial dilutions were made, the first in saline solution (1:100) and the second in RPMI (1:20), in order to obtain a final concentration of 10³ cells/mL. Susceptibility to antifungal agents was evaluated by broth microdilution, as recommended in document CLSI M27-A3 (CLSI, 2008a). Aliquots of 100 µL of the final inoculum solution were dispensed into 96-well microtiter plates containing 100 µL of various concentrations of the tested drugs. Finally, the plates were incubated at 37 °C, and the tests were read after 24 h of incubation for the echinocandins and fluconazole, and after 48 h for the other azoles and AMB. Of note, readings for voriconazole were performed at 48 h of growth, as recommended by document M27-S4 of the CLSI when growth of the control at 24 h is insufficient (CLSI, 2008b, 2012). All strains were tested in duplicate. For the azoles and echinocandins, the MIC was defined as the lowest drug concentration showing about 50% reduction in turbidity compared with the positive control well. For AMB, the MIC was defined as the lowest concentration able to inhibit any visually perceptible growth (CLSI, 2012). In addition to the environmental isolates selected for this study, the reference strains C. tropicalis ATCC13803, C. parapsilosis ATCC22019 and C. krusei ATCC6258 were included as control micro-organisms. The isolates were classified as resistant according to the following cutoff points: MIC ≥ 1 µg/mL for ITC (as recommended by document M27-S3 of the CLSI; CLSI, 2008b), VOR and the echinocandins; MIC ≥ 2 µg/mL for AMB; and MIC ≥ 8 µg/mL for FLU (CLSI, 2012).
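A minimal sketch applying the resistance cutoffs listed above; the drug abbreviations follow the text, and the SDD/intermediate categories are deliberately not reproduced here.

```python
# Resistance cutoffs (µg/mL) as stated in the text: >= 8 for FLU,
# >= 1 for ITC/VOR and the echinocandins, >= 2 for AMB.
CUTOFFS = {"FLU": 8, "ITC": 1, "VOR": 1, "AMB": 2,
           "CPF": 1, "MCF": 1, "ADF": 1}

def is_resistant(drug, mic):
    """True if the MIC meets or exceeds the resistance cutoff."""
    return mic >= CUTOFFS[drug]

print(is_resistant("FLU", 16))  # True
print(is_resistant("AMB", 1))   # False
```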
Statistical Analysis
Data were analyzed using GraphPad Prism version 6.0 and Stata version 11.0. Results are presented as mean ± standard deviation; differences were analyzed by the one-sample t-test, while the Spearman coefficient was used to assess the correlation between virulence factors. For all analyses, P-values less than 0.05 were considered significant, and a confidence interval of 95% was used.
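As a minimal sketch, the two tests named above can be reproduced with SciPy instead of GraphPad Prism/Stata; the variable contents below are made-up placeholders, not study data.

```python
from scipy import stats

# Placeholder values only, illustrating the two analyses in the text.
biofilm_od = [0.23, 1.45, 3.57, 0.85, 1.11]
hemolysis_hi = [0.70, 0.52, 0.33, 0.55, 0.48]

# Spearman correlation between two virulence factors.
rho, p_corr = stats.spearmanr(biofilm_od, hemolysis_hi)

# One-sample t-test against a hypothetical reference-strain mean.
t, p_ttest = stats.ttest_1samp(biofilm_od, popmean=0.21)

print(f"Spearman rho = {rho:.2f} (P = {p_corr:.4f})")
print(f"one-sample t-test P = {p_ttest:.4f}")
```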
Microbiological Profiling
We randomly selected 62 strains of C. tropicalis, such that half of the strains were collected in the first period of the study and the other half in the second period (2012 and 2013, respectively). Of note, all the strains yielded pure cultures and blue colonies on CHROMagar Candida. They also presented blastoconidia, pseudohyphae and true hyphae on cornmeal agar with Tween 80, and auxanogram and zymogram patterns compatible with C. tropicalis.
MALDI TOF MS Identification
Mass spectral fingerprints demonstrated that all C. tropicalis isolates were correctly identified at the species level by MALDI-TOF MS, with log(score) values higher than 2.0.
Adherence of Candida tropicalis to Human Buccal Epithelial Cells (HBEC)
The ability of the C. tropicalis isolates to adhere to HBEC was determined by counting the blastoconidia of each strain adhering to 150 HBEC under optical microscopy. All the strains were able to adhere to HBEC; however, the isolates showed variable expression of this virulence factor in vitro. The numbers ranged from 107.7 ± 5.9 adhered cells/150 HBEC (strain LMMM859) to 194.0 ± 6.9 (strain LMMM840) and 194.3 ± 5.5 (strain LMMM824). All the strains tested showed a greater ability to adhere to HBEC than the control strain C. tropicalis ATCC13803 (96 ± 10.0 adhered cells/150 HBEC; Figure 1); this difference was statistically significant. Most of the isolates were less adherent to HBEC than both C. albicans reference strains (ATCC90028, 192.7 ± 3.1, and SC5314, 217 ± 11.4; Supplementary Table S1). The bloodstream clinical isolates showed variable ability to adhere to the buccal epithelium, with an average adhesion value of 134.2 ± 65.4, which was similar to the results found for most of the environmental isolates (Figure 1; Supplementary Table S1).
Evaluation of Biofilm Formation in Candida tropicalis
Biofilm formation was induced in microtiter plates and quantified by spectrophotometry at 570 nm after crystal violet staining. All the strains were able to form biofilm on polystyrene plates. Of note, remarkable variation was observed, with OD570nm values ranging from 0.23 ± 0.02 (strain LMMM810) to 3.57 ± 0.00 (strain LMMM863). Forty-one strains (66.1%) were high biofilm producers (OD570nm > 0.85), and four strains showed extremely high biofilm formation (OD570nm > 3.0). Only three strains showed low biofilm production (OD570nm of 0.21-0.42: LMMM810, LMMM856 and LMMM860). Even so, the OD570nm reading of C. tropicalis ATCC13803 (0.21 ± 0.01) was still below that of the lowest-producing isolates of our study. A statistically significant difference in biofilm production was observed for each isolate compared with the reference strains (C. albicans ATCC90028 and SC5314; C. tropicalis ATCC13803), except for strain LMMM810 compared with C. tropicalis (Figure 2; Supplementary Table S1). The mean optical density of the bloodstream isolates for biofilm formation was similar to the average obtained for the environmental isolates (1.11 ± 0.73 versus 1.45 ± 0.03, respectively), meaning that our strains are able to form biofilms as well as clinical strains isolated from episodes of candidemia.
Evaluation of Candida tropicalis Morphogenesis on Solid Medium
In order to induce filamentation of the C. tropicalis strains on solid medium, all the isolates were grown on the surface of plates containing Spider medium and incubated at 30 °C for 7 days. As shown in Supplementary Table S1, the majority of the isolates were classified as wrinkled (35 isolates, 56.5%). Among the others, 19 strains (30.6%) were considered of fluffy phenotype, while 8 strains (12.9%) presented a smooth phenotype, with no observable filamentation. Microscopically, colonies classified as fluffy showed well-developed thick hyphae and pseudohyphae and very few blastoconidia. Wrinkled colonies also showed true hyphae, but these were thinner and shorter, while shorter pseudohyphae and blastoconidia were found in higher amounts. More than 90% of the cells of smooth colonies were blastoconidia, with very few, short pseudohyphae (Figure 3).
The same trend was found when the induction was performed on YPD + 20% FBS, although the induction of filamentation was generally more pronounced. Even colonies classified as smooth on Spider medium showed slightly higher numbers of pseudohyphal cells microscopically when grown in the presence of serum (data not shown). The control strains of both species and the clinical isolates were all filamentous (Supplementary Table S1).
Determination of Proteinase Production in Candida tropicalis
Proteolytic activity was determined by the increase in trichloroacetic acid (TCA)-soluble products absorbing at 280 nm, measured in triplicate after 1 h of incubation of the culture supernatant with the BSA substrate at 37 °C. Specific activity was expressed as OD₂₈₀/OD₆₀₀ of the culture. The isolates tested showed widely varying results: LMMM836 and LMMM839 were negative producers (OD₂₈₀/OD₆₀₀ equal to 0.02), while 22 isolates (35.5%) showed increased proteinase activity (OD₂₈₀/OD₆₀₀ equal to 0.09; Figure 4; Supplementary Table S1). When each strain was compared with the reference strains, the amount of proteinase produced by most of the environmental isolates was significantly higher than that of the control strains of both species. For the clinical isolates, the mean OD₂₈₀/OD₆₀₀ value was 0.04 ± 0.01, including two isolates that did not produce the enzyme; this was also lower than the mean proteolytic activity found for the environmental isolates (Figure 4; Supplementary Table S1).
Determination of Production of Hemolysins in Candida tropicalis
In order to assess hemolytic ability, the standardized inoculum was seeded on the surface of SDA supplemented with 7% sheep blood and 3% glucose. All the strains analyzed produced beta hemolysis, with HI values ranging from 0.33 ± 0.03 (LMMM805, the greatest hemolytic activity) to 0.70 ± 0.0 (LMMM813, the lowest hemolytic activity). Thirty-four isolates (54.8%) presented strong hemolysin production (HI ≤ 0.55), while 28 (45.2%) presented moderate production (0.56 ≤ HI ≤ 0.85). When each strain was compared with the reference strains, most of them showed an HI similar to that of C. albicans ATCC90028 but significantly different from that of C. albicans SC5314. Our environmental strains were also generally less hemolytic than C. tropicalis ATCC13803 and the bloodstream clinical isolates (mean HI of 0.38 ± 0.06; Figure 5; Supplementary Table S1).
Evaluation of Resistance to Osmotic Stress (Halotolerance) in Candida tropicalis
To evaluate resistance to osmotic stress, C. tropicalis cells were inoculated into Sabouraud dextrose broth with gradually increasing concentrations of NaCl. Of note, 53 strains (85.4%) were able to grow at a 15% NaCl concentration, including the reference strain C. tropicalis ATCC13803. The other nine environmental strains, C. albicans SC5314 and the clinical isolates were able to grow at 7.5% NaCl. All the strains were more resistant to osmotic stress than C. albicans ATCC90028, which was able to grow only at a 3.75% NaCl concentration (Supplementary Table S1).
Correlation of the Virulence Factors Tested In vitro for All the Strains Analyzed in the Present Study
In order to verify a possible correlation among the various virulence factors studied, we computed the Spearman correlation coefficient, which measures the degree of monotonic relationship between two quantitative variables. Only a weak negative correlation between biofilm formation and the HI was observed (P = 0.0027), meaning that the higher the biofilm production, the lower the HI, and therefore the greater the hemolytic activity of these strains. For all the other virulence attributes evaluated, no statistically significant correlation was obtained. The only other interesting finding was that 75% of the filamentous colonies were also strong phospholipase producers (Supplementary Table S1).
Antifungal Susceptibility Profiling of Candida tropicalis Isolates
Twenty-six environmental isolates of C. tropicalis (43.5%) were resistant to FLU, that is, they presented MIC values of 8 µg/mL or greater after 24 h of incubation. Trailing growth (the "low-high" phenomenon) was observed in five environmental isolates (8%): they were susceptible to fluconazole after 24 h of incubation, but residual growth was observed at 48 h; they were nevertheless considered susceptible to this drug. Another interesting phenomenon was observed for the environmental isolates against FLU: at low antifungal concentrations (0.5-2 µg/mL), some isolates showed 50% inhibition of growth compared with the positive control, but started to grow again in the next wells, which contained higher concentrations of the antifungal agent (from 4 to 16 µg/mL), with growth similar to the positive control. This behavior is thus similar to the paradoxical growth that occurs with the echinocandins. It was observed in 20 environmental isolates (32.2%), which were classified as FLU-susceptible. Interestingly, a similar phenomenon also occurred for the same isolates grown in the presence of VOR. For this antifungal drug, 38 isolates (61.3%) were resistant. Regarding ITC, the resistant isolates totaled 36 (58%). It is worth mentioning that 24 isolates (38.7%) were found to be susceptible dose-dependent (SDD). The atypical growth phenomenon at higher drug concentrations described above for FLU also occurred with ITC, but only for two isolates. The number of strains resistant to the three azoles tested (cross-resistance) was 15 (24.2%). Fourteen isolates (22.6%) were resistant to AMB. With respect to the echinocandins, all the strains were susceptible to the three antifungal drugs of this class; only a single isolate showed intermediate susceptibility to MCF (MIC = 0.5 µg/mL). Multidrug resistance (resistance to at least two antifungal drug classes) was observed in 12 isolates (19.3%; Table 1; Supplementary Table S2).
FIGURE 5 | Hemolytic activity of the environmental and clinical isolates of C. tropicalis and of the C. albicans SC5314 and ATCC90028 and C. tropicalis ATCC13803 reference strains. Yeast cells were grown on the surface of Sabouraud Dextrose Agar with fresh sheep blood and 3% glucose for 48 h at 37 °C in an atmosphere of 5% CO₂. The hemolytic index (HI) was determined by the ratio between the diameter of the colony and the diameter of the colony plus the hemolysis zone.
FIGURE 6 | Phospholipase activity of the environmental and clinical isolates of C. tropicalis and of the C. albicans SC5314, ATCC90028 and C. tropicalis ATCC13803 reference strains. Yeast cells were grown on the surface of phospholipase agar for 72 h at 30 °C. The phospholipase zone (Pz) was determined by the ratio between the diameter of the colony and the diameter of the colony plus the precipitation halo.
DISCUSSION
The present work investigated the pathogenic potential of C. tropicalis isolates obtained from Ponta Negra beach, Natal, Rio Grande do Norte state, Brazil. Most of the isolates tested were more adherent than C. tropicalis ATCC13803; nevertheless, they were generally less adherent than C. albicans ATCC90028. These results corroborate the most current data in the literature, where C. albicans is cited as the most adherent species, followed by C. tropicalis (Lyon and de Resende, 2006; Biasoli et al., 2010). The highly adhesive nature of the environmental isolates of the present study is consistent with the data found for C. tropicalis isolates in the study performed in Rio Grande do Norte by Chaves et al. (2013). Of note, those strains were isolated from the oral cavity of kidney transplant recipients, reinforcing the idea that environmental isolates may express the ability to adhere to HBEC in vitro as much as clinical isolates of the same species. In addition, we found in the present study that some of the environmental strains can be more adherent to the buccal epithelium than the clinical isolates from patients with candidemia.
Most of the isolates of the present study were considered strong biofilm producers. The same trend for high biofilm production in C. tropicalis was already reported by Tumbarello et al. (2007) in a study performed with clinical isolates obtained from hematogenic infections.
It has been reported that the ability of C. albicans to switch between different morphologies is related to virulence (Whiteway and Bachewich, 2007). In the present study, we found a positive correlation between the ability to form hyphae and increased secretion of phospholipase. In fact, Vidotto et al. (1999) verified the same correlation for isolates obtained from the oral cavity of HIV-positive individuals, but not for strains isolated from other body sites.
To date, there are no publications about proteinase secretion by C. tropicalis isolated from coastal environments. Studies report that C. albicans produces high levels of proteinases in vitro, while NCAC species show low enzymatic activity (Vidotto et al., 1999; Zaugg et al., 2001; Silva et al., 2012). This contradicts the results of the present study, where the strains tested showed in general higher proteinase activity than C. albicans ATCC90028 and SC5314, as well as the clinical isolates. Environmental stress (with temperatures above 40 °C) could have somehow stimulated proteinase production. In fact, it has been described that under stress, such as the presence of antifungal drugs, C. albicans cells alter Sap2 and Sap9 expression (Copping et al., 2005). Therefore, the high proteinase activity of the C. tropicalis isolates from the beach sand is again an important finding emphasizing the ability of these strains to express virulence factors in vitro.
All the isolates analyzed in this study presented some degree of hemolytic activity, and some strains had an HI close to the values obtained for the clinical isolates, corroborating the results reported elsewhere for clinical isolates of this species (Luo et al., 2001; Rossoni et al., 2013; Riceto et al., 2015). On the other hand, most of them had lower hemolysin production than the clinical isolates. This result was expected because the clinical strains were recovered from blood. It is in agreement with the results obtained by Favero et al. (2014) when analyzing clinical isolates of C. tropicalis obtained from bloodstream infections.
All environmental isolates tested showed phospholipase production, contradicting what had been previously reported by Samaranayake et al. (1984). Other authors have also reported significant phospholipase activity in clinical isolates of C. tropicalis. According to Deorukhkar et al. (2014), such inconsistencies may be due to biological differences between the isolates tested. Of note, our isolates presented low phospholipase production, unlike what was found for the reference strains and clinical isolates. It is possible that the environmental conditions did not stimulate phospholipase production, in contrast to what was observed for most of the virulence factors tested.
The current literature also describes that C. tropicalis is able to grow in culture media with NaCl concentrations above 10-15% (Butinar et al., 2005). The majority of the strains tested were resistant to concentrations of up to 15% NaCl. Osmoregulation mechanisms in C. tropicalis are still largely unknown. The role of ion efflux pumps in this process has been demonstrated, with emphasis on transport systems such as Na⁺/K⁺ and Na⁺/H⁺ (Rodriguez et al., 1996; Garcia et al., 1997; Krauke and Sychrova, 2008). Therefore, it is possible that efflux pumps are overexpressed by micro-organisms in the coastal environment in response to stress conditions, and this overexpression may also have influenced resistance to some of the antifungal drugs tested.
In this study, we observed a remarkable number of C. tropicalis strains from environmental sources resistant to the azoles tested, mainly FLU. Vijaya et al. (2014) obtained very similar results, with 42.9% of the isolates obtained from vaginal swabs resistant to this antifungal drug. An increased number of clinical isolates of Candida spp. resistant to FLU has recently been reported (Figueiredo et al., 2007; Chang et al., 2013).
It is noteworthy that environmental isolates have hardly been previously exposed to antifungal compounds. However, this possibility cannot be completely ruled out, since these micro-organisms may have been derived from human fecal contamination of the coastal environment.
Some of our isolates showed the low-high phenomenon, in which the MIC is low (<2 µg/mL) after 24 h of incubation but much higher (>64 µg/mL) after 48 h (Revankar et al., 1998; Marr et al., 1999). In vivo studies have demonstrated that such cells are actually susceptible to FLU (Rex et al., 1998). In our environmental strains, we also verified, against FLU, a phenomenon similar to the paradoxical growth described for Candida cells treated with echinocandins. For the echinocandins, the paradoxical effect occurs as an adaptive response to the damage to the fungal cell wall structure, compensating for the inhibition of glucan production (Walker et al., 2010; Chen et al., 2014). Future studies using electron microscopy and ultrastructural analyses are needed to elucidate this phenomenon in C. tropicalis.
The level of resistance of the environmental isolates of C. tropicalis to the three azoles tested was remarkable. Jiang et al. (2013), investigating resistance in 52 clinical isolates of C. tropicalis, demonstrated that 18 isolates (34.6%) were resistant to FLU and 21 (40.4%) to ITC, but only 4 (7.7%) to VOR. The authors suggested that VOR is more effective against clinical isolates of C. tropicalis than the other two drugs tested, which is not in agreement with our results, in which 38 isolates (61.3%) were resistant to this antifungal compound. All the environmental isolates were susceptible to the echinocandins tested, even those resistant to the azoles and AMB (except for a single strain with intermediate susceptibility to MCF). Similar findings were reported by Castanheira et al. (2014), who observed 100% susceptibility of C. tropicalis blood isolates to CPF, ADF and MCF, and by Pfaller et al. (2015) in a surveillance study of isolates obtained from several laboratories in the Asia-Western Pacific (APAC) region.
CONCLUSION
This study contributed to the knowledge about the in vitro expression of virulence factors by C. tropicalis, a yeast of great prevalence in the coastal environment of an important tourist town in northeastern Brazil. To the best of our knowledge, this was the first study to investigate the virulence of C. tropicalis obtained from beach sands. The significant expression of some virulence attributes and the resistance to osmotic stress, allowing survival in coastal environments, suggest the potential pathogenicity of these yeasts. In addition, the environmental strains presented significant resistance to antifungal drugs, some with multidrug resistance to azoles and amphotericin B. Further investigations are needed to elucidate the process of adaptation of this pathogen to coastal environments and its possible correlation with the ability to colonize and infect.
AUTHOR CONTRIBUTIONS
DZ isolated the strains used in this study, identified them by the classical method, performed the phenotypic analyses of virulence factors and prepared the manuscript. SdM greatly contributed to the experimental part. LdS performed the statistical analysis. WS-R conducted the evaluation of resistance to osmotic stress. EF and AM identified the isolates by MALDI-TOF MS. RL-N and RN performed the antifungal susceptibility testing. GC designed all tests. All authors approved the final manuscript.
"Biology"
] |
Diagnostics of HNSCC Patients: An Analysis of Cell Lines and Patient-Derived Xenograft Models for Personalized Therapeutical Medicine
Head and neck squamous cell carcinomas (HNSCC) are very frequent worldwide, and smoking and chronic alcohol use are recognized as the main risk factors. For oropharyngeal cancers, HPV 16 infection is known to be a risk factor as well. Next-generation sequencing has detected PI3K mutations in both HPV-positive and HPV-negative HNSCC patients, and PI3K is therefore considered an optimal molecular target. We analyzed the scientific literature published in the last 5 years regarding the newly available diagnostic platforms for targeted therapy of HNSCC HPV+/−, using HNSCC-derived cell line cultures and HNSCC pdx (patient-derived xenografts). The research results are promising and require optimal implementation in the management of HNSCC patients.
Introduction
HNSCC represents a heterogenous group of tumors, including cancers of the oropharynx, oral cavity, pharynx, and larynx. The recognized risk factors for HNSCC are smoking, chronic alcohol use, poor oral hygiene, and HPV 16 in oropharyngeal cancers. Recent meta-analyses have confirmed smoking as a risk factor for HNSCC: Alotaibi et al. found that smoking is a negative prognostic factor for overall survival in patients with hr-HPV + [1], and Ference et al. found that current smoking during treatment is associated with the greatest reduction in survival [2]. Interestingly, Skoulakis et al. found in their meta-analysis that smoking is less common in HPV-positive groups than in HPV-negative groups [3]. The role of HPV in HNSCC was confirmed by a meta-analysis which included 148 studies and 12,163 cases of HNSCC from 44 countries, in which the authors found HPV16 to be present in more than 80% of all HPV DNA-positive cases [4]. Updated information regarding incidence, prevalence and mortality is available on the Cancer Today website of the International Agency for Research on Cancer (IARC). The estimated age-standardized incidence rates (per 100,000) in 2020 for lip, oral cavity, oropharynx, nasopharynx and hypopharynx cancers, both sexes, all ages, were 7.4 in the USA, 12.7 in France, 12.9 in Romania, 6.5 in Brazil, 4.8 in China, 9.0 in Namibia and 9.8 in Australia [5]. The estimated 5-year prevalence proportions (per 100,000) in 2020 for hypopharynx, lip, oral cavity, nasopharynx, and oropharynx cancers (both sexes, all ages) were 39.2 in the USA, 65.9 in France, 59.6 in Romania, 20.6 in Brazil, 20.1 in China, 12.0 in Namibia and 51.0 in Australia [6]. The estimated age-standardized mortality rates (World, per 100,000) in 2020 for hypopharynx, lip, oral cavity, nasopharynx, and oropharynx cancers (both sexes, all ages) were 1.4 in the USA, 3.0 in France, 6.7 in Romania, 3.2 in Brazil, 2.5 in China, 5.5 in Namibia and 1.6 in Australia [7].
A recent multicenter study concluded that, in some populations in the United States, more than 90% of OPSCCs are caused by HPV [9]. The updated data available on the International Agency for Research on Cancer (IARC) Cancer Today website underline the importance of this health issue and raise some questions regarding the risk factors for developing different types of head and neck squamous cell carcinoma (HNSCC), depending on age and gender. HPV 16 is recognized as a risk factor for oropharyngeal cancers, besides smoking and chronic alcohol use [10]. In a recent meta-analysis, Mariz Bala et al. analyzed the data of more than 6000 patients to provide accurate information about the global prevalence of human papillomavirus (HPV) in oropharyngeal squamous cell carcinomas (OPSCC). In contrast to the overall HNSCC prevalence, which differs between males and females, the authors identified a similar 45% pooled prevalence of HPV-driven OPSCC for both genders, and they also suggested that double p16/HPV-DNA/RNA testing is the optimal method in regard to specificity and prognostic accuracy [11].
The treatment of HNSCC includes surgery, chemotherapy, and radiotherapy, alone or combined. In some cases, resistance to therapy, with recurrences and metastases, and/or side effects appear. This underlines the need for a new direction of research toward targeted cancer therapy. Phosphoinositide 3-kinase (PI3K)/mammalian target of rapamycin (mTOR) pathway components are key therapeutic targets in cancer, immunity, and thrombosis. In normal cells, the PI3K/mTOR pathway has regulatory roles in cell survival, proliferation, and differentiation. However, aberrant activation of this pathway frequently occurs in human cancers [12]. PI3K is believed to be one of the key therapeutic targets for cancer treatment, based on the observation that hyperactivity of PI3K signaling is significantly correlated with human tumor progression, an increase in tumor microvessel density, and enhanced chemotaxis and invasive potential of cancer cells. Enormous efforts have been dedicated to the development of drugs targeting PI3K signaling, many of which are currently being evaluated in clinical trials. PI3K inhibitors are subdivided into dual PI3K/mTOR inhibitors, pan-PI3K inhibitors and isoform-specific inhibitors [13].
The most used inhibitors in the treatment of solid tumors are the pan-PI3Kis (Buparlisib-BKM120, Pictilisib-GDC-0941 and Copanlisib-BAY 80-6946), which target each of the four catalytic isoforms of class I PI3K; therefore, they have the potential for broad activity in several tumor types with a range of different molecular alterations. However, such broad inhibition of this molecular pathway may lead to a potentially higher risk of adverse events, which could limit the use of such agents at therapeutic doses. BEZ235 is a potent, oral, ATP-competitive dual inhibitor of the four class I PI3K isoforms and the downstream effectors mTORC1/2. Isoform-specific PI3Kis such as alpelisib (BYL719) have the narrowest profile and may require careful patient selection based on potential biomarkers of sensitivity and resistance [12].
The novelty of these targeted therapies meant that, besides the promise of potentially serving as new treatment strategies, several lessons had to be learned from early studies. First, the findings to date suggest that PIK3CA and PTEN alterations are relatively weak biomarkers of clinical activity; however, PIK3CA mutations appear to be more promising as predictive factors for p110α catalytic isoform-specific inhibitors, with PTEN alterations possibly associated with resistance. Secondly, it is increasingly evident that single-agent targeting of the PI3K pathway has limited activity. Therefore, the identification of appropriate biomarkers of efficacy and the development of optimal combination therapies and dosing schedules for PI3Kis are likely to be required for the broad acceptance of this class of compounds in clinical practice [13].
The acquired amplification and mutation of PIK3CA and PIK3CB, which result in a marked upregulation of PI3K signaling itself, have been shown to cause resistance to selective PI3K inhibitors [12].
Overall, PI3K inhibition is being investigated as a potential strategy to develop novel therapeutics for cancer management. Although different researchers are moving forward with the clinical development of PI3K inhibitors, maximizing the utility of these agents in the treatment of cancer patients remains challenging. Certainly, understanding the precise mechanisms of PI3K signaling and PI3K inhibition will be critical. Optimization of the patient selection strategies and combination approaches will help increase the practical efficacy of these agents. Continued work to clarify the resistance mechanisms and the novel strategies to overcome resistance will also be important [12].
HNSCC Patients PI3K Inhibitors Clinical Trials
Over the last 5 years, five clinical trials were published (three from the USA, one from Canada and one from France), which evaluated the PI3K targeted therapy in recurrent or metastatic HNSCC patients, heavily pre-treated HNSCC patients, or locoregionally advanced SCCHN (LA-SCCHN) patients.
Chronologically, the clinical trials analyzed the synergistic effects of the combination of temsirolimus with low-dose weekly carboplatin and paclitaxel [14]; assessed the maximum tolerated dose (MTD) of the PI3K inhibitor buparlisib given concurrently with cetuximab [15]; evaluated the addition of BYL719 to cetuximab and radiation [16]; and assessed the effects of alpelisib, a class I α-specific PI3K inhibitor, in combination with concurrent cisplatin-based chemoradiation [17], and of a combination of copanlisib, an intravenous pan-class I PI3K inhibitor, with the anti-EGFR monoclonal antibody cetuximab [18]. Tumor regressions and benefit from the given PI3K therapy were reported for the combination of mTORC1 inhibitors with carboplatin and paclitaxel chemotherapy, buparlisib at 100 mg daily plus cetuximab, BYL719 associated with cetuximab and radiation, alpelisib in combination with cisplatin-based CRT (where the three-year overall survival was 77.8%), and axitinib, a potent inhibitor of the vascular endothelial growth factor receptor [14][15][16][17][19]. The most recent trial [18] studied the novel drug copanlisib combined with cetuximab, demonstrated unfavorable toxicity and limited efficacy, and was stopped earlier than initially planned.
There is currently a growing body of important research and discoveries in the field of developing specific inhibitors and in the technology for assessing the efficacy of treating cell lines with these specific inhibitors. At the same time, medical specialties often develop their practices separately from other specialties, sometimes without taking into consideration the discoveries of other medical fields. The results obtained in the laboratory should be transmitted to and applied in clinical practice to optimize the follow-up of cancer patients. For example, it would be necessary to know the cytotoxic effect needed for each patient. With selected antibiotics, it is possible to determine the lowest dose of antibiotic needed to kill a bacterium; we might try to develop a similar approach for tumor-targeted therapy, as for the moment it is not clear whether oncologists measure the levels of anti-tumoral drugs while oncologic patients continue therapy despite moderate side effects. For example, patients may have to tolerate pneumonitis and leg edema from high doses of everolimus. One needs uninterrupted, high levels of a certain drug when using prolonged therapy (antibiotic or, likely, anticancer); resistance develops with stops/starts or lower doses. Another important aspect to be taken into consideration is whether the absorption of oral drugs may be affected by food intake. To delay or avoid resistance, we might have to use a combination of multiple drugs that attack the same target, in a similar manner to how we avoid resistance when administering antibiotics, by combining two drugs that act on the same target, e.g., the cell wall. By looking at how other medical specialties deal with similar negative outcomes, such as resistance and establishing minimum effective doses, we may develop better strategies for the treatment of cancer patients [20].
Aim: to analyze the availability, sensitivity and specificity of the new diagnostic platform for targeted therapy of HNSCC HPV+/−, using HNSCC-derived cell lines culture and HNSCC pdx (patient-derived xenografts).
Materials and Methods
Literature search and study selection: a systematic search of the PubMed and EMBASE databases was carried out for all studies on HNSCC published in the last 5 years, using the following search terms: HNSCC HPV-positive PI3K-positive targeted therapy; HNSCC cell lines PI3K-targeted therapy; pdx xenograft HNSCC HPV PI3K-targeted therapy. We performed a systematic analysis of the studies published in English from 1 January 2017 to 1 March 2022 that described and analyzed the methods used for optimal targeted therapy of HNSCC patients. The 14 studies were blinded and analyzed by two persons (Figure 1). We excluded review papers and studies that tested cell lines against therapies other than PI3K inhibitors.
Results
a. We identified five studies that approached different HPV-positive or HPV-negative HNSCC cell lines, with or without PI3K mutation, and tested the effects of different PI3K inhibitors, alone or in combination with other drugs (e.g., cisplatin and docetaxel). In addition to PI3K, the authors identified other molecular targets, such as HRAS and HER3. All these studies found that the tested HNSCC cell lines were sensitive to the selected drugs, and they suggested that the continuation of these studies provides a rationale for the clinical evaluation of targeted therapy for the treatment of HPV+ HNSCC patients. The therapeutic effect was evaluated using specific and sensitive diagnostic methods, including evaluation of viability, proliferation, cytotoxicity, and apoptosis [21][22][23][24][25] (Table 1).

b. We identified nine studies that used different pdx HPV-positive HNSCC models. The majority of the analyzed studies were performed in the USA, and all authors aimed at the successful establishment of pdx models from HNSCC, including preservation of the genomic profile (e.g., HPV status, p16, PI3K). The established pdx were treated with specific drugs: PI3K inhibitors alone or in combination with cetuximab, pan-HER inhibitors, and spleen tyrosine kinase (SYK) inhibitors. The results of this research clarify the basic profile of HNSCC, the molecular mechanisms of resistance to treatment, and, of course, their potential for the development of novel molecular therapy [26][27][28][29][30][31][32][33][34] (Table 2). The available clinical characteristics of the patients from which the pdx were derived can be seen in Table 3.
Discussion and Conclusions
In our descriptive literature review, we have analyzed the recent studies that evaluated the applicability of two modern diagnostic platforms, cell lines and pdx derived from HNSCC, and their response to PI3K inhibitors. In the last 5 years, comprehensive studies were published which focused on the preparation and validation of these diagnostic platforms. Validation was achieved by preserving the histology and genomic profile of the original tumors. Most of the tested cell lines were sensitive to the PI3K drugs, but a synergistic effect was seen when the novel targeted therapy was combined with established oncologic treatment or with inhibitors of other molecular targets.
HNSCCs differ from cervical cancer regarding the involvement of HPV as an etiologic factor. Whereas for cervical cancer there are well-established guidelines to select women at high risk for carcinogenesis, progression, and invasion [35], and there is hope for the eradication of cervical cancer [36], in the case of HNSCC only one single high-risk HPV type, 16, is recognized as a risk factor, and only for OPSCC (oropharyngeal cancers), besides smoking and chronic alcohol use [10].
For most patients with head and neck squamous cell carcinoma, the current standard of care remains a combination of surgery, radiation and/or cytotoxic chemotherapy [37].
One research direction is to identify the HPV-driven HNSCC cases by using a very strict algorithm of diagnosis, similar in manner to the HPV-AHEAD project, which was carried out by an international team of researchers under the guidance of the Infections and Cancer Biology Group, International Agency for Research on Cancer, Lyon, France. The algorithm used in this study was very strict and rigorous: FFPE HNSCC samples were analyzed using HPV DNA testing, HPV RNA testing, and p16 analysis. Archived HNSCC tissue samples (189 from northeastern Romania, 364 from the central region of India, 696 from Italy and 772 from Belgium) were tested using this algorithm. In all four HPV-AHEAD studies, the highest rate of HPV DNA prevalence was detected for OPSCCs, with the following values in the studied areas: 50% in Romania, 18.9% in India, 40.4% in Italy and 36.4% in Belgium. HPV 16 was the most prevalent viral type in all the samples analyzed by these studies. HPV-driven HNSCCs were defined by the presence of both viral DNA and RNA, and the highest prevalence of this double positivity was also found in OPSCC samples [38][39][40][41]. The detection of HPV in HNSCC cases will provide the opportunity to prevent these cancers with the available HPV vaccines.
Another research direction on HNSCC is to identify predictive biomarkers or targetable mutations, employing advances in precision medicine, e.g., next-generation sequencing (NGS). HPV-positive oropharyngeal cancers have a better clinical outcome than HPV-negative cases when given radiotherapy (RT) alone and subsequent surgery if needed [42]. Most HPV-positive HNSCC patients may not need intensified chemotherapy or hyperfractionated radiotherapy, and less intensive treatment would be a better option in order to avoid side effects. Patients who are identified as HPV positive can benefit from a de-escalation of therapy; thus, more specific diagnostic assays can be applied directly in clinical practice. In a Swedish study, when hotspot mutations in 50 cancer-related genes were analyzed by NGS, PIK3CA and FGFR3 mutations were frequently detected in HPV-positive but not in HPV-negative TSCC/BOTSCC [43]. Follow-up and complementary studies were conducted in patients with HPV+ TSCC/BOTSCC and wild-type FGFR3, and researchers found that the overexpression of FGFR3 was correlated with better disease-free survival (DFS) [44][45][46]. These findings are supported by recent studies, such as one published in March 2022: PIK3CA gene mutations were present in almost 40% of HNSCC samples, and the authors considered that these patients could benefit from therapies targeting the PI3K pathway, pending further methodological standardization [47].
Previous research (2018) from Queensland, Australia considered that PIK3CA mutations may serve as predictive biomarkers for therapy selection. Therefore, the authors developed an allele-specific technology for the detection of PIK3CA alterations in circulating tumoral DNA (ctDNA). ctDNA holds promise as a potential biomarker in HNSCC [48]. Janecka-Widła et al. identified differences regarding the expression and prognostic potential of proteins involved in PI3K signaling between HPV 16-positive and HPV-negative HNSCC patients, using immunohistochemistry and qPCR [49].
In this review, we presented the updated results regarding different diagnostic platforms for guiding targeted therapy of HNSCC, both HPV positive and negative. In addition to the PI3K pathway [22,23], the analyzed studies identified other molecular targets, such as HER3 [24,25] and HRAS [21], which could be effective therapeutic targeting strategies in HNSCC cell lines, either HPV positive or negative. The multiple therapeutical targeting (e.g., TP53, CDKN2A, CCND1, EGF receptor-EGFR) in HPV-positive and negative HNSCC is supported by the findings of other authors [50][51][52].
Patient-derived xenografts employed as models for head and neck cancer are recent and modern platforms for optimizing and discovering new targeted therapies for head and neck tumors. PDX offer the opportunity for "personalized" treatment of HNSCC patients, as they have the ability to predict the clinical outcome of the same patient undergoing treatment [53]. PDX provide more comprehensive data regarding targeted therapy in comparison with HNSCC cell lines, which are monolayer cultures. PDX also offer the possibility to simulate the heterogeneity of clinical HNSCC, with histological, pathological and genetic similarities, and they are therefore considered relevant models for precision medicine in HNSCC [54].
The promising results presented here, using patient-derived xenografts and patient-derived cells for HNSCC, were validated in a recent review regarding their preclinical application in evaluating personalized medicine strategies, as a response to the need for new targeted therapies for HNSCC [55,56].
Genomic alterations and key pathways involved in the formation of HNSCC and the clinical presentation. Recently, many groups of researchers have focused on the identification of new possible therapeutic targets in HNSCC. Chen et al. (2021) identified four novel HNSCC susceptibility loci (CDKN1C rs452338, CDK4 rs2072052, E2F2 rs3820028 and E2F2 rs2075993) as genetic alterations in the cell-cycle pathway that are common in HNSCC [57]. Zhang et al. analyzed for the first time the 2-hydroxyisobutylation modification proteome of OSCC, which is significantly enriched in the actin cytoskeleton regulatory pathway, suggesting that this pathway may mediate the oncogenesis or exacerbation of OSCC [58]. Li et al. studied the role and molecular mechanism of cyclin-dependent kinase 5 (CDK5) in regulating the growth of tongue squamous cell carcinoma (TSCC). The authors found that an increased level of CDK5 expression in TSCC tissues is an independent risk factor affecting TSCC growth and patient prognosis. CDK5 was shown to act as an oncogene in TSCC and is considered a molecular marker for use in the diagnosis and treatment of TSCC [59]. Another research team identified a promising therapeutic target and prognostic marker for human OSCC: actin-like protein 8, which was found to play an oncogenic role in the pathogenesis of OSCC [60].
Screening and prevention of HNSCC. Many research studies are focusing on the identification of an optimal biomarker for the early detection of HNSCC, as routine clinical evaluation includes clinical examination and radiological assessment. Huang et al. identified in a meta-analysis that circular RNAs showed high accuracy in the diagnosis of OSCC and could be used as prospective biomarkers for optimal diagnosis [61]. Another possible biomarker for early detection is hypermethylated DNA in saliva and oral swabs for OSCC, which proved to have good accuracy and raised hope for faster detection of the evolution of these cancers, given its non-invasive sampling procedures [62]. Gaw et al. supported non-invasive hypermethylation markers using saliva and oral swabs for OSCC diagnosis, together with other biomarkers, to optimize the sensitivity and specificity of screening [63]. One possibility to prevent HPV-associated HNSCC is the implementation of HPV vaccination programs for males as well [64].
Many studies are analyzing the implications of PI3K pathway alteration for the EGFR pathway in HNSCC. Zaryouh et al. consider that co-targeting EGFR and the PI3K/Akt pathway could have a synergistic drug effect, improving sensitivity to EGFR inhibition and clinical efficacy; optimal selection of the patients who could benefit from this targeted therapy is important [65]. The anti-EGFR monoclonal antibody cetuximab, combined with copanlisib, was evaluated in recurrent and/or metastatic HNSCC patients in a phase I dose-escalation trial, and the authors stopped the trial because of the unfavorable toxicity profile [18].
Genetic alterations in epidermal growth factor receptor (EGFR) and PI3K are common in HNSCC: Mock et al. found that more than half of HPV-negative HNSCC showed a pathway activation in EGFR or PI3K [66].
A clinical trial published in Lancet Oncology by Soulières et al. evaluated whether the addition of buparlisib to paclitaxel could improve clinical outcomes compared with paclitaxel and placebo in patients with recurrent or metastatic HNSCC. After 2 years of follow-up of more than 150 patients, the authors observed improved clinical efficacy with a manageable safety profile, and they suggested that buparlisib in combination with paclitaxel could be an effective second-line treatment for recurrent or metastatic HNSCC patients [67]. One year later, the same research team published another clinical trial in which patients with TP53 alterations, HPV-negative status, and low mutational load had better overall survival with the combination therapy of buparlisib and paclitaxel [68]. Brisson et al. evaluated, in 12 patients, the maximum tolerated dose of buparlisib given concurrently with cetuximab in recurrent and metastatic HNSCC; in this pilot study, the authors concluded that buparlisib at 100 mg daily plus cetuximab was well tolerated [15]. Lenze et al. considered that buparlisib, a class I pan-PI3K inhibitor with tolerable toxicity, could improve the current 5-year overall survival for HNSCC, which is only around 50-66% [69].
Some research groups have used PDX formation to predict increased risk in HNSCC. Facompre et al. established well-characterized PDXs and organoids from HPV+ HNSCCs, and they demonstrated the retention of PIK3CA mutations and TRAF3 deletion and the absence of EGFR amplifications and NOTCH1 mutations. In a PDX model, the authors identified reduced E7 and p16INK4A levels, which were associated with recurrent HPV+ HNSCCs and with lethal outcome. For the prediction of disease recurrence risk, the authors also analyzed E2F target gene expression in PDX as a useful biomarker [27]. PDX models have also been used to evaluate potential therapeutic candidates such as Remodelin, an inhibitor of NAT10 (one of the most promising prognostic risk genes); in that study, the authors demonstrated that Remodelin significantly suppressed the growth of HNSCC in a PDX model, indicating that it may be a good candidate drug for HNSCC treatment [70]. In a very recent study, the authors aimed at, and succeeded in, developing a prediction score for locoregional failure and distant metastases in OSCC that incorporates PDX engraftment alongside the known clinicopathological risk factors [71].
In a comprehensive study, Karamboulas et al. studied the molecular profiles of 64 engrafters and 48 non-engrafters, which were tested for DNA mutations or copy-number alterations with a custom hybrid-capture targeted sequencing panel of 112 genes. The authors found no statistically significant association between any single gene mutation and engraftment. They identified that a CDKN2A mutation or deletion, a CCND1 amplification, or both were present in 83.4% of rapid engrafters but in only 18.2% of non-engrafting/slow-engrafting cases, suggesting that genomic deregulation of the G1/S checkpoint pathway correlates with engraftment [72]. Using the Illumina Cancer Hotspot Panel, Strüder et al. analyzed the molecular profile (TP53, KDR, KRAS, SMARCB1, EGFR) of 13 patient samples and the corresponding PDX, and they found that the molecular pathology is preserved in PDX, with the caveat of intratumoral heterogeneity [26].
An impressive multidisciplinary research team from the Princess Margaret Cancer Centre, Toronto, Canada, extended the use of PDX models from recapitulating many of the features of their corresponding clinical cancers, including histopathological and molecular profiles, to the preclinical assessment of CDK4 and CDK6 inhibition, using abemaciclib, on a large collection of 243 HPV-negative HNSCC patient-derived xenograft models. The authors underline the necessity of using this type of CDK4 and CDK6 inhibitor in HNSCC patients with the poorest prognosis and, even more, they mention the necessity of further studies to identify the mechanisms of resistance to abemaciclib, alone or in association with ionizing radiation [72].
Our study is a descriptive review of the latest research focusing on the evaluation of optimal targeted therapy of HNSCC, using cell lines and HNSCC-derived pdx as models. As a future research direction, we will be developing patient-derived cancer organoids that bridge the conventional gaps between PDC and PDX models [73]. In addition, an even more important approach will be to assist clinicians in appropriately incorporating all this preclinical research into the optimal management of HNSCC [37].
"Medicine",
"Biology"
] |
WiFi Localization Based on IEEE 802.11 RTS/CTS Mechanism
Location-based services provide one of the fastest growing market segments today. While the most common technique for location determination is GPS, several alternative approaches have been proposed for Wi-Fi environments, based on time of flight, signal strength, etc. Time-based techniques not only require accurate timestamping mechanisms, but also precise and synchronized clocks, which is difficult and expensive in practice. On the other hand, signal-strength-based methods need a lot of ground-truth data and require time-consuming work before the system comes into use. In consideration of cost and time consumption, we present in this paper an approach for determining the location of a general Wi-Fi device that combines the RTS/CTS mechanism with TDoA techniques. The proposed model is deployable in various environments and contains two different methods, one with clock mapping functions and one with unsynchronized clocks. We also explain the limitations of current round-trip-time (RTT) based RTS/CTS systems. Extensive experiments have been conducted, demonstrating that an accuracy of about one foot can be obtained, and the assumptions behind RTT measurements have been verified.
INTRODUCTION
One of the fastest growing market segments for computer- and smartphone-based applications is location-based services. Knowing the location of a device can be used for a variety of purposes, including navigation of a car on the road, locally relevant advertising, social networking, geo-tagging of pictures, asset tracking, shopping mall guidance, etc. Additional applications continuously appear as new technology offers higher accuracy, flexibility and compatibility. Even though GPS is nowadays the most commonly used technique for location determination and is becoming more popular with the steadily decreasing cost of this approach, there are still many instances in which GPS does not work properly. Therefore, a continuing interest in non-GPS-based techniques remains active.
With the ever increasing availability and deployment of Wi-Fi coverage, several approaches for non-GPS-based location determination have been proposed which use time-of-flight or signal-strength measurements. In the noisy Wi-Fi environment, signal-strength-based approaches have limited accuracy as well as high set-up cost, as they often require the development of a signal-strength fingerprint of the area, which may have to be repeated at regular intervals of time [3,17]. Time-based techniques determine the distance by measuring the time of flight of the packets transmitted between nodes [13]. With the radio signal traveling at the speed of light, achieving high accuracy (below one foot) requires time measurements within the 100 ps range.
In order to obtain timestamps with this degree of accuracy, it is not only necessary to have high-resolution timestamping, but also accurate clocks which must be synchronized. High-precision synchronization across multiple clocks in distributed environments is recognized to be a rather difficult and costly problem. While a slight drift in one clock may not impact measurements significantly, when multiple clocks drift, localization results can be dominated by the clock characteristics.
The goal of our work is to develop techniques which permit localization of ordinary Wi-Fi-enabled devices, including smartphones and access points, without the need to know the clock characteristics. With this kind of technique, it is easy to estimate location across different independent clocks. Several researchers have proposed time-based location methods that do not require perfect synchronization among different clocks. Youssef [23] presented a distributed algorithm which determines propagation delays among a set of n nodes.
It deals with general crystal-oscillator clocks and does not require an infrastructure of accurate clocks. Mah [19] improved the precision of timestamps using off-the-shelf wireless network cards. Generally, synchronization can be achieved relative to a master clock or to the average of the clocks. The Network Time Protocol [21] and IEEE 1588 [18] physically adjust offsets and frequencies to a master clock. Consensus clock synchronization [19] and joint distributed synchronization [7] adjust the local clock to an average. There are also papers that have focused on RTS/CTS-based localization. Most of them measured the round-trip time (RTT) of RTS/CTS in order to determine the pairwise propagation delay between a sending node and an access point. Hoene [16] presented a software-based trilateration algorithm based on the measurement of the RTT of a sequence of wireless MAC packets (e.g., RTS/CTS, DATA/ACK, etc.). It overcomes the low clock-resolution constraints of off-the-shelf IEEE 802.11 cards and achieves an accuracy of four meters. Prieto and Bahillo [2,22] added low-cost hardware to the existing system and applied statistical linear-regression estimates to ToA computation. An external time counter for measuring RTT is used, and in line-of-sight (LOS) scenarios it can achieve one-meter accuracy. All of the above-mentioned methods have utilized RTT measurements for ToA or TDoA computation, with which high accuracy cannot be achieved due to the uncertainty of the characteristics of wireless devices. In addition, the complete scheme including RTS, CTS, DATA and ACK has been used, which increases wireless network traffic and costs more energy.
The steps which convert ToA and TDoA measurements to location have been studied as two optimization problems for a long time, i.e., trilateration [20] and hyperbolic location [6,11]. These two types of determination are the bases of almost all modern time-based localization systems, in which trilateration uses ToA and hyperbolic location uses TDoA measurements. Many efficient algorithms that solve these optimization problems have been proposed, such as the iterative gradient-descent method for least squares [5]; a minimal sketch of this step is shown below. One important extension is stated in [4] and [8] for sensor-network localization.
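To make this step concrete, the following is a minimal, self-contained C sketch (not code from the cited works) that solves the 2-D hyperbolic (TDoA) least-squares problem by plain gradient descent. The anchor coordinates, step size, iteration count and the synthetic test case are illustrative assumptions only.

#include <math.h>
#include <stdio.h>

#define N_ANCH 3

/* Euclidean distance between two 2-D points. */
static double dist(const double p[2], const double a[2])
{
    return hypot(p[0] - a[0], p[1] - a[1]);
}

/* Gradient descent on the cost  sum_i ( ||p-a_i|| - ||p-a_0|| - m_i )^2,
 * where a_0 is the reference anchor and m_i is the measured range
 * difference (TDoA times c) for anchor i.  p holds the estimate. */
static void tdoa_locate(double anch[N_ANCH][2], const double m[N_ANCH],
                        double p[2], int iters, double step)
{
    for (int it = 0; it < iters; ++it) {
        double g[2] = {0.0, 0.0};
        double d0 = dist(p, anch[0]);
        for (int i = 1; i < N_ANCH; ++i) {
            double di = dist(p, anch[i]);
            double r  = (di - d0) - m[i];              /* residual        */
            for (int k = 0; k < 2; ++k) {
                double dr = (p[k] - anch[i][k]) / di   /* residual grad.  */
                          - (p[k] - anch[0][k]) / d0;
                g[k] += r * dr;
            }
        }
        p[0] -= step * g[0];                           /* descent step    */
        p[1] -= step * g[1];
    }
}

int main(void)
{
    /* Anchors roughly 10-20 feet apart, as in the experiments. */
    double anch[N_ANCH][2] = { {0, 0}, {15, 0}, {0, 12} };
    double truth[2] = { 9.0, 7.0 };
    double m[N_ANCH] = { 0.0,
        dist(truth, anch[1]) - dist(truth, anch[0]),
        dist(truth, anch[2]) - dist(truth, anch[0]) };
    double p[2] = { 5.0, 5.0 };                        /* initial guess   */
    tdoa_locate(anch, m, p, 5000, 0.05);
    printf("estimate: (%.2f, %.2f) ft\n", p[0], p[1]);
    return 0;
}

Run with noise-free synthetic measurements, the estimate converges to the true position; in practice, noisy TDoA values would be averaged over many rounds before this step, as discussed later in the paper.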
In this paper, two TDoA-based methods for localization in IEEE 802.11 wireless networks are presented. These two methods use the RTS/CTS handshake without RTT measurements. With the help of three customized digital transceivers (also called SMart Integrated Localisation Extension, or SMiLE, boards) [10] from the Austrian Academy of Sciences, one can locate an arbitrary AP that is within transmitting range of the boards, as long as the MAC address of the AP is known. Specifically, we keep transmitting RTS packets from one SMiLE board to the AP and get CTS responses from the AP. At the same time, the receiving timestamps of the RTS and CTS at the second and third boards are recorded. Based only on receiving-time differences, we can compute the TDoA for the AP and the listening nodes. As stated above, DATA packets are much longer than RTS/CTS. Thus, it costs more energy and time to send them, but only a small part of the packets is useful for localization, i.e., the timestamps. On the other hand, the RTS/CTS mechanism reduces frame collisions and does not cause a large increase in traffic load. So it is a better idea to use only RTS/CTS packets, which are shorter in length and can be sent automatically.
The remainder of this paper is organized as follows. The new time-based location system design using only RTS/CTS is described in Section II, where the mathematical formulations and assumptions are also given. It turns out that in one method we can neglect the effect of the drift ratio, in a sense that will be shown in the experiments, while in the other method we do not need scheduling of RTS transmissions. The current use of RTT measurements is explained in Section III. We then implement the theoretical model described. Experimental results are discussed in Section V, where the RTT verifications are described first, together with statistical distributions and explanations. Location-estimation performance for both indoor and outdoor settings is then explained, and the error distributions are also discussed. Finally, we conclude our work and give an overview of current and future steps in related research.
OUR APPROACH
Originally, RTS/CTS is an optional mechanism used to reduce the frame collisions introduced by the hidden-node problem, as well as for virtual carrier sensing in CSMA/CA. Our approach is to make use of the RTS/CTS scheme to build a TDoA-based localization model which relies on timestamping the exchanged wireless RTS/CTS packets. In this section, we first describe the model and then present the mathematical formulation of the system, which verifies the feasibility of this model.
Model Description
As discussed earlier, we aim to obtain TDoA values for localization. This requires that one packet be received and timestamped by multiple nodes. The infrastructure consists of at least three anchor nodes and one unknown node, all of which are within transmitting range of each other in the same coordinate system. In an ideal Wi-Fi environment, this range is generally at most 40 to 50 meters. Specifically, the anchor nodes are at known locations, and the unknown node (UN) can be any wireless device, such as an AP or a smartphone. The UN responds to RTS automatically if its status is idle, according to the 802.11 protocol. The packet-exchange scheme is described in detail below.
In the beginning, one of the anchor nodes sends RTS packets at a certain interval and gets CTS back from the UN, while all other anchor nodes are in listening mode, i.e., they receive both the RTS and the CTS from that anchor and the UN. In a single round of exchanging RTS/CTS messages, there are 2n timestamps on the anchor side, where n is the number of anchors. One anchor has the RTS sending and CTS receiving timestamps, and each of the other anchor nodes has receiving timestamps for both RTS and CTS. An explicit 3-node example is illustrated in Fig. 1.
Based on these timestamps, as well as the pairwise distances between the anchor nodes, TDoA values can be computed relative to the UN. We propose two slightly different methods based on the mechanism stated above. One method is based on a single round of RTS/CTS and the other relies on each anchor taking turns sending RTS/CTS. For ease of notation and explanation, we summarize the notation in Table 1. In the following, we look at the three-anchor-node case in more detail, which is the smallest case.
Mathematical Formulations
In this part, the mathematical formulations are derived and explained separately for the two methods. We first define some common notation for ease of understanding: the unknown node (UN) is the wireless device we aim to locate, and the anchors are devices that can timestamp and whose locations are known to us. t_1 through t_3^U in Fig. 1 are defined as sending or receiving times, and τ_1 through τ_3^U are the corresponding timestamps read from the local clocks. Specifically, the superscript denotes the sending node and the subscript the receiving node; U represents the UN. β stands for the clock drift and α for the offset.
Anchors Taking Turns Sending RTS
In this scheme, all anchors send RTS and get CTS back consecutively. This method provides more timestamps and thus does not require synchronization across multiple clocks.
Let us first recall the general linear clock model with drifts and offsets, as stated in the Pinpoint system [23]. Using the representation that the global time is t, the local clock reading as a function of drift rate and offset is given by Eq. (1). Note that for a typical crystal-oscillator clock, the value of the clock drift is on the order of 10^-7. Here, t_1 is the RTS sending time from anchor 1 and t^U is the CTS sending time from the UN. If three anchors take turns sending RTS, we obtain 6 × 3 = 18 timestamps. Assuming D(i, j) is the pairwise distance between nodes i and j, and c is the speed of light in air, we obtain Eq. (2), which gives the relationship between the actual distances and the measurements obtained through packet exchange with timestamps. This is also the foundation of many synchronization and location methods, since the speed of light is generally assumed to be constant. Once we obtain d(i, j), the pairwise distance is known to us. Suppose the ToA values among the anchor nodes are known. Applying Eq. (1) to τ_1 through τ_3^U separately and substituting the different t terms gives Eqs. (1.1)-(1.4). Here the left-hand sides are clock readings from independent clocks, and they amount to billions of ticks (10^9 to 10^10) per second when nano- or sub-nanosecond readings are used. Such clock resolution is required if we want to achieve a distance-measurement accuracy of one foot or lower. However, with a typical clock drift of one part in 10^7, an error of 100 ns may be introduced, resulting in a distance error of about 100 feet. Subtracting Eq. (1.2) from Eq. (1.1) and Eq. (1.4) from Eq. (1.3) yields Eqs. (3) and (4). From our empirical experiments with various access points, iPhones and Android smartphones, the right-hand sides of Eqs. (3) and (4) without the drift-rate terms β_3 and β_4 are around (400 ± 1) µs. When this is multiplied by the drift ratio (typically 1 ± 10^-7), the error is within a few inches. So we drop the drift-ratio terms in the equations and make the approximations in Eqs. (5) and (6). Subtracting Eq. (5) from Eq. (6) and eliminating common terms yields Eq. (7). The left-hand side of Eq. (7) consists of timestamps that can be obtained easily by reading the local clocks, and the right-hand side has the following expression: TDoA(U, 3, 2) − TDoA(1, 3, 2). The term in the first parenthesis of Eq. (7) can be interpreted as TDoA(U, 3, 2), the TDoA value between the AP and anchors 3 and 2; similarly, the second term is TDoA(1, 3, 2). If the TDoA values among the anchor nodes can be either estimated through a time-based method, such as the steps of trilateration and hyperbolic location stated in [13], or computed from physical measurements, the TDoA values of the UN can be derived without difficulty.
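Since the displayed equations themselves did not survive extraction, the following is a hedged reconstruction, in LaTeX, of the clock model and of the drift-free relation that Eq. (7) appears to express, using the notation of Table 1 (superscript = sending node, subscript = receiving node); the exact form and numbering in the original may differ.

\begin{align}
\tau &= \beta\, t + \alpha \tag{1}\\
\tau_3^{U} - \tau_3^{1} &\approx \Bigl(t^{U} + \tfrac{D(U,3)}{c}\Bigr) - \Bigl(t^{1} + \tfrac{D(1,3)}{c}\Bigr) \tag{5}\\
\tau_2^{U} - \tau_2^{1} &\approx \Bigl(t^{U} + \tfrac{D(U,2)}{c}\Bigr) - \Bigl(t^{1} + \tfrac{D(1,2)}{c}\Bigr) \tag{6}\\
\bigl(\tau_3^{U}-\tau_3^{1}\bigr) - \bigl(\tau_2^{U}-\tau_2^{1}\bigr)
 &\approx \frac{D(U,3)-D(U,2)}{c} - \frac{D(1,3)-D(1,2)}{c} \tag{7}
\end{align}

The offsets α cancel exactly because each difference in Eqs. (5) and (6) is taken on a single clock, and setting β ≈ 1 only rescales an interval of roughly 400 µs, so the neglected drift contributes at most a few inches, as argued above. The right-hand side of Eq. (7) is then TDoA(U, 3, 2) − TDoA(1, 3, 2).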
As stated above, in this method without drift compensation, one hyperbola can be generated from the RTS sent by one anchor, without synchronization. That is the reason why the anchors need to take turns sending RTS, which provides three hyperbolas among the three nodes.
Single Round RTS/CTS Method
While only one anchor is sending at a time, we have a single round of 6 timestamps. In order to utilize only these timestamps, the clocks have to be synchronized first. We aim to synchronize all anchors with respect to the one sending RTS, with the help of the timestamps related to the RTS, and to determine the TDoA with the CTS timestamps. Here we use the linear mapping functions described in [15]. The idea is to convey timestamps for events taking place at one node to the other node(s).
Figure 2: Synchronization Using Two Consecutive Rounds
We suppose anchor 1 is sending, and we synchronize anchors 2 and 3 with respect to anchor 1. For ease of explanation, we take anchor 2 as an example; anchor 3 works in the same way. For the i-th round of RTS, we denote the timestamps as τ_1(i). We use two consecutive rounds of sending and receiving timestamps to estimate the relative drift ratio (Fig. 2); this gives Eq. (8). Once we have obtained the relative drift, we can calculate the offset difference; in that expression, d(1,2) is the ToA between anchors 1 and 2 and the drift ratio is the one determined in Eq. (8). Generally, drift rates and offsets are not constant over time, so we utilize two consecutive rounds of timestamps to determine the drift and offset in between. The CTS is sent between the two RTS packets, and in this way the receiving time of the CTS packets at anchor 2 can be mapped to anchor 1 with a linear function of the appropriate parameters; the left-hand side of that mapping is the timestamp τ_2^U after being mapped to anchor 1. We map all timestamps to anchor 1 so that they are on a single clock scale. Thus, the CTS sent from the UN and received at all anchors can be used for TDoA determination; a minimal code sketch of this mapping is given below. While only one anchor is sending, two TDoA values are obtained and the UN can be located with two hyperbolas. The main point we make here is to map the timestamps from multiple anchors to one, in the sense that they are synchronized. Thus, no further turns of RTS are needed.
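As an illustration of how this mapping could be computed in software, the following C sketch (hypothetical names; this is not the authors' NIOS II firmware) estimates the relative drift and offset of anchor 2 with respect to anchor 1 from two consecutive RTS rounds, maps anchor 2's CTS reception timestamp onto anchor 1's clock scale, and converts the resulting time difference into a TDoA range difference.

#include <stdio.h>

#define C_FT_PER_S 983571056.4      /* speed of light, feet per second */

/* All timestamps are in seconds on the respective local clocks.
 * tx1[0..1] : RTS transmit times at anchor 1 for rounds i and i+1
 * rx2[0..1] : RTS receive times at anchor 2 for the same rounds
 * cts1,cts2 : CTS (from the UN) receive times at anchors 1 and 2
 * d12_ft    : known anchor-1 / anchor-2 distance in feet
 * Returns D(U,2) - D(U,1) in feet (the TDoA range difference).     */
double tdoa_range_diff_ft(const double tx1[2], const double rx2[2],
                          double cts1, double cts2, double d12_ft)
{
    double toa12    = d12_ft / C_FT_PER_S;                 /* ToA(1,2) */
    double drift21  = (rx2[1] - rx2[0]) / (tx1[1] - tx1[0]);
    double offset21 = rx2[0] - drift21 * (tx1[0] + toa12);
    /* Map anchor 2's CTS timestamp onto anchor 1's clock scale. */
    double cts2_on1 = (cts2 - offset21) / drift21;
    return (cts2_on1 - cts1) * C_FT_PER_S;
}

int main(void)
{
    /* Synthetic example: clock 2 runs with drift 1 + 2e-7 and a 3 ms
     * offset; anchors 15 ft apart; the true range difference is 4 ft. */
    double tx1[2] = { 1.000000000, 1.009000000 };          /* 9 ms apart */
    double rx2[2], cts1, cts2, d12 = 15.0, beta = 1.0 + 2e-7, alpha = 3e-3;
    double toa12 = d12 / C_FT_PER_S;
    rx2[0] = beta * (tx1[0] + toa12) + alpha;
    rx2[1] = beta * (tx1[1] + toa12) + alpha;
    cts1   = 1.000400000;                                  /* CTS at 1   */
    cts2   = beta * (cts1 + 4.0 / C_FT_PER_S) + alpha;     /* 4 ft later */
    printf("estimated range difference: %.2f ft\n",
           tdoa_range_diff_ft(tx1, rx2, cts1, cts2, d12));
    return 0;
}

With drift and offset modeled as piecewise constant between consecutive rounds, as the text describes, this mapping recovers the 4-foot range difference in the synthetic case; real measurements would add the channel and timestamping noise discussed in Section V.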
Error Discussion
In the method in which the anchors take turns sending RTS, the assumption has been made that the drift ratio is equal to 1. In reality, the drift rate fluctuates within the 1 ± 10^-7 range most of the time. This assumption introduces an error of several inches in the TDoA measurements, which is acceptable. The drift-rate terms can be eliminated in the sense that we difference the TDoA values between an anchor and the UN.
While RTT measurements make the computation easier, it is not always possible for both sides to be timestamped precisely. According to the specifications of the 802.11 protocol [1], a node must respond to an RTS with a CTS within a SIFS (Short Inter-Frame Space), which is specified to be 10 or 16 µs. When we are interested in measuring distance with accuracies below one foot, we need time measurements with sub-nanosecond accuracy. The variability in the SIFS does not lend itself to yielding such precision. In addition, many delays on the UN side are likely to exist, i.e., the delay between receiving a packet and recording the clock, the delay between timestamping and sending the CTS, the delay between starting to send the CTS and recording the clock, etc.
In the scheme proposed here, we do not rely on the time delay between RTS and CTS at one node; instead, we utilize this mechanism to get the UN to send a CTS which is received at multiple other nodes. We can then utilize time-difference-of-arrival techniques to determine the location of the UN, both with and without synchronization.
IMPLEMENTATIONS
Our proposed RTS/CTS location system contains multiple anchor nodes and one unknown node (UN). For the implementation, we use as anchor nodes three customized SMiLE boards from Oregano Systems, designed by researchers at the Austrian Academy of Sciences [9], as stated in the Introduction. Each of these boards contains one Altera Cyclone III FPGA chip that acts as the central processing unit, transceivers which can transmit Wi-Fi signals on different channels within the 2.4 GHz band using the 802.11b protocol, and multiple clock sources. These boards use a 25 MHz local oscillator to generate independent clocks. Each board can also timestamp ingress and egress frames at a resolution of 88.78 ps (the duration of one clock tick). Other experiments have been conducted on these boards, and the noise term was shown to have a standard deviation of 60 ps [15]. The software used is ANSI C code based on the Altera NIOS II platform, in which we can define the sending time interval, power level, clock sources and frequency channels, as well as manage the message-exchange schemes of the boards in the system.
In order to estimate locations using the different proposed methods and to characterize the RTT measurements, a series of tests was carried out both inside an office building and outside. We tested the empirical RTT distribution with one SMiLE board and several general APs. For location estimation, three boards and one Cisco Linksys AP (model WRT54GL) were used for both the indoor and outdoor experiments.
EXPERIMENTAL RESULTS
Two series of experiments were conducted: RTS/CTS-based RTT measurements and TDoA location estimation based on the model in Section II. We first give the configuration of the experimental setup and then discuss the results of the different experiments, respectively.
Configurations
Since the SMiLE boards do not have a MAC layer, the packet formats have to be generated manually. We tested the communication between the anchor nodes and the UN before every experiment. Channel 6, with freq = 2437 MHz, was used and verified to have the highest response ratio. This channel is also typically non-overlapping among the 802.11b DSSS channels. The RTS sending interval was chosen to be 9 ms, which is large compared with the RTT of a single RTS/CTS round. All readings of the local clocks are made after processing; therefore, the RTT measurements contain the transmission and reception intervals. The default sending speed is 1 Mbit/s, and the lengths of the standard RTS and CTS packets [1] are 20 and 14 bytes, respectively. So the RTS interval is (20 × 8) bits / 1 Mbit/s = 160 µs, and the CTS interval is 112 µs. Compared with these microsecond-scale processing times, the propagation delay is within 1%. The detailed structure of the RTT is shown in Fig. 3: the total RTT is approximately 160 + 2 × 112 + SIFS = 394 µs, where SIFS is 10 µs in the 802.11b protocol, and we have the receiving interval for the RTS and both the sending and receiving intervals for the CTS.
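As a quick sanity check on this timing budget, the following short C snippet recomputes the expected RTT from the standard frame lengths at the 1 Mbit/s basic rate; the propagation delay is neglected, as above, and the constants are taken from the text rather than measured.

#include <stdio.h>

int main(void)
{
    const double rate_bps = 1e6;                     /* 802.11b basic rate   */
    const double rts_us   = 20 * 8 / rate_bps * 1e6; /* 20-byte RTS: 160 us  */
    const double cts_us   = 14 * 8 / rate_bps * 1e6; /* 14-byte CTS: 112 us  */
    const double sifs_us  = 10.0;                    /* SIFS in 802.11b      */
    /* RTS reception + SIFS + CTS transmission + CTS reception */
    double rtt_us = rts_us + sifs_us + 2 * cts_us;
    printf("expected RTT ~ %.0f us\n", rtt_us);      /* prints 394 us        */
    return 0;
}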
In the complete RTS/CTS mechanism, the four-way handshake also includes DATA and ACK packets. In our setup, only RTS/CTS is used, because these packets have fixed lengths and formats; the timestamping process also takes less time.
RTT Measurements
We set one anchor to send an RTS every 9 ms, whether it got a CTS from the UN or not. The measurements took place along the corridor outside an office after 8:00 pm, when there was less network traffic and no movement of people. The corridor is about 10 feet wide and 10 feet high. When the anchors received a CTS, they read their clocks and reported the packets, along with the corresponding RTS timestamps, to a laptop via a wired connection through a switch.
The anchors and the UN were put on top of 4-foot boxes to eliminate floor reflections. Fig. 4 shows the histograms and distributions of the RTT measurements for actual distances of 10, 20 and 30 feet; the offset has been eliminated according to the explanation above. All the nodes were in line of sight (LOS) of each other, and the pairwise distances were measured with a tape. For each single experiment, 1000 consecutive readings were collected. The figure shows how the RTT, expressed in inches, grows as the distance increases, similar to the results in [22], in which a nonparametric method was used and tested. A Gaussian error distribution was also assumed in [12], which matches our experimental results. As already mentioned in Section II, variations in the SIFS and in the sending and receiving processing times all contribute to the instability of the RTT values. The resolution of 802.11 WLAN techniques is in microseconds, which is not sufficient for distance estimation if we aim to achieve a precision of several feet.
Location Estimation
The purpose of developing a time-based system for TDoA determination is to use it for localization. Hyperbolic location techniques convert distance differences into the planar coordinates of unknown positions. In our experiments, three SMiLE boards were deployed as anchors. The same Cisco Linksys AP used in the RTT measurements was used here as the UN for location estimation. As explained in Section II, TDoA values can be generated either by sending RTS from only one anchor or by the anchors in turn.
Also, the TDoA values among the anchor nodes are needed for localization in both methods, and they are easy to obtain either by making physical measurements or by exchanging packets to obtain ToA values in the same way as mentioned in [13]. Here, for accuracy, physical measurements were used. In each set of experiments, three similar tests differed only in the sending node; the reason why we manually ran three independent experiments rather than scheduling the anchors to take turns sending RTS is that the packet-loss ratio of RTS/CTS is much higher than for normal data packets, and it frequently happens that no CTS responds to an RTS. It is therefore more difficult to collect all the packets of a single round in the scheduling scheme than with only one anchor sending. Two sets of localization experiments, indoor and outdoor, were conducted, and the results of the two methods are discussed and compared in the following. For the synchronized method, only one sending anchor was used.
Outdoor
The outdoor experiments were conducted on a Saturday morning outside a building on campus. The environment was quite clean and there were no obstacles around. In modern WLAN networks, APs are typically distributed within 50 meters of each other, so in our experiments the nodes were placed 10 to 20 feet apart, in line of sight of each other.
The topology is a convex quadrilateral which is almost a rectangle, except that the UN lies slightly outside the rectangle. The angles between anchors 1 and 2 and between anchors 2 and 3 are 90 degrees. The nodes are distributed in a 2-D plane, so that the Euclidean distance can be applied in the location estimation. The anchors were organized so that only one of them sent RTS, with an interval of 9 ms, and whenever it received a CTS from the UN, the timestamps were reported to the laptop through the switch, following the same process as in the RTT measurements. The other two anchors were only listening to both RTS and CTS packets, and, as stated in Section II, six timestamps were collected for a single round. Table 3 contains the actual pairwise distances measured by tape. We first give the plot of the relative clock drift and offset for the synchronized RTS/CTS method in Fig. 6.
Table 3: Outdoor Pairwise distance by physical measurements
Most of the time, the drift and offset are quite stable within a small range; when there are adjustments or deviations in the clock, they jump considerably, as shown in Fig. 6. This is the reason why we need to determine the drifts and offsets for the different rounds, rather than assuming a single constant across time. The experiment took place on a windy day. The flow of air caused changes in the surrounding environment, which may cause instability of temperature and pressure. These factors may result in differences in the speed of light and in turn introduce errors in the timestamps and TDoA measurements. This is an additional error term, besides the wireless channel noise during transmission, which cannot be eliminated.
Indoor
In the indoor environment, we deployed the nodes so that the pairwise distances were within 10 feet. This is mainly because of the higher precision requirements for indoor localization. All other settings were the same as in the outdoor experiments. The drift and offset measurements have similar distributions, which are not shown here. The actual distances for one experiment are listed in the corresponding table.
Discussions
From Table 4, noise introduced during the wireless transmission process appears as one major error term in 802.11 WLAN. This error can be greatly mitigated through wired transmission, but that would decrease the mobility of the system.
Here we compared the TDoA values generated by the proposed methods against tape measurements, for both methods. The distribution in Fig. 7 shows that the TDoA estimation errors are approximately Gaussian. It would also not be surprising if the errors were independent and identically distributed (i.i.d.); this matches our assumption, although it must be confirmed with further verification. If the TDoA errors are indeed i.i.d., their variance can be reduced by averaging over repeated rounds of measurements. This is a significant improvement, since we can collect 100 rounds within one second, lowering the variance by a factor of 100 and the standard deviation by a factor of 10.
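For completeness, the standard argument behind that last figure: if the per-round TDoA error e_i is i.i.d. with variance σ², then averaging N rounds gives

```latex
\operatorname{Var}\!\left(\frac{1}{N}\sum_{i=1}^{N} e_i\right) = \frac{\sigma^2}{N},
\qquad
\sigma_{\bar{e}} = \frac{\sigma}{\sqrt{N}} \xrightarrow{\,N=100\,} \frac{\sigma}{10},
```

so the 100 rounds collected per second reduce the standard deviation tenfold (and the variance a hundredfold).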
Another error comes from the physical tape measurements of pairwise distances; differences between measured and actual distances contribute to it. We make the approximation in Eq. (7) that clock drift does not make much difference. In fact, clock drift may change with temperature, because the oscillators used on the SMiLE board are temperature-compensated crystal oscillators (TCXOs). Anchor geometry also affects the final location error of the UN, as discussed in both [19] and [14]. The results in this paper show rather small errors for indoor localization; however, a typical indoor environment can be much more complicated, with surrounding walls, reflections, refractions, and moving human bodies. Errors of around one foot may therefore be achievable only in restricted environments.
CONCLUDING REMARKS
In this paper, a novel TDoA location model based on the RTS/CTS mechanism in IEEE 802.11 is introduced. In Section V we showed that a location estimation precision of about 1 foot is achieved with both the synchronized and the asynchronous methods. The customized FPGA extension boards, with their sub-nanosecond timestamp accuracy, were a key enabler of our experiments. Furthermore, the short packet-sending intervals may allow easy implementation of real-time location determination.
Instead of the traditional four-way message exchange, we propose to use only RTS/CTS. This simplifies timestamp processing because RTS and CTS both have fixed lengths. The first method needs only the timestamps produced when one anchor sends RTS, and uses linear mapping functions to synchronize clocks. The second method requires neither clock synchronization nor drift compensation. We assume the drift ratio can be approximated as 1, which our experimental results show to be reasonable in our system.
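The linear clock mapping used in the first method can be illustrated as follows: timestamp pairs known to refer to the same events on two clocks are fitted with a straight line whose slope and intercept estimate the relative drift and offset. A minimal sketch with hypothetical tick values; the real system refits per round and works in 88.78 ps ticks.

```python
import numpy as np

def fit_clock_mapping(t_ref, t_local):
    """Least-squares fit of t_local ~ drift * t_ref + offset.

    t_ref, t_local: timestamps of the same events on two clocks.
    Returns (drift, offset); the drift ratio should be very close to 1.
    """
    drift, offset = np.polyfit(t_ref, t_local, 1)
    return drift, offset

# Hypothetical clocks: 20 ppm drift and a 5000-tick offset.
t_ref = np.arange(0.0, 1e6, 1e4)
t_local = 1.00002 * t_ref + 5000.0
drift, offset = fit_clock_mapping(t_ref, t_local)

# Map local timestamps back onto the reference clock scale.
t_mapped = (t_local - offset) / drift
print(drift, offset, np.max(np.abs(t_mapped - t_ref)))
```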
We also evaluated RTS/CTS-based RTT characteristics for different mobile devices. RTT measurement is widely used in time-based location systems and services; its simplicity and popularity have prompted many ToA methods and research topics. While much is gained from this approach, its strict requirements cannot always be satisfied, which is why new TDoA-based methods have appeared. We also discussed considerations and limitations of traditional RTT measurements in different situations.
Though we presented good location estimation results for both indoor and outdoor environments, indoor environments commonly suffer from dominant multipath effects and other unpredictable factors; addressing these may be the next step toward practical indoor localization. In our test environment, we assume all nodes are within range of each other. This generally holds except for long-distance estimation, which would have to exploit signals other than typical WiFi. Other extensions may include characterization of different Access Points, indoor multipath propagation models, and statistical models of the timestamp process, which may further increase accuracy.
Figure 5: RTT histogram for standard AP and phone hot spots (inches)
Figure 6: Relative Clock Drift and Offset
Table 1: Notations and representations
In the single-round case, mapping the independent clocks onto one anchor's clock synchronizes all timestamps to a single clock scale. This frees the anchors from sending RTS in turn, but every clock has its own drift and offset. Since an accurate global clock is not available, the effects of drift and offset cannot be removed entirely; interestingly, the errors caused by clock characteristics can be reduced by averaging multiple measurements, as detailed in the experiments. Round-trip time (RTT) is a classical measurement that has been well studied and applied extensively in clock synchronization and location systems. RTT is defined as the length of time it takes for a signal to go from some node A to another node B and come back. Node B takes a finite time to receive the signal from A and then send a response back. If both nodes timestamp the sending and receiving times of the signals, and their clocks are synchronized to real time, RTT can be calculated easily and accurately. Knowing the RTT, we can calculate the distance between two nodes and use such measurements to determine the locations of a set of nodes. If, on the other hand, node B does not timestamp, then node A can only determine the actual RTT plus the "turnaround" time at node B. In the IEEE 802.11 protocol, the RTS/CTS mechanism can be used to estimate RTT by measuring the time from sending an RTS at node A to receiving the corresponding CTS at node A.
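A small sketch of that computation, assuming node A timestamps the RTS transmission (t1) and the CTS reception (t4); when B's timestamps are unknown, only a nominal turnaround (roughly SIFS plus the CTS transmission time in 802.11) can be subtracted. The variable names and turnaround value here are illustrative, not taken from the paper.

```python
C = 299_792_458.0          # speed of light, m/s
FEET_PER_METER = 3.28084

def rtt_distance_feet(t1, t4, turnaround):
    """Distance estimate from a single RTS/CTS exchange.

    t1: time the RTS leaves node A; t4: time the CTS returns to A (seconds).
    turnaround: time node B spends between receiving RTS and sending CTS.
    What remains after subtracting it covers two one-way trips, hence /2.
    """
    rtt = (t4 - t1) - turnaround
    return C * rtt / 2.0 * FEET_PER_METER

# Illustrative numbers: ~20 ft true distance, 10 us SIFS-like turnaround.
tof = 20.0 / FEET_PER_METER / C            # one-way time of flight
print(rtt_distance_feet(0.0, 2 * tof + 10e-6, 10e-6))  # ~20 ft
```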
The resulting RTT statistics are summarized in Table 2.
Here one clock tick is 88.78 ps, and the corresponding distance is approximately one inch. We compensated for transmission and reception time, and all units have been converted to inches. The first two columns show that the RTT averages are within 10 feet of the true distances. In the 20-foot test, the estimate suddenly jumped, most likely because of multipath effects. The last column shows that RTT measurements based on the RTS/CTS mechanism are not stable enough to use, with standard deviations over 40 feet. In a typical IEEE 802.11 WLAN, APs are no more than 50 meters (164 feet) apart, so a deviation of this magnitude would significantly affect location estimation. Also, the RTT differences between 10, 20 and 30 feet have relative errors of about 20% as the actual distance changes, which is not surprising given such a large variance.
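The quoted tick-to-distance conversion checks out:

```latex
d = c \,\Delta t = (2.998 \times 10^{8}\ \mathrm{m/s}) \times (88.78\ \mathrm{ps})
  \approx 26.6\ \mathrm{mm} \approx 1.05\ \mathrm{in}.
```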
Table 2: RTT statistics comparison (inches)
Similar tests were conducted with different kinds of UNs. Fig. 5(a) plots the RTT of a standard Access Point, part of the general WLAN deployment in the building. Most values are within 400 μs, but there are also some large values over 700 μs. These sudden jumps were caused by the scheduling of the physical AP, which hosts several virtual APs: the physical AP takes turns acting as the various standard APs over short time intervals. This saves WLAN resources but causes longer delays than expected in experimental measurements. Fig. 5(b) shows the RTT histogram of a hotspot-enabled iPhone 5. This graph has characteristics similar to the Cisco AP test above, but with no sudden jumps. It confirms our assumption and also gives a visual explanation of the limitations of RTT measurements when the AP's timestamps are not known.
Table 4 presents experimental results of TDoA values from the two methods, together with the corresponding ground truth from Table 3. The first row of Table 4 lists node pairs, where the first node is the one that generated the TDoA; i.e., node pair (1, 2, 3) stands for TDoA(1, 2, 3) = d(1, 2) − d(1, 3). Method 1 is a single round of RTS/CTS with synchronization, and method 2 is the one with anchors taking turns sending. Here the anchors are synchronized with respect to anchor 1, so there is no TDoA for node pair (AP, 2, 3). Both methods have mean errors and variances of TDoA around one foot (12 inches).
Table 8: Indoor location estimation performance
Location estimation was evaluated in both indoor and outdoor environments. Generally, the single-round method needs more post-processing, including the clock mapping function, but does not require multiple anchors to send RTS; the other method avoids synchronization but requires the anchors to take turns exchanging RTS/CTS. The estimation errors of the two methods are similar, with a mean error of about one foot for each. One thing to note is that the number of rounds collected does not contribute much to the estimation performance. We tested with 10, 100, 200, 500, 1000, 2000, 3000 and 6000 samples; as long as the sample size exceeds 100, the location estimation error remains almost the same, which frees us from collecting as many samples as possible. Even in this restricted environment, there are still some sources of error, discussed in the Discussions section.
"Computer Science"
] |
Human enterovirus 71 subgenotype B3 lacks coxsackievirus A16-like neurovirulence in mice infection
Background At least three different EV-71 subgenotypes were identified from an outbreak in Malaysia in 1998. The subgenotypes C2 and B4 were associated with severe and fatal infections, whereas the B3 virus was associated with mild to subclinical infections. The B3 virus genome sequences had ≥85% similarity to CV-A16 at the 3' end. This offers opportunities to examine whether there are characteristic similarities and differences in virulence between CV-A16, EV-71 B3 and EV-71 B4, and to determine whether the presence of the CV-A16-like genes in EV-71 B3 would also confer on the virus a CV-A16-like neurovirulence in a mouse infection model. Results Analysis of human enterovirus 71 (EV-71) subgenotype B3 genome sequences revealed that the 3D RNA polymerase and domain Z of the 3'-untranslated region RNA secondary structure had high similarity to CV-A16. Intracerebral inoculation of one-day-old mice with the virus resulted in 16% of the mice showing swollen hind limbs and significantly lower weight gain in comparison to EV-71 B4-infected mice. None of the mice presented with the hind leg paralysis typical of all the CV-A16-infected mice. CV-A16 genome sequences were amplified from the CV-A16-infected mouse brains, but no amplification was obtained from any of the EV-71-inoculated mice, suggesting that no replication had taken place in the suckling mouse brain. Conclusion The findings presented here suggest that EV-71 B3 viruses had CV-A16-like non-structural gene features at the 3' end of the genome. Their presence could have affected virulence by affecting the mice's general health, but was insufficient to confer on the EV-71 B3 virus a CV-A16-like neurovirulence in a mouse infection model.
Background
Enterovirus 71 (EV-71) was first described in 1969 during an outbreak with central nervous system complications in California [1]. Since then, EV-71 infections have been associated with a number of outbreaks with wide-ranging clinical manifestations, from mild hand, foot and mouth disease (HFMD) to severe neurological complications and deaths. These include outbreaks in Bulgaria [2], Hungary [3], Japan [4] and, more recently, Malaysia [5,6], Taiwan [7] and Singapore [8]. In the latter three outbreaks, more than a hundred deaths in total were reported, making EV-71 infection one of the deadliest virus infections to date among young children below the age of 3 years in Asia. The sudden emergence of the deadly forms of EV-71 infection in Asia was puzzling, as the virus, together with other human enterovirus A viruses, especially coxsackievirus A5 (CV-A5), CV-A10 and CV-A16, had been known to cause HFMD in the region for some time [9]. During the outbreak in Malaysia, at least three different EV-71 subgenotypes were identified. The subgenotypes C2 and B4 were associated with the severe and fatal infections, whereas mild to subclinical infections were associated with the B3 viruses [10][11][12]. Unlike the earlier two subgenotypes, the B3 virus circulated for only a brief period during the outbreak and has since not been isolated from patients in later outbreaks [11,12]. A recent study reported that the B3 virus genome sequences had ≥93% similarity to EV-71 at the 5' end, whereas the P3 genome region and 3'UTR had ≥85% similarity to CV-A16 [13]. CV-A16 is known to be the most common causative agent of self-limiting HFMD. It is usually characterized by mild fever, oral ulcers and vesicular lesions on palms and soles, and is not known to cause severe and fatal CNS infections. It is not presently understood why EV-71 infections tend to cause the more severe form of HFMD in comparison to CV-A16. The findings that EV-71 B3 viruses had high sequence similarity to CV-A16 at the 3' end of the genome and that these viruses were not associated with the severe form of HFMD offered opportunities to examine the potential roles of the respective genes in determining virulence. Hence, the present study was undertaken to examine whether there are characteristic similarities and differences between CV-A16, EV-71 B3 and the more virulent EV-71 B4 virus, and to determine whether the presence of the CV-A16-like genes in the EV-71 B3 virus genome would also confer on the virus a CV-A16-like neurovirulence in mice.
Results and Discussion
The consensus amino acid sequences of the two available EV-71 B3 virus genomes (SHA63 and SHA66) were compared to other available subgenotype B4 and CV-A16/G10 genome sequences from GenBank. Several amino acids (His1775, Ile1806, Gln1825, Thr1928, Thr1947, Asn2099, Glu2114 and Gln2159) characteristic of CV-A16/G10 were found in the EV-71 B3 isolates. These amino acid differences occurred only within the 3D RNA polymerase gene, suggesting that this gene is much more CV-A16-like than EV-71-like. Comparison of the EV-71 B3 amino acid sequences against all other EV-71 and CV-A16 sequences also revealed at least 12 amino acids (Asn1124, Arg1152, Ser1335, Ser1641, Tyr1799, Asp1822, Val1860, Ser1864, Val1997, Ala2039, Asp2101 and Leu2125) that were unique to the EV-71 B3 isolates. Eight of these amino acid differences occurred within the 3D RNA polymerase gene. Two of these unique mutations were located between amino acids 176-348, a genome region essential for RNA-protein interactions [14] (Fig. 1). To locate these mutations, the RNA polymerases of the EV-71 B3 (SHA66) and EV-71 B4 (UH1) isolates were aligned against the three-dimensional crystal structure of the poliovirus 1 Mahoney strain 3D RNA polymerase (PDB: 1RDR). Of the eight mutations in the EV-71 B3 virus, three were located within the finger subdomain and two at the palm motif, suggesting that the EV-71 B3 amino acid substitutions were mainly located within the 3D RNA polymerase functional domains. The highly 'flexible' finger domain is involved in modulating substrate recognition and oligomerization of the polymerase for binding to nucleotides [15]. In poliovirus, mutations within the 3D RNA polymerase, located at the 3' end of the genome, have been shown to affect neurovirulence [16,17], highlighting the potential importance of the 3D RNA polymerase in determining neurovirulence. It was also found that, in addition to the CV-A16 or CV-A16-like 3D RNA polymerase gene sequences, the EV-71 B3 viruses shared with CV-A16/G10 a similar predicted 3' UTR secondary structure at domain Z (Fig. 2), a domain reported to be important in determining the cardiovirulence of CV-B3 [18]. Mutations that affect the stem-and-loop structures have been shown earlier to abolish infectivity and virus RNA synthesis [19,20]. The predicted domain Y of the EV-71 B3 virus, known to form a tertiary RNA 'kissing' structure with domain X, however, differed from those of EV-71 B4 and CV-A16/G10 (Fig. 2).
Inoculation of one-day-old newborn mice showed that all mice inoculated with CV-A16 had the typical signs and symptoms of CV-A16 infection by day two post-inoculation. The mice were lethargic, had floppy tails, tremors and uncoordinated movement, and showed reduced average body weight in comparison to EV-71 B3- or EV-71 B4-inoculated mice (Fig. 3a,3g). Approximately 17% (4/24) of the mice had hind leg paralysis by day three post-inoculation and one died (Fig. 3b,3e,3f, Additional file: 1). By day four post-inoculation, all the CV-A16-inoculated mice had developed hind leg paralysis and subsequently died (Fig. 3b,3e,3f). A 150 bp enterovirus genome sequence was amplified and sequenced from the total RNA of the brains of all the CV-A16-inoculated mice, confirming the presence of CV-A16 in the mouse brain (Fig. 4). Mice inoculated with EV-71 B3 and EV-71 B4 viruses also had significantly reduced average body weight in comparison to the control mock-infected mice (Student's t-test, P < 0.05, Fig. 3d,3g). Mice inoculated with EV-71 B3 virus, however, had significantly reduced average body weight in comparison to those inoculated with the EV-71 B4 virus (Fig. 3g). These mice appeared lethargic and uncoordinated beginning on day two post-inoculation. Of these, 16% (4/25) developed swollen hind legs and one subsequently died on day five post-inoculation (Fig. 3c,3e,3f). No hind leg paralysis was noted, and the remaining surviving mice recovered, fed well and regained balance after day six post-inoculation. In contrast, about 20% (6/31) of the mice inoculated with EV-71 B4 virus developed swollen fore limbs or hind legs and, of these, three died after day four post-inoculation (Fig. 3e,3f). After day eight post-inoculation, the B4-inoculated mice also recovered, became more active and fed well. Pairwise comparisons of clinical illness and survival probability between the virus-inoculated groups and the control were significant, suggesting that all three viruses, CV-A16, EV-71 B3 and EV-71 B4, caused death in mice (log rank survival analysis, P < 0.05, Fig. 3e,3f), but only infection with CV-A16 led to 100% mortality. In contrast to CV-A16 infection, no amplification of the enterovirus sequence was detected in the brains of the selected EV-71 B4- and EV-71 B3-inoculated mice, suggesting that the EV-71 B3 and EV-71 B4 viruses perhaps did not replicate in the mouse brain when introduced intracerebrally (Fig. 4). This may help to explain the absence of hind leg paralysis in all the EV-71-infected mice and the complete recovery of all the surviving mice. Death among these mice may have been caused by infection of other tissues, as manifested in mice with swollen limbs and legs. Evidence that EV-71 strains isolated during the Bulgarian poliomyelitis-like epidemic had higher tropism for mouse muscle tissue than for brain tissue [2], and that EV-71 neurovirulence mimicking human infection was achieved only with a mouse-adapted virus strain but not the parental strain [21,22], supports the finding from the present study that EV-71 B3 and B4 did not infect the brain. The infection, however, manifested clinically in some mice as non-specific swollen limbs and legs. Hence it is possible that, though the EV-71 and CV-A16 viruses are closely related, different receptors are utilized for virus entry into the different tissues, and this could be mediated through the virus structural proteins.
The mutations that occurred within the 3D RNA polymerase of the EV-71 B3 virus, along with the presence of a CV-A16-like 3' UTR domain Z RNA secondary structure, could therefore contribute to virulence, but by themselves did not affect EV-71 neurovirulence in mice: in contrast to CV-A16, the B3 virus lacks tropism for the mouse brain. Since the major differences between the EV-71 B4 and EV-71 B3 viruses occurred at the 3' end of the genome, this supports the view that the structural genes of EV-71 and CV-A16 determine tissue tropism.
Results from the present study also did not support the possibility that acquisition of CV-A16-like genome sequences alone is sufficient to confer on the EV-71 B3 virus a CV-A16-like neurovirulence in mice. The significant weight gain differences noted between mice infected with EV-71 B3 and EV-71 B4 viruses, with the latter performing much better, nonetheless suggested that EV-71 B3 virus infection did somehow affect the general health of the mice. As weight gain differences are the only biological parameter that differentiates the B3 and B4 viruses, it does appear that EV-71 B3 affected mice more than the EV-71 B4 virus did. It is also worth noting that, in contrast to infection in mice, CV-A16 infection in humans generally does not result in severe infection, as opposed to EV-71 infection, particularly with the EV-71 B4 virus. In a parallel manner, the EV-71 B3 viruses, while they affected mice, did not cause severe or fatal infection in humans. This implies that the EV-71 B3 virus is truly different and, as its genome suggests, shows features of both EV-71 and CV-A16 infection in vivo.

Figure 1: Structural alignment of EV-71 and CV-A16 3D RNA polymerase amino acid sequences. EV-71 subgenotype B3, B4 and CV-A16/G10 amino acid sequences were aligned against the poliovirus 1 Mahoney 3D RNA polymerase template sequence (PDB: 1RDR). Conserved residues are indicated as (•) and each domain is boxed and labeled. Residues shared by the EV-71 B3 virus and CV-A16 are highlighted in grey, and residues unique to the EV-71 B3 virus are highlighted in pink.
Conclusion
Results from the present study suggest that the EV-71 B3 virus had CV-A16-like non-structural gene features, in the 3D RNA polymerase and 3' UTR, at the 3' end of the genome. Their presence affected virulence differently from infection with EV-71 B4 and CV-A16 by affecting the general health of the mice. The presence of the CV-A16-like genes, however, was insufficient to markedly influence the neurovirulence properties of the EV-71 B3 virus in mice.
Viruses
Two EV-71 isolates identified from the 1997 HFMD outbreak in Malaysia were used. The subgenotype B3 isolate, SHA66 (EMBL: AJ238457), was isolated from an HFMD patient who presented with a mild infection [6,23]. The subgenotype B4 isolate, UH1 (EMBL: AJ238455), on the other hand, was isolated from the brain of a patient who died of EV-71-associated neurogenic pulmonary edema [5,6,24]. The CV-A16 isolate used was previously isolated from an HFMD patient seen at the University Malaya Medical Centre. This CV-A16 isolate was identified and characterized using monoclonal antibody staining (Chemicon Cat #3323, California, USA) and amplification of the partial 5' UTR gene (data not shown).
Amino acid sequence analysis
Amino acid sequences were examined after stripping the 5' UTR and 3' UTR sequences, and consensus sequences of the EV-71 B3 and EV-71 B4 viruses were aligned and manually edited using the GeneDoc software [25]. The previously published three-dimensional crystal structure of the 3D RNA polymerase was downloaded as a template for the alignment. Using the WHAT IF program [26], domains representing the conserved regions, loops, insertions or deletions were manually visualized to generate a structural alignment.
RNA secondary structure prediction
The 3' UTR RNA secondary structure was predicted using the Zuker optimal and suboptimal minimal free energy folding algorithms, as implemented in the RNAstructure version 3.71 software [27]. Part of the poly(A) tract was incorporated into the sequences.

Figure 3: EV-71 and CV-A16 infections of newborn mice. One-day-old newborn mice were intracerebrally inoculated with 1 × 10³ PFU of virus per mouse and monitored daily. CV-A16-infected mice had floppy tails on day two post-inoculation (a) and hind leg paralysis beginning on day three post-inoculation (arrow, b). Mice with swollen limbs were noted in EV-71 B3 virus infection (arrow, c), and the EV-71 B3-infected mice had significantly reduced body weight gain in comparison to the mock-infected mice (d; V = B3-infected mouse, C = mock-infected mouse). Mice with floppy tails, swollen limbs and paralysis (e) and deaths (f) were recorded. The weight gain of the surviving mice was also determined (g).
Determination of virulence in mice
A total of 24, 25 and 31 one-day-old newborn ICR mice were inoculated intracerebrally with the CV-A16, SHA66 (B3 virus) or UH1 (B4 virus) virus inoculum, respectively. The virus inoculum, with an infectivity of ~1 × 10³ p.f.u., was injected in a volume of 10-20 µl into the mouse brain. The mice were closely monitored for any clinical symptoms, paralysis and death, and the weight of each surviving mouse was recorded daily up to day 11 post-inoculation. Another litter with at least 10 one-day-old newborn mice was injected with comparable growth medium and used as controls. At selected intervals post-infection, some of the mice were sacrificed and the brain tissues were harvested for total RNA using the TRI Reagent™ (Molecular Research Centre, Inc., Cincinnati, USA) following the manufacturer's recommended protocols. The RT-PCR amplification for the detection of the enterovirus sequence was performed using 1 µg of RNA, the Access RT-PCR kit (Promega, USA) and the primer pair EntabF (5'-TCC TCC GGC CCC TGA ATG CGG CTA AT-3'; nucleotide positions 449-474, based on the MS87 strain, GenBank: U22522) and EVRR (5'-AAT TGT CAC CAT AAG CAG GC-3'; nucleotide positions 586-606). Reverse transcription was performed at 42°C for an hour, followed by amplification steps of 95°C for 30 seconds, 55°C for 30 seconds and 72°C for 30 seconds for 30 cycles, with a final 5-minute extension at 72°C, using the PTC thermal cycler (MJ Research, Massachusetts, USA). When no amplicon was obtained, the number of cycles was increased to 40. Alternatively, a second-step PCR using similar parameters was performed using ten-fold diluted RT-PCR product as template. The amplified DNA fragments were electrophoresed on a 2% agarose gel in 0.5× Tris-acetate EDTA buffer (0.02 M Tris base, 0.5 mM EDTA pH 8.0, 0.057% glacial acetic acid), and sequence confirmation was made by sequencing the DNA fragment.
Statistics
Student's t-test was used to evaluate whether the differences in weight between the virus-inoculated mice and control mice were significant. The Wilcoxon signed-rank test was used to compare the survival and paralysis probabilities between the virus-inoculated mice and control mice. All statistical analyses were implemented using SPSS for Windows version 11.5 (SPSS Inc., Illinois, USA). All tests were two-sided and P < 0.05 was considered statistically significant.
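For readers without SPSS, comparable tests are available in open-source tools. The sketch below uses SciPy with hypothetical weight and survival numbers; it mirrors the test choices described above, not the authors' exact SPSS runs or data.

```python
import numpy as np
from scipy import stats

# Hypothetical daily weight gains (grams): infected vs. mock-infected mice.
infected = np.array([0.9, 1.1, 0.8, 1.0, 0.7, 1.2, 0.9, 1.0])
control = np.array([1.4, 1.5, 1.3, 1.6, 1.5, 1.4, 1.6, 1.5])

# Two-sided Student's t-test on the weight difference.
t_stat, p_weight = stats.ttest_ind(infected, control)

# Wilcoxon signed-rank test on paired daily survival proportions.
surv_infected = np.array([1.00, 0.96, 0.92, 0.88, 0.84, 0.84, 0.84])
surv_control = np.array([1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00])
w_stat, p_surv = stats.wilcoxon(surv_infected, surv_control)

print(f"weight: p={p_weight:.4f}, survival: p={p_surv:.4f}")
```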
supervision, data analyses and writing of the report. Chan Y-F performed all the virological investigations, nucleotide sequencing and analyses of data. All authors were involved in the preparation of this "Research Article" and figures.
"Biology",
"Medicine"
] |
Prediction of flexible/rigid regions from protein sequences using k-spaced amino acid pairs
Background Traditionally, it is believed that the native structure of a protein corresponds to a global minimum of its free energy. However, with the growing number of known tertiary (3D) protein structures, researchers have discovered that some proteins can alter their structures in response to a change in their surroundings or with the help of other proteins or ligands. Such structural shifts play a crucial role with respect to the protein function. To this end, we propose a machine learning method for the prediction of the flexible/rigid regions of proteins (referred to as FlexRP); the method is based on a novel sequence representation and feature selection. Knowledge of the flexible/rigid regions may provide insights into the protein folding process and the 3D structure prediction. Results The flexible/rigid regions were defined based on a dataset, which includes protein sequences that have multiple experimental structures, and which was previously used to study the structural conservation of proteins. Sequences drawn from this dataset were represented based on feature sets that were proposed in prior research, such as PSI-BLAST profiles, composition vector and binary sequence encoding, and a newly proposed representation based on frequencies of k-spaced amino acid pairs. These representations were processed by feature selection to reduce the dimensionality. Several machine learning methods for the prediction of flexible/rigid regions and two recently proposed methods for the prediction of conformational changes and unstructured regions were compared with the proposed method. The FlexRP method, which applies Logistic Regression and collocation-based representation with 95 features, obtained 79.5% accuracy. The two runner-up methods, which apply the same sequence representation and Support Vector Machines (SVM) and Naïve Bayes classifiers, obtained 79.2% and 78.4% accuracy, respectively. The remaining considered methods are characterized by accuracies below 70%. Finally, the Naïve Bayes method is shown to provide the highest sensitivity for the prediction of flexible regions, while FlexRP and SVM give the highest sensitivity for rigid regions. Conclusion A new sequence representation that uses k-spaced amino acid pairs is shown to be the most efficient in the prediction of the flexible/rigid regions of protein sequences. The proposed FlexRP method provides the highest prediction accuracy of about 80%. The experimental tests show that the FlexRP and SVM methods achieved high overall accuracy and the highest sensitivity for rigid regions, while the best quality of the predictions for flexible regions is achieved by the Naïve Bayes method.
Background
The flexibility of protein structures is often related to protein function. Some proteins alter their tertiary (3D) structures due to a change in surroundings or as a result of interaction with other proteins [1][2][3]. For instance, the GTP-binding proteins adopt an active conformation when binding GTP, and shift to an inactive conformation when GTP is hydrolyzed to GDP [4,5]. Motor proteins shift their structure among multiple conformations [6,7], while many carrier proteins embedded in a membrane transport small molecules by executing structural changes [8,9]. In short, the structural flexibility that allows shifting between two or more structures is a crucial characteristic of numerous proteins involved in many pathways [10,11]. Although proteins can shift among several conformations, some of their segments, referred to as conserved domains, preserve their structure in all of the conformations [12,13]. In fact, many proteins that can change their conformations can be divided into rigid (conserved) region(s) and flexible region(s), the latter in some cases referred to as linkers, which serve to link and adjust the relative location of the conserved domains. Upon the arrival of an external signal, such as a change in surroundings or the binding of another molecule/protein, the flexible region allows the protein to respond by changing its conformation. In other words, the flexible linker is essential for a protein to maintain its flexibility and the corresponding function [14,15].
Additionally, the flexible linkers and the rigid domains should be factored in when performing 3D protein structure prediction. A protein is a complex system that can be described by an accurate energy-based model [16,17]. However, due to the large number of atoms involved in protein folding and the resulting volume of calculations, protein structures cannot be directly calculated (predicted) from the existing mechanical models on current supercomputers. A natural solution to this problem is a divide-and-conquer approach, in which a large protein is divided into several structurally conserved domains and each domain is predicted separately [18,19]. A number of methods can be used for the prediction of protein domains [20]. At the same time, the remaining protein regions (outside the conserved domains) that are located between domain borders may be flexible, and knowledge of their flexibility would be beneficial for accurately predicting the overall tertiary structure.
Knowledge of the flexible/rigid regions would also allow us to gain insights into the process of protein folding. Biological experiments and theoretical calculations have shown that the natural conformation of a protein is usually associated with the minimum of its free energy [21][22][23]. However, the overall process that leads to the final, stable conformation is still largely unknown. Udgaonkar and Baldwin proposed a framework model for protein folding [24]. Their theory states that peptides of about 15 amino acids (AAs) first fold into helices and strands, and these secondary structures are then assembled together to form the molecule. The hydrophobic collapse model proposed by Gutin and colleagues assumes an initial condensation of hydrophobic elements that gives rise to compact states without secondary structures; the development of native-like tertiary interactions in the compact states prompts the subsequent formation of the stable secondary structures [25]. A recent paper by Sadqi and colleagues shows the detailed unfolding process of the downhill protein BBL from Escherichia coli, atom by atom, starting from a defined 3D structure [26]. However, the detailed folding process of most proteins is still under investigation. Proteins with flexible 3D structures may provide some hints, since the conserved regions should fold separately from the flexible regions to eventually become linked into a stable (and potentially susceptible to structural change) structure.
Gerstein's group has done a significant amount of work on the related subject of classification of protein motions [27,28]. They proposed two basic mechanisms of protein motion, hinge and shear, which depend on whether or not a continuously maintained interface (between different, rigid parts of a protein) is preserved through the protein's motion. The shear mechanism is a kind of small, sliding motion in which a protein preserves a well-packed interface. In contrast, hinge motion is not constrained by maintaining the interface, and usually occurs in proteins with domains connected by linkers. They also defined other possible motions, among them those that involve a partial refolding of a protein and thus result in significant changes in the overall protein structure. This paper does not study protein motion. Instead, we aim at finding protein-sequence regions that are flexible and hence constitute the interface between the rigid regions. In our recent work, we performed a comprehensive, quantitative analysis of the conservation of protein structures stored in PDB, and we found three distinct types of flexible regions, namely rotating, missing, and disarranging [12]. The rotating region of a protein sequence is related to the hinge motion, i.e., it usually contains a linker located between two domains. On the other hand, the missing and disarranging regions correspond to the types of motion that involve a partial refolding. The missing region is associated with changes in the local, secondary structure conformations, which may also lead to different tertiary structures. For instance, given two structures that share the same sequence, some regions of one structure may form a helix or a strand, while the same regions in the other structure may form an irregular coil. For the disarranging region, the overall 3D conformations of two identical underlying sequences are similar, but the packing of the residues is spatially shifted (disarranged) in some fragments of the region. We illustrate each of these three types of regions in Figure 1.
Since the regions that are missing a secondary structure are characterized by relatively small changes in the overall tertiary structure [12], in this paper we associate the flexibility of protein structures with the two other types of flexible regions. Our aim is to predict the flexible regions using machine-learning methods that take as input a feature-based representation generated from the primary sequence. Several other research groups have addressed similar prediction tasks, including prediction of regions undergoing conformational changes [29], prediction of intrinsically unstructured regions [30] and prediction of functionally flexible regions [31]. However, these contributions have different underlying goals, use different definitions of "flexible" regions (i.e., unstructured, undergoing conformational change, and functionally flexible), and apply different prediction models. Our goal is to classify each residue as belonging to either a flexible or a rigid region. The quality of the prediction is evaluated on a carefully designed (based on the rotations and disarrangements) set of 66 proteins using the accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC) measures and an out-of-sample cross-validation test procedure. The Methods section provides further details on the definition of the flexible regions and situates it with respect to the related research.
Feature-based sequence representation
Four groups of features were compared, and the best set was selected to perform the prediction. The composition vector, binary encoding and PSI-BLAST profile representations are widely used in protein structure prediction including the structural class prediction, the secondary structure prediction and the cis/trans isomerization prediction. However, since these features were not designed for the prediction of flexible/rigid regions, a new representation, which is based on frequencies of k-spaced residue pairs, was proposed and compared with the other three representations. Due to a relatively large number of features, the binary encoding and the proposed representation were processed by two feature selection methods, which compute linear correlation and information gain based on entropy between each of the features and the predicted variable, i.e., rigidity/flexibility of the residues. The selection was performed using 10-fold cross validation to avoid overfitting. Only the features that were selected by a given method in all 10 folds were kept. While in general each of the two methods selects a different set of features, among the best 95 features selected by the entropy based method, 51 were also selected by the linear correlation based method.
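The entropy-based criterion used here is essentially the information gain between each feature and the flexible/rigid label. A comparable selection can be sketched with scikit-learn's mutual-information estimator; this is an approximation of the described procedure with a hypothetical feature matrix, and the 0.03 cutoff mirrors the IG(X|Y) > 0.03 threshold reported below.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Hypothetical data: 5716 residues, 400 candidate k-spaced pair counts.
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(5716, 400)).astype(float)
y = rng.integers(0, 2, size=5716)          # 0 = rigid, 1 = flexible

# Estimated information gain IG(X|Y) per feature; keep those above
# the threshold, mirroring the paper's cut that yields 95 features.
ig = mutual_info_classif(X, y, discrete_features=True, random_state=0)
selected = np.flatnonzero(ig > 0.03)
print(len(selected), "features selected")
```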
Using 10-fold cross validation, the proposed FlexRP method, which applies Logistic Regression to the proposed collocation-based representation processed with entropy-based feature selection, was compared with four other prediction methods, i.e., Support Vector Machines (SVM), C4.5, IB1 and Naïve Bayes, each applied to the four representations and the two selection methods, see Table 1. The selected methods cover the major categories of machine learning algorithms, i.e., kernel methods, probabilistic methods, instance-based learning and decision trees.
The proposed FlexRP method obtained the best accuracy, 79.5%, when compared across the other four methods, the four representations and the two feature selection methods. The results for the two worst performing prediction methods, i.e., C4.5 and IB1, show relatively little difference in accuracy between the two feature selection methods. On the other hand, results for the three best performing methods (FlexRP, SVM, and Naïve Bayes) show that using the entropy-based feature selection gives the best prediction accuracy when the proposed (best performing) representation is used. The results achieved by the proposed method are 1% and 3.5% better than the two runner-up results achieved for the same representation with the SVM and Naïve Bayes classifiers, respectively. The results that apply other combinations of feature representations and selection methods are, on average over the three best methods, at least 4% less accurate. Therefore, entropy-based selection not only reduces the dimensionality of the proposed representation, making the method easier to implement and execute, but also improves accuracy. The superiority of the entropy-based selection over the linear correlation based method can be explained by the type of features that constitute the proposed representation: the features take on discrete, integer values, and thus linear correlation coefficients, which prefer continuous values, perform more poorly.
Among the four sequence representations, the lowest average (over the five prediction methods) accuracy is achieved with the composition vector, while both the PSI-BLAST profile and binary encoding give similar, second-best accuracies. The most accurate predictions are obtained with the proposed representation. Since the PSI-BLAST profile is one of the most commonly used representations, we also combined it with the features of the k-spaced AA pairs to verify whether this combination could bring further improvements. The corresponding experiments with the three best performing classifiers, i.e., FlexRP, Naïve Bayes and SVM, show that using both representations in tandem lowers the accuracy: the 10-fold cross validation accuracy equals 77.13%, 76.33%, and 72.99% for FlexRP, SVM, and Naïve Bayes, respectively. Similar experiments that combine all four representations show a further drop in accuracy. The proposed, k-spaced residue based representation not only gives the best accuracy but also uses the fewest features when compared to representations that combine multiple feature sets, and therefore this representation was used to perform the predictions.
A set of features used by the FlexRP method, selected with the best performing, entropy-based selection method from the proposed representation, is given in Table 2. A total of 95 features were selected, corresponding to the threshold IG(X|Y) > 0.03, which gives the highest prediction accuracy for FlexRP and SVM. When varying the threshold among 0.035, 0.030 and 0.025, the corresponding accuracies for FlexRP are 78.79%, 79.51% and 78.96%, and for the SVM 77.48%, 78.46% and 77.82%. Although this set of features may seem disordered, some interesting patterns can be found. For instance, "LL" was selected as the 0-, 1-, 2- and 4-spaced AA pair, since Leucine has a strong tendency to form helices [32], and thus this pair may be characteristic of the rigid regions. The k-spaced "VI" pair is characteristic of strand formation [32], and thus it may also be associated with the rigid regions. The k-spaced "GG" pair could indicate flexible regions, since Glycine has a very small side chain (and thus may be more flexible) and is shown to be mainly associated with coils [32]. At the same time, to the best of our knowledge, some of the other pairs cannot currently be explained. In general, the flexibility/rigidity of individual k-spaced pairs is associated with the arrangement of the corresponding side chains in the 3D structure, and their quality is supported by the relatively high accuracy of the methods that use this representation. We also performed a test in which we accepted all features selected in at least 9 out of the 10 cross-validation folds, to investigate whether including additional features can improve the results. The corresponding feature sets gave slightly lower accuracies; for the proposed representation, the accuracies dropped to 77.36% for SVM and 78.99% for FlexRP. Table 1 shows that among the five prediction methods, FlexRP, SVM, and Naïve Bayes achieve accuracies higher, on average by 5-8%, than the remaining two machine learning methods. The three best methods achieve 76%-79% accuracy for the proposed representation that includes 95 features, while the accuracy of the C4.5 and IB1 methods is below 69% and 67%, respectively. Therefore, the two worst methods were dropped, while the three best performing classifiers were optimized by exploring their parameter space. As a result of the optimization, FlexRP with a standard value of 10⁻⁸ for the ridge parameter of the Logistic Regression classifier, Naïve Bayes with a kernel estimator for numeric attributes [33], and SVM with a radial basis function kernel and a gamma value of 0.22 (polynomial kernels of varying degrees were also considered) [34] were found to be optimal. We note that the annotation of flexible/rigid regions is based on the current structures stored in PDB; with the posting of new structures, rigid regions could be reclassified as flexible (in contrast, flexible regions could not be reclassified as rigid, assuming the current data is correct). Therefore, maximizing prediction quality for flexible regions as a trade-off for reduced quality for rigid regions may be beneficial, provided the overall prediction accuracy does not decrease. This trade-off was implemented using a cost matrix with a misclassification cost of 1.0 for flexible regions and 0.6 for rigid regions for the Naïve Bayes method. At the same time, the cost matrix was not found useful for the other two methods.

Figure 1: Examples of the three types of flexible regions. 1) Pair (a1) and (a2) is an example of rotating regions. (a1) is chain A of protein 1l5e from Leu1 to Tyr100 and (a2) is protein 2ezm from Leu1 to Tyr100. Both fragments share the same sequence and are built from two domains (colored gray and black) that also share the same structure, but the structures of the linkers (colored light gray) differ. 2) Pair (b1) and (b2) is an example of regions with missing secondary structure. (b1) is chain A of protein 1ikx from Glu224 to Leu279 and (b2) is chain B of protein 1ikx from Glu1224 to Leu1279. Both fragments share the same sequence; Phe227 to Leu234 forms a strand in (b1) but a coil in (b2). 3) Pair (c1) and (c2) is an example of disarranging regions. (c1) is chain A of protein 1ffx from Ile171 to Cys200 and (c2) is chain A of protein 1jff from Ile171 to Cys200. The fragments share the same sequence and have similar overall 3D structure and secondary structure. At the same time, the URMSD between the two structures is larger than 0.8, since the middle region between Ala180 and His192 is disarranged; the spatial packing of the corresponding AAs differs in this region.
Optimization of the prediction of the flexible/rigid regions
The accuracies of the optimized prediction methods equal 79.51%, 78.41% and 79.22% for the FlexRP, Naïve Bayes and SVM, respectively. To provide a more comprehensive comparison of the achieved performance, additional measures such as sensitivity, specificity, the Matthews Correlation Coefficient (MCC) and the confusion matrix values (TP, FP, FN, and TN) are reported in Table 3, which lists 10-fold cross validation results for the three methods.
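For reference, the MCC reported in Table 3 is computed from the confusion-matrix counts as

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}
{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}},
```

which ranges from −1 to +1 and, unlike accuracy, is not inflated by the imbalance between rigid and flexible residues.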
The optimization provides relatively marginal improvements. The FlexRP method gives the best overall accuracy and high sensitivity and specificity for the rigid regions. SVM provides the best sensitivity for the rigid regions and the best specificity for the flexible regions, while Naïve Bayes gives the highest MCC and the highest sensitivity for the flexible regions. In summary, the proposed FlexRP method provides the most accurate overall prediction of flexible/rigid regions; however, the Naïve Bayes based method provides more accurate predictions for the flexible regions.
Additionally, we studied how varying the maximal spread k of the k-spaced AA pairs used to represent protein sequences affects the prediction accuracy of the three optimized methods. The accuracy as a function of p, for the k-spaced AA pairs where k ≤ p and p = 3, 4, 5, 6, 7, 8, 9, 10, is shown in Figure 2. The results show that accuracy increases steadily for p between 3 and 8, and saturates above the latter value. The best accuracy corresponds to p = 8 and is achieved by FlexRP, while on average, over the three methods, the accuracy for p = 8 equals 79.1%, and for p = 9 and 10 it is higher, at 79.2%. Therefore, the proposed sequence representation includes features for p = 9 (for p = 10 the accuracy is the same, but the number of features is larger).
Comparison with similar prediction methods
The FlexRP was also compared with two recent methods that address similar predictions. Boden's group developed a method to predict regions that undergo conformational change via predicted continuum secondary structure [29].
On the other hand, the IUPred method predicts intrinsically disordered/unstructured regions based on estimated energy content [30]. We note that although the above two methods perform similar prediction tasks, the definition of flexible regions used in this paper is different. Both methods were tested on the same data as the FlexRP method, and the results are summarized in Table 4. A direct comparison between accuracies may not be fair; however, the low MCC values for both IUPred and Boden's method, compared with the MCC value for FlexRP, indicate that the proposed method is better suited for the prediction of flexible regions as defined in this paper. IUPred in general struggles with the prediction of flexible regions: its low sensitivity shows that it classifies few of the actual flexible residues as flexible, and its low specificity shows that it classifies a relatively large number of rigid residues as flexible, while doing relatively well on the rigid regions. Boden's method, on the other hand, is better balanced between the flexible and rigid regions but still overpredicts the flexible regions, i.e., it achieves low specificity for the flexible regions. The FlexRP method obtains relatively good predictions for both flexible and rigid regions.
Table 2: The k-spaced AA pairs selected by the entropy-based feature selection (pairs listed without their k values): AK DI AD AI DC DI ED DP AC EF FH ED AI AV HD FF GL EN EL EL KI EK AV AY IE FG PG GG KF KE KY FK GG DG NQ HP PS KC KG LI LL GG KQ DS PG IL TI

We use an example to further demonstrate the differences between the three prediction methods. The prediction was performed for the segment between 11E and 216A in chain A of the 1EUL protein, see Figure 3. The continuum secondary structure is predicted by a cascaded probabilistic neural network (CPNN) [35], and the threshold to distinguish between flexible and rigid residues is set to 0.49. The IUPred method uses a probabilistic score ranging between 0 (complete order) and 1 (total disorder), which is based on an energy value calculated using a pairwise energy profile along the sequence; this method uses a threshold of 0.5 to distinguish disordered from ordered regions. Similar to IUPred, the FlexRP method computes a probabilistic score that ranges between 0 (fully rigid) and 1 (fully flexible) and uses a threshold of 0.5.
In Figure 3, the actual (true) flexible regions are identified by the white background. Boden's method captures flexibility in all three flexible regions, but it also predicts over 50% of this sequence as flexible. This method performs prediction based on the entropy of the predicted secondary structure, and thus the quality of the predicted secondary structure determines the prediction of a flexible region. If the CPNN is used with a sequence that shares low homology with the sequences used to train this neural network, the resulting entropy may take relatively large values, and as a result the corresponding residues will be classified as flexible (undergoing conformational change). Therefore, a large entropy value may reflect actual flexibility, or it can be an artifact of a training set that does not include sufficiently homologous sequences. The IUPred method generates scores that form local maxima around the first and third flexible regions; however, the threshold is too high to identify them as disordered regions. We believe that this method could provide better predictions if a suitable optimization of the threshold value for a given sequence were performed. At the same time, such optimization was not attempted in this method and may prove difficult to perform. Finally, the FlexRP method successfully predicts the first and third flexible regions but still misses the second, short flexible region that consists of 6 residues.
Conclusion
Knowledge of the flexibility/rigidity of protein sequence segments plays a pivotal role in improving the quality of tertiary structure prediction methods and in attempts to fully solve the mystery of the protein folding process. At the same time, such information requires very detailed knowledge of protein structure, and thus is available only for a small number of proteins. To this end, we propose a novel method, called FlexRP, for the prediction of flexible/rigid regions based on the protein sequence. The method is designed and tested using a set of segments for which flexibility/rigidity is defined based on a comprehensive exploration of tertiary structures from PDB [12]. It uses a novel protein sequence representation, based on 95 features computed as frequencies of selected k-spaced AA pairs, and a logistic regression classifier. Based on out-of-sample, 10-fold cross validation tests, FlexRP is shown to predict the flexible/rigid regions with 80% accuracy, which may find practical applications. Finally, the proposed method is shown to be more accurate than four other machine learning based approaches and two recently proposed methods that address similar prediction tasks.
Figure 2: The prediction accuracy as a function of p for the k-spaced AA pairs where k ≤ p. The number of features used to represent the sequence increases with the increasing value of p.
Dataset
Our previous study that concerns conservation of the tertiary protein structures shows that less than 2% out of 8127 representative segments extracted from the entire Protein Data Bank (PDB) [36] have flexible tertiary structure [12]. The representative segments include the longest sequence segments that occur in multiple structures; as such they form a complete dataset to study the conservation. These 8127 segments were derived from release #103 of PDB that included a total of about 53000 protein chains. We first collected all sequence segments which were longer than 10 AAs and which occurred in at least two chains. After filtering out segments that were contained in longer segments, 8127 of them were kept, and we found that among them, 159 incorporated either a rotating or disarranging flexible region. Based on a visual inspection of the 159 segments, 66 and 93 of them were found to be rotating and disarranging segments, respectively. After removing short segments that include less than 50 AAs and segments that include flexible regions only at the head or the tail, we created a dataset of 66 sequences, which is used to develop and test the proposed prediction method, see Table 5.
While this table lists only one representative protein that includes a given segment, a comprehensive database that includes information about the location of each segment in multiple chains, the sequence itself, and the annotation of flexible/rigid residues in this segment can be found at [37]. This dataset includes segments that are characterized by different structures which were obtained experimentally. Since the dataset is relatively small, the evaluation of the prediction is performed using 10-fold cross validation to avoid overfitting and to assure statistical validity of the computed quality indices.

Figure 3: The predictions obtained with Boden's method [29], the IUPred method [30] and the FlexRP method on the 11E to 216A segment in chain A of the 1EUL protein. In Boden's method, residues with entropy greater than 0.49 are considered as regions undergoing conformational change; the IUPred method predicts all residues for which the probabilistic score is greater than 0.5 as belonging to the disordered regions. FlexRP classifies a residue as belonging to a flexible region if its corresponding probabilistic score is greater than 0.5. The actual flexible regions are identified using the white background.
Definition of the flexible regions
Several different definitions of the flexible regions have been proposed in the past:

1. all regions with NMR chemical shifts of a random coil; regions that lack significantly ordered secondary structure (as determined by CD or FTIR); and/or regions that show hydrodynamic dimensions close to those typical of an unfolded polypeptide chain [38]

2. all regions with missing coordinates in X-ray structures [39,40]

3. stretches of 70 or more sequence-consecutive residues depleted of helices and strands [41]

4. regions with high (normalized) B factors from X-ray structures [42]

In this paper, a data-driven definition of flexible regions, based on a comprehensive exploration of experimental protein structures, is proposed. A given sequence (region) is considered flexible if it has multiple different experimental structures (in different proteins), i.e., the corresponding structure is not conserved. Although two existing methods, FlexProt [43] and FatCat [44], can be used to identify flexible regions for a pair of protein structures, a simpler and faster method that gives similar results was used. This choice was motivated by the properties of the data used to define flexible regions: some segments in our dataset have dozens of structures, and all pairs of structures had to be compared. Based on [12], the flexible regions are found using a sliding, six-residue-wide window. A protein sequence (segment) with n residues, denoted A1A2...An, consists of the following n-5 six-residue fragments: A1A2...A6, A2A3...A7, ..., A(n-6)A(n-5)...A(n-1), A(n-5)A(n-4)...An.
The flexible regions were identified by comparing the distance, computed using the Root Mean Square Distance for Unit Vectors (URMSD) measure [45], between the structures of the six-residue fragments among the multiple structures that correspond to the same segment (see "Dataset" section). Let us assume that a given segment has m structures stored in PDB. For convenience, the m structures of the corresponding i-th six-residue fragment are denoted S_{i,1}, S_{i,2}, ..., S_{i,m-1}, S_{i,m}. Based on the results in [12], if the URMSD between two structures is smaller than or equal to 0.5, they are assumed to be structurally similar; otherwise they are assumed to be different. Therefore, the i-th six-residue fragment is defined as flexible if max_{j,k} URMSD(S_{i,j}, S_{i,k}) > 0.5; otherwise it is regarded as rigid. In other words, regions characterized by a maximal URMSD larger than 0.5 are indexed as flexible, while the remaining regions are indexed as rigid. The 66 segments that constitute our dataset include a total of 5716 residues, out of which 3929 were assumed rigid and 1787 flexible.
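The labeling rule above is straightforward to implement once pairwise URMSD values are available. The sketch below is a minimal illustration, assuming a user-supplied urmsd(a, b) function (not shown here) that returns the URMSD between two fragment structures; the mapping from a flexible fragment to the residues it covers is one plausible choice, not necessarily the exact one used in [12].

```python
from itertools import combinations

URMSD_CUTOFF = 0.5   # structures closer than this are considered similar [12]
WINDOW = 6           # width of the sliding fragment window

def label_residues(structures, urmsd):
    """Label each residue of a segment as flexible (True) or rigid (False).

    structures: list of m aligned coordinate sets for the same segment, where
                structures[k][i] is the i-th six-residue fragment of structure k.
    urmsd:      callable returning the URMSD between two fragment structures
                (assumed to be provided; its implementation follows [45]).
    """
    n_fragments = len(structures[0])
    n_residues = n_fragments + WINDOW - 1
    flexible = [False] * n_residues
    for i in range(n_fragments):
        # a fragment is flexible if ANY pair of its structures differs
        pairs = combinations((s[i] for s in structures), 2)
        if max((urmsd(a, b) for a, b in pairs), default=0.0) > URMSD_CUTOFF:
            # mark all residues covered by this six-residue fragment
            for r in range(i, i + WINDOW):
                flexible[r] = True
    return flexible
```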
As an example, we identify the flexible regions for the 88G to 573R segment in chain A and chain B of the 1UAA protein and contrast the results of the above method with those of FlexProt and FATCAT. Computation of the flexible regions took 10 seconds with FlexProt, 30 seconds with FATCAT, and less than a second with the method used in this paper. FlexProt identified a flexible region (hinge) between GLY374 and THR375, FATCAT gave the same result, and the third method identified TYR369 to PHE377 as the flexible region. While similar flexible regions were identified by all three methods, the method from [12] is an order of magnitude faster. The efficiency is especially crucial considering that most of the sequence segments had numerous structures and the computations had to be performed for all combinations of pairs of structures.
FlexRP method
The proposed method performs its prediction as follows: 1. Each residue that constitutes the input sequence is represented by a feature vector. First, a 19-residue-wide window centered on the residue is established. Next, the frequencies of the 95 k-spaced AA pairs given in Table 2 that fall inside the window are computed.
2. The vector is input into a multinomial logistic regression model to predict whether the residue should be classified as flexible or rigid.
The evaluation procedure applied in this paper assumes that the original dataset is divided into two disjoint sets: a training set that is used to develop the regression model, and a test set that is used to assess the quality of the proposed method (and of the other considered methods). The logistic regression model is established through Quasi-Newton optimization based on the training set [46]. Next, we provide details concerning the sequence representations and the performed experimental procedure.
Feature-based sequence representation
Four representations (PSI-BLAST profile, composition vector, binary encoding, and the proposed collocation-based features) are applied to test and compare the quality of the proposed FlexRP method. A window centered on the AA for which the prediction is computed is used to compute the representation. In this paper, the window size is set to 19, i.e., the central AA and nine AAs on each side. The size was selected based on a recent study showing that such a window includes the information required to predict and analyze the folding of local structures and provides optimal results for secondary structure prediction [32].
The composition vector is a simple representation that is widely used in the prediction of various structural aspects [47-49]. Given that the 20 AAs, ordered alphabetically (A, C, ..., W, Y), are represented as AA_1, AA_2, ..., AA_19, AA_20, and the number of occurrences of AA_i in the local sequence window of size k (k = 19) is denoted n_i, the composition vector is defined as (n_1/k, n_2/k, ..., n_20/k). Another popular protein sequence representation is based on binary encoding [50,51]. In this case, a vector of 20 values is used to encode each AA: for AA_i, the i-th position of the vector is set to 1, and the remaining 19 values are set to 0. Each of the AAs in the local sequence window of size k = 19 is represented by such a vector, and the combined vector of 19*20 = 380 features is used to predict the flexibility/rigidity of the central AA.
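A minimal sketch of these two fixed-length encodings, assuming the standard alphabetically ordered 20-letter AA alphabet; the normalization of the composition vector by the window size follows the definition above.

```python
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard AAs, alphabetical
AA_INDEX = {aa: i for i, aa in enumerate(ALPHABET)}

def composition_vector(window):
    """20 features: relative frequency of each AA in the window (k = 19)."""
    k = len(window)
    counts = [0] * 20
    for aa in window:
        counts[AA_INDEX[aa]] += 1
    return [c / k for c in counts]

def binary_encoding(window):
    """19*20 = 380 features: one-hot encoding of every AA in the window."""
    vec = []
    for aa in window:
        one_hot = [0] * 20
        one_hot[AA_INDEX[aa]] = 1
        vec.extend(one_hot)
    return vec

# Example: a 19-residue window centered on the 10th residue
window = "ACDEFGHIKLMNPQRSTVW"
assert len(composition_vector(window)) == 20
assert len(binary_encoding(window)) == 380
```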
The PSI-BLAST profile [52] is one of the most commonly used representations in a variety of protein-related prediction tasks [35,51,53]. Using the PSI-BLAST method, a target protein sequence is first (multiply) aligned with orthologous sequences. X_i is set to the log-odds score vector (over the 20 possible AAs) derived from the multiple-alignment column corresponding to the i-th position in the window. Each X_i is treated as a 21-dimensional vector of real values; the extra dimension indicates whether X_i falls off the end of the actual protein sequence (0 for within the sequence, 0.5 for outside). The log-odds alignment scores are obtained by running PSI-BLAST against GenBank's standard non-redundant protein sequence database for three iterations. In this paper, PSI-BLAST profiles were computed with default parameters and a window size of 15, as suggested in [35] and [53].
Footnote 1 to Table 5: For each segment, one PDB ID together with the start and the end of the segment is listed.
A new representation, based on the frequency of k-spaced AA pairs in the local sequence window, was developed for the proposed prediction method. Our motivation was that the flexibility of each AA is different; e.g., AAs with smaller side chains (such as glycine) may be structurally more flexible since they are less affected by the arrangement of the side chains of adjacent AAs. Furthermore, if several AAs characterized by potentially higher flexibility cluster together, then the corresponding region (window) is more likely to be flexible. Based on this argument, for a given central AA, a sliding sequence window of size k = 19 was used to count all adjacent pairs of AAs (dipeptides) in that window. Since there are 400 possible AA pairs (AA, AC, AD, ..., YY), a feature vector of that size is used to represent the occurrences of these pairs in the window. For instance, if an AG pair occurs four times in the window, the corresponding value in the vector is set to 4, while if a KN pair does not occur in the window, the corresponding value is set to 0. Since short-range interactions between AAs, rather than only interactions between immediately adjacent AAs, have an impact on folding [32], the proposed representation also considers k-spaced pairs of AAs, i.e., pairs separated by p other AAs. K-spaced pairs for p = 0, 1, ..., 9 are considered, where for p = 0 the pairs reduce to dipeptides. For each value of p, there are 400 corresponding features. Table 6 compares the four representations with respect to their corresponding numbers of features.
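Counting k-spaced pairs is a direct extension of dipeptide counting. The sketch below (a minimal illustration, reusing ALPHABET from the previous sketch) counts pairs of residues separated by exactly p intervening positions within the window; the full representation concatenates the 400 counts for each p = 0, ..., 9.

```python
from collections import Counter

def kspaced_pair_counts(window, p):
    """Count AA pairs separated by exactly p residues in the window.

    For p = 0 this reduces to ordinary dipeptide counting. Returns a
    400-dimensional count vector in the order AA, AC, ..., YY.
    """
    counts = Counter()
    for i in range(len(window) - p - 1):
        counts[window[i] + window[i + p + 1]] += 1
    pairs = [a + b for a in ALPHABET for b in ALPHABET]  # 400 ordered pairs
    return [counts[pair] for pair in pairs]

def collocation_features(window, max_p=9):
    """Concatenated k-spaced pair counts for p = 0..9 (10 * 400 features)."""
    feats = []
    for p in range(max_p + 1):
        feats.extend(kspaced_pair_counts(window, p))
    return feats
```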
Feature selection
The binary encoding and the collocation-based representations include a relatively large number of features. Therefore, two selection methods, correlation based and entropy based, were used to reduce the dimensionality and potentially improve the prediction accuracy by selecting a subset of the features.
The correlation-based feature selection is based on the Pearson correlation coefficient r, computed for a pair of variables (X, Y) [54] as

r = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / √( Σᵢ (xᵢ − x̄)² · Σᵢ (yᵢ − ȳ)² )

where x̄ is the mean of X and ȳ is the mean of Y. The value of r is bounded within the [−1, 1] interval; a higher absolute value of r corresponds to a higher correlation between X and Y. The method computes the correlation coefficient between each feature (variable) and the known predicted variable, i.e., the flexibility/rigidity values (based on the training data), and selects the subset of features with the highest absolute r values.
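A compact sketch of this selection step using NumPy; X is the feature matrix (rows are residues, columns are features) and y holds the binary flexible/rigid labels. The number of retained features, m, is a free parameter here, not a value taken from the paper.

```python
import numpy as np

def select_by_correlation(X, y, m):
    """Keep the m features with the highest |Pearson r| against the labels."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    r = (Xc * yc[:, None]).sum(axis=0) / np.where(denom == 0, 1, denom)
    top = np.argsort(-np.abs(r))[:m]   # indices of the m best features
    return np.sort(top)

# Example with random data: 200 residues, 500 candidate features
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 500)).astype(float)
y = rng.integers(0, 2, size=200).astype(float)
kept = select_by_correlation(X, y, m=50)
```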
The entropy-based feature selection is grounded in information theory, which defines the entropy of a variable X as

H(X) = −Σᵢ P(xᵢ) log₂ P(xᵢ)

where {xᵢ} is the set of values of X and P(xᵢ) is the prior probability of xᵢ.
The conditional entropy of X, given another variable Y, is defined as

H(X|Y) = −Σⱼ P(yⱼ) Σᵢ P(xᵢ|yⱼ) log₂ P(xᵢ|yⱼ)

where P(xᵢ|yⱼ) is the posterior probability of X given the value yⱼ of Y.
The amount by which the entropy of X decreases reflects the additional information about X provided by Y and is called the information gain [54]:

IG(X|Y) = H(X) − H(X|Y)

According to this measure, Y is regarded as more highly correlated with X than Z is if IG(X|Y) > IG(X|Z). Similarly to the correlation-based selection, this method computes the information gain between each feature (variable) and the known predicted variable, i.e., the flexibility/rigidity values (based on the training data), and selects the subset of features with the highest values of IG.
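The information gain of a discrete feature with respect to the binary labels can be computed directly from empirical frequencies, as sketched below; continuous counts would first be discretized (any binning choice here is illustrative, not taken from the paper).

```python
import numpy as np

def entropy(p):
    """Shannon entropy (base 2) of a probability vector, ignoring zeros."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(feature, y):
    """IG(Y|feature) = H(Y) - H(Y|feature), from empirical frequencies."""
    values, counts = np.unique(feature, return_counts=True)
    h_y = entropy(np.bincount(y.astype(int)) / len(y))
    h_y_given = 0.0
    for v, c in zip(values, counts):
        p_y_v = np.bincount(y[feature == v].astype(int), minlength=2) / c
        h_y_given += (c / len(y)) * entropy(p_y_v)
    return h_y - h_y_given
```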
Logistic regression
Logistic regression is a method suitable for modeling the relationship between a binary response variable and one or more predictor variables, which may be either discrete or continuous. As such, this model fits the data used in this paper well: the response variable is the binary flexible/rigid classification of a residue, and the predictor variables are the frequencies of the selected k-spaced AA pairs in the local sequence window. We applied a statistical regression model for Bernoulli-distributed dependent variables, implemented as a generalized linear model that uses the logit as its link function. The model takes the following form

logit(Pᵢ) = ln( Pᵢ / (1 − Pᵢ) ) = α + β₁ X₁,ᵢ + β₂ X₂,ᵢ + ... + β_k X_k,ᵢ

where i = 1, ..., n, n is the number of instances, and Pᵢ = P(Yᵢ = 1).
The logarithm of the odds (the probability divided by 1 minus the probability) of the outcome is thus modeled as a linear function of the predictor variables Xᵢ. In contrast to linear regression, in which the parameters α, β₁, ..., β_k are calculated by minimizing the squared error, the parameters of logistic regression are usually estimated by maximum likelihood. More specifically, (α, β₁, ..., β_k) is the set of values that maximizes the following likelihood function

L(α, β₁, ..., β_k) = Πᵢ Pᵢ^(yᵢ) (1 − Pᵢ)^(1 − yᵢ)
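As a concrete illustration of the maximum-likelihood estimation described above, the sketch below minimizes the negative log-likelihood with SciPy's BFGS optimizer, a quasi-Newton method in the spirit of the optimization cited in [46]; it is a minimal stand-in, not the Weka implementation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def fit_logistic(X, y):
    """Maximum-likelihood logistic regression via quasi-Newton (BFGS)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column

    def neg_log_likelihood(w):
        z = Xb @ w
        # log L = sum_i [ y_i * z_i - log(1 + exp(z_i)) ], numerically stable
        return -np.sum(y * z - np.logaddexp(0.0, z))

    res = minimize(neg_log_likelihood, np.zeros(Xb.shape[1]), method="BFGS")
    return res.x  # (alpha, beta_1, ..., beta_k)

def predict_proba(w, X):
    """P(flexible) for each residue feature vector."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-(Xb @ w)))
```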
Experimental setup
The classification systems used to develop and compare the proposed method were implemented in Weka, a comprehensive open-source library of machine learning methods [55]. The proposed FlexRP method applies multinomial logistic regression [46]. Our method was compared with a state-of-the-art Support Vector Machine classifier [34], the popular and simple Naïve Bayes classifier [33], the instance-learning-based IB1 classifier [56], and the C4.5 decision tree classifier [57]. The experimental evaluation was performed using 10-fold cross validation to avoid overfitting and assure the statistical validity of the results. To avoid overlap (with respect to sequences) between training and test sets, the entire set of 66 sequences was divided into 10 folds: 6 folds with 7 sequences each and 4 folds with 6 sequences each. In 10-fold cross validation, 9 folds together are used as training data to generate the prediction model, and the remaining, set-aside fold is used for testing. The test is repeated 10 times, each time using a different fold as the test set.
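Splitting by sequence rather than by residue is the key detail here: residues from one sequence must never appear in both the training and the test set. Below is a minimal sketch with scikit-learn's GroupKFold, where groups holds the sequence ID of each residue (the classifier is a stand-in for any of the compared methods).

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression

def grouped_cv_accuracy(X, y, groups, n_folds=10):
    """10-fold CV with whole sequences kept together in one fold."""
    accs = []
    for train_idx, test_idx in GroupKFold(n_splits=n_folds).split(X, y, groups):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(accs))
```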
The reported results include quality indices computed from TP, TN, FP and FN, which denote the true positive, true negative, false positive and false negative counts, respectively.
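The specific index formulas are not preserved in this excerpt, but typical choices built from these four counts include accuracy, sensitivity and specificity; the helper below computes them as one plausible set (an assumption, not the paper's exact list).

```python
def quality_indices(tp, tn, fp, fn):
    """Common classification quality indices from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }
```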
Amplitudes and the Riemann Zeta Function
Physical properties of scattering amplitudes are mapped to the Riemann zeta function. Specifically, a closed-form amplitude is constructed, describing the tree-level exchange of a tower with masses $m_n^2 = \mu_n^2$, where $\zeta\left(\frac{1}{2} \pm i\mu_n\right) = 0$. Requiring real masses corresponds to the Riemann hypothesis, locality of the amplitude to meromorphicity of the zeta function, and universal coupling between massive and massless states to simplicity of the zeros of $\zeta$. Unitarity bounds from dispersion relations for the forward amplitude translate to positivity of the odd moments of the sequence of $1/\mu_n^2$.
Introduction.-The Riemann zeta function, an object of central interest in number theory, is defined as ζ(z) = Σ_{n=1}^∞ n^(−z) for Re(z) > 1, and by analytic continuation to the rest of the complex plane. The function is analytic everywhere except for a simple pole at z = 1 corresponding to the divergent harmonic series. Despite the importance of the zeta function in mathematics and physics, from number theory to path integrals, many questions remain. Of particular interest is the location of its zeros. While ζ(z) exhibits trivial zeros at the negative even integers by the functional equation, it also possesses infinitely many more zeros. The known examples of these nontrivial zeros, which number in the trillions [1], all lie on the critical line: ζ(1/2 ± iµ) = 0 for µ real. (Throughout, we take Re(µ) > 0: µ₁ ≈ 14.135, µ₂ ≈ 21.022, etc.) The conjecture that µ is real for all nontrivial zeros of ζ(z) is the Riemann hypothesis, one of the most celebrated and fundamental extant problems in mathematics, with important consequences for the asymptotic distribution of the primes [2]. Other open questions include whether all nontrivial zeros are simple [3,4], as well as statistical properties of the zeros and the asymptotic behavior of ζ on the critical line.
In this Letter, we will connect properties of the zeta function, including the Riemann hypothesis, to scattering amplitudes. The idea of relating mathematical properties of the zeta function to a physical system dates back a century to the Hilbert-Pólya conjecture [4,5] that the µₙ correspond to the eigenvalues of some quantum mechanical Hamiltonian. Much work has been done to attempt to find such an operator (see, e.g., Refs. [6-8]) or to identify other connections to physics, including Dyson's observation of the relation between the two-point function of the Gaussian unitary ensemble and Montgomery's pair correlation conjecture [4], as well as interpretations of the phase of ζ in quantum chaotic nonrelativistic scattering [9,10].
Despite this progress, however, there has been relatively little work on the zeta function in the context of relativistic scattering amplitudes. The program of reinterpreting a compelling mathematical object as an amplitude, before a Hamiltonian is found, and as a guide toward developing new and interesting physics, has notable precedent. In casting Euler's beta function as a scattering amplitude, Veneziano's expression [11], and the search for a system producing this amplitude, led to the development of string theory. In fact, as shown by Freund and Witten [12], the Veneziano amplitude itself can be written in terms of ratios of Riemann zeta functions, but in such a way that the nontrivial zeros cancel out [13]. This leaves open the question of a scattering amplitude whose structure depends on the nontrivial zeros of ζ.
This is the question that will be answered in this work. In particular, we will build a scattering amplitude M(s, t), written in compact, closed form, for which there is an elegant correspondence between various physical properties and (known or conjectured) attributes of the Riemann zeta function. The existence of such an amplitude reframes the Hilbert-Pólya problem and suggests that seeking a theory that naturally reproduces the form of our M(s, t) could lead to a solution of this conjecture, as well as to interesting physical insights in their own right. The purpose of this Letter is therefore to construct such an amplitude M(s, t) and, without proving the Riemann hypothesis, to map out the relations between its properties and the features of the zeta function summarized above. This Letter is structured as follows. First, we construct an amplitude with the desired properties and study its analytic structure and uniqueness. We next examine the forward limit of our amplitude in the context of analytic dispersion relations, both as a verification of its analytic and asymptotic structure and as a means of physically deriving identities involving the nontrivial zeros of ζ. After proving that the amplitude is on-shell constructible and relating this construction to the product form of the zeta function, we explore connections between the symmetry of the zeros across the critical line and CPT symmetry, and comment on potential future directions.
Building the amplitude.-Let us define a function A(s) of a single complex variable s as in Eq. (3), where ζ′(z) = dζ(z)/dz and ψ(z) is the digamma function; see Fig. 1. In terms of the Landau-Riemann (capital) xi function Ξ(z) = ξ(1/2 + iz), application of the functional relation (2) allows us to write A(s) very compactly, as in Eq. (4). We then use A to define an amplitude, Eq. (5), describing the four-point scattering of massless particles in terms of the Mandelstam variables. As we will see, unlike an arbitrary complex function, M(s, t) possesses all of the standard properties of a scattering amplitude (unitarity, analyticity, and locality) and describes the tree-level exchange of heavy states in the s and u channels, with spectrum mₙ = µₙ. Let us see how these properties come about from our definition of A(s). One can show that Ξ(z) is everywhere analytic (i.e., entire), with roots corresponding to the zeros of the zeta function, Ξ(µₙ) = 0, and that the functional equation (2) becomes Ξ(z) = Ξ(−z). As a result, even though √s has a branch cut at real s < 0, Ξ(√s) is entire [15], and so A(s) from Eq. (4) is meromorphic, with simple poles at s = µₙ². More explicitly, starting from Eq. (3), direct evaluation yields lim_{ε→0} [A(s + iε) − A(s − iε)] = 0 for real s, using the reality of ζ and ψ on the real line. Since the digamma and zeta functions are meromorphic, A(s) is meromorphic as well, i.e., M(s, t) is analytic except for poles. It thus remains to examine the behavior of A near the poles/zeros of ψ and ζ. The arguments of the digamma function and zeta function in Eq. (3) are chosen such that the simple zeros of ζ(z) at z = −2n coincide with the simple poles of ψ(z) at z = −n, at s = −(4n + 1)²/4 for positive integer n, with the result that the poles cancel and A(s) is analytic there. The only other digamma pole, ψ(0), occurs at s = −1/4, which is also the location of the pole in the 1/(s + 1/4) term; explicit evaluation shows that lim_{s→−1/4} A(s) is finite and equal to (2 + γ − log 4π)/2, where γ is the Euler-Mascheroni constant. Finally, the s → 0 limit is finite: using the Hadamard product form for the zeta function, along with differentiation of the functional equation (2) (which allows for the computation of odd derivatives of ζ at 1/2) and various gamma function relations, one can find an identity for ζ′(1/2) in terms of a sum over the nontrivial zeros of ζ, which yields the beautiful result in Eq. (7). As we will see, sums of this form, containing powers of the sequence 1/µₙ², have important connections to analytic dispersion relations.
The only remaining candidate poles in A(s) correspond to the nontrivial zeros of the zeta function, ζ(1/2 + iµₙ) = 0, at s = µₙ². If the Riemann hypothesis is true, then all of these poles occur on the real s axis. Further, we find that each positive-s residue satisfies ∮_{s=µₙ²} iA(s) ds > 0, as required by unitarity for a physical pole in an amplitude; i.e., if we move each pole at s = µₙ² to s = µₙ² − iε in Feynman's iε formalism, then we have Im A(s) > 0. Specifically, if the n-th nontrivial zero of ζ has order gₙ, i.e., ζ(z) ∼ (z − zₙ)^(gₙ) near zₙ, then lim_{s→µₙ²} (µₙ² − s) A(s) = gₙ. Our amplitude thus behaves as a tower of tree-level exchanges, with a spectrum of masses mₙ = µₙ in one-to-one correspondence with the nontrivial zeros of the Riemann zeta function. For a theory with scattering described by M(s, t), the Riemann hypothesis then becomes the physical requirement of real masses for the on-shell states in the spectrum. If all of the zeros of the Riemann zeta function are simple, as has been conjectured [3,4], then gₙ = 1 for all n, in which case the massive states in the amplitude enjoy a universal coupling to the scattering states; if not, then the couplings are controlled by the order gₙ. We can parameterize gₙ = 1 by allowing multiple redundant µₙ in any sum or product (e.g., Eq. (7)), which we will do henceforth. Our amplitude exhibits locality, i.e., near each pole, A(s) ∼ 1/(µₙ² − s). A failure of locality in A(s) via a pole ∼ 1/(µₙ² − s)^k for some k > 1 would require ζ(z) ∼ exp[α/(z − zₙ)^(k−1)] near the corresponding zero zₙ, for some α. This would be an essential singularity: depending on the direction of approach, ζ could go to zero or infinity as z → zₙ. Hence, locality in A(s) is enforced by the fact that the zeta function is meromorphic and therefore lacks essential singularities.
Before exploring other interesting properties of M, we first argue that this is the simplest candidate amplitude satisfying the following requirements: i) M is analytic everywhere except at poles corresponding to the nontrivial zeros of the Riemann zeta function, and these poles are real if the Riemann hypothesis holds; ii) each pole has positive residue as in Eq. (8); and iii) the forward amplitude satisfies d²M(s, 0)/ds² ≠ 0 in the limit s → 0. Taking the ansatz that M is separable into channels A(s) and A(u) is a natural choice that enforces crossing symmetry. To satisfy condition i) on the nontrivial zeros, one could take A(s) ∼ 1/ζ(1/2 + is). However, this choice runs afoul of requirement ii), which can be simply corrected by multiplying by the derivative of the zeta function, which guarantees that each pole at a nontrivial zero has a residue of the same sign. Canceling the trivial zeros in ζ and the pole at z = 1 necessitates adding an infinite tower of other terms, which results in the digamma and algebraic terms in Eq. (3). Finally, the radicals in Eq. (3) are necessary: if we take the forward amplitude and send s → s² to eliminate the square roots, we find that M(s², 0) − M(0, 0) ∝ s⁴ near s = 0, which is too soft to satisfy condition iii), a condition that, as we will discuss in the next section, comes from dispersion relation bounds [16] (cf. the Galileon [17]); we resolve this problem by introducing √s, resulting in A(s) as given in Eq. (3). Hence, our form for M(s, t) is arguably the simplest possible amplitude relevant to the Riemann hypothesis, up to adding or multiplying by an entire function.
Analytic dispersion relations.-Forward amplitudes in an infrared effective field theory coming from a well-behaved ultraviolet completion are known to possess positivity properties derived from analytic dispersion relations. In particular, if M(s, t) is indeed an amplitude, we should find that the dispersive moments satisfy c_{2k} > 0 for all k > 0. This is a classic consequence of analyticity and unitarity [16]. Computing a contour integral over a small contour C around the origin, analyticity of M allows C to be deformed to a new contour running just above and below the real s axis, plus a circle at infinity. We note that the definitions of c₀ in Eqs. (6) and (11) match, and the contribution from the circle at infinity is a boundary term. A nonzero boundary term for some k ≥ 0 would imply that Ξ(z) grows at least as fast as exp(α z^(4k+2)) for some α (i.e., growth order at least 4k + 2), which is inconsistent with the fact that Ξ(z) has a known growth order of unity [15]; thus, all of the boundary terms vanish. The Riemann hypothesis would imply the positivity of the c_{2k} required by unitarity and analyticity; see Fig. 2 for an illustration. This is a nontrivial check of the analytic and asymptotic structure of M(s, 0), confirming that it indeed behaves like a forward amplitude. Our amplitude construction allows for the derivation of further remarkable identities akin to the c₀ relation in Eq. (7). Defining the normalized n-th derivative ζₙ(z) = ζ^(n)(z)/ζ(z) and ζₙᵏ(z) = [ζₙ(z)]ᵏ, we obtain the identity in Eq. (14). Like Eq. (7), Eq. (14) can be proven exactly, albeit laboriously, without appeal to our amplitude, using repeated differentiation of the functional equation and the Hadamard product form of the zeta function, as well as various polygamma identities; the same should hold for all other c_{2k}. As a check, the prediction in Eq. (13) can be confirmed to within a relative precision of one part in 10³⁰ for k = 2, 3, 4, 5 by summing over numerical values of the zeros µₙ given in Ref. [18]. While each order in Eq. (13) can be checked mathematically, what is remarkable is that our amplitude construction allows for much simpler, physical derivations of these identities.
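Identities of this type can be probed numerically from the zeros themselves. The snippet below, a small illustration rather than the paper's own check, uses mpmath's zetazero to test the classical zero-sum identity Σₙ 1/(µₙ² + 1/4) = (2 + γ − log 4π)/2, the same constant quoted above for A(−1/4); convergence in the number of zeros N is slow, so the partial sum approaches the target only gradually.

```python
from mpmath import mp, zetazero, euler, log, pi

mp.dps = 30  # working precision (decimal digits)

def zero_sum(N):
    """Partial sum of 1/(mu_n^2 + 1/4) over the first N nontrivial zeros."""
    total = mp.mpf(0)
    for n in range(1, N + 1):
        mu = zetazero(n).imag      # mu_n: imaginary part of the n-th zero
        total += 1 / (mu**2 + mp.mpf(1) / 4)
    return total

target = (2 + euler - log(4 * pi)) / 2   # = 0.0230957...
print(zero_sum(100), target)             # partial sum vs. closed form
```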
On-shell constructibility.-Given the properties we have found for A(s), our amplitude M(s, t) describes two massless scalars exchanging a tower of massive states in the s and u channels with constant, momentum-independent coupling. For example, we could have two species of scalars, φ₁ and φ₂, scattering via φ₁φ₂ → φ₁φ₂. Alternatively, we could instead have defined M in Eq. (5) with full Bose symmetry as A(s) + A(t) + A(u) to describe the four-point scattering of a single scalar.
If there is a coupling ∝ φ₁φ₂X, where X is a tower of states with masses mₙ² = µₙ², then the tree-level amplitude for this theory will match Eq. (5). That is, our amplitude M(s, t) is on-shell constructible [19] from the three-point φ₁φ₂X amplitudes, which are all constant (and universal for all X if the simple-zero conjecture holds). The function defined in Eq. (3) is equivalent to the sum over simple poles given in Eq. (15), and hence M takes the on-shell constructible form of Eq. (16). This equality can be seen as follows. Define ∆(s) as the difference between the right-hand sides of Eqs. (3) and (15). As we have shown, A(s) as defined in Eq. (3) has poles only at s = µₙ², each with unit residue (writing any instance of multiple zeros as distinct µₙ), so it follows that ∆ is entire. Expanding in a Laurent series around s = ∞, the form of Eq. (15) at large s and our previous result that Eq. (3) possesses no pole at infinity together imply that ∆ is bounded, so by Liouville's theorem ∆ is constant. By the direct evaluation of A(0) in Eq. (7), ∆(0) = 0, yielding Eq. (15). As a result of the form in Eq. (16), M automatically satisfies the EFThedron constraints [20], beyond the dispersion relation bounds discussed above, which we expect would lead to streamlined derivations of more zeta function identities analogous to Eq. (14).
Comparing Eq. (15) with the form of A in terms of Ξ in Eq. (4), we see that the on-shell constructible expression for the amplitude gives Ξ(z) = Ξ(0) Πₙ [1 − z²/µₙ²], the Hadamard product expansion of the xi function.
Discussion.-The zeta function possesses various other properties that can be mapped to physical features of the scattering amplitude. For example, its zeros are symmetric both across the real axis and across the critical line at Re(z) = 1/2, as a consequence of the Schwarz reflection principle ζ(z*) = [ζ(z)]* and Eq. (2). Hence, Im M(s, 0) is nonzero only by virtue of the iε terms, going as a sum of πδ(±s − µₙ²). This allows the optical theorem Im M(s, 0) = s σ(s) to respect momentum conservation, with nonzero σ only when the external momenta produce an on-shell intermediate massive state. A consequence is that we can write the zeta zero-counting function N(T), the number of z for which ζ(z) = 0 and 0 < Im(z) ≤ T, in terms of the cross section as N(s₀²) = (1/π) ∫₀^(s₀) σ(s) ds. Complex µₙ = M − iW, violating the Riemann hypothesis, would contribute an additional imaginary part to the forward amplitude ∝ W for W ≪ M. Symmetry of the zeros about the critical line ensures that such terms would come in pairs with ±W, eliminating this extra contribution to Im M(s, 0). As resonances, these zeros would represent a pair of decaying/growing modes, and the reflection of zeros about the critical line ensures that M obeys the CPT theorem.
Our construction of M(s, t) suggests various interesting generalizations. The construction of higher-point or loop amplitudes by gluing together copies of M merits investigation. Moreover, while the momentum-independent coupling evident in Eq. (15) implies that the states exchanged in M are scalars, we could generalize A by introducing momentum dependence into the propagator numerators, thus encoding spin for the massive states. Another compelling direction would be to construct the analogue of A from an arbitrary Dirichlet L-function, of which ζ is a special case. Doing so would modify the spectrum, and the generalized Riemann hypothesis would relate the reality of the masses to zeros on the critical line. More broadly, replacing Ξ(√s) with an arbitrary entire function possessing real, positive zeros and the requisite boundary conditions could generalize the amplitude construction to other functions of mathematical interest. Finally, the universality property of the zeta function [21] and its consequences for the amplitude are worthy of study. We leave such questions to future investigation.
A Heuristic Approach for Quantifying Household Travel GHG Emissions Using GPS Survey and Spatial Correlations-The Cincinnati Case Study
Introduction
The United States Environmental Protection Agency (USEPA) reported that the historical increase of CO2 emissions from the transportation end-user sector is largely attributable to the increased and imbalanced demand for land use and travel activities [1]. The current state of the practice for estimating GHG emissions relies on the integration of two isolated modeling processes: travel demand forecasting and emission estimation. The procedure employs an ad hoc approach using average link-based speed and traffic volume from the travel demand model as transportation-activity inputs for the MOVES (Motor Vehicle Emission Simulator) model [2-4]. Climate change, land use and socioeconomic development are principal variables that define the need and scope of adaptive engineering and management to sustain infrastructure development. It is in the best interests of the Federal (e.g., U.S. EPA) and state governments (e.g., California Air Resources Board) to investigate research questions such as: Are the changes tangible? What are the actionable sciences for decision-making? What adaptation changes can be made in the planning horizon? Are there tools and models available to test those adaptive changes?
From the emission modeling perspective, accurate and detailed traffic operational activity inputs to the MOVES model are crucial to maximizing its capability to accurately reflect the greenhouse gas emissions associated with travel. Previous research [5-9] has proven that on-road traffic-related emission varies with traffic operating conditions (i.e., speed, acceleration or deceleration). Recent studies [6,7,10,11] indicate potential deficiencies in converting travel demand outputs into emission model inputs. Emission models often rely on traditional travel demand models for vehicle activity input, but traditional travel demand models are mostly calibrated and validated using aggregated total traffic data [12]. Therefore, the hourly emission estimates may not be accurate, because hourly VMT and speed variations are underrepresented and aggregated inputs are used in the emission models [12,13]. In addition, real-world traffic data, especially location-based trip generations, are spatial in nature and therefore contain unknown effects due to spatial correlation [14,15]. Figure 1 illustrates the traditional link-based "bottom-up" (left) approach in comparison to the proposed "top-down" (right) approach for estimating the GHG emissions in Hamilton County, Ohio. The link-based "bottom-up" approach clearly maps out the interstate freeway network, since the interstates are heavily loaded with traffic. It accounts for all the emissions emitted on the roadway network of the county but does not provide a measurement of the source of emissions. Adaptation planning for climate change impacts requires a data-driven, location-based analysis capability to estimate the spatial distribution of the sources contributing to travel GHG emissions. Therefore, household GHG emission generation modeling is viewed as a pressing need to provide data- and location-driven decision support for addressing the aforementioned research questions and analysis capabilities. However, the challenge remains in the theoretical representation of sensitive interactions between spatially dependent land use and traffic activities, as well as in providing location-based GHG emission information for decision makers. Without such data support, it is difficult for planners to connect land use and household-travel-associated GHG emissions; in particular, it is almost impossible to trace emissions back to their origin. Besides, since household travel survey data analyses are cross-sectional and spatially dependent, the effectiveness of incorporating spatial information into the research is not clear. A method of modeling household-travel-associated GHG emissions that accounts for spatial effects is needed.
The goal of this research is to develop a spatial regression-based GHG emission modeling approach at the TAZ level using GPS household travel survey data. The method is expected to enable analyzing the sensitive interactions among land use changes, household travel characteristics and GHG emissions by introducing spatial information for decision support. To fulfill the above goal, the following objectives are designated:
• To identify the contributing variables for household travel GHG emissions through statistical analysis using the high-resolution (second-by-second) GPS household travel survey;
• To quantitatively reveal household travel GHG emissions at the TAZ level, illustrating household GHG emissions' socioeconomic and demographic characteristics with "ground-truth" traffic activity data inputs;
• To utilize spatial information in the GHG emission generation model, bypassing the issues in Ordinary Least Squares (OLS) regression-based modeling assumptions;
• To compare model goodness of fit using an information-based measure-of-fit approach.
The spatial cross-sectional regression method is based upon previously extracted travel and GHG emission characteristics of households as well as the spatial contiguity among TAZs.
Summary of Existing Studies
Spatial panel data typically refers to data containing time series observations over a type of spatial unit such as TAZs, zip codes, regions, counties, and states. It is generally recognized that panel data are more informative, since they contain more variation and less collinearity among the variables. The use of panel data results in a greater availability of degrees of freedom, and hence increases efficiency in the estimation [16]. A large body of literature [17-19] has proven that incorporating spatial factors into integrated land use and transportation applications is applicable and yields reliable results [20-22]. The spatial and temporal correlation characteristics, which were originally introduced to the transportation field from econometrics, consider traffic activities, like their source generation, to be spatially correlated. Several recent studies at the University of Cincinnati [23-26] indicate that the spatial modeling approach is capable of achieving improved accuracy in both truck volume and particulate matter (PM2.5) emission predictions. Hall et al. [27] identified that current land use land cover (LULC) models fail to incorporate and integrate spatial and temporal correlations in urban systems. To fill this gap, they introduced spatial linear and logistic regression models for panel data. They used downtown population data for Austin, TX over multiple years to predict the population in 2020. They concluded that spatial and temporal effects were highly statistically significant, suggesting that their recognition and formal inclusion in the models is likely to be of great value. Parent and LeSage [22] applied a spatial panel model with random effects to predict commuting times. They collected travel time to work, travel expenditures, traffic volume, lane miles and gas taxes to forecast the mean travel time to work for each state. The findings showed evidence of substantial spatial spillovers and relatively weaker time dependence, leading to much smaller time impacts accruing over future periods. A recent article by Chakir and Le Gallo [28] investigates how the introduction of spatial effects and individual heterogeneity in an aggregated land-use share model affects the predictive accuracy of land use models. They considered agricultural, forest, urban and other land uses in their investigation. One of the conclusions drawn is that controlling for both unobserved individual heterogeneity and spatial autocorrelation outperforms any other specification in which spatial autocorrelation and/or individual heterogeneity are ignored. Perugu et al. [29] applied a spatial panel model for modeling truck factors and for improved PM2.5 estimation in a regional roadway network. The proposed methodology enables plotting the spatiotemporal distribution of PM2.5 emissions in a subarea. They also reported that the methodology presented is scalable and transferable and holds technical promise in its application across different regions and pollutants.
In summary, a gap exists between the current practice of aggregated-level household travel GHG emission estimation and the data- and spatially-informed needs of adaptive planning. This research is expected to fill this gap by connecting zonal-level socioeconomics with household travel GHG emissions using spatial regression and high-resolution GPS household travel survey data. This paper extends previous work on modeling household travel GHG emissions in three ways: 1) building the capability of estimating a TAZ-level GHG emission generation model, which is highly desirable for adaptive planning; 2) developing a spatial regression-based modeling approach that adds to the currently practiced approach; and 3) testing the role of spatial information in modeling regional-level household travel GHG emissions from large GPS-based household travel survey datasets.
Methodology
To fill the research gap identified, an integrated approach is proposed based on the Greater Cincinnati Household Travel Survey data. The purpose of the methodology is to build a linkage between household-travel-related GHG emissions and land use, socioeconomic, demographic, and spatial and temporal factors. Rapidly quantifying the GHG emissions through simulation of scenario-based land use and socioeconomic changes is an additional methodological goal. Figure 2 illustrates the heuristic framework of this research. The household travel data processing procedure extracts household travel characteristics based on the survey database, with two purposes: first, to calculate the GHG emissions of location-specific households using the traditionally unavailable vehicle-specific power (VSP) approach and the EPA-approved MOVES model; second, to use the extracted trip features, based on household socioeconomic data, to update the trip rate table of the customized travel demand model. Module two, the contributing variables module, produces the contributing variables for spatial cross-sectional modeling, including TAZ-level and trip-level attributes and spatial weights; the spatial cross-sectional model is then estimated. Module three, the spatial model calibration module, provides justified land use patterns and the associated household spatial distribution. The last part of this research measures the goodness of fit of the OLS and the proposed spatial regression models.
Spatial autocorrelation of the variables
The first law of geography, according to Waldo Tobler, is that "everything is related to everything else, but near things are more related than distant things" [30]. This observation is embedded in the gravity model of trip distribution. It is also related to the law of demand, in that interactions between places are inversely proportional to the cost of travel, much as the probability of purchasing a good is inversely proportional to its cost. Spatial autocorrelation refers to the correlation of a variable with itself through space. If there is any systematic pattern in the spatial distribution of a variable, the variable is said to be spatially autocorrelated. OLS regression assumes that observations have been selected randomly. However, if the observations are spatially clustered to a certain degree, the estimates obtained from the correlation coefficient or the OLS estimator will be biased and their precision overstated. The bias arises because areas with higher concentrations of events have a greater impact on the model estimation, and precision is overestimated because, since events tend to be concentrated, there are actually fewer independent observations than assumed.
The most common measurement of spatial autocorrelation is Moran's autocorrelation coefficient (often denoted I). It is an extension of the Pearson product-moment correlation coefficient to a univariate series [31,32]. Recall that Pearson's correlation (denoted ρ) between two variables x and y, both of length n, is

ρ = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / √( Σᵢ (xᵢ − x̄)² · Σᵢ (yᵢ − ȳ)² )

where x̄ and ȳ are the sample means of the two variables. ρ measures whether, on average, xᵢ and yᵢ are associated. In the study of spatial patterns and processes, it is logically expected that close observations are more likely to be similar than those far apart. It is common to associate a weight with each pair (xᵢ, xⱼ) that quantifies this expectation [33]. In the simplest form, these weights are 1 for close neighbors and 0 otherwise. The weights are sometimes referred to as a neighboring function, with wᵢᵢ set to 0. Moran's I can be interpreted as the correlation between a variable, x, and the "spatial lag" of x formed by averaging all the values of x for the neighboring areal units (i.e., polygons).
Moran's autocorrelation coefficient I is measured by

I = (n / S₀) · Σᵢ Σⱼ wᵢⱼ (xᵢ − x̄)(xⱼ − x̄) / Σᵢ (xᵢ − x̄)²

where wᵢⱼ is the weight between observations i and j, and S₀ = Σᵢ Σⱼ wᵢⱼ is the sum of all weights. Moran's I varies on a scale between [−1, 1]. A value close to −1 indicates high negative spatial autocorrelation; a value close to 0 indicates no or minimal autocorrelation; and a value close to 1 suggests high positive spatial autocorrelation.
The null hypothesis of the spatial autocorrelation (Moran's I) test is that the data are completely spatially random. If the p-value is not statistically significant, the null hypothesis cannot be rejected. If the p-value is statistically significant and the z-score is positive, the null hypothesis is rejected. Table 1 shows Moran's I and its statistical testing results. Almost all the zonal attributes are determined to be spatially dependent.
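The statistic is simple to compute from a binary contiguity matrix; the NumPy sketch below follows the formula above (for the permutation-based p-value and z-score, a library such as PySAL's esda would normally be used instead).

```python
import numpy as np

def morans_i(x, W):
    """Moran's I for values x and spatial weight matrix W (with w_ii = 0)."""
    n = len(x)
    z = x - x.mean()
    s0 = W.sum()                          # sum of all weights
    num = z @ W @ z                       # sum_ij w_ij * z_i * z_j
    den = (z ** 2).sum()
    return (n / s0) * num / den

# Example: 4 zones on a line, rook contiguity (neighbors share an edge)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([10.0, 12.0, 30.0, 33.0])   # clustered values -> positive I
print(morans_i(x, W))
```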
Candidate spatial cross-sectional models
The general form of the spatial cross-sectional model is

Y = ρWY + Xβ + WXθ + u,    u = λWu + ε

where:
• WY denotes the endogenous interaction effects among the dependent variables;
• WX denotes the exogenous interaction effects among the independent variables;
• Wu denotes the interaction effects among the disturbance terms of the different spatial units;
• ρ is called the spatial autoregressive coefficient;
• λ is the spatial autocorrelation coefficient; and
• θ represents a K × 1 vector of fixed but unknown parameters.
Figure 3 shows the variations of spatial cross-sectional models with respect to the assumptions on the above parameters and the error distribution. Since no predetermination of the error term distribution could be made, this study tested all of the spatial cross-sectional models below, and the model that best fits the data was selected. Table 2 shows the variables with their coefficient estimates. The R² (coefficient of determination) gives information about the goodness of fit of a model: in regression, R² is a statistical measure of how well the regression line approximates the real data points, and an R² of 1 indicates that the regression line perfectly fits the data. The linear model has an R² of 0.8002, which suggests that the model is a good fit. The scale-location plot is similar to the residuals-versus-fitted-values plot, but it uses the square root of the standardized residuals; a well-fitting linear model should show randomness in this plot. The last plot, residuals versus leverage, uses Cook's distance to identify points that have more influence than others. Generally these are points that are distant from other points in the data, either in the dependent variable or in one or more independent variables. Each observation is represented as a line whose height indicates the value of Cook's distance for that observation. There are no hard and fast rules for interpreting Cook's distance, but large values (which are labeled with their observation numbers) represent points that may require further investigation.
K-fold cross-validation of the OLS model
K-fold cross validation is one way to improve over the holdout method. The data set is divided into k subsets, and the holdout method is repeated k times. Each time, one of the k subsets is used as the test set and the other k−1 subsets are put together to form a training set; the average error across all k trials is then computed. The advantage of this method is that it matters less how the data get divided: every data point is in a test set exactly once and in a training set k−1 times. The variance of the resulting estimate is reduced as k is increased. The disadvantage is that the training algorithm has to be rerun from scratch k times, which means it takes k times as much computation to make an evaluation. A variant of this method is to randomly divide the data into a test and a training set k different times; the advantage of doing so is that the size of each test set and the number of trials averaged over can be chosen independently. A common k for model cross validation is 10. However, since there are 693 TAZs in our dataset, k = 9 is used to ensure that each "fold" is of equal size.
The data are randomly assigned to a number of "folds". Each fold is removed in turn, while the remaining data are used to refit the regression model and predict the deleted observations. Table 3 shows the residual sum of squares and the mean square. Figure 5 is the validation plot showing the removed (folded) versus fitted data. The plot indicates a good validation, since the removed-versus-fitted data for each fold follow a similar 45-degree line. Overall, the OLS model is validated and is a good fit.
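A minimal sketch of this 9-fold procedure with scikit-learn, assuming X holds the TAZ-level predictors and y the TAZ GHG emissions (the variable names and the synthetic data are illustrative only):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

def kfold_validation(X, y, k=9, seed=0):
    """Refit OLS with each fold removed; return per-fold residual sums of squares."""
    rss = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True,
                                     random_state=seed).split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        resid = y[test_idx] - model.predict(X[test_idx])
        rss.append(float((resid ** 2).sum()))
    return rss

# 693 TAZs split into 9 folds of 77 each
rng = np.random.default_rng(1)
X = rng.normal(size=(693, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=693)
print(kfold_validation(X, y))
```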
Spatial regression analysis results
The spatial regression models are estimated using the maximum likelihood method. Table 4 shows the variable coefficients from the OLS, SAR, SEM, SDM, SDEM, KPM, and MAM models. The coefficients of the variables that are not spatially dependent (i.e., Avg_CarbEM, Avg_TRIPSP) are quite similar across models, while the spatially dependent variables show more variation in their coefficients. This is expected, because each of the models rests on different assumptions and takes a different form, as shown in Figure 3.
Goodness of fit measures for candidate models
Measuring goodness of fit in spatial regression models is slightly more complex due to the lack of standard measures such as R². Commonly used, however, are information-based measures of fit, which combine several model performance measures and rank the models based on their values; the model with the lowest overall rank is considered a better fit than the others. Table 5 shows the information-based measures and their ranks for the OLS, SAR, SEM, SDM, SDEM, KPM and MAM models. The ranking uses AIC, log-likelihood and Moran's I on the residuals as measures; for all three criteria, smaller values are better. The SDEM model has the lowest summation of ranks and therefore fits the data best.
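The ranking itself is mechanical once the per-model statistics are available. A small sketch follows, with placeholder numbers rather than the paper's actual Table 5 values:

```python
def rank_models(stats):
    """Rank models by each criterion (smaller is better) and sum the ranks."""
    totals = {name: 0 for name in stats}
    criteria = next(iter(stats.values())).keys()
    for crit in criteria:
        ordered = sorted(stats, key=lambda m: stats[m][crit])
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return sorted(totals.items(), key=lambda kv: kv[1])  # best model first

# Placeholder values for illustration only (not the paper's Table 5)
stats = {
    "OLS":  {"AIC": 210.0, "negLogLik": 100.0, "MoranResid": 0.30},
    "SAR":  {"AIC": 195.0, "negLogLik": 92.0,  "MoranResid": 0.12},
    "SDEM": {"AIC": 188.0, "negLogLik": 89.0,  "MoranResid": 0.05},
}
print(rank_models(stats))
```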
Discussion
A spatial regression-based modeling framework was developed based on finding the minimal model residuals and multiple information-based measures of fit. Measuring goodness of fit in spatial regression models is slightly more complex due to the lack of standard measures such as R², so information-based measures were used. The OLS model has an R² (coefficient of determination) of 0.8, which is a good fit. However, when examining the residuals on the diagnostic plots, it was found that the residuals are still spatially correlated, suggesting that spatial models can fit the data better and reduce the residual spatial correlation. After performing the spatial regressions, the information-based measures of fit based on AIC, log-likelihood and Moran's I on the residuals were compared, and the model best fitting the given dataset is the Spatial Durbin Error Model: the SDEM has the lowest AIC and Moran's I on residuals compared to the other candidate models.
This study has provided a proof of concept for the proposed methodology and a solid foundation for modeling land use changes and for GHG emission analysis. The proposed method has been shown to have the capability to reveal the dynamic linkage between land use, transportation, and emissions. The findings from this research provide insights into how land-use planning alternatives, built on adopted policies and enforced development regulations, correlate with travel patterns and their consequent GHG emissions. The level of specificity of the land use change and GHG emission analysis presented in this study enables more data and indicators to be developed. Such data and indicators can be incorporated into decision makers' plans, policies and, ultimately, regulations, and possibly integrated with project-level review processes.
Conclusion
While the results from this study offer specific recommendations as to which types of land use planning policy practices are most highly associated with higher amounts of VMT and GHG emissions, there is also potential to reveal policy impacts that can be applied to integrated land use and transportation sustainability practices. The results of this research are expected to add to the existing body of knowledge by enabling faster and easier methods of examining the impact of adaptive planning strategies on alleviating the effects of household travel GHG emissions. The spatial cross-sectional regression model was developed through the integration of actual and scenario-based land use visioning and planning, demographic changes, transportation emission analysis, and computer forecasting and evaluation of future scenarios. This research makes it possible to assess the household travel GHG footprint and provides models and data for possible GHG emission mitigation through land use policies and changes. Although the results pertain to a specific dataset, they help transportation decision makers better connect land use development and the related household socioeconomics with their GHG emission characteristics. In particular, the household travel GHG emission quantification results contribute to the current body of knowledge in the following ways: (1) they provide accurate GHG emission results by using the best available traffic activity data inputs (VSP distributions) for emission modeling; and (2) they provide connections between household socioeconomics and the household travel GHG footprint. The research suggests important potential to provide solid grounds for the analysis and modeling of sustainable community strategies, adaptive planning policies, and many other policy-making applications.
Computational estimation from a statistical physics approach and its contributions to the Covid-19 in Colombia
The objective of this article is to present a computational estimation from a statistical physics approach and its contributions to the study of Covid-19 in Colombia. Based on the daily data on contagions, recoveries and deaths during the months of March to July, the behavior of the epidemic was estimated using the nonlinear regression method with least-squares curve fitting. Highlighting the benefits that this method presents in the study of physical phenomena, it was used in the present research to develop two types of models, exponential and Gaussian, and with these some predictions were made. The coefficients of determination of the exponential model were 0.9641 for contagions, 0.9400 for recoveries and 0.9788 for deaths, and those of the Gaussian model were 0.9799 for contagions, 0.9606 for recoveries and 0.9894 for deaths, showing a good correlation between the models and the real behavior of the pandemic, with the Gaussian model being the closest. This was also evidenced by comparing the forecasts of both models with the actual data for the first 13 days of August, concluding that the pandemic is beginning to mitigate and the curve is flattening out.
Introduction
Covid-19 is a virus unknown to mankind; its understanding rests on the hard work of countless researchers [1]. Many of them have analyzed the epidemic from a statistical point of view using different models, such as the Susceptible, Exposed, Infected and Recovered (SEIR) [2] and the Susceptible, Infected and Recovered (SIR) [3] models, which are specialized in the prediction of infectious diseases. However, these could generate imprecise results due to systematic variations in the prognosis curve and the complexity of the epidemic [2]; therefore, it is extremely important for science and humanity to have more research using different methods, such as non-linear regression modeling [4].
Non-linear regression is a method that uses least-squares curve fitting [5]. This method emerged from the fields of astronomy and geodesy; the first scientists to contribute to it were Carl Friedrich Gauss, Adrien-Marie Legendre and Robert Adrain in the eighteenth century [6]. It has been widely used in different areas of knowledge such as statistical mechanics, a discipline born in the nineteenth century with the contributions of Rudolf Clausius, James Clerk Maxwell and, especially, Ludwig Boltzmann [7]. All these scientists established the basis of statistical physics, and their contributions remain important in recent research, such as that carried out by Flórez and Laguado in computational fluid dynamics [8], by Plaza in the modeling of physical and natural phenomena [9], or by Vera, Delgado and Sepulveda in solar energy [10].
Statistical mechanics serves as a link between a macroscopic world treated as continuous and a microscopic world of discrete nature [6], much like the Covid-19 data: the discrete components are the data reported each day, and these describe behavior that can be modeled continuously over time [11]; this, in turn, motivates the statistical approach adopted in this work.
Materials and methods
The research was carried out using a descriptive and applied methodology that can be seen in Figure 1.
Collecting data
Recent studies, such as those conducted by Diaz Pinzón [13] and by Verbel, Mejía, Manjarres and Troncoso [14], show that the pandemic does not have a defined behavior. Figure 2, for example, shows the behavior of the pandemic in Colombia; it was made with data on contagions, recoveries and deaths reported by the Instituto Nacional de Salud (INS) [15]. Figure 2 shows that the Colombian curve has a growing exponential behavior. Although the effects of the pandemic were initially mitigated, the curve has been steadily increasing, reaching levels of contagion and death similar to those of countries that initially did not take preventive measures and were greatly impacted by the pandemic. In those countries, drastic measures were taken due to the high rates of contagion and death, and a flattening of the curve was witnessed. In Colombia, on the contrary, measures are being taken, but the curve continues to increase exponentially.
Modelling
Using non-linear regression implemented in a computational tool, two types of mathematical models were fitted: exponential [16] and Gaussian [5]. The exponential model was chosen because the Colombian curve did not yet show any kind of flattening, and the Gaussian model to estimate a flattening in the following months, as illustrated in the sketch below.
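As a rough illustration of this step, the following sketch fits both model families with SciPy's least-squares routine; the synthetic series and all parameter values are placeholders, not the authors' data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, a, b):
    # Cumulative-count model that keeps growing: a * exp(b * t)
    return a * np.exp(b * t)

def gaussian(t, a, mu, sigma):
    # Bell-shaped model whose curve eventually flattens
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

# Placeholder stand-in for the INS daily series (day 1 = March 6, 2020)
t = np.arange(1, 149, dtype=float)
y = 50 * np.exp(0.045 * t) + np.random.default_rng(0).normal(0, 200, t.size)

exp_p, _ = curve_fit(exponential, t, y, p0=(10.0, 0.04), maxfev=20000)
gau_p, _ = curve_fit(gaussian, t, y, p0=(y.max(), 200.0, 60.0), maxfev=20000)

def r_squared(y_obs, y_fit):
    # Coefficient of determination used to compare the two models
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("exponential R^2:", r_squared(y, exponential(t, *exp_p)))
print("Gaussian R^2:   ", r_squared(y, gaussian(t, *gau_p)))
```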
Model analysis
The number of terms in each model was chosen to obtain the best fit to the behavior of the curve and the highest coefficient of determination.
Prediction development
With the exponential model, predictions were made for August and September, considering that the curve was close to the flattening point; with the Gaussian model, predictions were made for August, September, October, November and December. For both models, the error between predicted and actual data was calculated for the first 13 days of August; a sketch of this calculation is given below.
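A minimal sketch of the forecast-versus-actual error computation, assuming previously fitted parameters; all numbers below are illustrative stand-ins, not the reported values.

```python
import numpy as np

def percent_error(actual, predicted):
    # Daily percentage error between a model forecast and reported data
    return 100.0 * np.abs(predicted - actual) / actual

# Hypothetical fitted parameters and synthetic "reported" values for
# the first 13 days of August (days 149-161 of the epidemic)
days = np.arange(149, 162, dtype=float)
exp_forecast = 55.0 * np.exp(0.044 * days)
gau_forecast = 19000.0 * np.exp(-((days - 210.0) ** 2) / (2 * 55.0 ** 2))
reported = gau_forecast * (1 + np.random.default_rng(1).normal(0, 0.01, days.size))

print("exponential mean error (%):", percent_error(reported, exp_forecast).mean())
print("Gaussian mean error (%):   ", percent_error(reported, gau_forecast).mean())
```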
Results and discussion
Below are the two types of models estimated from data collected between March 6, 2020, the day the first contagion was reported, and July 31, 2020. Figure 3 shows the exponential model for the contagion curve and Figure 4 the Gaussian model; the same modeling was applied to the recoveries and deaths data. The exponential model in Figure 3 is appropriate for predicting a possible increase in contagions until day 209, which corresponds to September 30th, 2020.
Models
The Gaussian model predicts a possible flattening of the curve between days 210 and 270, corresponding to August and September 2020; the same modeling was used for the recoveries and deaths curves. Two models were used for the forecasts. The exponential model predicts an increase in the number of contagions per day, a phenomenon evident in the real data, and, if not contained, more than 80000 contagions per day after September 30th, 2020. The Gaussian model, on the contrary, predicts a flattening of the curve, whose peak does not reach 20000 contagions per day, with fewer than 300 contagions per day expected by December 31, 2020.

Table 1 shows the determination coefficients of each model. They indicate a good correlation between the models and the real behavior of the pandemic, with the Gaussian model being the closer fit. Table 2 shows the predictions of the exponential model for selected days in August and September, and Table 3 the predictions of the Gaussian model for a possible flattening of the curve. These predictions refer to accumulated data per day: with each model we calculated the accumulated contagions, recoveries and deaths per day, and these were the values compared with the real ones. Notably, after a possible flattening of the Gaussian curve, there would be fewer contagions and deaths by December 31st, 2020 than the exponential model predicts for September 30th, 2020.

Table 4 compares actual contagions with both models for the first 13 days of August: the errors of the Gaussian model were all below 1%, while those of the exponential model were all above 1.5%. Table 5 makes the same comparison for recoveries; here the errors rose above 5% for both models after August 4th, 2020. This happens because the curve of recoveries per day does not seem to follow a defined pattern but remains between exponential and Gaussian behavior. For example, for August 13th, 2020, the exponential model predicted almost 300000 recoveries, the Gaussian model just over 200000, and the actual figure was 250971, roughly halfway between the two. Table 6 compares actual deaths with both models; the deaths curve followed the Gaussian model, since the errors were greater in the exponential model. Table 7 shows the average errors between the models and the real data during the first 13 days of August: the mean error is lower for the Gaussian model in contagions and deaths, and lower for the exponential model in recoveries.
Conclusions
Approaching the research from a statistical physics standpoint was successful, because the behavior of Covid-19 can be modeled as a macroscopic system, continuous in time, that depends on discrete elements reported day by day. The use of computational tools in statistical analysis and prediction facilitates model estimation, making the process more dynamic and shortening the time needed to reach conclusions that can support decision making. Differences between the prognostic curves and the actual data relate to the nation's response to the pandemic, the presence of asymptomatic patients, prevention measures, data collection capacity, processing times, and follow-up of infected patients; the results presented here are nonetheless relevant to the analysis of the pandemic. The determination coefficients showed a good correlation between the models and the pandemic, with the Gaussian model being the most accurate, allowing us to infer that the pandemic will begin to show mitigation from day 200, equivalent to the second week of September 2020.
The behavior of the pandemic depends on the actions taken by the authorities to mitigate contagion and on citizens' compliance with those actions: the exponential curve shows that, if the current growth is not mitigated, Colombia could have 2390798 contagions and more than 68000 deaths by September 30th, 2020. If the pandemic instead follows Gaussian behavior, more than 2 million contagions and about 56800 deaths are expected by December 31st, 2020, far fewer than the exponential curve predicts, but still worrying figures that should lead the country to strengthen its prevention measures.
"Computer Science",
"Physics"
] |
APPLICATION OF GOOGLE TRANSLATE FOR WRITING THESIS ABSTRACT IN ENGLISH (GRAMMAR ERROR ANALYSIS)
Google Translate is a popular translation tool used by people around the world. It not only offers translation between many languages, it can also speed up translation work. Under the new learning norms, Google Translate is considered the easiest way to facilitate translation work and to assist students in completing academic assignments. A preliminary survey of fifteen English thesis abstracts of postgraduate students found many mistakes in the writing of tenses. This research aims at identifying the use of tenses/grammar in English thesis abstracts. The research is descriptive, using a documentation study approach. The population was all English thesis abstracts in the postgraduate program of Jakarta Islamic University; the sampling was purposive, and the data were analyzed descriptively. The results showed that inappropriate use of tense exceeded appropriate use in the background, research method, and result (findings) sections, while appropriate use exceeded inappropriate use in the discussion and in the conclusion and suggestion sections.
INTRODUCTION
One of the obligations of students as a graduation requirement is to write a scientific paper. Scientific writing is written work arranged systematically, according to established rules, based on the results of scientific thinking. Scientific works are divided into several types, namely research reports, papers, scientific articles, theses, dissertations and proposals (Jauhari, 2010). One of the important elements of scientific writing is the abstract, because the abstract is a summary containing the essence of the work and a benchmark for its content. Whether readers will be interested in reading a scientific work is also determined by its abstract.
The abstract is placed on the first page of an article with the aim of helping readers quickly and reliably grasp the purpose and content of a study (Polontalo, 2013). It is a brief description of a scientific paper that includes the background, the problem of the study, the method used, the results obtained, conclusions, and suggestions.
Abstracts are written in two languages: Indonesian and English. Writing the abstract in English responds to the progress of science and technology, since scientific works published via the internet can be accessed globally. Therefore, translating the abstract from Indonesian to English is crucial. The English version must be written in accordance with the rules of correct English, in this case with the appropriate tenses. Many errors in sentence tenses appear in abstracts, largely because most students translate the abstract from Indonesian into English word by word, which is not in accordance with the rules of translation.
Translating is an act of transferring text from the source language into the target language in a certain context (Foster, 2008). Translation is a process and method used to convey the meaning of the original language in the target language, focused on the idea of meaning and good grammar (Ghazala, 2015).
Today's technology keeps growing and is used in most aspects of life, including teaching and learning. In the new situation sweeping the world, technology is seen as a very important and necessary aspect. However, the technology used does not necessarily guarantee the expected results and may require human intervention, as can be seen in the use of Google Translate. Google Translate has been in use since it was introduced by Google in April 2006. It is a translation tool now widely used throughout the world by all groups, ages, backgrounds and professions, and students in higher education institutions are no exception (Lubna Abd Rahman & Arnida A. Bakar, 2018).
University students in Indonesia also use Google Translate as their translation tool to prepare academic assignments, including translating their thesis abstracts from Indonesian to English. This paper discusses the results of a grammar error analysis of postgraduate students at Jakarta Islamic University in writing the English abstracts of their theses.
Based on the study conducted by Kusumawati and Sugiarsi (2020), entitled "Analisis Penulisan Abstrak Bahasa Inggris Pada Karya Tulis Ilmiah Mahasiswa D3 Rekam Medis Dan Informasi Kesehatan Stikes Mitra Husada Karanganyar", which examined fifteen English thesis abstracts from the 2019/2020 academic year, many mistakes in the writing of tenses were still found. This research aims at identifying the use of tenses in the sentences of English thesis abstracts. On this basis, it was necessary to carry out research under the title "Application of Google Translate For Writing Thesis Abstract in English".
The process of translating from one language to another using Google Translate has indirectly made it the most popular translation tool in the world. Likewise, students who write their thesis abstracts in English consider Google Translate the easiest and fastest way to do so. Especially in a pandemic situation, Google Translate is seen as a trusted savior and helper for producing English abstracts. But does Google Translate really help students in preparing their assignments? In terms of speed, it is undeniable that Google Translate can translate in the blink of an eye. In terms of accuracy of meaning, however, is Google Translate able to produce a translation that is truly correct, or at least appropriate to the target culture and language? This is the problem.

The academic literature uses abstracts to concisely communicate complex research. An abstract can act as a stand-alone entity in place of a full paper. Thus, abstracts are used by many organizations as a basis for selecting proposed research for presentation in the form of posters, platform/oral presentations or workshop presentations at academic conferences. Most bibliographic databases index only abstracts rather than providing the entire text of the paper.
Abstracts can convey the main results and conclusions of research works such as theses, dissertations and scientific articles. An abstract allows one to screen a large number of papers for those likely to be relevant to the researcher's work. Once papers have been selected on the basis of their abstracts, they should be read carefully to evaluate their relevance. It is generally agreed that reference citations should not be based on the abstract alone, but on the content of the entire paper.
According to the 2015 scientific paper writing guidelines, the abstract is a brief description of the reasons the research was conducted, the method or approach chosen, the important results, and the main conclusions of scientific writing activities such as theses and dissertations.
In abstract writing, we must also understand the nature and elements of the abstract. Abstracts are informative and descriptive, meaning that the data or information they contain is based on existing data and facts; it is not recommended to include information that lacks correct data and facts.
Limitless communication and the use of social media have made the world feel smaller and smaller. Regardless of location, country, or nation, communication no longer requires a face-to-face meeting, and knowledge and information can be searched more easily through cyberspace. However, the language problem still exists, and translation therefore remains of interest (Kusumawati & Sugiarsi, 2020).
Loutayf & Soledad (2017) stated that human translation is central to communication between culture and language. There are times when human translation is seen as time consuming and complicated. In this case, Google Translate is seen as the best solution and easy to achieve. Google Translate is a text translation that is done by a computer without involving humans. Google Translate is also known as fast translation which produces translations in very short time. Users only need to enter words, phrases, sentences, paragraphs and continue to be translated by the selected translation tool. Doro (2013) Stated that one of the most popular translation tools is Google Translate. There are also many language options offered. Google Translate is not only used in communication but it is also used as a learning tool for most students. Students who write abstract on their theses are no exception using Google Translate. English is one of several languages used in writing research abstracts. A translation tool is a translation provided by a computer, without human assistance or not. The use of this translation tool is very widespread because it does not need to be paid for and is easily accessible without calculating place and time.
Furthermore, this translation tool is capable of translating words, phrases and sentences without the need to consult a dictionary, and it offers many languages, more than 50 (Supatranonta & Pisamai, 2012). Using Google Translate as a translation tool can sometimes help with translation work; however, some of the translations it gives do not accord with the context and culture of the target language. The results obtained with a translation tool are simply different from those produced by humans: the same impression as in the source text is easier to convey through human translation.
Translators should not rely solely on translation tools. Terzi and Arslanturk (2014) state that translation tools only speed up the translation process, while the meaning is not necessarily correct. This is also supported by Zhen-ye and Ning (2008), who state that the results of the translation process by Google Translate are still limited and do not yet provide the expected translation quality. The quality of translation from one language to another is not always the same.
Abstracts in the English version must be written in accordance with good English writing rules for abstracts in scientific writings, and sentences in the abstract must use the appropriate tenses. As in Example 4 ("Imran: He is located in section 7, Kota Wisata"), Google Translate translated the pronoun referring to a house as "he", assigning it a human gender. This is the wrong personal pronoun: the pronoun "it" should be used to refer to the house.
Grammar Errors (tenses)
The analysis also shows grammatical errors in the tenses used.

Example 5. SL: Saya tidur larut malam tadi malam, sekitar jam 3.00 pagi. Biasanya saya tidur jam 9.00 malam. TL: I go to bed quite late last night, around 3.00 a.m. I usually go to bed at 9.00 pm.

Example 6. SL: Aisya, Anda pergi kemana kemaren? TL: Aisyah: Where you go yesterday? SL: Saya pergi ketaman kemaren. TL: Fifi: I go to the park yesterday.

Example 7. SL: Raja: Kapan Anda tiba? TL: Raja: When you arrive? SL: Farhana: Saya tiba tadi malam jam 6.00. TL: Farhana: I arrive at 6 o'clock in the evening.

Examples 5 to 7 contain errors in the Target Language. Words marking the past tense in Indonesian require no auxiliary, whereas English requires either the auxiliary did (for questions) or the second verb form (for statements); Bibard (2019) notes that past-tense statements change the verb into verb 2, while past-tense questions are formed with the auxiliary did. The sentences in these examples show mistakes of both kinds: students did not add the auxiliary did for past questions and did not change the verb into verb 2 for statements. These errors arise because Indonesian past-tense grammar requires neither the additional word did for question clauses nor verb changes such as go-went for irregular verbs or the -ed ending for regular verbs such as arrive-arrived.
Before making an abstract, several things must be considered; first, establish a general abstract standard, since each particular purpose has its own requirements. For example, for an international seminar the format or template will differ slightly depending on the organizer, as it does for scientific journals. The general provisions for making an abstract are:
1. The word count is about 250 words.
2. Choose British English or American English and be consistent.
3. Include an introduction, research objectives, methods, results and discussion (if necessary), and conclusions.
4. Tenses: introduction = present tense; research objective = past tense; method = past tense; result = past tense; conclusion = present tense.
With an abstract in the standard format above, we only need to refine it according to the requested format. The mistake that often occurs is using Google Translate to produce the English version of the abstract from the Indonesian without refining it afterwards.
This paper is written with two objectives: (i) to find out the types of errors made by students when writing a thesis abstract in English, and (ii) to analyze the errors in abstracts translated from Indonesian to English using Google Translate.
METHODOLOGY
The research design is observational with a descriptive approach, that is, data collection through observation of scientific paper documents with the aim of describing the suitability of abstract writing in English.
The population in this study was all English abstracts in the theses of postgraduate students at Jakarta Islamic University. The sampling technique was purposive sampling, yielding a sample of 40 abstracts. Data were collected through document study using a checklist. The data analysis was descriptive, providing a description of the parts consisting of the introduction, research objectives, methods, results and discussion (if necessary), and conclusions. This qualitative research used two analyses: (i) text analysis, used to examine the abstracts written by the students, and (ii) error analysis, to identify the errors contained in the English sentences used.
RESULT
This research in particular can help students in writing English abstracts. Sentence examples can be used to show students' mistakes. The results of the analysis can also be shown to students so that they are careful when using Google Translate. Students also need to be reminded to pay attention to their writing results, especially after using Google Translate.
This research is limited to the English abstracts written by students in their theses and involves only error analysis of those abstracts. It was conducted from January 2021 to July 2021. Only abstracts containing sentences in English were analyzed; abstracts not written in English were excluded. Every English sentence was checked against its Google Translate rendering, and tenses were found whose translation did not conform to English writing rules for abstracts in scientific writings. Table 1 shows that the highest use of inappropriate tenses was found in the research method component, in 24 abstracts (60%) of the research articles; the error lies in the use of the Present Tense. The next highest error was in the abstract background, namely the use of the Simple Present Tense to refer to the preliminary survey that had been conducted, in 21 abstracts (52.5%). Errors in using the Simple Present Tense to describe the research activity occurred in 23 abstracts (57.5%). Likewise, tense errors occurred in the writing of research results in 22 abstracts (55%), namely the use of the Simple Present Tense. Most of the tenses used in writing abstract conclusions were correct, with 25 abstracts (62.5%) using the Simple Past Tense, while 15 (37.5%) were incorrect, using the Simple Present Tense.
DISCUSSION
Based on the description and analysis of the English thesis abstract data, a picture was obtained of the tense errors contained in the introduction, research methods, results (findings), discussion, conclusions, and suggestions. The first finding was the use of tenses in the introduction of the English abstracts, which included appropriate and inappropriate cases. The right tenses were used in 19 introductions (47.5%), namely the Simple Past Tense to explain what the researcher had done in the research.

The words found and there were indicate the Simple Past Tense, used to express an event completed in the past. Meanwhile, 21 abstracts (52.5%) used tenses inappropriately, namely the Simple Present Tense to refer to the preliminary survey that had been carried out: the word shows indicates the Simple Present Tense while referring to survey activities already completed by the researcher. The use of shows in such a sentence is not appropriate because the survey was conducted before the research, so the correct word is showed, the past form (Verb 2) of show. The Simple Present Tense in the introduction is limited to stating facts and truths that remain relevant today. The second finding was the use of tenses in the research method section. The right tenses were found in 16 abstracts (40%); the appropriate tense is the Simple Past Tense, as in "The population in this study were the whole new students at ..." and "The instruments used a checklist and guidelines of unstructured interview". The words were and used show the Simple Past Tense, where were is the past form of to be and used is the past form (Verb 2) of use. They are appropriate because the past tense in the method section of an abstract explains what the researcher did in implementing the research approach. There were also 24 abstracts (60%) whose tenses were not appropriate, such as "This study uses a cross-sectional approach." and "The instrument in this research is a questionnaire". These sentences describe activities the researcher carried out, so uses and is (Present Tense) are not appropriate; the sentences should use the Simple Past Tense: "This study used a cross-sectional approach." and "The instrument in this research was a questionnaire". The Simple Present Tense in the research methods section is limited to describing standard procedures and activities in research.
The third finding was the use of tenses in the results section. The right tenses were found in 18 abstracts (45%), for example "The result showed that ...", where showed is correct because it is the past form (Verb 2) of the root word (Verb 1) show. Meanwhile, 22 abstracts (55%) used the wrong tense, namely the Simple Present Tense to state the findings: "The result of research shows that the word wall picture has been ...", "The result of research shows that social class and cost of living still below standard ...", and "The result of research shows that most of students' knowledge on teaching method is ...".

In all three sentences above, the word shows marks the Simple Present Tense in the results section. In that context shows is not appropriate because it refers to the outcome of an activity done in the past. It should be written as showed (Simple Past Tense), so that the sentences become: "The result of research showed that the word wall picture has been ...", "The result of research showed that social class and cost of living still below standard ...", and "The result of research showed that most of students' knowledge on teaching method is ...".
The fifth finding was the use of tense in the conclusions and suggestions section. In 18 abstracts the writers did not include conclusions and suggestions. The right tense was used in 25 abstracts (62.5%), while in 15 (37.5%) the tense was not correct, such as "The conclusion of the study is that the management of risk in the university had not been carried out ..." and "The suggestion in this study is that it was better to conduct socialization or training to ...". The use of is (Simple Present Tense) in the conclusion of an abstract is appropriate because it refers to the gist, summary and implications of the findings; the use of is in the suggestion is also appropriate because it states something general, which in fact may be implemented or not.
The outcomes of this analysis are close to the research conducted by Erna Adita Kusumawati and Sri Sugiarsi, entitled "Analisis Penulisan Abstrak Bahasa Inggris Pada Karya Tulis Ilmiah Mahasiswa D3 Rekam Medis Dan Informasi Kesehatan Stikes Mitra Husada Karanganyar". That study concluded that the highest imprecision in tense use was found in the research methods component, in 19 abstracts (63.3%) of the research articles, with the error lying in the use of the Present Tense. The next highest error was in the introduction of the abstract, namely the use of the Simple Present Tense to refer to a preliminary survey already conducted, in 16 abstracts (53.3%). Similarly, tense misuse occurred in the writing of research results, namely the use of the Simple Present Tense. Most of the tenses used in writing abstract conclusions were correct, with 20 abstracts (66.6%) using the Simple Past Tense, while 10 (33.3%) were incorrect, using the Simple Present Tense.
CONCLUSION
This research showed that there were inaccuracies and errors in the writing of a thesis abstract in English when Google Translate was used as the translation tool. The accuracy of meaning based on context and grammar is not handled by this tool. To overcome this problem, students should understand correct English grammar; if they still want to use Google Translate, they need to check the accuracy of the translation they obtain.
Based on the data and research findings, several conclusions can be drawn. Errors in the use of tense in the English abstracts of the scientific writings (theses) of Master of Islamic Religious Education postgraduate students at the Islamic University of Jakarta consist of both appropriate and inappropriate tense use. Inappropriate use exceeded appropriate use in the introduction, research methods, and research results (findings), while appropriate use exceeded inappropriate use in the discussion, conclusions and suggestions.
"Linguistics",
"Computer Science"
] |
A robust approach for multi-type classification of brain tumor using deep feature fusion
Brain tumors can be classified into many different types based on their shape, texture, and location. Accurate diagnosis of the brain tumor type helps doctors develop appropriate treatment plans to save patients' lives, so improving the accuracy of this classification system is crucial. We propose a deep feature fusion method based on convolutional neural networks to enhance the accuracy and robustness of brain tumor classification while mitigating the risk of over-fitting. Firstly, the feature outputs of three pre-trained models, ResNet101, DenseNet121, and EfficientNetB0, are adjusted so that the extracted features of the three models have the same shape. Secondly, the three models are fine-tuned to extract features from brain tumor images. Thirdly, pairwise summation of the extracted features is carried out to achieve feature fusion. Finally, brain tumors are classified based on the fused features. The public Figshare (dataset 1) and Kaggle (dataset 2) datasets are used to verify the reliability of the proposed method. Experimental results demonstrate that the fusion of ResNet101 and DenseNet121 features achieves the best performance, with classification accuracies of 99.18 and 97.24% on the Figshare and Kaggle datasets, respectively.
Introduction
In recent years, the rising incidence and mortality rates of brain tumor diseases have posed significant threats to human well-being and life (Satyanarayana, 2023). Because of the different causes and locations of brain tumors, treatment methods vary widely. Additionally, the severity of lesions significantly impacts the efficacy of treatment. Therefore, it is very important to determine the type and severity of brain tumor lesions before developing a treatment. With the development of modern technology, Computer-Aided Diagnosis (CAD) plays an increasingly important role in the medical diagnosis process (Fujita, 2020; Gudigar et al., 2020; Sekhar et al., 2022). The diagnosis and analysis of brain tumor magnetic resonance imaging (MRI) images by physicians based solely on personal experience is not only inefficient but also subjective and prone to errors, leading to misleading results (Chan et al., 2020; Arora et al., 2023). Consequently, enhancing the efficiency and accuracy of computer-aided diagnosis for brain tumors has emerged as a prominent research hotspot in the field of brain tumor-assisted diagnosis.
Traditionally, the classification of medical images consists of several stages, including image pre-processing, image segmentation, feature extraction, feature selection, classifier training and image classification (Muhammad et al., 2021; Yu et al., 2022). Nevertheless, with the emergence of deep learning theory in recent years, more and more researchers have applied deep learning to medical image processing (Maurya et al., 2023). Deep learning has been employed widely in the analysis and diagnosis of diverse diseases (Cao et al., 2021; Gu et al., 2021; Lin et al., 2022; Yang, 2022; Yao et al., 2022; Zolfaghari et al., 2023). Convolutional Neural Networks (CNNs) are widely recognized as one of the most prominent deep learning techniques. By taking images directly as input, CNNs mitigate the low classification accuracy that results when humans select unrepresentative features.
Medical images are usually difficult to obtain, and the amount of image data is relatively small (Shah et al., 2022). Although training an effective deep learning model typically necessitates a substantial amount of data, transfer learning can address the issue of limited dataset size and expedite the training process; it has therefore been widely used in the medical field (Yu et al., 2022). Yang et al. (2018) utilized AlexNet and GoogLeNet for glioma grade classification; their results demonstrated that CNNs trained with transfer learning and fine-tuning achieved better glioma grading than both traditional machine learning methods relying on manual features and CNNs trained from scratch. Swati et al. (2019) and Zulfiqar et al. (2023) employed VGG19 and EfficientNetB2, respectively, for the classification of brain tumors. Arora et al. (2023) examined the classification performance of 14 pre-trained models for the identification of skin diseases; DenseNet201 obtained the best classification performance with an accuracy of 82.5%, while ResNet50 exhibited the second-highest accuracy at 81.6%. In Aljuaid et al. (2022), ResNet18, ShuffleNet, and Inception-V3Net models were used to classify breast cancer, with ResNet18 showing excellent performance at an accuracy of 97.81%.
However, relying on a single model often results in overfitting on the training set and poor generalization on the test set, which diminishes the model's robustness. Therefore, to address the limitations of relying on a single model, this paper proposes a model integration technique: three pre-trained models, ResNet101, DenseNet121, and EfficientNetB0, are used to extract features from brain tumor images; the extracted features are then fused by summation, and the fused features are classified. The main contributions of this paper are as follows:
1. An image classification method for brain tumors based on feature fusion is proposed.
2. The feature outputs of the three pre-trained models are adjusted to have consistent dimensions.
3. Feature fusion is accomplished through summation.
4. The validity of the method is verified on two publicly available datasets, the Figshare dataset (Cheng et al., 2015), referred to as dataset 1, and the Kaggle dataset (Bhuvaji et al., 2020), referred to as dataset 2, where the model outperforms other state-of-the-art models.
Related work
There have been many studies on the classification of brain tumors. Alanazi et al. (2022) constructed a 22-layer CNN architecture: the model was first trained on a large dataset for binary classification, then its weights were adjusted and it was evaluated on dataset 1 and dataset 2 using transfer learning, achieving accuracies of 96.89 and 95.75%, respectively. Hammad et al. (2023) constructed an 8-layer CNN model, which achieved an accuracy of 99.48% for binary classification of brain tumors and 96.86% for three-class classification. Liu et al. (2023) introduced the self-attention similarity-guided graph convolutional network (SASG-GCN) model to classify multi-type low-grade gliomas; the model builds graphs from 3D MRI data using a deep convolutional network and a self-attention similarity-based method, and achieved an accuracy of 93.62% on the TCGA-LGG dataset. Kumar et al. (2021) employed the pre-trained ResNet50 model for brain tumor classification, achieving a final accuracy of 97.48% on dataset 1. Swati et al. (2019) discussed the merits and demerits of conventional machine learning and deep learning techniques and introduced a block-wise fine-tuning approach leveraging a pre-trained deep convolutional neural network model; through fine-tuning, they achieved an accuracy of 94.82% on dataset 1 using the VGG19 architecture. Ghassemi et al. (2020) employed a pre-trained generative adversarial network (GAN) for feature extraction in the classification of brain tumors, yielding an accuracy of 95.6% on dataset 1. Saurav et al. (2023) introduced a novel lightweight attention-guided convolutional neural network (AG-CNN) incorporating a channel attention mechanism; the model achieves accuracies of 97.23 and 95.71% on dataset 1 and dataset 2, respectively.
Integration of models is a feasible solution. In Hossain et al. (2023), an ensemble model, IVX16, was proposed based on averaging the classification results of three pre-trained models (VGG16, InceptionV3, Xception); the model achieved a classification accuracy of 96.94% on dataset 2, and a comparison with Vision Transformer (ViT) models revealed that IVX16 outperforms them. Tandel et al. (2021) presented a majority-voting method: five pre-trained convolutional neural networks and five machine learning models classify brain tumor MRI images into different grades and types, and a majority-voting ensemble algorithm then combines the predictions of the ten models to optimize overall classification performance. In Kang et al. (2021), nine pre-trained models (ResNet, DenseNet, VGG, AlexNet, InceptionV3, ResNeXt, ShuffleNetV2, MobileNetV2, and MnasNet) were used to extract features, which were forwarded to machine learning classifiers; the three best-performing deep features were selected, concatenated along the channel dimension, and sent to both a machine learning classifier and a fully connected (FC) layer, achieving an accuracy of 91.58% on dataset 2. Alturki et al. (2023) employed a voting-based approach to classify brains as healthy or tumorous, using a CNN to extract tumor features and logistic regression and stochastic gradient descent as classifiers, with soft voting used to achieve high classification accuracy. Furthermore, combining CNNs with machine learning classifiers offers a potential way to enhance model performance. In Sekhar et al. (2022), image features were extracted using GoogLeNet and classified with both support vector machines (SVM) and K-Nearest Neighbors (KNN); KNN ultimately outperformed SVM, achieving a model accuracy of 98.3% on dataset 1. Deepak and Ameer (2021) employed a hybrid approach combining a CNN and SVM to classify three distinct types of brain tumors: they designed a CNN comprising five convolutional layers and two fully connected layers, extracted features from its first fully connected layer, and performed classification with SVM, achieving a classification accuracy of 95.82% on dataset 1. In Özyurt et al. (2019), a hybrid approach called Neutrosophy and Convolutional Neural Network (NS-CNN) was used to classify tumor regions segmented from brain images into benign and malignant categories: the MRI images are first segmented with the neutrosophic set expert maximum fuzzy-sure entropy (NS-EMFSE) method, and the features of the segmented images are then extracted by a CNN and classified with SVM and KNN classifiers. The experiments showed that CNN features combined with SVM yielded the best classification performance, with an average accuracy of 95.62%. Gumaei et al. (2019) introduced a brain tumor classification method based on hybrid feature extraction with a regularized extreme learning machine (RELM): hybrid features are extracted from the brain images and RELM classifies the tumor type, achieving 94.233% classification accuracy on dataset 1. Öksüz et al. (2022) introduced a method that combines deep and shallow features: deep features were extracted with the pre-trained AlexNet, ResNet-18, GoogLeNet, and ShuffleNet models, a shallow network was developed to extract shallow features, and the two were fused; the fused features were used to train SVM and KNN classifiers, achieving a classification accuracy of 97.25% on dataset 1. Demir and Akbulut (2022) developed a residual convolutional neural network (R-CNN) to extract deep features, applied the L1-Norm SVM ReliefF (L1NSR) algorithm to identify the 100 most discriminative features, and used SVM for classification; the achieved classification accuracies for the 2-class and 4-class data were 98.8 and 96.6%, respectively.
Moreover, the hyperparameters of a model can be optimized with an optimization algorithm. In Ren et al. (2023), preprocessing, feature selection, and artificial neural networks were employed for the classification of brain tumors, with a water strider courtship learning optimization algorithm used to optimize both the feature selection and the neural network parameters; the method was evaluated on the "Brain-Tumor-Progression" database, obtaining a final classification accuracy of 98.99%. SbDL was utilized by Sharif et al. (2020) for saliency map construction, while deep feature extraction was performed with the pre-trained Inception V3 CNN model; the concatenation vector was optimized using Particle Swarm Optimization (PSO) and classified with a softmax classifier, and the method was validated on the Brats2017 and Brats2018 datasets with an average accuracy of more than 92%. Nirmalapriya et al. (2023) employed a combination of U-Net and CFPNet-M for segmenting brain tumors into four distinct classes, using Aquila Spider Monkey Optimization (ASMO) to optimize the segmentation model and SqueezeNet models optimized by Spider Monkey Optimization (SMO), the Aquila Optimizer (AO), and Fractional Calculus (FC); the model achieved a test accuracy of 92.2%. Nanda et al. (2023) introduced the Saliency-K-mean-SSO-RBNN model, which comprises K-means segmentation, a radial basis neural network, and the social spider optimization algorithm: the tumor region is segmented by K-means clustering; features are extracted from the segmented image through multiresolution wavelet transform, principal component analysis, kurtosis, skewness, the inverse difference moment (IDM), and cosine transforms; the clustering centers are refined with social spider optimization (SSO); and the feature vectors are classified with the radial basis neural network (RBNN). The final model achieves classification accuracies of 96, 92, and 94% on the three respective datasets.
Materials and methods
This paper utilizes three pre-trained models, namely ResNet101, DenseNet121, and EfficientNetB0. The outputs of these models are adjusted to ensure consistent data size, the extracted features are fused, and feature classification is then performed. To achieve consistent output from the feature extraction modules of all models, we harmonized the feature extraction modules of EfficientNetB0 and ResNet101 with DenseNet121 by utilizing a 1 × 1 convolutional layer.
The MRI data consists of two-dimensional images with a size of 512 × 512, whereas the pre-trained models require RGB input. Therefore, the images were resized to dimensions of 224 × 224 × 3, and min-max normalization was adopted to scale the intensity values to the range [0, 1]. Dataset 2 was processed in the same way. We divided the data into a training set and a test set with a ratio of 8:2. A minimal sketch of this preprocessing is given below.
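The following is a minimal sketch of the preprocessing just described, assuming one 512 × 512 grayscale slice in a NumPy array; the interpolation mode and the random seed are assumptions beyond what the text states.

```python
import numpy as np
import torch
import torch.nn.functional as F

def preprocess(slice_2d: np.ndarray) -> torch.Tensor:
    """Resize a 512x512 grayscale slice to 224x224, min-max scale to [0, 1],
    and replicate it to three channels for the pre-trained backbones."""
    x = torch.from_numpy(slice_2d).float()[None, None]            # (1, 1, 512, 512)
    x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)                # min-max normalization
    return x.repeat(1, 3, 1, 1).squeeze(0)                        # (3, 224, 224)

# 8:2 train/test split over an index set; dataset 1 (Figshare) has 3,064 images
rng = np.random.default_rng(42)
idx = rng.permutation(3064)
split = int(0.8 * len(idx))
train_idx, test_idx = idx[:split], idx[split:]
```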
Architecture of the proposed method
Transfer learning is a machine learning technique that leverages the knowledge acquired during training on one problem to train on another task or domain. The transfer learning approach, which utilizes pre-trained network knowledge obtained from extensive visual data, is very advantageous in terms of time-saving and achieves superior accuracy compared with training a model from scratch (Yu et al., 2022; Arora et al., 2023).
ResNet, DenseNet and EfficientNet have proved to be very effective brain tumor classification models (Zhang et al., 2023; Zulfiqar et al., 2023). The brain tumor classification accuracies of VGG19 and ResNet50 are 87.09 and 91.18%, respectively (Zhang et al., 2023), and the accuracy of GoogLeNet is 94.9% (Sekhar et al., 2022). We also tested the ability of ResNet101 and EfficientNetB0 for brain tumor classification, whose accuracies are 96.57 and 96.41%, respectively. The comparison shows that ResNet101, DenseNet121 and EfficientNetB0 are more accurate, so they were chosen as the base models.
Figure 1 depicts the framework of the proposed method. Firstly, the brain tumor data is processed and the images are adjusted. Secondly, features are extracted from the brain tumor images using the pre-trained models. Finally, the extracted features are aggregated for feature fusion and then classified. Specifically, ResNet101, DenseNet121, and EfficientNetB0 serve as pre-trained models; the outputs of the ResNet101 and EfficientNetB0 feature extraction layers are adjusted to dimensions of (1,024, 7, 7); brain tumor feature fusion is accomplished by pairwise summation of the extracted features; and the fused features are classified using a linear classifier. A minimal sketch of this fusion path is given below.
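A minimal PyTorch sketch of the described fusion path for the ResNet101 + DenseNet121 pair; the pooling layer and the exact classifier head are assumptions, since the text specifies only the 1 × 1 adjustment, element-wise summation, and a linear classifier.

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    """Sketch of the ResNet101 + DenseNet121 summation-fusion classifier."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        resnet = models.resnet101(weights="IMAGENET1K_V1")
        self.resnet_features = nn.Sequential(*list(resnet.children())[:-2])  # (B, 2048, 7, 7)
        self.adapt = nn.Conv2d(2048, 1024, kernel_size=1)   # 1x1 conv to match (1024, 7, 7)
        densenet = models.densenet121(weights="IMAGENET1K_V1")
        self.densenet_features = densenet.features          # (B, 1024, 7, 7)
        self.pool = nn.AdaptiveAvgPool2d(1)                 # assumed pooling before the head
        self.classifier = nn.Linear(1024, num_classes)      # linear classifier on fused features

    def forward(self, x):
        f1 = self.adapt(self.resnet_features(x))
        f2 = self.densenet_features(x)
        fused = f1 + f2                                     # element-wise summation fusion
        return self.classifier(self.pool(fused).flatten(1))

logits = FusionNet(num_classes=3)(torch.randn(2, 3, 224, 224))  # shape (2, 3)
```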
Pre-trained models
As a fundamental component of neural network architecture, the convolutional layer extracts features by sliding a fixed-size convolutional kernel over the input image and multiplying the kernel parameters with the underlying image values. The effect of the convolution depends on additional parameters, primarily the stride, the padding, and the size of the convolution kernel. The size of the output features from the convolutional layer can be calculated using Equation (1):

H_{out} = \left\lfloor \frac{H_{in} + 2 \cdot padding - kernel\_size}{stride} \right\rfloor + 1 \quad (1)

with W_{out} computed analogously from W_{in}, where H_{in} and W_{in} represent the dimensions of the input data, padding refers to the number of zero-padding layers, kernel_size represents the dimension of the convolution kernel, and stride represents the step size of the convolution operation. The formula indicates that when kernel_size is set to (1, 1), stride to 1 and padding to 0, the output dimension of the convolutional layer remains unchanged.
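A one-line helper implementing Equation (1), confirming the identity case mentioned above; purely illustrative.

```python
def conv_output_size(h_in: int, padding: int, kernel_size: int, stride: int) -> int:
    # Equation (1): floor((H_in + 2*padding - kernel_size) / stride) + 1
    return (h_in + 2 * padding - kernel_size) // stride + 1

# A 1x1 kernel with stride 1 and no padding leaves the feature size unchanged
assert conv_output_size(7, padding=0, kernel_size=1, stride=1) == 7
```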
ResNet101
Residual networks (ResNet) are a widely recognized and straightforward family of models for deep learning tasks, particularly image recognition (He et al., 2016). As the number of network layers increases, the problem of vanishing gradients may arise, resulting in performance saturation and degradation of the model. Deep residual networks address this issue by incorporating skip connections between layers to mitigate information loss. The core idea is to add a path parallel to the main convolutional path, which combines the features of a later convolutional layer with those of an earlier layer within the same residual block, allowing a deeper network model. Within the residual network, each building block performs an identity mapping, and the resulting features are element-wise summed across the convolutional layers preceding and following the skip connection.
DenseNet121
The DenseNet convolutional neural network model was proposed by Huang et al. (2017). The network builds on the ResNet structure, but incorporates dense connections between all preceding and subsequent layers. Another significant aspect of DenseNet is the reuse of features through channel connections: every layer receives the feature maps of all preceding layers as input, and its output feature maps are used as input by each subsequent layer. In ResNet the features of each block are combined by summation, whereas in DenseNet feature aggregation is accomplished through concatenation. Figure 3 shows the fundamental framework of the DenseNet121 model. The core of the network is the repeated combination of Dense Blocks and Transition Layers, forming the intermediate structure of DenseNet. Additionally, the topmost part of DenseNet consists of a 7 × 7 convolutional layer with a stride of 2 and a 3 × 3 MaxPool2d layer with a stride of 2. The output dimension of the feature extraction layer of the model is (1,024, 7, 7).
EfficientNetB0
The EfficientNet model was proposed by the Google AI research team in 2019 (Tan and Le, 2019). In contrast to traditional scaling methods, in which the width, depth, and resolution of a deep CNN architecture are increased arbitrarily to enhance performance, EfficientNets improve network performance by scaling the network's depth, width, and input resolution at fixed ratios. The calculations are as follows [Equations (2-6)]:

depth: d = α^φ (2)
width: w = β^φ (3)
resolution: r = γ^φ (4)
subject to: α · β² · γ² ≈ 2 (5)
α ≥ 1, β ≥ 1, γ ≥ 1 (6)

where α, β, and γ are obtained by a hyperparameter grid-search technique and determine the allocation of additional resources to the depth, width, and resolution of the network, and φ is a user-specified coefficient that controls the amount of additional resources used for model scaling. Figure 4 shows the structure of the EfficientNetB0 model. In order to transform the feature output of the EfficientNetB0 model from its original dimension of (1,280, 7, 7) to the desired dimension of (1,024, 7, 7), a 1 × 1 convolution with 1,024 convolution kernels is applied.
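For illustration, the compound-scaling rule can be evaluated numerically; the α, β, γ values below are those reported by Tan and Le (2019) for EfficientNet, with φ = 0 recovering the B0 baseline.

```python
# Reported EfficientNet constants; phi = 0 recovers the B0 baseline
alpha, beta, gamma = 1.2, 1.1, 1.15

def scale(phi: float):
    # Equations (2-4): depth, width and resolution multipliers
    return alpha ** phi, beta ** phi, gamma ** phi

print(scale(0))                        # (1.0, 1.0, 1.0) -> EfficientNetB0
print(scale(1))                        # ~ (1.2, 1.1, 1.15) -> one scaling step
print(alpha * beta ** 2 * gamma ** 2)  # ~ 1.92, close to the constraint of 2 in Eq. (5)
```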
Training of CNNs
Training a convolutional neural network combines forward and backward propagation: the computation starts at the input layer and propagates forward layer by layer until it reaches the classification layer, and the error is then propagated back to the first layer of the network. In layer l of the network, input from neuron j of layer l−1 is received along the forward path, and the weighted sums are calculated as follows [Equation (7)]:

z_i^l = \sum_j W_{ij}^l x_j + b_i (7)

where W_{ij}^l stands for the weights, x_j for the inputs from the training samples, and b_i for the bias. The nonlinearity of the model can be increased by the activation function so that the network fits the data better. Equation (8) shows how the ReLU function is calculated:

f(x) = \max(0, x) (8)
In the classification layer of the convolutional neural network, the probability of each category is calculated by the following softmax function, which evaluates a probability score for every class [Equation (9)]:

p_i = \frac{e^{z_i}}{\sum_k e^{z_k}} (9)
Here, m represents the total count of training samples. Stochastic gradient descent on mini-batches of size N is used to minimize the cost function C, with the training cost approximated by the mini-batch cost [Equation (10)]:

C = \frac{1}{N} \sum_{n=1}^{N} C_n (10)

W_t^l denotes the weights of convolutional layer l at iteration t, and C denotes the mini-batch cost. The weights are then updated in the next iteration as follows [Equation (11)]:

V_{t+1}^l = \mu V_t^l - \gamma \alpha^l \frac{\partial C}{\partial W^l}, \qquad W_{t+1}^l = W_t^l + V_{t+1}^l (11)

where α^l is the learning rate of layer l, γ is the scheduling rate that reduces the initial learning rate after a specified number of epochs, and μ stands for the momentum factor, which captures the effect of the previously updated weights on the current iteration.
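A minimal sketch of the momentum update in Equations (10-11), written directly in PyTorch; note that the experiments themselves use the Adam optimizer (Table 1), so this block only illustrates the equations, with all values assumed.

```python
import torch

w = torch.randn(10, requires_grad=True)   # stand-in weights W of one layer
velocity = torch.zeros_like(w)            # momentum buffer V
lr, mu, gamma_sched = 1e-4, 0.9, 0.1      # learning rate, momentum, scheduling rate

for epoch in range(25):
    loss = (w ** 2).sum()                 # stand-in for the mini-batch cost C
    loss.backward()
    with torch.no_grad():
        # Equation (11): V <- mu*V - lr*dC/dW ; W <- W + V
        velocity.mul_(mu).sub_(lr * w.grad)
        w.add_(velocity)
        w.grad.zero_()
    if epoch == 14:
        lr *= gamma_sched                 # reduce the learning rate mid-training
```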
Results and discussion
The experiments were conducted on a Windows 10 system with 64 GB of Random Access Memory (RAM). The graphics card utilized was an RTX 4070, and the programming language employed was Python, with PyTorch serving as the framework. The hyperparameters of the model in the experiment are shown in Table 1.
Evaluation metrics
To comprehensively assess the effectiveness of the model, the evaluation metrics accuracy, precision, recall, and F1-score are employed in this paper. Their expressions are shown in Equations (12-15) (Yeung et al., 2022; Alyami et al., 2023):

Accuracy = (TP + TN) / (TP + TN + FP + FN) (12)
Precision = TP / (TP + FP) (13)
Recall = TP / (TP + FN) (14)
F1-score = 2 · Precision · Recall / (Precision + Recall) (15)
where, true positive (TP) represents the count of accurately classified sick images in each respective category.True negative (TN) denotes the total number of correctly classified images in all categories, excluding the relevant category.False negative (FN) represents the count of incorrectly classified images in the relevant category.False positive (FP) denotes the count of misclassified images in all categories, excluding the relevant category.
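A short example of computing Equations (12-15) with scikit-learn; the labels are invented, and macro averaging over the per-class (one-vs-rest) counts is an assumption about how the quantities are aggregated.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Invented labels for a three-class problem such as dataset 1
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 2, 1, 1, 2, 1]

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall:   ", recall_score(y_true, y_pred, average="macro"))
print("F1-score: ", f1_score(y_true, y_pred, average="macro"))
```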
Classification results
This section presents the classification results of the proposed method and includes a comparative analysis with and without the utilization of feature fusion methods.
The representation of a single model
The confusion matrix illustrating the classification results of the fine-tuned pre-trained models on the test set of dataset 1 is presented in Figure 5, and Figure 6 shows the corresponding confusion matrix for the test set of dataset 2. Additionally, Table 2 lists the specific values of accuracy, precision, recall, and F1-score, calculated using Equations (12-15). According to Table 2, on dataset 1 DenseNet121 has the best brain tumor classification performance with 98.53% accuracy, while on dataset 2 ResNet101 performs best with 95.71% accuracy.
With feature fusion
Figures 7, 8 display the confusion matrices of the brain tumor classification results achieved by feature fusion on dataset 1 and dataset 2, respectively. Furthermore, Table 3 presents detailed values of the classification indexes for both datasets. It can be seen that ResNet101 + DenseNet121 attains the optimal classification results on both datasets, with an accuracy of 99.18% on dataset 1 and 97.24% on dataset 2.
Figures 9A, B show the average evaluation metrics for brain tumor classification of every model on dataset 1 and dataset 2, respectively. On dataset 1 (Figure 9A), the combination of ResNet101 and DenseNet121 (ResNet101 + DenseNet121) achieved the best classification accuracy, precision, recall, and F1-score, with values of 99.18, 99.07, 99.11, and 99.08%, respectively. Additionally, among the individual models, EfficientNetB0 exhibits the best classification results. Notably, DenseNet121 outperforms ResNet101 + EfficientNetB0 but is outperformed by both ResNet101 + DenseNet121 and DenseNet121 + EfficientNetB0. On dataset 2 (Figure 9B), the ResNet101 + DenseNet121 model also achieves the best performance, with accuracy, precision, recall, and F1-score of 97.24, 97.06, 97.58, and 97.28%, respectively. Unlike on dataset 1, where DenseNet121 showed strong performance, it appears to have the weakest classification ability on dataset 2. Conversely, ResNet101 + DenseNet121, ResNet101 + EfficientNetB0, and DenseNet121 + EfficientNetB0 all outperform the individual models. The experimental results validate the effectiveness of combining features from different models through feature fusion, providing a more reliable approach for brain tumor classification than relying on a single model.

In addition, the average accuracy improvement of ResNet101 + DenseNet121 is 2.085% (2.61% on dataset 1, 1.56% on dataset 2) over ResNet101 and 1.32% (0.65% on dataset 1, 1.99% on dataset 2) over DenseNet121. Similarly, the accuracy improvement of ResNet101 + EfficientNetB0 is 1.035% (1.31% on dataset 1, 0.76% on dataset 2) over ResNet101 and 1.345% (1.47% on dataset 1, 1.22% on dataset 2) over EfficientNetB0. Compared with DenseNet121 and EfficientNetB0, the average accuracy improvement of DenseNet121 + EfficientNetB0 is 1.225% (0.61% on dataset 1, 1.84% on dataset 2) and 1.985% (2.28% on dataset 1, 1.69% on dataset 2), respectively. These results strongly support the efficacy of feature fusion in brain tumor classification. It is also evident that on dataset 2 ResNet101 achieves the most favorable individual results while DenseNet121 yields the worst, yet the classification effectiveness of ResNet101 + DenseNet121 surpasses that of ResNet101 + EfficientNetB0 and DenseNet121 + EfficientNetB0. This suggests that the combination of ResNet101 and DenseNet121 outperforms configurations involving EfficientNetB0, possibly because the features of ResNet101 + EfficientNetB0 and DenseNet121 + EfficientNetB0 match less well than those of ResNet101 + DenseNet121.
The Receiver Operating Characteristic (ROC) curve is also utilized in the analysis. It illustrates the relationship between the true positive rate and the false positive rate. The Area Under the Curve (AUC) of the ROC curve indicates how well the model differentiates between tumor types, with a larger AUC value indicating better classification performance. As shown in Figure 10, for the ResNet101 + DenseNet121 model, the AUC values for the three types of brain tumors in dataset 1 are 0.9987, 0.9952, and 0.9999, respectively. In dataset 2, the AUC values for the four classes are 0.9991, 0.9971, 0.9999, and 0.9998, respectively.
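A minimal sketch of how such per-class ROC curves and AUC values can be produced in a one-vs-rest fashion, assuming scikit-learn and softmax scores; the paper does not state its exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

def per_class_auc(y_true, y_score, n_classes):
    """One-vs-rest ROC/AUC per tumor class.
    y_true: (N,) integer labels; y_score: (N, n_classes) softmax outputs."""
    y_bin = label_binarize(y_true, classes=list(range(n_classes)))
    aucs = {}
    for c in range(n_classes):
        fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
        aucs[c] = auc(fpr, tpr)
    return aucs
```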
Parameters Setting
Epochs: 25
Learning rate: 0.0001
Batch size: 32
Optimizer: Adam
Loss function: cross entropy
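A minimal sketch wiring these hyper-parameters into a standard PyTorch training loop; `model` and `train_loader` are assumed to exist, and the loop omits validation and checkpointing.

```python
import torch
import torch.nn as nn

def train(model, train_loader, device="cuda"):
    # Settings from the table above: 25 epochs, Adam with lr 1e-4,
    # cross-entropy loss; batch size 32 is set in the DataLoader.
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(25):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```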
Cross-dataset validation and robustness validation
Based on the foregoing, it is evident that ResNet101 + DenseNet121 yields superior classification results across the two public datasets. To further assess the robustness of ResNet101 + DenseNet121, a cross-dataset verification method was employed. The normal class in dataset 2 was excluded, and data from the remaining three brain tumor classes were used to evaluate the ResNet101 + DenseNet121 model trained on dataset 1. The precision, recall, F1-score, and accuracy of ResNet101 + DenseNet121 were 94.71, 94.44, 94.41, and 94.38%, respectively, which indicates good robustness.
Discussion
There have been many studies on brain tumor classification. Among these methods, the key is the extracted features. Generally, the effectiveness of a model is related to the amount of data, whereas the acquisition of medical images is usually difficult and expensive. Transfer learning can take full advantage of pre-training on tasks with small datasets to improve model performance, accelerate the training process, and reduce the risk of overfitting. In addition, model integration is a technique that combines the predictions of multiple independently trained models to obtain more powerful and robust global predictions, raising the upper limit of performance. In our work, pre-trained models are used to extract image features, and the extracted features are then fused using the model-integration method of feature fusion to enhance the ability of the model.
From the previous analysis, it can be found that among the three fused models, ResNet101 + DenseNet121 achieves the best classification results. ResNet101 adopts residual learning to construct residual blocks, which makes the network easier to train and reduces the problem of vanishing gradients. DenseNet121, on the other hand, uses dense connectivity, where each layer's input contains the outputs of all previous layers. This kind of connection helps the transmission of information and the flow of gradients, and alleviates the information-bottleneck problem. Dense connectivity also facilitates feature reuse. Fusing the features extracted by ResNet101 with those extracted by DenseNet121 makes the features complementary, more abundant, and more diversified, and thus achieves a better classification effect. To demonstrate the effectiveness of the proposed method, we use t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize the features extracted by the ResNet101 + DenseNet121 model trained on dataset 1; the visualization results are shown in Figure 11. The feature set of ResNet101 is shown in Figure 11A, where some gliomas and meningiomas are nested within each other; the mean and standard deviation of the feature set are -0.0057 and 0.6141, respectively. The feature set of DenseNet121 is shown in Figure 11B, where only a few gliomas and meningiomas are nested within each other; the mean and standard deviation are 0.2323 and 0.652795, respectively. Figure 11C displays the feature set of ResNet101 + DenseNet121, showing minimal nesting between classes; the mean and standard deviation are 0.2267 and 0.9604, respectively.
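The t-SNE visualization described above can be reproduced along the following lines, assuming scikit-learn; the perplexity and initialization here are our choices, not values reported by the paper.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(features, labels, title):
    """Project high-dimensional features to 2-D and color by class.
    `features` is an (N, D) array, e.g. flattened (1024, 7, 7) maps."""
    emb = TSNE(n_components=2, perplexity=30, init="pca",
               random_state=0).fit_transform(features)
    for c in np.unique(labels):
        pts = emb[labels == c]
        plt.scatter(pts[:, 0], pts[:, 1], s=5, label=str(c))
    # Report the feature statistics quoted in the text.
    print("feature mean/std:", features.mean(), features.std())
    plt.legend(); plt.title(title); plt.show()
```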
Comparison with other state-of-the-art methods
We compared the classification results obtained in this study with those reported in the literature using the same datasets. The compared results, shown in Table 4, demonstrate that our study achieved competitive classification performance relative to state-of-the-art approaches in the current literature.
Conclusion
This paper proposes a novel method for brain tumor classification, utilizing feature fusion to improve performance. Three advanced pre-trained models, ResNet101, DenseNet121, and EfficientNetB0, were selected as base models and adjusted to have the same output size (1,024, 7, 7). Brain tumor images were fed into these models to extract their respective features, and feature fusion was then achieved by pairwise combination of the models through feature summation. The fused features were subsequently used for the final classification. The method was validated on two publicly available datasets, and evaluation metrics such as accuracy, precision, recall, and F1-score were employed. Experimental results indicated that the combination of ResNet101 and DenseNet121 (ResNet101 + DenseNet121) achieved the best classification results on both dataset 1 and dataset 2. On dataset 1, an accuracy of 99.18%, precision of 99.07%, recall of 99.11%, and F1-score of 99.08% were achieved. For dataset 2, the corresponding values were an accuracy of 97.24%, precision of 97.06%, recall of 97.58%, and F1-score of 97.28%. Compared with other state-of-the-art techniques, our approach exhibits superior classification performance.
In the future, we plan to pursue two lines of work. On one hand, we will expand the experimentation by incorporating additional models to validate the effectiveness of feature fusion through summation for brain tumor classification. On the other hand, we aim to extend this method to other brain diseases, thus enhancing the model's capacity to recognize multiple classes of brain diseases.
The model weights $W$ are updated by backpropagation, which minimizes the training cost function. The cross-entropy loss is calculated as follows [Equation (10)]:

$$L = -\frac{1}{N}\sum_{i=1}^{N} \log P(y_i \mid x_i)$$

where $x_i$ represents the $i$-th training sample, $y_i$ represents the label associated with sample $x_i$, and $P(y_i \mid x_i)$ is the predicted probability of $x_i$ belonging to class $y_i$.
FIGURE 5 Confusion matrix of the predicted results of a single model on the test set of dataset 1. (A) ResNet101, (B) DenseNet121, (C) EfficientNetB0.
FIGURE 6 Confusion matrix of the predicted results of a single model on the test set of dataset 2. (A) ResNet101, (B) DenseNet121, (C) EfficientNetB0.
Framework diagram of the proposed methodology.
TABLE 2 Indicators for the classification of a single model.
TABLE 3 The classification results of the feature fusion methods.
TABLE 4 Comparison with other state-of-the-art models. | 7,211.4 | 2024-02-19T00:00:00.000 | ["Medicine", "Computer Science"] |
Research on the Effects of Hydropneumatic Parameters on Tracked Vehicle Ride Safety Based on Cosimulation
Ride safety of a tracked vehicle is the key focus of this research. The factors that affect the ride safety of a vehicle are analyzed, and evaluation parameters with their criteria are proposed. A multibody cosimulation approach is used to investigate the effects of hydropneumatic parameters on ride safety and to aid the design optimization and tuning of the suspension system. Based on the cosimulation environment, the vehicle multibody dynamics (MBD) model and the road model are developed using RecurDyn, which is linked to the hydropneumatic suspension model developed in LMS Imagine.Lab AMESim. Test verification of a single suspension unit is accomplished and the suspension parameters are implemented within the hydropneumatic model. Virtual tests on a G class road at different speeds are conducted. The effects of the accumulator charge pressure, damping diameter, and track tensioning pressure on ride safety are analyzed and quantified. This research shows that low accumulator charge pressure, improper damping diameter, and insufficient track tensioning pressure will deteriorate ride safety. The results provide useful references for the optimal design and control of the parameters of a hydropneumatic suspension.
Introduction
Hydropneumatics is a technology typically used in vehicle suspensions. A hydropneumatic suspension has good properties such as nonlinear stiffness and damping, high power density, convenient tuning, and vertical position locking. Thus, it is widely used in tracked vehicles and improves ride comfort. In practical applications, common failures are the track separating from the road wheel or the sprocket. These failures may result in the vehicle losing control and have very serious implications for vehicle ride safety.
Previous studies have conducted some dynamic simulation analysis on the issue of the track separating from the road wheel. It has been shown that improper tuning of the suspension parameters is the main reason for such failures [1,2]. For a hydropneumatic suspension system, the primary parameters are the accumulator charge pressure and the damping. If these parameters are not tuned properly, or if they vary because of external gas leakage and internal oil leakage, then ride safety can be severely compromised. Currently, most research investigations focus on vehicle ride comfort, with little attention given to ride safety. Some dynamic analysis research has been conducted [3], but this work was based on a simplified mathematical model of the suspension and does not consider the dynamic influence of the track. Other research, usually conducted in a single simulation environment, lacks systematic and quantitative analysis of how the hydropneumatic parameters affect ride safety [4].
Determining the dynamic behavior of the track and its interactions with the hydropneumatic suspension system is very difficult to achieve mathematically. This is due to the complex nature of the nonlinear multibody system. Multibody dynamics (MBD) simulation has long been recognized as an excellent method to predict the dynamic response of vehicles [5][6][7][8][9]. One of the key advantages of this approach is that it allows nonlinear elements such as the accumulator and the damping of the hydropneumatic suspension to be modeled readily.
The introduction of hydropneumatic suspension components that require extensive use of mathematical functions brings challenges for existing software that uses the MBD approach and may restrict its use. MBD cosimulation gives a suitable framework for coupling software tools which specialize in different fields of mechanics without sacrificing overall accuracy, particularly if they are based on different mathematical methods [10].
The approach adopted here integrates the MBD software (RecurDyn V8R3 [11]), used for modeling the complex configurations of the tracked vehicle and the road model, with LMS Imagine.Lab AMESim 14.0 [12], used for the development of the hydropneumatic systems. RecurDyn is a computer-aided engineering tool for MBD analysis. It is capable of simulating vibrations, motions, stress, and rigid-flexible coupling analysis. Its Track (HM) module is excellent for modeling high-mobility tracked vehicles. LMS Imagine.Lab AMESim is a software package used for modeling complex physical systems containing mechanical, hydraulic, electronic, thermal, and control components.
On the basis of these conditions, the detailed mechanical configurations and hydropneumatic elements can be modeled more readily using a cosimulation approach. The enormous and complicated calculations relating to the dynamic behavior of flexible objects such as tracks can be fulfilled. This leads to more accurate relationships between the parameter variations and the output variables. In this paper, a hydraulic-mechanical cosimulation on a virtual G class road is conducted. Insights into how the hydropneumatic parameters affect the vehicle ride safety are achieved. The results can be readily used in the design, optimal control, and failure detection of a hydropneumatic suspension system. This paper is organized as follows. A detailed description of the ride safety of tracked vehicles and the numerical model development is presented in Section 2; Section 3 shows the validation process and validity of the constructed model; Section 4 reports the cosimulation results and the analysis; finally, conclusions are presented in Section 5.
Ride Safety and Its Evaluation Parameters
For a high-mobility tracked vehicle, the suspension design and tuning should improve the ride comfort on the condition that ride safety is guaranteed. From the point of view of surviving on the battlefield, ride safety is generally more important than ride comfort. The ride safety of wheeled vehicles is usually evaluated by the dynamic load of the wheel. In contrast to wheeled vehicles, the special structure of the track and its interactions with the running system bring new problems to ride safety. The factors that affect ride safety and the evaluation parameters with their criteria are proposed as follows.
(1) Track separating from the road wheel or the sprocket: these failures are usually caused by continuous running after the relative position of the track and the road wheel (sprocket) is skewed. The main causes may be (a) a gap between the track and the road wheel that is too large (often over the teeth height), usually caused by variations in the hydropneumatic suspension parameters, (b) excessive track jumping caused by insufficient track tensioning force [13], and (c) large lateral forces acting on the road wheel when the vehicle turns or runs on a side slope. Comparing the above three causes, the first two are more critical than the third. Thus, the track-wheel gap and the quantity of jumping are taken as evaluation parameters for determining whether track separation from the road wheel or sprocket has occurred.
(2) End-stop impact of suspension: at the end-stop of the suspension, the arm hits the bumper [14]. This can cause enormous shock to the crew and may damage the arm and the hull. In a hydropneumatic suspension, the suspension stroke variation is categorized as a Gaussian random process. This means that if the RMS (root mean square) value of the stroke of the actuation cylinder is higher than one-third of the designed stroke length, collision with the bumper is a possible event [15,16].
(3) Wheel-ground adhesion ability: wheel-ground adhesion is evaluated by the relative dynamic load of the wheel, which is the ratio of the dynamic load to the static load of the wheel. If the RMS of this ratio is more than 1/3, the wheel load on the road can become negative and the wheel-ground adhesion deteriorates [17]. For a multiwheel tracked vehicle, although some individual wheels may lift off, the static load of the others increases, so the total ground adhesion of the vehicle will not change very much. However, an increase of the dynamic load on the first or last wheel will cause the steering torque to decrease and the turning diameter to increase. This causes deterioration in how vehicle turning is controlled. Similar to wheeled vehicles, the relative dynamic load of the wheel is also an evaluation parameter for tracked vehicles.
As the values of the proposed evaluation parameters listed above follow a Gaussian random distribution, a statistical metric is more meaningful for evaluating ride safety. Table 1 shows the ultimate evaluation parameters and their criteria used in this paper.
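As a minimal sketch, the two criteria stated in the prose (stroke RMS versus one-third of the designed stroke length, and relative dynamic wheel load RMS versus 1/3) can be checked on simulated signals as follows; the function names are ours, and Table 1 may contain further criteria not encoded here.

```python
import numpy as np

def rms(signal):
    """Root mean square of a signal sampled from the virtual test."""
    x = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(x ** 2))

def ride_safety_flags(stroke, design_stroke, dynamic_load, static_load):
    """End-stop impact becomes likely when RMS(stroke) exceeds one-third
    of the designed stroke length; wheel-ground adhesion deteriorates
    when the RMS of the relative dynamic load exceeds 1/3."""
    return {
        "end_stop_impact_likely": rms(stroke) > design_stroke / 3.0,
        "adhesion_deteriorates":
            rms(np.asarray(dynamic_load) / static_load) > 1.0 / 3.0,
    }
```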
Vehicle and Road Modeling.
As mentioned previously, a conventional calculation method based on a simplified mathematical model (with a high degree of linearization of the system stiffness and damping) is not suitable for a complicated nonlinear multibody system [18]. Such an approach cannot effectively resolve the interactions between the track and the ground or between the track and the road wheels, nor the effect of the track tensioning force on the track. In this paper, the approach of MBD cosimulation is used to establish the vehicle dynamics model and the ground model.
(1) Vehicle model: the vehicle components were first modeled using a CAD toolkit. The model was then exported to RecurDyn, where the mass, joint constraints, and motions are assigned. The model, shown in Figure 1, consists of the hull, the suspension system, and the track assemblies [19]. Each track assembly is composed of a sprocket, road wheels, an idler, rollers, and a track. According to the actual structure of the vehicle, the running system has 6 pairs of road wheels and arms with actuation cylinders, 3 pairs of supporting rollers, 1 pair of idlers, 1 pair of sprockets, and 101 track plates. The parameter values of these parts are in accordance with those of a real vehicle. The total mass of the structure is about 28.5 tons.
(2) Mechanical model of hydropneumatic suspension: the connection sketch of a single hydropneumatic suspension unit with the hull is shown in Figure 2. The actuation cylinder is connected to the hull by a revolute joint. The arm, which is fixed with a crank pin, is assembled in the hole of the bottom plate through a supporting bearing. The crank and the crank pin are fixed together and turn simultaneously with the arm.
Based on the configuration of Figure 2, the vehicle's equivalent stiffness and damping are expressed in terms of the suspension transmission ratio $i$, the gas spring force $F_g$, the gas spring stiffness $k_g$, the derivative $\mathrm{d}F_g/\mathrm{d}z$ of the gas spring force with respect to the vertical displacement $z$ of the road wheel, and the damping coefficient $c$ of the hydropneumatic suspension. In Equation (6), the damping force $F_d(\dot{s})$ is generated by the hydraulic restrictor of the suspension, where $\dot{s}$ is the stroke velocity of the actuation cylinder. For the above expressions, the definitions of the other variables and their initial values are given in Table 2.
(3) Road model: the road model is derived based on the technique of solid modeling. The road roughness coefficient is an evaluation index which determines the road class. The following formula is used as the fitting expression of the power spectral density function of the international standard road:

$$G_q(n) = G_q(n_0)\left(\frac{n}{n_0}\right)^{-w}$$

where $n$ is the spatial frequency (m$^{-1}$), $n_0$ is the reference spatial frequency (0.1 m$^{-1}$), $G_q(n_0)$ is the road power spectrum value at the reference spatial frequency, also called the road roughness coefficient (m$^2$/m$^{-1}$ = m$^3$), and $w$ is the frequency index, which is the slope of the spectrum in a double-logarithmic coordinate system [20,21].
In this paper, a G class road is used to conduct the virtual simulation. The road file is first written in Matlab and then imported into RecurDyn, as shown in Figure 1.
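A sketch of how such a road profile can be synthesized from the PSD above by harmonic superposition; the frequency band, resolution, and the roughness coefficient value are illustrative assumptions, not the paper's actual G class settings.

```python
import numpy as np

def road_profile(length=100.0, dx=0.01, gq_n0=1.0, n0=0.1, w=2.0, seed=0):
    """Synthesize a road height profile from Gq(n) = Gq(n0)*(n/n0)**(-w)
    by superposing cosines with random phases. gq_n0 (m^3) is an
    illustrative roughness coefficient, not the G class value."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length, dx)            # longitudinal positions (m)
    n = np.arange(0.01, 10.0, 0.01)           # spatial frequencies (1/m)
    dn = n[1] - n[0]
    amp = np.sqrt(2.0 * gq_n0 * (n / n0) ** (-w) * dn)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n.size)
    # Broadcasting builds a (len(x), len(n)) matrix of cosine terms.
    z = (amp * np.cos(2.0 * np.pi * np.outer(x, n) + phase)).sum(axis=1)
    return x, z
```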
Hydraulic Model of the Hydropneumatic Suspension.
The hydraulic system of the hydropneumatic suspension and the interface module are modeled using AMESim, as shown in Figure 3. The model consists of 12 hydropneumatic suspension units, one pair of tensioning cylinders, and one interface module. Each unit is composed of an actuation cylinder, a damping tube, an accumulator, and one pair of relief valves. The hydraulic port of the actuation cylinder is connected with the accumulator and the mechanical port is linked with the interface module. The disturbance from the road causes the actuation cylinder to extend and retract. Thus, the oil flows into or out of the accumulator through the damping tube. The relief valves are set to be closed in order to study the effect of the damping tube. The tensioning cylinders are used for tightening the two tracks.
(1) Mathematical model of the actuation cylinder: the output force of the actuation cylinder contains the gas spring force $F_g$, the damping force $F_d$ generated by the oil flow, and the friction force $F_f$ [22,23]. The total output force can be written as

$$F = F_g + F_d + F_f$$

where $F_g$ and $F_d$ are defined in Section 2.1. The friction force $F_f$ comprises a static and a dynamic component, which are calculated as follows.
where $F_{piston}$ and $F_{rod}$ are the static friction forces of the piston and the rod, $v$ is the relative velocity between the piston and the cylinder barrel, $v_1$ and $v_2$ are the slip velocities for fully developed dynamic friction of the piston and the rod, and $\mu_1$ and $\mu_2$ are the dynamic friction coefficients of the piston and the rod.
(2) Mathematical model of the damping tube: a long and thin damping tube is arranged between the actuation cylinder and the accumulator. According to the theory of hydraulic fluid dynamics, the pressure loss through the tube can be written as [24]

$$\Delta p = \lambda \frac{l}{d} \cdot \frac{\rho v^2}{2}\,\mathrm{sgn}(v)$$

where $\lambda$ is the flow restriction coefficient along the tube, $v$ is the fluid flow velocity, $l$ and $d$ are the length and flow diameter of the damping tube, $\nu$ and $\rho$ are the kinematic viscosity and the density of the fluid, and the sign function $\mathrm{sgn}(v)$ keeps the loss consistent with the flow direction.
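A direct transcription of the pressure-loss relation as reconstructed above; the numerical values one would pass in (the coefficient, the tube geometry, the oil density) come from Table 3, which is not reproduced here.

```python
import numpy as np

def damping_tube_dp(v, lam, l, d, rho):
    """Pressure loss of the damping tube,
    dp = lam * (l / d) * (rho * v**2 / 2) * sgn(v),
    mirroring the relation reconstructed in the text."""
    return lam * (l / d) * 0.5 * rho * v ** 2 * np.sign(v)
```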
(3) Mathematical model of the accumulator: the gas chamber of the accumulator is filled with pressurized nitrogen. The state change of the gas can be described by the polytropic relation

$$p_0 V_0^{\,n} = p V^{\,n}$$

where $n$ is the gas polytropic exponent, $V$ is the actual gas volume, $V_0$ is the initial gas volume, $p$ is the actual gas pressure, and $p_0$ is the initial gas charge pressure. If the accumulator is loaded or unloaded rapidly, the thermodynamic process of the gas state change is adiabatic and the polytropic exponent takes the value $n = 1.4$; otherwise, the process is isothermal and $n = 1$ [25,26]. Considering the application in a real vehicle under typical road disturbances, the reciprocating impact on the accumulator is usually instantaneous. However, certain heat exchange processes exist, so the state change can be considered a process between adiabatic and isothermal. In this paper, the polytropic exponent is set to 1.3 [27,28].
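The polytropic relation translates directly into a one-line helper; the default exponent follows the paper's choice of 1.3.

```python
def accumulator_pressure(p0, v0, v, n=1.3):
    """Gas pressure from the polytropic relation p0 * V0**n = p * V**n.
    n = 1.4 for an adiabatic process, 1.0 for isothermal; the paper
    uses 1.3 as an intermediate value."""
    return p0 * (v0 / v) ** n
```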
(4) Interface module: the function of the interface module is to carry out the exchange of simulation data between the two simulation platforms, RecurDyn and AMESim. Each program executes its respective simulation simultaneously. At each time step (0.001 s), both codes update one another with new state values before advancing to the next step. A simulation step begins when RecurDyn calculates the stroke and velocity of the actuation cylinders. The hydropneumatic system in AMESim then calculates the forces of the actuation cylinders and feeds them back to the MBD vehicle model.
The interface module has 14 input ports and 28 output ports. The input ports f1-f12 are designated to the actuation cylinders' output forces, and f13-f14 denote the left and right track tensioning cylinders' output forces. The output ports 1-24 represent the stroke and velocity of the actuation cylinders, and ports 25-28 are for the track tensioning cylinders. The interface module and its connection with the suspension system are depicted in Figure 3.
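Schematically, the fixed-step exchange between the two platforms behaves like the loop below. This is only an illustration of the data flow: RecurDyn and AMESim couple through their own interface module, not through a Python API, so both step functions are hypothetical stand-ins.

```python
# Schematic of the 0.001 s fixed-step data exchange described above.
DT = 0.001  # exchange time step (s)

def cosimulate(mbd_step, hydraulic_step, t_end):
    """mbd_step and hydraulic_step are hypothetical stand-ins for the
    RecurDyn and AMESim sides of the interface module."""
    forces = [0.0] * 14          # f1..f12 cylinders + f13..f14 tensioners
    t = 0.0
    while t < t_end:
        # MBD side: advance the vehicle model, return 28 values
        # (stroke and velocity for 12 cylinders + 2 tensioners).
        strokes_velocities = mbd_step(forces, DT)
        # Hydraulic side: advance the suspension model, return 14 forces.
        forces = hydraulic_step(strokes_velocities, DT)
        t += DT
```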
Model Verification
In order to validate the model, a single hydropneumatic unit is tested using the suspension test rig shown in Figure 4. The parameters in Table 3 are used in the simulation models of Equations (13)-(17) and are in accordance with those of the test rig. In the AMESim parameter setting mode, the values are assigned to each hydraulic model. Relative to all the forces in the actuation cylinder, the friction force is comparatively tiny. Thus, a simplification is to set the value of the friction parameter to the system default, which is suitable for most cases. The vehicle is designated to traverse a triangular barrier at a speed of 5 km/h, as illustrated in Figure 5. Based on the above conditions, the disturbance on the road wheel is clear. On the test rig, the excitation of the vibration table is set the same as that in the simulation. Figure 6 shows the test and simulation results for the stroke of actuation cylinder 1. In the initial position, the track assemblies are not in contact with the ground; otherwise, the initial impact of the track on the ground would cause the simulations to fail. Thus, the vehicle first descends to the ground and the actuation cylinders are compressed from the initial extended position to the static balance position. This is shown as an initial stroke compression in the simulation curve in Figure 6. After approximately one second, the vibrations are attenuated almost to zero and the vehicle reaches its static balance position. The change in stroke after 1.5 s is caused by the vehicle acceleration, and the small subsequent oscillations result from the impact of the road wheel with the gaps between the shoe plates. The large compression in stroke after 4 s is the result of the road wheel bouncing when traversing the barrier. The second to sixth road wheels passing over the barrier produce the signal oscillations after 5 s. In general, the simulation results agree well with the test data.
The test and simulation results for the acceleration of road wheel 1 are illustrated in Figure 7. In the simulation curve, apart from the large acceleration caused by the barrier impact, there are some small fluctuations induced for the same reason as discussed for Figure 6. In general, the test and simulation results match very well and the constructed simulation model is validated.
Cosimulation Analysis
Using the modeling and simulation environment, virtual tests on the G class road are conducted under different conditions of vehicle speed, flow diameter of the damping tube, accumulator charge pressure, and track tensioning pressure. The influences of these parameters on the vehicle ride safety are investigated.
Effect of the Damping.
The damping of the hydropneumatic suspension is directly set by the flow diameter $d$ of the tube installed between the cylinder and the accumulator. Simulations exploring the effect of different tube diameters are conducted at different vehicle speeds. Figure 8 shows that, at any speed, there is always an optimal diameter which minimizes the RMS value of the track-wheel gap. This optimal diameter decreases as the speed increases. A decrease of the diameter on the left side of the optimal value has a stronger influence on the track-wheel gap RMS than an increase of the diameter on the other side. Similarly, Figure 9 shows that there always exists an optimal diameter which minimizes the RMS value of the road wheel dynamic force at any vehicle speed. On the left side of the optimal value, a decrease of the diameter also has a stronger influence on the dynamic force RMS. As the vehicle speed increases, the optimal diameter decreases. The higher the vehicle speed, the stronger the effect of the diameter. A lower speed generally results in less dynamic force, but an improperly tuned diameter may cause a higher dynamic force at a low speed than at a higher speed.
The RMS value of the stroke of the actuation cylinder for different damping diameters and vehicle speeds on the G class road is shown in Figure 10. It can be seen that the stroke RMS increases as the diameter and vehicle speed increase. At 47 km/h, when the diameter is more than 10 mm, the stroke RMS exceeds one-third of the designed stroke length; thus, an end-stop impact of the suspension becomes a possible event.
Effect of the Accumulator Charge Pressure.
The accumulator charge pressure determines the basic stiffness of the suspension. Through simulations and analysis at various charge pressures, the effects of the charge pressure on the RMS values of the track-wheel gap, the road wheel dynamic force, and the stroke are obtained. Figure 11 shows how the RMS value of the wheel-track gap changes as the charge pressure changes. As the pressure decreases toward 2 MPa, the gap RMS increases slowly.
Once the pressure drops below 2 MPa, the gap RMS increases rapidly. If the gap RMS exceeds one-third of the grouser height, failure of the track separating from the wheel becomes possible. The height of the grouser is about 58 mm. Therefore, if the pressure is less than about 2 MPa, the probability of occurrence of this failure increases significantly. Variations of the RMS value of the road wheel dynamic force with charge pressure are illustrated in Figure 12 for various vehicle speeds. It can be seen that the dynamic force RMS increases with an increase in charge pressure at all speeds.
The RMS values of the stroke for different vehicle speeds and charge pressures are given in Figure 13. Notice that when the pressure is less than about 1.5 MPa, the stroke RMS increases sharply as the pressure decreases. The designed stroke length of the actuation cylinder is 200 mm; if the pressure drops below about 1.5 MPa at a speed of 47 km/h, the end-stop impact of the suspension has a high probability of occurrence.
Effect of the Track Tensioning Pressure.
The track tensioning force directly affects the ride safety of the vehicle. A decline of the tensioning force will cause a rise in fluctuations and more frequent track jumping from the sprocket. Thus, the failure of the track separating from the sprocket may occur. In the hydropneumatic suspension system, a pair of hydraulic tensioning cylinders is used to adjust the track. The tensioning force is directly related to the actuating pressure, and the track tensioning force fluctuations can be evaluated by its standard deviation.
The RMS value of the track jumping quantity and the standard deviation of the tensioning force for various actuating pressures are given in Figure 14. Both increase as the pressure decreases. When the jumping quantity exceeds the length of the effective mesh area on the sprocket teeth, the failure of the track separating from the sprocket may happen. A tighter track provides better ride safety but, on the other hand, the forces acting on the sprocket, the idler, and the rollers from the track will be higher. Thus, the friction between the track and these components increases [29]. The loss of chassis output power in the running system will also be greater.
Conclusions
Factors which influence the ride safety of a tracked vehicle are analyzed. Systematic evaluation parameters of ride safety and their criteria are proposed. Using a cosimulation technique, the vehicle MBD model and the hydropneumatic suspension model are built and verified. Through simulations at various vehicle speeds, accumulator charge pressures, and damping diameters, their effects on ride safety are investigated and quantified. The key conclusions are as follows.
(1) Different optimal damping parameters exist for the minimum wheel-track gap and the minimum wheel dynamic force at different vehicle speeds. The value of the optimal diameter decreases as the vehicle speed increases. If the diameter is not tuned well, the dynamic wheel force at a low speed may be higher than that at a high speed.
(2) As the accumulator charge pressure decreases, the RMS values of the wheel-track gap and the stroke increase but the RMS value of the road wheel dynamic force decreases. For the hydropneumatic suspension studied in this paper, when the pressure drops below about 2 MPa, the probability of the track separating from the wheel increases significantly, and when it drops below about 1.5 MPa, the end-stop impact of the suspension becomes highly probable.
Figure 1: Vehicle dynamic model and G road model.
Figure 3: Hydraulic system model of the hydropneumatic suspension.
Figure 5: Modeling and simulation of the vehicle traversing a barrier.
Figure 7: Test and simulation results of the road wheel acceleration.
Figure 8: The RMS value of the track-wheel gap versus the flow diameter.
Figure 10: The RMS value of the stroke versus the flow diameter for various speeds.
Figure 12: The RMS value of the road wheel dynamic force versus the accumulator charge pressure for various speeds.
Figure 14: The standard deviation of track tension force and the RMS value of jumping quantity versus the actuating pressure of the track tensioning cylinder.
Table 1: Evaluation parameters and criteria of the ride safety.
Table 2: Suspension initial values of the design variables.
Table 3: Initial values of hydraulic variables. | 5,574.8 | 2017-01-01T00:00:00.000 | ["Engineering"] |
Regulation of Structure-Specific Endonucleases in Replication Stress
Replication stress results in various forms of aberrant replication intermediates that need to be resolved for faithful chromosome segregation. Structure-specific endonucleases (SSEs) recognize DNA secondary structures rather than primary sequences and play key roles during DNA repair and replication stress. The Holliday junction resolvase MUS81 (methyl methane sulfonate (MMS) and UV-sensitive protein 81) and XPF (xeroderma pigmentosum group F-complementing protein) are a subset of SSEs that resolve aberrant replication structures. To ensure genome stability and prevent unnecessary DNA breakage, these SSEs are tightly regulated by the cell cycle and replication checkpoints. We discuss the regulatory network that controls the activities of MUS81 and XPF and briefly mention other SSEs involved in the resolution of replication intermediates.
Introduction
The DNA replication fork is sensitive to a variety of intrinsic and extrinsic stresses (reviewed in [1,2]). Endogenous blocks include collisions with transcription apparatus, natural pausing sites, and unusual DNA structures or sequences (reviewed in [3]). Highly repetitive DNA sequences (e.g., ribosomal DNA, telomeres) or common fragile sites (CFS) are also more prone to replication stress (reviewed in [4,5]). External agents that disrupt replication include depletion of deoxyribonucleotide triphosphate (dNTP) by hydroxyurea (HU) and DNA lesions caused by ultraviolet (UV) radiation, alkylating agents such as methyl methane sulfonate (MMS), or the topoisomerase inhibitor camptothecin (CPT) (reviewed in [1]).
Replication stress can result in accumulation of single stranded DNA, chromosome breaks, and rearrangements, which are deleterious to the cell (reviewed in [1,2]). Additionally, it may generate aberrant intermediates including DNA secondary structures, DNA lesions, and protein-DNA complexes (reviewed in [4]). Not surprisingly, increased replication stress is now recognized as a contributor to oncogenesis (e.g., reviewed in [6,7]).
A subset of structure-specific endonucleases (SSEs) that recognize specific DNA structures rather than DNA sequences plays a crucial role in processing these aberrant structures to ensure replication fork stability and progression (reviewed in [8]). These SSEs are essential to maintaining genome stability, coordinating with the cell cycle to ensure that cells do not enter mitosis with structures that would promote improper chromosome segregation and breakage (reviewed in [9]). In this review we describe SSEs involved in processing DNA replication intermediates directly or indirectly regulated by the replication checkpoint (Figure 1). We pay particular attention to two conserved, related SSEs: Mus81 (MMS and UV-sensitive protein 81) and XPF (xeroderma pigmentosum group F-complementing protein).
Figure 1. SSEs involved in processing replication intermediates (reviewed in [8]). In Schizosaccharomyces pombe, Mus81 is activated by DNA damage. Xeroderma pigmentosum group F-complementing protein (XPF)-excision repair cross-complementing group 1 (ERCC1) (orthologs Rad1-Rad10 S.c. and Rad16-Swi10 S.p.) is important for various DNA repair pathways and cleaves replication intermediates during S and G2 phases [10]. The scaffold protein SLX4 with its associating partner SLX1 interacts with MUS81-EME1 (essential meiotic endonuclease 1) and XPF-ERCC1 in human cells (reviewed in [11]) [12][13][14][15], as do their orthologs in S. cerevisiae (reviewed in [16]) [17][18][19][20]. In contrast, Slx4 does not affect Rad16-Swi10 in S. pombe [21]. The activity of Yen1 S.c. is prevented until anaphase by restricting its nuclear entry through phosphorylation of its nuclear localization signal (NLS) [22][23][24]. Due to a nuclear export signal (NES), GEN1 in human cells is able to access chromosomes only after nuclear membrane breakdown during mitosis [25]. S. pombe has no Yen1 ortholog (reviewed in [26]). FEN1 (flap endonuclease 1) (orthologs Rad27 S.c. and Rad2 S.p.) and FAN1 (Fanconi-associated nuclease I) (missing in S. cerevisiae) contribute to processing replication intermediates, but cell cycle-dependent regulation of these SSEs is not well characterized (reviewed in [8]). Mms4, methyl methane sulfonate sensitivity protein 4.
Mus81 Processes Replication and Recombination Intermediates
The Mus81 protein was identified for its role in processing complex branched DNA structures, including Holliday junctions, that form after complementary strand exchange between homologous sequences (reviewed in [27][28][29][30][31][32]). Mus81 can resolve synthetic Holliday junction structures in vitro [31,32] and has a high affinity for branched duplex DNA and replication fork substrates [33]. Consistent with this, loss of mus81 leads to severe meiotic defects, resulting in abnormal chromosome segregation in yeasts [31,34,35]. In fission yeast (Schizosaccharomyces pombe), it is essential to complete sister chromatid exchange at the mating locus [31,36].
Mus81-dependent resolution of entangled sister chromatids is essential for survival of cells that depend on homology-directed repair of collapsed replication forks [36,37]. In human cells, MUS81 is similarly needed for replication fork restart after exposure to replication stress-inducing agents [38][39][40][41]. MUS81-deficient cells have decreased viability upon low-dose exposure to these replication inhibitors [41]. Importantly, fork restart in BRCA2 (breast cancer-associated protein 2)-deficient cells requires MUS81-dependent cleavage of partially resected, regressed forks [42]. In addition to resolving replication intermediates, compensatory DNA synthesis during mitosis and cleavage of mitotic interlinks to allow chromosomal segregation also require MUS81 [43]. These and many other studies demonstrate that Mus81 plays a critical role in resolving replication and recombination intermediates and ensuring proper chromosome segregation during cell division.
But Mus81 is a double-edged sword: unregulated activity can have deleterious effects. Mus81 causes replication stress-induced double-stranded breaks (DSBs) in mammalian cells [38] and promotes deletion mutations in polα mutant fission yeast [44]. When an active replication fork converges on a collapsed fork, replication termination is prone to Mus81-dependent deletions between repetitive DNA sequences in fission yeast [45]. In human cells, oncogene-induced chromosomal breakage involves MUS81 activity [46]. These findings suggest that tight regulation of Mus81 is necessary to repair replication-associated DNA structures without inducing unnecessary DNA cleavage.
Regulation of Mus81 by Cell Cycle Kinases
A key component of that regulation is cell cycle- and checkpoint-dependent regulation of Mus81, which restricts its activity to later in the cell cycle in unstressed cells. The Mus81 enzyme forms a complex with Eme1 (essential meiotic endonuclease 1), which creates a stable interaction with a DNA substrate for the complex [47]. Phosphorylation of Eme1 by various cell cycle kinases provides one mechanism to regulate Mus81 activity. In budding yeast (Saccharomyces cerevisiae), Mus81 forms a complex with the Eme1 ortholog Mms4 (methyl methane sulfonate sensitivity protein 4) [8]. Mus81-Mms4 S.c. is activated in a cell cycle-dependent manner and depends on phosphorylation of Mms4 S.c. by the cell cycle kinases Cdc28 S.c. (CDK1 in human) and Cdc5 S.c. (PLK1 in human) at the G2/M transition (Figure 2) [48][49][50]. This restricts Mus81-Mms4 S.c. activity during S-phase to prevent unnecessary cleavage of DNA substrates while DNA replication is occurring [48,51]. Via the scaffold protein Rtt107 S.c., Cdc7-Dbf4 S.c. (Dbf4-dependent kinase, DDK) interacts with and phosphorylates Mus81-Mms4, which is required for Mus81 activation during mitosis [52].
In fission yeast, which spends most of its life cycle in G2 phase, Mus81-Eme1 S.p. activity is up-regulated in response to DNA damage [8]. Cdc2 S.p. (CDK1 in human) phosphorylation of Eme1 S.p. primes it for phosphorylation and activation by the DNA damage sensor and checkpoint activator Rad3 S.p. (ATR in human) (Figure 2) [53]. Mus81-Eme1 cleavage of replication intermediates may in turn have a role in activation or propagation of checkpoint pathways. Deletion of Mus81 S.p. in the replication stress-induced, temperature-sensitive Mcm4 helicase mutant (mcm4-ts) results in failure to maintain the DNA damage checkpoint and in subsequent abnormal chromosomal segregation [54].
In human cells, MUS81 is up-regulated at the onset of mitosis and has two partners, EME1 and EME2 [8,55]. Approximately 80% of MUS81 is associated with EME1 while the remaining 20% is associated with EME2 (reviewed in [56]). It is not obvious whether EME1 or EME2 is responsible for S phase-specific functions of MUS81 [55,57]. Interestingly, MUS81-EME1 activity is needed for maintaining replication fork speed [57] while MUS81-EME2 activity promotes replication fork restart and chromosomal stability [55].
MUS81-EME1 activity in human cells peaks during M phase after hyperphosphorylation of EME1 by the cell cycle kinases CDK1 and PLK1 (Figure 2) [12,58,59]. Uninhibited CDK1 activity results in chromosomal fragmentation following premature activation of MUS81 [63], further linking CDK to MUS81 activity. PLK1 promotes recruitment of the DNA repair protein BRCA1 to facilitate MUS81-mediated fork cleavage coupled with break-induced replication [64]. Moreover, PLK1 interaction with BRCA1 and CDK1 activation of the RECQ5 DNA helicase promote MUS81-EME1 recruitment to CFS [65]. A recent study showed that the pleiotropic serine/threonine kinase CK2 is able to phosphorylate MUS81 in late-G2/mitosis and upon mild replication stress to promote its association with EME1 and the scaffold protein SLX4, another stimulator of MUS81 activity [60]. These findings show that cell cycle-dependent kinases not only play a crucial role in restricting Mus81 activity to the appropriate timing of the cell cycle but also contribute to Mus81-dependent DNA repair.
Other regulators down-regulate the S-phase activity of MUS81. WEE1, a well-known inhibitor of CDKs, suppresses MUS81 activity during S-phase by (1) potentially phosphorylating MUS81, (2) inhibiting CDK2 and thereby limiting origin firing and replication stress, and (3) restraining CDK1, which phosphorylates and activates EME1 and the scaffold protein SLX4 (Figure 2) (reviewed in [56]). In the absence of WEE1, MUS81-EME1 activity results in unnecessary replication fork cleavage, leading to accumulation of DNA damage [61,66]. Deletion of MUS81 in the absence of WEE1 reduces DSBs [61] but does not prevent activation of ATR and CHK1 [67], suggesting that MUS81 activity is downstream of replication fork stalling and the S-phase checkpoint. This is also evidenced by the detrimental MUS81-dependent processing of replication intermediates following CHK1 inhibition [68][69][70]. Although the mechanistic details are unknown, these findings indicate that CHK1 down-regulates MUS81 in human cells (Figure 3).
Unlike the CDK1- and PLK1-regulated control of MUS81-EME1 activity, the control of MUS81-EME2 activity is not well established, despite evidence that MUS81-EME2 is responsible for the DNA damage occurring during premature entry into mitosis upon WEE1 inhibition [62]. Because deletion of MUS81 or EME2 delays premature entry into mitosis induced by WEE1 inhibition, regulation of MUS81-EME2 activity may be the mechanism by which WEE1 prevents premature mitotic entry (Figure 2) [8,62].
Mus81 is Regulated by the Replication Checkpoint during Replication Stress
During replication stress, Mus81 plays a crucial role in processing abnormal replication intermediates. It is recruited to sites of replication blockage to resolve replication intermediates and inhibits anaphase bridge formation, preventing chromosome mis-segregation and transmission of damaged DNA to daughter cells (reviewed in [71][72][73]). Loss of Mus81 attenuates recovery of stalled replication forks and makes cells hypersensitive to DNA damaging agents that obstruct replication fork progression [29,38,39,[74][75][76][77]]. Paradoxically, although Mus81 is required to resolve aberrant replication intermediates, it can also create DNA breaks that threaten genomic stability, which is why Mus81 regulation during replication stress is crucial. Upon replication disturbance, the replication checkpoint pathway is activated to resolve replication hindrances and to delay mitosis until the replication stress is relieved (reviewed in [27,78]). Cds1 S.p. is the fission yeast replication checkpoint effector (Figure 3). In budding yeast, the Cds1 S.p. homolog Rad53 S.c. is the effector of both the DNA damage checkpoint and the replication checkpoint. Fission yeast Cds1 S.p. acts downstream of the DNA-dependent protein kinase-like family kinase Rad3 S.p. (Mec1 S.c./ATR in human) (reviewed in [79]). Upon replication stress, the conserved mediator protein Mrc1 S.p. (CLASPIN in human) is phosphorylated by Rad3 S.p., which then recruits Cds1 S.p. to stalled replication forks for activation (reviewed in [80,81]).
In mammalian cells, the DNA damage checkpoint kinase CHK1 and the Cds1 S.p. homolog CHK2 are activated downstream of the ATM/ATR kinases in response to certain replication blocks and to DNA damage during S-phase (reviewed in [79]) [77]. Although the Cds1-Mus81 S.p. interaction is conserved in human cells (CHK2-MUS81), it is unclear whether CHK2 directly regulates MUS81 as in fission yeast (reviewed in [8]), although there is evidence that CHK2 up-regulates the protein level of MUS81; MUS81 in turn contributes to activation of CHK2 in cisplatin-treated breast cancer cells (Figure 3) [84].
Other Regulators of Mus81 Recruitment and Activity
There is growing evidence that there are other regulators of Mus81 activity besides cell cycle and replication checkpoint kinases (Table 1). For example, the N-terminal fragment of the DNA repair protein Rad52 S.c. stimulates the endonuclease activity of Mus81-Mms4 S.c. on homologous recombination intermediates in budding yeast [94]. RAD52 also promotes MUS81-mediated break-induced replication repair of collapsed forks and mitotic DNA synthesis in human cells [95,96]. The small ubiquitin-related modifier (SUMO)-like domain of the adaptor protein establishment of silent chromatin 2 (Esc2 S.c.) in budding yeast interacts with and stimulates Mus81-Mms4 S.c. catalytic activity [75]. The replication factor C (RFC) complex and the loading of proliferating cell nuclear antigen (PCNA) also enhance recruitment and activity of Mus81-Mms4 S.c. [97]. Interestingly, the Structural Maintenance of Chromosomes (SMC) complexes are another modulator of Mus81 S.p/S.c. activity. In yeast, for example, the Smc5-Smc6 S.p/S.c. complex promotes Mus81 S.p/S.c.-dependent resolution of Holliday junctions [99,100]. The positive genetic interactions between certain mutants affecting methylation of the cohesin subunit Psm1 S.p. and Mus81-Eme1 S.p. mutants in fission yeast suggest that methylation of cohesin subunits may be important for Mus81 activity at stalled replication forks. Alternatively, Mus81 may be required for recruitment of cohesin to sites of DNA damage [102]. In human cells, depletion of SMC2, which is required for chromosome condensation, or WAPL (wings apart protein-like), which is required for release of sister-chromatid arm cohesin, results in failure to recruit MUS81 to chromatin [101].
In human cells, post-translational modifications of MUS81 other than phosphorylation may be important for its activity during DNA repair. This is evidenced by the compromised DNA damage response of cells carrying SUMOylation-resistant MUS81 upon arsenic treatment that mimics metal carcinogenesis [105]. Epigenetic modifications adjacent to replication forks may also contribute to regulation of MUS81 recruitment and activity. For instance, EZH2 (enhancer of zeste homologue 2), which methylates histone H3 on Lys27 (H3K27) at stalled replication forks, has been shown to mediate recruitment of MUS81 [106].
Localization of MUS81 is another way its activity is modulated. In human cells, MUS81 accumulates in the nucleolus during S phase, suggesting that it is required to maintain the highly repetitive nucleolar DNA (reviewed in [8]). MUS81 relocates from the nucleolus to localized regions of UV damage specifically in S-phase cells [103]. Sub-nuclear localization of Mus81 S.c. also occurs in budding yeast. Following DNA damage, Mus81-Mms4 S.c. relocalizes to subnuclear foci and colocalizes with other endonucleases and with Cmr1 S.c., a protein involved in genome stability maintenance [104]. Subnuclear Mus81-Mms4 S.c. foci persist until the resolution of accumulated DNA intermediates following DNA damage [104].
These findings demonstrate that cells are equipped with multiple means to tightly regulate Mus81 recruitment and activity. Investigating how these various modulators of Mus81 communicate with each other will further elucidate Mus81-dependent genome stability maintenance.
Together with MUS81-EME1, XPF-ERCC1 processes under-replicated DNA and replication intermediates at CFS and prevents anaphase bridges following recruitment by SLX4 [72,73]. However, MUS81 and XPF may differ in the timing of their activity (Figure 1). In human cells, the biological function of MUS81-EME1 occurs mostly during mitosis, although MUS81 activity is present throughout the cell cycle, probably through its association with EME2 (reviewed in [56]). During S- and G2-phase, XPF-ERCC1, along with another endonuclease, ARTEMIS, is responsible for the replication stress-induced fork cleavage needed to resume DNA replication [10]. Data from fission yeast indicate that Mus81 S.p. and Rad16 S.p./XPF may direct repair towards different templates, with Mus81 S.p. using the sister chromatid and Rad16 S.p./XPF using ectopic sequences [36].
Association with different recruiting partners and stimulating proteins appears to determine in which repair pathway XPF-ERCC1 will function (Table 1). In fission yeast, the recently identified protein Pxd1 S.p. (pombe XPF and Dna2) stimulates the 3′-endonuclease activity of Rad16-Swi10 S.p. [113]. In budding yeast, Saw1 S.c. (single-strand annealing weakened protein 1), a structure-specific DNA binding protein, recruits Rad1-Rad10 S.c. to single-strand annealing repair sites [114], while the damage recognition protein Rad14 S.c. brings Rad1-Rad10 S.c. to NER [109]. In human cells, ERCC1 cannot enter the nucleus without XPF, demonstrating that XPF-ERCC1 heterodimer formation is critical [133]. In NER, XPF-ERCC1 cleavage of the damaged strand is stimulated by RPA and Rad52 [110,112]. RPA is also required for XPF-ERCC1 endonuclease activity in replication-coupled ICL repair [111]. In human cells, both XPF-ERCC1 and MUS81-EME1 are recruited to replication forks stalled at ICLs by the scaffold protein SLX4, and this depends on ubiquitylation of FANCD2 (Fanconi anaemia complementation group D2) (reviewed in [98]) [134,135]. Independently of SLX4, the scaffold protein UHRF1 (ubiquitin-like PHD and RING finger domain-containing protein 1) is needed to recruit FANCD2, MUS81-EME1, and XPF-ERCC1 to DNA damage sites [107,108].
In human cells, increased association of MUS81-EME1 with the scaffold protein SLX4 contributes to MUS81-dependent processing of DNA secondary structures [12]. SLX4 deletion reduces the MUS81-dependent formation of DSBs that occur after WEE1 inhibition [62,67]. SLX4 is phosphorylated by CDK1 in late G2 and M phase and interacts with the MUS81-EME1 complex and SLX1, forming a stable SLX-MUS complex (reviewed in [49,62]). SLX4 recruitment to chromatin and SLX4-mediated sister chromatid resolution require TOPBP1 [138]. In addition to recruiting MUS81-EME1 and XPF-ERCC1, the nuclease activity of SLX4 is important for processing telomeric structures and opposes the aberrant telomere synthesis observed in cancers (reviewed in [139]) [140,141]. SLX4 also suppresses chromatin association of another SSE, GEN1 (Yen1), in the absence of MUS81 and prevents DSBs after pathological replication stress [142]. Human SLX4 has a ubiquitin-binding zinc finger (UBZ) motif and SUMO-interaction motifs (SIMs) (Figure 4) [135,136]. The UBZ motif is required for SLX4 recruitment to sites of replication-dependent ICL repair, while the SIMs are required for the function of SLX4 during replication stress and in suppressing CFS instability [143,144].
Other Structure-Specific Endonucleases in Replication Stress
Although MUS81-EME1 and XPF-ERCC1, along with the scaffold protein SLX4, are the best-characterized SSEs responsible for processing replication intermediates, a few other SSEs are important in dealing with replication stress. Flap endonuclease 1 (FEN1 in human, Rad2 S.p. in fission yeast, Rad27 S.c. in budding yeast) has an important role in removing 5′ flaps that form during Okazaki fragment maturation via its interaction with the DNA processivity factor PCNA (Table 1) (reviewed in [115]). FEN1 is also involved in processing DNA secondary structures during replication fork impediment, especially in rDNA and telomeres [145][146][147]. This process requires FEN1 to undergo SUMOylation and subsequently interact with the PCNA-like Rad9-Rad1-Hus1 complex [116,124]. FEN1 and MUS81 associate with each other and collaborate in removing various aberrant DNA structures, including regressed replication fork substrates [117][118][119]. FEN1 removes the 5′-flap after MUS81 processes DNA junction structures (reviewed in [78]) [86]. This process requires FEN1 to be stimulated by the helicase WRN (Werner syndrome ATP-dependent RecQ-like helicase) [120][121][122]. This activity is especially critical for fork restart at telomeres [146]. Regulation of FEN1 activity is important in maintaining genome stability, as overexpression of FEN1 is associated with poor prognosis in various cancers [137]. FEN1 overexpression results in impeded replication fork progression, mid-S phase arrest, and hypersensitivity to DNA damaging agents [137].
Fan1 (Schizosaccharomyces pombe)/Absent in Saccharomyces cerevisiae/FAN1 (Fanconi-Associated Nuclease I) (Human)
FAN1 (Fanconi-associated nuclease I) is another structure-dependent endonuclease that plays a critical role in ICL repair (reviewed in [148]) and in promoting replication fork progression in response to replication stress induced by agents such as HU and MMS [123,149]. There is no apparent FAN1 homolog in budding yeast. FAN1 exhibits endonuclease activity toward 5′ flaps and has 5′-3′ exonuclease activity [150]. A recent study suggests that FAN1 dimerizes for optimal cleavage of a long 5′ flap strand [151]. FAN1 nuclease activity at stalled replication forks is tightly regulated, as FAN1 activity is needed for fork restart but excessive activity can result in fork degradation (reviewed in [148]) [149]. Fan1−/− mice have repeat expansions in brain and other somatic tissues, demonstrating that FAN1 activity contributes to the maintenance of genome integrity [152]. Like SLX4, FAN1 has a UBZ motif, which allows its association with monoubiquitylated FANCD2 and subsequent recruitment to replication forks (Table 1) [123]. FAN1 can also be recruited to aphidicolin-stalled replication forks via the FANCD2-BLM (Fanconi anemia group D2 protein-Bloom's helicase) complex, independent of the UBZ domain [149]. FAN1 also contains a PCNA-interacting peptide (PIP) motif that allows its association with the ubiquitylated PCNA that accumulates at stalled replication forks [124].
Absent in S. pombe/Yen1/GEN1
Yen1 S.c. (crossover junction endodeoxyribonuclease 1) in budding yeast and GEN1 (XPG-like endonuclease 1) in humans are SSEs that belong to the XPG/Rad2 family and define another Holliday junction resolvase that can process replication intermediates (reviewed in [153]). In MUS81-deficient human cells, GEN1 can induce DSBs following replication stress, which is opposed by the presence of SLX4 [142]. In budding yeast, Yen1 S.c. is phosphorylated by Cdc28 S.c. at the G1/S transition, which inactivates its nuclear localization signal (NLS), ensuring that Yen1 S.c. stays in the cytoplasm until anaphase (Table 1) [22][23][24]. Cdc14 S.c. dephosphorylates Yen1 S.c. at anaphase, allowing it to enter the nucleus [23]. In human cells, GEN1 contains a nuclear export signal (NES) and cannot access chromatin until the nuclear envelope breaks down during mitosis [25]. Yen1/GEN1 is absent in fission yeast, which may explain why meiosis is highly dependent on Mus81-Eme1 in fission yeast (reviewed in [26]).
It is important to remember that there may be nucleases not previously implicated in replication stress that also contribute to processing replication intermediates. For example, a recent study suggests that Artemis, a nuclease involved in non-homologous DNA end-joining (NHEJ) (reviewed in [154]), contributes to processing stalled DNA replication forks and prevents chromosome segregation defects during mitosis [10]. Artemis is not present in yeast.
Concluding Remarks
We have summarized findings showing how SSEs, MUS81 and XPF in particular, are controlled during the cell cycle and replication stress (Figure 5). Cell cycle kinases, replication checkpoint kinases, and various interacting partners, together with post-translational and epigenetic modifiers, work in concert, allowing cells to respond quickly to replication stress while limiting extraneous DNA damage. Teasing out the regulatory networks that control SSE activities, and how they communicate with each other, can help us gain a more comprehensive understanding of how SSEs contribute to cancer. On one hand, SSEs are needed to maintain genome stability; on the other hand, DNA cleavage by SSEs can contribute to inducing DNA damage and chromosome rearrangement. For example, Mus81 cleavage of the displacement loop (D-loop), the initial recombination intermediate that forms at broken replication forks, limits the mutagenic template switching that propels genome instability in cancers [155]. The ability of Mus81 to work with Rad27 S.c. (FEN1 in human) and the post-replication DNA repair protein Rad18 S.c. to suppress repeat-mediated chromosomal rearrangements has been suggested to inhibit the large inverted duplications of chromosomal segments observed frequently in cancers [156]. In other contexts, Mus81 activity can contribute to the survivability of cancer cells. For instance, Mus81-mediated resolution of toxic intermediates resulting from break-induced replication in the absence of the Srs2 S.c. helicase increases cell viability [157].
There is somewhat conflicting evidence on how SSEs influence chemotherapy response. In various types of cancer cells, downregulation of XPF or MUS81 increases sensitivity to chemotherapeutic drugs via CHK1 pathway activation or stimulation of apoptosis [63,158,159]. However, there is also evidence that cytosolic DNA generated by MUS81 in prostate cancers stimulates an immune response, potentially contributing to host rejection of cancer cells [160]. A more in-depth understanding of how SSE activities are controlled will help formulate better predictions about their involvement in carcinogenesis and in patient response to anti-cancer therapeutics.
Some critical questions regarding SSEs still need to be addressed. Exploring these questions and other uncharacterized aspects of SSEs will garner exciting and important insights needed to integrate our understanding of the replication process, genome stability and the cell cycle.
Funding: This work was funded by NIH R35-GM118109 | 5,804.4 | 2018-12-01T00:00:00.000 | [
"Biology"
] |
Methylphenidate Exposure Induces Dopamine Neuron Loss and Activation of Microglia in the Basal Ganglia of Mice
Background Methylphenidate (MPH) is a psychostimulant that exerts its pharmacological effects via preferential blockade of the dopamine transporter (DAT) and the norepinephrine transporter (NET), resulting in increased monoamine levels in the synapse. Clinically, methylphenidate is prescribed for the symptomatic treatment of ADHD and narcolepsy; although lately, there has been an increased incidence of its use in individuals not meeting the criteria for these disorders. MPH has also been misused as a “cognitive enhancer” and as an alternative to other psychostimulants. Here, we investigate whether chronic or acute administration of MPH in mice at either 1 mg/kg or 10 mg/kg affects cell number and gene expression in the basal ganglia. Methodology/Principal Findings Through the use of stereological counting methods, we observed a significant reduction (∼20%) in dopamine neuron numbers in the substantia nigra pars compacta (SNpc) following chronic administration of 10 mg/kg MPH. This dosage of MPH also induced a significant increase in the number of activated microglia in the SNpc. Additionally, exposure to either 1 mg/kg or 10 mg/kg MPH increased the sensitivity of SNpc dopaminergic neurons to the parkinsonian agent 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). Unbiased gene screening employing the Affymetrix GeneChip® HT MG-430 PM revealed changes in 115 and 54 genes in the substantia nigra (SN) of mice exposed to 1 mg/kg and 10 mg/kg MPH doses, respectively. Decreases in the mRNA levels of gdnf, dat1, vmat2, and th in the SN were observed with both acute and chronic dosing of 10 mg/kg MPH. We also found an increase in mRNA levels of the pro-inflammatory genes il-6 and tnf-α in the striatum, although these were seen only at an acute dose of 10 mg/kg and not following chronic dosing. Conclusion Collectively, our results suggest that chronic MPH usage in mice at doses spanning the therapeutic range in humans, especially at prolonged higher doses, has long-term neurodegenerative consequences.
Introduction
Methylphenidate (MPH; marketed under the trade names Concerta®, Metadate®, Methylin®, Ritalin®) is one of the most commonly prescribed stimulant medications for the symptomatic management of ADHD and narcolepsy [1,2,3]. MPH has been shown to have addictive potential, although it is not abused as frequently as cocaine [4]. Recent studies have detailed an increasing incidence of MPH abuse among young adults and college students in the United States, most likely for its purported non-therapeutic benefit of cognitive enhancement, also called ''neuroenhancement''. The Monitoring the Future Study (MTF) reported that 2.7% of high school students reported a non-therapeutic use of MPH, while 1.9% of college students reported a similar non-medicinal usage [5,6]. In both diagnosed ADHD and non-ADHD populations, MPH has been shown to increase scores on standardized tests [7,8], as well as increase working memory [9], and thus there have been calls for making it available as an ''over the counter'' (OTC) drug [10]. Despite the extensive use of this stimulant in ADHD as well as for ''off-label'' use, few papers have been published regarding the long-term neurological consequences of MPH exposure in the CNS.
MPH is a Schedule II CNS stimulant that exerts its pharmacological effects via preferential blockade of the dopamine transporter (DAT) and norepinephrine transporter (NET), similar to that of cocaine [4]. This blockade results in a reduction of dopamine/norepinephrine uptake, leading to an increase in postsynaptic dopamine/norepinephrine levels [11,12]. Thus, MPH usage leads to an acute increase in striatal dopamine levels [13]. In terms of neurological effects, dopamine has been shown to have a major modulatory effect in the developing brain on both neostriatal and cortical neurogenesis [14,15]. Additionally, excess dopamine has been shown to be toxic both in vitro and in vivo due to the production of superoxide, hydrogen peroxide, and the dopamine quinone [16,17,18]. In fact, both acute and chronic treatment with MPH have been shown to result in superoxide production in the brain [19,20,21,22,23]. Free dopamine has also been shown to induce an inflammatory response in the brain, characterized by an increase in cytokines and chemokines [24] that leads to an induction of microgliosis.
In this study, we investigate whether long-term administration of MPH in mice at two doses (1 mg/kg and 10 mg/kg) that reproduce the therapeutic window in humans (treatment of ADHD and recreational use/narcolepsy, respectively) [25,26,27] can induce changes in the basal ganglia. Specifically, we examined if acute or chronic administration of MPH altered SNpc dopamine neuron number and catecholamine levels in the striatum. Since excessive dopamine can induce oxidative stress and inflammation, we examined if MPH rendered the basal ganglia more sensitive to MPTP, an agent that has previously been shown to induce neuron damage in the SNpc.
Chronic MPH administration affects SNpc DA neuron number
We conducted a systematic stereological analysis of the SNpc in Swiss-Webster mice to determine if chronic exposure to saline, 1 mg/kg, or 10 mg/kg MPH for 90 days affected dopaminergic (DA) neuron number (Fig. 1A-I). While no change in SNpc DA neuron number was observed in animals treated with 1 mg/kg MPH, we did observe a 20% reduction of SNpc DA neurons in mice treated with 10 mg/kg MPH (Fig. 1J). The distribution of cell loss demonstrated that DA neurons towards the caudal end of the SN were more vulnerable to MPH effects while those residing in the more rostral end of this structure appeared unaffected (Fig. 1K).
Chronic MPH exposure results in microglia activation in the SNpc
Since excess dopamine has been reported to induce oxidative stress and inflammation, we examined whether chronic administration of MPH could induce a pathological immunological reaction in the SNpc. We estimated the total number of Iba-1-positive microglial cells within the SNpc and, based upon morphology, determined the proportion of microglia in the resting and activated states. We observed that chronic administration of 10 mg/kg MPH did not affect the number of resting microglia (Fig. 2A), but did induce a significant increase in activated microglia (Fig. 2B). We did not observe any change in the number of resting or activated microglia after treatment with 1 mg/kg MPH (Fig. 2A,B).
Dopamine and dopamine turnover affected following chronic MPH dosing
In order to determine if long-term administration of MPH resulted in changes in total striatal dopamine levels or dopamine turnover, striata were microdissected 7 days after mice had been administered 90 days of saline, 1 mg/kg MPH, or 10 mg/kg MPH. We found that long-term administration of 1 mg/kg, but not 10 mg/kg MPH, induced a significant increase in total striatal dopamine compared to saline-injected controls (Fig. 3A). We also found a significant increase in the major dopamine metabolite 3,4-dihydroxyphenylacetic acid (DOPAC) at both 1 mg/kg and 10 mg/kg compared to saline-treated mice (Fig. 3B). However, a significant increase in dopamine turnover (DOPAC/DA) was observed only at 10 mg/kg MPH (Fig. 3C).
Chronic MPH exposure sensitizes the SNpc to MPTP effects
Given that chronic administration of 10 mg/kg MPH lowers SNpc DA neuron number, we examined whether chronic exposure to MPH increased the sensitivity of these neurons to the parkinsonian agent 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). We also examined if addition of MPTP after MPH altered the immunological microglial response.
In terms of SNpc dopaminergic neuron loss, we have previously shown that the Swiss-Webster strain is resistant to MPTP-induced neuron loss [28,29]. Thus, if any SNpc neuron loss is observed, it can be inferred that this was due to previous exposure to MPH. We found that chronic administration of either 1 or 10 mg/kg MPH sensitizes SNpc DA neurons to the effects of MPTP compared with saline-injected controls. As shown in figure 1J, MPTP induces an approximately 20% increase in cell death in mice that received chronic administration of either 1 or 10 mg/kg MPH.
We also examined the microglial response to MPTP in these MPH-treated Swiss-Webster mice. Since we only observed an increase in activated microglia in mice chronically administered 10 mg/kg MPH (Fig. 2B), we only examined the immune response to added MPTP in this condition. We found that mice administered 10 mg/kg MPH+MPTP exhibit a significant decrease in the number of resting microglia (Fig. 2A), with a concomitant rise in the number of activated microglia (Fig. 2B). This suggests that MPH potentiates the immune response to MPTP.
Although the increase in SNpc dopaminergic neuron loss was not large enough in and of itself to result in the onset of parkinsonism, this study does suggest that chronic administration of MPH has the potential to be a predisposing or contributing factor in neurodegenerative disorders involving the dopaminergic system.
Alterations in Gene Expression following Acute and Chronic MPH Exposure in SN
In order to begin to identify the mechanism(s) underlying the MPH-induced decrease in SNpc DA neuron number and increases in CNS inflammation, we conducted an Affymetrix gene array study to identify SNpc mRNA changes induced in response to chronic administration of MPH. Using unsupervised hierarchical clustering analysis, probes were selected using a median absolute deviation score. Differentially expressed genes between each treatment condition and controls were derived using the local-pooled-error (LPE) test with an FDR of 0.05 as the cutoff. We found a total of 115 genes and 54 genes out of 45,037 on the arrays whose expression was significantly different at p ≤ 0.05 at the 1 and 10 mg/kg MPH doses, respectively, in the SNpc (Tables S1 and S2). Of these gene changes, 23 were common between the high and low MPH doses (Fig. 4A). Since the cellular changes, both in SNpc DA neuron number and microglia, were observed primarily at the 10 mg/kg MPH dose, we used qPCR to further examine and validate the expression of genes in animals exposed to only this dose. Specifically, we examined the expression of genes associated with modulation of basal ganglia toxicity, including brain-derived neurotrophic factor (bdnf), glial-derived neurotrophic factor (gdnf), tyrosine hydroxylase (th), the dopamine transporter DAT1 (slc6a3), and the vesicular monoamine transporter VMAT2 (slc18a2). We found significant reductions in mRNA expression of gdnf, th, slc6a3, and slc18a2 after both acute and chronic administration of 10 mg/kg MPH, while bdnf was only reduced after chronic 10 mg/kg MPH (Fig. 4B-E).
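The FDR-controlled selection step described here can be illustrated with a short, self-contained sketch of Benjamini-Hochberg FDR control; note this is a generic stand-in for the LPE-based procedure the authors used, and the p-values below are invented for illustration.

```python
# Minimal sketch of selecting differentially expressed genes at FDR 0.05,
# assuming per-gene p-values have already been computed (e.g., by an
# LPE-style test); the p-values below are illustrative only.
import numpy as np

def benjamini_hochberg(pvals, fdr=0.05):
    """Return a boolean mask of p-values significant at the given FDR."""
    pvals = np.asarray(pvals)
    n = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    # Find the largest k with p_(k) <= (k/n) * FDR; call all genes up to k.
    thresholds = (np.arange(1, n + 1) / n) * fdr
    below = ranked <= thresholds
    mask = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        mask[order[: k + 1]] = True
    return mask

pvals = [0.0004, 0.003, 0.021, 0.040, 0.30, 0.76]  # illustrative
print(benjamini_hochberg(pvals))  # [True True True False False False]
```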
Evidence for inflammation associated with acute doses of MPH
Due to the observed increase in the number of morphologically activated SNpc microglia following administration of 10 mg/kg MPH, we investigated whether the expression of inflammatory genes, including il-6, tnf-α, cox-2, and il-1β, was altered following chronic or acute dosing of 10 mg/kg MPH in both the SN and its target, the striatum. We found significant increases in mRNA expression of the pro-inflammatory genes tnf-α and il-6 in the striatum of animals administered a single dose of 10 mg/kg MPH compared to saline-injected controls (Fig. 5A-D). No changes were seen in the expression of these genes in the SN.
Discussion
The present study investigated the pathological effects of acute and chronic MPH in the basal ganglia using two different doses that span the therapeutic window of MPH use for ADHD and narcolepsy in humans (1 mg/kg and 10 mg/kg, respectively) [30,31]. (Figure 1 legend: images are presented at 4×, 20× and 100× magnification, respectively; brain sections were immunostained with anti-TH (brown) to identify dopaminergic neurons and anti-Iba-1 (purple) to identify microglia. (J) Stereological estimates of dopamine neuron number in the substantia nigra pars compacta (SNpc) in animals administered saline (ctrl), saline+MPTP (ctrl+MPTP), 1 mg/kg MPH, 1 mg/kg MPH+MPTP, 10 mg/kg MPH and 10 mg/kg MPH+MPTP; saline, 1 mg/kg MPH and 10 mg/kg MPH were administered for 90 days, followed by a one-week drug washout period before 4×20 mg/kg MPTP was injected (n = 10). (K) The distribution of dopamine neurons along the rostral-caudal axis of the SNpc following chronic administration (90 days) of saline (control), 1 mg/kg MPH and 10 mg/kg MPH. **p ≤ 0.01 compared to saline-treated controls; ***p ≤ 0.001 10 mg/kg MPH compared to saline control (ctrl), control+MPTP (ctrl+MPTP) and 1 mg/kg MPH without MPTP (n = 10). One-way ANOVA followed by Bonferroni post-hoc tests.) We demonstrate that chronic administration of 10 mg/kg MPH induces a small but significant loss of SNpc dopaminergic neurons. We also find that chronic exposure to both 1 mg/kg and 10 mg/kg MPH can sensitize SNpc dopamine neurons to a further oxidative stress. Though the complete mechanism for this sensitization of dopamine neurons is not well understood, our experiments suggest a combined effect of an increased inflammatory response with reduced levels of several trophic factors, including BDNF and GDNF.
Despite the extensive use of MPH in school-aged and adult populations with ADHD (including a proportion that are improperly diagnosed with ADHD) [32,33], as well as its use for general cognitive enhancement in non-ADHD individuals [10], only a few studies have investigated the neuropathological consequences of long-term MPH exposure. In this study, we used a 12-week MPH administration schedule that spans the developmental period in rodents and corresponds to the pre-adolescent through young adult period in humans, during which MPH is typically used [34].
MPH's mechanism of action is to increase the availability of extracellular DA and NE in the synaptic cleft through blockade of the dopamine transporter (DAT) and norepinephrine transporter (NET) [12,35,36]. In this study, we observed a significant increase in total dopamine levels in the striatum at 1 mg/kg MPH, a change that was not observed at 10 mg/kg MPH. Previous studies have also reported a similar increase in striatal dopamine levels at similar lower doses of MPH [37]. The observed lack of change in total dopamine concentrations at the higher dose might reflect a ceiling effect achieved due to chronic dosing of the drug, or it may be the result of a compensatory alteration in the production of dopamine that results from the observed 20% loss of dopamine neurons in the SNpc. In order to determine if this compensation is occurring, we measured the ratio of striatal dopamine to SNpc DA neurons. When examined as a ratio, both the 1 and 10 mg/kg MPH treatments demonstrate a significant increase in the dopamine:SNpc neuron ratio (a 150% increase for mice treated with 1 mg/kg MPH and a 132% increase for mice treated with 10 mg/kg MPH), suggesting that either dose of MPH increases striatal dopamine, not just the 1 mg/kg dose.
It is well known that increased extracellular DA may be problematic. Oxidation of DA can produce both superoxide and hydrogen peroxide, which may then form hydroxyl radicals in the presence of certain metals [17]. Additionally, previous studies have indicated that DA can become neurotoxic following its oxidation to a DA quinone, which may then react with cellular thiols to form 5-S-glutathionyl DA and 5-S-cysteinyl DA [38]. The subsequent oxidation of 5-S-cysteinyl DA produces a number of neurotoxic compounds [17]. An increase in the free radical content in the basal ganglia has been shown to potentiate neurodegeneration [39,40,41].
In addition to a direct effect of MPH on the basal ganglia, we hypothesized that chronic MPH could increase the sensitivity of SNpc dopamine neurons to a later oxidative stress exposure. MPH's mechanism of action, blockade of the DAT, is similar to that of cocaine [42] and results in an increase in extracellular dopamine, which has been shown to quickly form free radical adducts [43,44]. Since increased free radical production has been shown to increase the sensitivity of SNpc neurons to environmental or administered xenobiotics [42], it is possible that long-term MPH could be a contributing etiological factor in a multi-hit hypothesis for the induction of Parkinson's disease [45,46,47].
In this study, we administered an acute regimen of MPTP (4×20 mg/kg), an agent that is known to induce oxidative stress [48,49,50], to MPTP-resistant Swiss-Webster mice [28] treated with a chronic regimen of 1 or 10 mg/kg MPH. We found that chronic exposure to both 1 mg/kg and 10 mg/kg MPH increased the sensitivity of SNpc dopamine neurons to oxidative stress, based on a significantly increased SNpc dopamine neuron loss in mice administered MPH as compared to saline-treated control mice. Although the mechanism for this neuronal loss is unknown, a significant increase in MPH-induced activated microglia was observed; therefore, we hypothesize that an increase in free radical formation along with a concomitant neuroinflammatory response increases the sensitivity of the SNpc dopamine neurons to a later oxidative challenge. This conclusion is supported by a recent epidemiological study showing that long-term amphetamine usage, which like MPH results in higher levels of striatal dopamine in the synaptic cleft, results in a significantly higher risk of developing Parkinson's disease [51].
In order to address the mechanism for the increased sensitivity of dopamine neurons, we used an unbiased gene microarray analysis. A comparison of heat maps representative of relative mRNA expression (Figure 4A,B) shows a number of genes whose direction of expression change (±) was similar after chronic administration of 1 and 10 mg/kg. Gene Set Enrichment Analysis (GSEA) identified gene sets related to inflammation and to cell damage and repair pathways. Using qPCR validation, we measured significant decreases in mRNA expression of the neurotrophins bdnf and gdnf in the SNpc after both acute and chronic dosing of 10 mg/kg MPH. We also found significant decreases in mRNA expression of genes involved in dopamine biosynthesis (tyrosine hydroxylase, th) and handling (the dopamine transporter, slc6a3, and the vesicular monoamine transporter, slc18a2). These changes were observed following both acute and chronic doses of 10 mg/kg MPH in the SNpc. Previous reports have associated decreases in mRNA expression of vmat2 and dat1 with neurotoxicity in the context of pharmacotherapeutic agents that alter dopamine levels and of neurodegenerative conditions, respectively [52,53]. The observed decreases in the mRNA message of dat1 and th may also be due to covalent modification by dopamine quinones leading to translational inactivation [41,54,55].
Notably, our Affymetrix and qPCR studies also found that acute exposure to higher doses of MPH increased the expression of inflammatory genes in the striatum, including the pro-inflammatory genes tnf-α and il-6. This increase in pro-inflammatory gene expression following a single acute dosage suggests that MPH does induce inflammation, and this is supported by our finding of increased numbers of both total and activated microglial cells in the SNpc. Surprisingly, we did not see an increase in inflammatory gene expression after chronic administration of MPH, although we did continue to observe an increased number of morphologically activated microglia. This suggests that sometime during the course of the chronic exposure to MPH, there might be a dampening (self-repression) of inflammatory gene expression. It is unknown at this time if the gene repression we observe after chronic treatment with MPH is permanent (longer than 90 days), or if it can be re-induced at a later time. If so, the activated microglia observed have the potential to play a modulatory role in later inductions of oxidative stress affecting the same brain systems. Alternatively, it is also possible that microglia that are activated do not have the ability to return to their morphologically pre-inflamed state, as other studies have shown evidence of microglial activation long after resolution of the initiating insult [56].
Taken together, our results suggest that chronic administration of methylphenidate in mice, at doses that approximate those at the higher therapeutic range in humans, results in reduced expression of neurotrophic factors, increased neuroinflammation, and a small but significant loss of SNpc dopamine neurons. These results can only be interpreted in the context of normal brain structure and function, and thus would have direct implications for the illicit/neurocognitive use of MPH. Since the underlying anatomy and biochemistry of ADHD has not been definitively characterized, our findings may or may not be generalizable to the vast majority of humans who are properly diagnosed with ADHD and are prescribed methylphenidate. Nevertheless, this work supports studies [51,57,58,59] demonstrating that drugs shown to increase the levels of dopamine in the synaptic cleft can contribute to degenerative changes in the basal ganglia.
Animal Handling and Treatment
Three-week-old Swiss Webster mice (Hsd:ND4, Harlan Laboratories) were acclimated in our animal facility for a period of a week and maintained on a 12 h light/dark cycle with ad libitum food and water. Starting at postnatal day PD28, mice were administered intraperitoneal (i.p.) injections of saline, 1 mg/kg, or 10 mg/kg methylphenidate hydrochloride (MPH, Cat # M2892 Sigma-Aldrich), once daily, 1 hour prior to the initiation of the animal's active phase (18:00 hrs). The doses of MPH used in this study were chosen based on previous studies in rodents suggesting that MPH doses of less than 5 mg/kg i.p. mirror those used in clinical practice [26], whereas recreational use of MPH or its use in the treatment of narcolepsy would be reflected by a dose of 10 mg/kg [27]. MPH injections were administered using a school-week schedule (5 days/week), as this dosing paradigm is recommended for administration of MPH to eliminate the possibility that MPH would abrogate growth [60,61]. At the end of 12 weeks, the animals were allowed a washout period of 7 days (except when noted) to ensure complete elimination of MPH from the CNS [62]. Subsequently, mice were either transcardially perfused with 3% paraformaldehyde for histological studies, or rapidly decapitated following induction of deep anesthesia, after which individual brain regions were dissected and processed for mRNA isolation.
Immunohistochemistry
After a one-week drug washout period, Swiss-Webster mice were deeply anesthetized with an overdose of Avertin; following the loss of the deep tendon and corneal reflexes, mice were transcardially perfused with cold physiological saline followed by cold 3% paraformaldehyde. The perfused brains were processed for paraffin embedding. Brains were sectioned on the microtome at 10 μm thickness and mounted on polyionic slides (Superfrost Plus, Fisher Scientific). Deparaffinized sections were incubated with primary antibody for identification of dopamine neurons (mouse monoclonal anti-tyrosine hydroxylase, TH; Sigma-Aldrich; 1:500) or dopaminergic neurons and microglia (mouse monoclonal anti-tyrosine hydroxylase and rabbit polyclonal Iba-1 (Wako Chemicals; 1:500)). The secondary antibodies included biotinylated mouse IgG (for TH, 1:1000) or biotinylated rabbit IgG (for Iba-1, 1:1000). A diaminobenzidine (DAB) or VIP kit (Vector Labs) reaction was used to yield a brown (TH) or purple (Iba-1) color, respectively. All tissue sections were counterstained with the Nissl stain Neutral Red for landmark identification. (Figure 4 legend: qPCR analysis demonstrating normalized fold-change expression of (B) bdnf, (C) gdnf, (D) dat1 (slc6a3), (E) vmat2 (slc18a2) and (F) th mRNA in the SN (n = 3). *p ≤ 0.02 vs saline controls (ctrl); **p ≤ 0.02 vs saline controls and the acute 10 mg/kg MPH dose; #p ≤ 0.02 acute 10 mg/kg MPH dose vs saline controls (ctrl). One-way ANOVA followed by Bonferroni post-hoc tests. doi:10.1371/journal.pone.0033693.g004)
Quantification of SNpc DA neurons and Iba-1-positive microglia
DA neuron and Iba-1-positive microglial cell numbers in the SNpc were estimated using standard model-based stereological methods [64]. Briefly, for neuronal counts, brains were blocked and serially sectioned at 10 μm from the rostral hippocampus to the cerebellar-midbrain junction. Serial sections were mounted 5 sections per slide onto polyionic slides. TH-positive neurons and TH-negative, Nissl-positive cells within the SNpc that had the characteristics of dopaminergic neurons were counted using a 40× objective (total magnification 400×). Specifically, neurons from both the left and right sides of the SNpc within one section per slide (chosen randomly and then maintained throughout all slides, i.e. the 3rd section on each slide) were counted [64].
Microglia were counted using the optical fractionator method [65] with MicroBrightField StereoInvestigator (MBF Biosciences, Williston, VT). Both resting and activated Iba-1-positive microglia were counted [66]. Stringent measures were adopted to classify Iba-1-positive microglia as resting or activated based on morphology, following the detailed description by Graeber and Streit [67]. Microglial cells were deemed resting if they contained a small oval Iba-1-positive cell body that averaged 3 microns in diameter with long slender processes. Microglia were classified as activated when the cell body was slightly increased in size compared to resting microglia and had an irregular shape; the processes on these microglia were shorter and thickened. Based on the cell size of the counting particle in 12 micron (empirically measured) sections, we used a high-NA lens and a total magnification of 1000×, at which we were able to clearly define approximately 18 focal planes within our section (1 focal plane equals approximately 0.54 μm). All numbers are expressed as mean ± SEM.
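For readers unfamiliar with fractionator-style estimates, the arithmetic behind such counts can be sketched as follows; the sampling fractions and raw counts here are illustrative assumptions, not values reported in this study, and the study's model-based method may differ in detail.

```python
# Minimal sketch of the optical-fractionator estimate of total cell number;
# all numbers below are hypothetical, for illustration only.
def optical_fractionator(cells_counted, section_sampling_fraction,
                         area_sampling_fraction, thickness_sampling_fraction):
    """Total number N = sum(Q-) * 1/ssf * 1/asf * 1/tsf."""
    return (cells_counted
            / section_sampling_fraction
            / area_sampling_fraction
            / thickness_sampling_fraction)

# e.g., 180 cells counted; every 5th section sampled; 10% of each section's
# area sampled; a 9.7 um disector height within a 12 um section.
n_total = optical_fractionator(180, 1 / 5, 0.10, 9.7 / 12.0)
print(round(n_total))  # ~11134 estimated cells
```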
Microarray Analysis
Animals injected with either saline, 1 mg/kg MPH, or 10 mg/kg MPH for 3 months using a school-day schedule were allowed a 7-day drug washout period, after which animals were rapidly decapitated under deep anesthesia. The substantia nigra (SN) was rapidly dissected, flash frozen, and stored at −80 °C. mRNA was isolated from the SN in accordance with the protocol outlined in the RNAqueous®-Micro kit (Ambion, Austin, TX) according to the manufacturer's recommendations. Technical procedures for microarray analysis, including quality control of mRNA, labeling, hybridization and scanning of the arrays, were performed by the Hartwell Center for Bioinformatics & Biotechnology (HC) at St. Jude Children's Research Hospital (SJCRH) according to standard operating procedures for Affymetrix protocols (GeneChip® Expression Analysis manual, Affymetrix, Santa Clara CA, USA).
The mRNAs were profiled using Affymetrix HT MG-430 PM arrays. The array signals were normalized using Robust Multichip Average [68], and the batch effect across the three replicates was corrected using ComBat [69]. The processed data were analyzed using the linear models algorithm in Limma [70]. Differentially expressed genes between the treated and control samples were selected using an FDR-corrected p-value of 0.01 (q value ≤ 0.05). All data are MIAME compliant, and the raw data have been deposited in a MIAME-compliant database (GEO ID: GSE33619).
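A minimal sketch of the batch-correction idea follows; ComBat itself fits an empirical-Bayes model, so per-gene centering and scaling within batches, as below, is only a simplified stand-in, and the matrix shapes and batch labels are assumptions made for illustration.

```python
# Simplified illustration of removing a per-batch shift from a genes x
# samples expression matrix (not the actual ComBat algorithm).
import numpy as np

def center_scale_by_batch(expr, batches):
    """expr: genes x samples matrix; batches: per-sample batch labels."""
    out = expr.copy().astype(float)
    for b in np.unique(batches):
        cols = batches == b
        mu = out[:, cols].mean(axis=1, keepdims=True)
        sd = out[:, cols].std(axis=1, keepdims=True) + 1e-9
        out[:, cols] = (out[:, cols] - mu) / sd  # remove batch mean/scale
    return out

rng = np.random.default_rng(0)
# 100 genes, 9 samples in 3 batches, with an artificial batch shift added.
expr = rng.normal(8, 1, size=(100, 9)) + np.repeat([0, 1.5, -0.7], 3)
batches = np.repeat([0, 1, 2], 3)
corrected = center_scale_by_batch(expr, batches)
print(corrected[:, batches == 1].mean().round(6))  # ~0: shift removed
```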
Validation of Microarray Data and Quantitative Analysis of mRNA Levels
Dissected substantia nigra and striatum were homogenized and processed to yield mRNA in accordance with the protocol outlined in the RNAqueous®-Micro kit (Ambion, Austin, TX). The isolated RNA was converted to cDNA using the High Capacity RNA-to-cDNA kit (Applied Biosystems, Carlsbad, CA). The cDNA was subsequently used for qPCR analysis using a Taqman assay (Applied Biosystems). The ribosomal 18S and beta-actin genes were used as the standardizing control genes. The final values are expressed as 2^(−ΔΔCt), denoting the fold-change in mRNA levels for each gene.
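The 2^(−ΔΔCt) calculation can be made concrete with a small worked example; the Ct values below are hypothetical and are not data from this study.

```python
# Worked sketch of the 2^(-ddCt) relative-quantification formula.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# e.g., a target gene's Ct rises from 24.0 to 26.0 after treatment while the
# reference gene stays at 15.0:
print(fold_change(26.0, 15.0, 24.0, 15.0))  # 0.25, i.e., a 4-fold reduction
```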
Statistical Analysis
One-way ANOVAs with Bonferroni post-hoc tests were used to draw comparisons between treatment groups. Data were plotted as mean ± S.E.M. A value of p ≤ 0.05 was considered significant.
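A minimal sketch of this workflow (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) might look like the following; the group values are synthetic and the three group labels are only illustrative.

```python
# One-way ANOVA across three groups, then Bonferroni-corrected pairwise
# t-tests; all data are synthetic.
from itertools import combinations
from scipy import stats

groups = {
    "saline": [10.1, 9.8, 10.4, 10.0, 9.9],
    "1 mg/kg": [10.3, 10.0, 10.6, 10.2, 10.1],
    "10 mg/kg": [8.2, 8.5, 8.0, 8.4, 8.1],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4g}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_bonf = min(1.0, p * len(pairs))  # Bonferroni: multiply by no. of tests
    print(f"{a} vs {b}: corrected p={p_bonf:.4g}")
```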
Supporting Information
Table S1 Identification of differentially expressed genes (p ≤ 0.05) in the substantia nigra comparing saline and 1 mg/kg MPH-treated mice.
(XLS)
Table S2 Identification of differentially expressed genes (p ≤ 0.05) in the substantia nigra comparing saline and 10 mg/kg MPH-treated mice. | 6,164.2 | 2012-03-21T00:00:00.000 | [
"Biology"
] |
On Critical Circle Homeomorphisms with Infinite Number of Break Points
Introduction
Let S¹ = R/Z, with clearly defined orientation, metric, Lebesgue measure, and the operation of addition, be the unit circle. Let π: R → S¹ denote the corresponding projection mapping that "winds" the straight line onto the circle. An arbitrary orientation-preserving homeomorphism f of the unit circle S¹ can be lifted to the straight line R as a homeomorphism F: R → R with the property F(x + 1) = F(x) + 1, connected with f by the relation π ∘ F = f ∘ π. This homeomorphism F is called the lift of f and is defined up to an integer term. The most important arithmetic characteristic of a homeomorphism f of the unit circle S¹ is its rotation number ρ(f) = lim_{n→∞} (Fⁿ(x) − x)/n (mod 1), where F is the lift of f from S¹ to R. Here and below, for a given map g, gⁿ denotes its nth iterate. The rotation number is rational if and only if f has periodic points. Denjoy proved that if f is a circle diffeomorphism with irrational rotation number ρ = ρ(f) and log f′ is of bounded variation, then f is topologically conjugate to the pure rotation R_ρ: x ↦ x + ρ mod 1; that is, there exists an essentially unique homeomorphism φ of the circle with φ ∘ f = R_ρ ∘ φ (see [1]). Since the conjugating map φ and the unique invariant measure μ_f are related by φ(x) = μ_f([0, x]) (see [1]), regularity properties of the conjugating map imply corresponding properties of the density of the absolutely continuous invariant measure. The problem of relating the smoothness of φ to that of f has been studied extensively. In-depth results have been found; see [2][3][4][5].
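As a concrete illustration of this definition, the rotation number can be estimated numerically by iterating a lift; the Arnold family used below is a standard textbook example and is not a map considered in this paper, and the parameter values are illustrative assumptions.

```python
# Numerical sketch of the rotation number rho(f) = lim (F^n(x) - x)/n for a
# lift F of a circle homeomorphism.
import math

def arnold_lift(x, omega=0.61, k=0.9):
    """Lift of the Arnold circle map; F(x + 1) = F(x) + 1 holds, and for
    k < 1 the map is an orientation-preserving diffeomorphism."""
    return x + omega + (k / (2 * math.pi)) * math.sin(2 * math.pi * x)

def rotation_number(lift, x0=0.1, n=200_000):
    x = x0
    for _ in range(n):
        x = lift(x)
    return (x - x0) / n  # converges to rho as n grows

print(rotation_number(arnold_lift))  # approximate rho for these parameters
```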
Other classes of circle homeomorphisms are known to satisfy the conclusion of Denjoy's theorem (see [6], Chapters I and IV, and [2], Chapter VI) and the study of the regularity of their conjugation maps arises naturally.Two of these classes are commonly referred to as the following.
1.1. Critical Circle Homeomorphisms. The orientation-preserving circle homeomorphisms f such that f ∈ C^k, k ≥ 3, have a finite number of critical points x_c, around which, in some coordinate system, f has the form x ↦ x^m, where m ≥ 3 are odd integers. Such critical points are said to be of polynomial type of order m.
The existence of the conjugating map for the class of critical circle homeomorphisms was proved by Yoccoz in [7], and for the class of P-homeomorphisms the existence of the conjugating map was proved by Herman in [2].
The singularity of the conjugating map for critical circle homeomorphisms was shown by Graczyk and Świątek in [8]. They proved that if f is a C³-smooth circle homeomorphism with finitely many critical points of polynomial type and an irrational rotation number of bounded type, then the conjugating map is a singular function. For P-homeomorphisms, the situation is different; that is, in this case, the conjugating map can be singular or absolutely continuous. Indeed, in the works [9][10][11], it was shown that the conjugating map is singular. A deeper result in this area was obtained by Dzhalilov et al. [12]. They proved that if f is a piecewise-smooth P-homeomorphism with a finite number of break points and the product of jump ratios at these break points is nontrivial, then the conjugating map is a singular function. But in the works [9,13], it was shown that if f is a piecewise-smooth P-homeomorphism with a finite number of break points having the (D)-property (see [13] for the definition) and the product of the jump ratios on each orbit is equal to 1, then the conjugating map is an absolutely continuous function. Now, we discuss the symmetric properties of a given conjugating map.
The criteria for quasisymmetry of the conjugating map of critical circle homeomorphisms were obtained by Świątek in [14]. According to [14], if a circle homeomorphism with an irrational rotation number is analytic and has finitely many critical points, then the conjugating map is quasisymmetric if and only if the rotation number is of bounded type.
The quasisymmetric property of the conjugating map of P-homeomorphisms is also different from the case of critical circle homeomorphisms. More precisely, if the rotation number of a P-homeomorphism is irrational of bounded type, then the conjugating map is quasisymmetric, but there is also a P-homeomorphism with irrational rotation number of unbounded type whose conjugating map is quasisymmetric. In this paper, we introduce a new class of circle homeomorphisms with the aid of the above two classes. Our aim in this work is to show the existence of the conjugating map for this new class and to study the quasisymmetric property of this conjugating map. Now, we introduce our class.
Let f be a circle homeomorphism.
Note that all the above results were obtained for the class of P-homeomorphisms with a finite number of break points, but in our work it is not necessary for the number of break points to be finite. Now, we state our main results.
Theorem 2. Suppose that a circle homeomorphism f satisfies conditions (a)-(c) and the rotation number ρ = ρ(f) is irrational. Then, there exists a circle homeomorphism φ: S¹ → S¹ such that the functional equation φ ∘ f = R_ρ ∘ φ holds. The proof of Theorem 2 is based on the method of cross-ratio distortion estimates. Note that cross-ratio estimates were used in dynamical systems for the first time by Yoccoz [7] and later by Świątek [14]. In fact, the proof of Theorem 2 follows closely that of Świątek [14]. Our second result below is also proved by using cross-ratio estimates. Theorem 3. Suppose that a circle homeomorphism f satisfies conditions (a)-(c) and the rotation number ρ(f) is irrational. Then, there exists a universal constant K = K(f) > 1 such that any two adjacent atoms I₁ and I₂ of a dynamical partition P_n(ξ, f) (see the definition below) are K-comparable; that is, K⁻¹ ≤ |I₁|/|I₂| ≤ K.
Dynamical Partition
We will assume that the rotation number ρ = ρ(f) is irrational throughout this paper. We use the continued fraction expansion ρ = [k₁, k₂, ..., k_n, ...]. For ξ ∈ S¹ we define the nth fundamental segment I_0^n = I_0^n(ξ) as the circle arc [ξ, f^{q_n}(ξ)] if n is even and [f^{q_n}(ξ), ξ] if n is odd. We consider two sets of closed intervals of order n: the "long" intervals I_i^{n−1} = f^i(I_0^{n−1}), 0 ≤ i < q_n, and the "short" intervals I_j^n = f^j(I_0^n), 0 ≤ j < q_{n−1}. The long and short intervals are mutually disjoint except for the endpoints and cover the whole circle. The partition obtained by the above construction will be denoted by P_n = P_n(ξ, f) and called the nth dynamical partition of S¹. Obviously, the partition P_{n+1} is a refinement of the partition P_n. Indeed, the short intervals are members of P_{n+1} and each long interval I_i^{n−1} ∈ P_n, 0 ≤ i < q_n, is partitioned into k_{n+1} + 1 intervals belonging to P_{n+1}.
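The continued-fraction data underlying these partitions can be computed directly; a minimal sketch follows, using the golden-mean rotation number as an illustrative choice, with the standard recursion q_{n+1} = k_{n+1} q_n + q_{n−1} for the denominators (the first-return times).

```python
# Sketch: partial quotients k_n and denominators q_n of an irrational rho.
import math

def continued_fraction(rho, n_terms=10):
    ks, qs = [], [0, 1]  # q_{-1} = 0, q_0 = 1 convention
    x = rho
    for _ in range(n_terms):
        k = math.floor(1 / x)
        ks.append(k)
        qs.append(k * qs[-1] + qs[-2])  # q_{n+1} = k_{n+1} q_n + q_{n-1}
        x = 1 / x - k
    return ks, qs[2:]

rho = (math.sqrt(5) - 1) / 2  # golden mean: all partial quotients equal 1
ks, qs = continued_fraction(rho)
print(ks)  # [1, 1, 1, ...]  (bounded type)
print(qs)  # Fibonacci numbers: 1, 2, 3, 5, 8, ...
```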
Cross-Ratio Inequality
Now, we equip S¹ with the usual metric |x − y| = inf{|x̃ − ỹ|}, where x̃, ỹ range over the lifts of x, y ∈ S¹, respectively. Our main analytic tools are ratio and cross-ratio distortions. Let a, b, c, d ∈ S¹ be four points of the circle which preserve orientation, that is, a ≺ b ≺ c ≺ d on the circle; we define a ratio Ra(a, b, c) of the three points a, b, c and a cross-ratio Cr(a, b, c, d) of the four points a, b, c, d. The distortions D(a, b, c; f) and DCr(a, b, c, d; f) of the ratio and the cross-ratio by the map f are defined, respectively, as the quotients of the ratio (respectively, the cross-ratio) of the image points by that of the original points. It is clear that the cross-ratio distortion factors into a product of two ratio distortions; one standard convention is sketched below.
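For concreteness, one standard convention for these quantities is sketched below in LaTeX; the exact normalization is an assumption here, since equivalent variants of these definitions appear across the cross-ratio literature.

```latex
% One common convention (assumed here; normalizations vary by author):
\[
  \mathrm{Ra}(a,b,c) = \frac{b-a}{c-a}, \qquad
  \mathrm{Cr}(a,b,c,d) = \frac{(b-a)(d-c)}{(c-a)(d-b)},
\]
\[
  \mathrm{D}(a,b,c;f) = \frac{\mathrm{Ra}(f(a),f(b),f(c))}{\mathrm{Ra}(a,b,c)},
  \qquad
  \mathrm{DCr}(a,b,c,d;f) = \frac{\mathrm{Cr}(f(a),f(b),f(c),f(d))}{\mathrm{Cr}(a,b,c,d)},
\]
% With these choices the cross-ratio factors as
% Cr(a,b,c,d) = Ra(a,b,c) \cdot Ra(d,c,b), so the cross-ratio distortion
% is a product of two ratio distortions:
\[
  \mathrm{DCr}(a,b,c,d;f) = \mathrm{D}(a,b,c;f)\cdot \mathrm{D}(d,c,b;f).
\]
```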
Notice that the ratio and the cross-ratio distortions have the following properties.
Now, let us formulate the following theorem, which plays an important role in studying the properties of dynamical partitions. Theorem 4. Suppose that a homeomorphism f satisfies conditions (a)-(c). Consider a system of quadruples {(aᵢ, bᵢ, cᵢ, dᵢ), aᵢ ≺ bᵢ ≺ cᵢ ≺ dᵢ, 1 ≤ i ≤ m} on the circle S¹. Suppose that the system of intervals {(aᵢ, dᵢ), 1 ≤ i ≤ m} covers each point of the circle at most N times. Then, there exists a constant C₁ = C₁(f, N) such that the following inequality holds: ∏_{i=1}^{m} DCr(aᵢ, bᵢ, cᵢ, dᵢ; f) ≤ C₁. Inequality (12) is called the cross-ratio inequality. This inequality was proved for critical circle homeomorphisms by Świątek [15]. Now, we prove the following three important lemmas to be used in the proof of the main results.
Lemma 5. Suppose that a homeomorphism f satisfies conditions (a)-(c). Consider a system of quadruples {(aᵢ, bᵢ, cᵢ, dᵢ), aᵢ ≺ bᵢ ≺ cᵢ ≺ dᵢ, 1 ≤ i ≤ m} on the circle S¹. Suppose that the system of intervals {(aᵢ, dᵢ), 1 ≤ i ≤ m} covers each point of the set S¹ \ C(f), the complement of the critical set, at most N times. Then, there exists a constant C₂ = C₂(f, N) such that the following inequality holds: ∏_{i=1}^{m} DCr(aᵢ, bᵢ, cᵢ, dᵢ; f) ≤ C₂. Proof. Since f is a P-homeomorphism on the set S¹ \ C(f), log f′ has bounded variation there. By assumption, the system of intervals F := {[aᵢ, dᵢ], 1 ≤ i ≤ m} covers each point of the set S¹ \ C(f) at most N times. Now, we decompose this system of intervals into a union of subsystems F_k, k ≤ N, in the following way: first, we take [a₁, d₁] as an element of F₁ and then consider the intersection [a₁, d₁] ∩ [a₂, d₂]; if this intersection is empty, then we count the interval [a₂, d₂] as an element of F₁; otherwise, we count it as an element of F₂. Next, consider F₁ ∩ [a₃, d₃] (here and below, the intersection with each element of F₁ is considered); if it is empty, we count [a₃, d₃] as an element of F₁; otherwise, we check the intersection with F₂; if it is empty, we count [a₃, d₃] as an element of F₂; otherwise, we count [a₃, d₃] as an element of F₃. Continuing this process, we obtain all F_k, k ≤ N. By construction of the subsystems F_k of F, the elements of each subsystem do not intersect each other, and the product over each subsystem is bounded in terms of v = Var log f′, which yields the required bound.
We will use the following definition and fact to formulate the next lemma. Definition 6. Let f be a C³ function such that f′ ≠ 0. The Schwarzian derivative of f is defined by Sf = f‴/f′ − (3/2)(f″/f′)². Fact (see [6]). If Sf < 0 on an interval J, then DCr(a, b, c, d; f) ≥ 1 for any quadruple a, b, c, d ∈ J.
Proof of Main Theorems
Proof of Theorem 2. The proof of Theorem 2 follows from the assertion of Theorem 4 together with the following proposition, which was proved by Świątek in [14]. Before we prove Theorem 3, we formulate two lemmas and use them to prove this theorem. Note that these lemmas were also obtained by Świątek [14].
Lemma 10. Let f be a circle homeomorphism with irrational rotation number. Assume that f satisfies the cross-ratio inequality with bound C₁. Then, there is a constant C₆ = C₆(C₁, f) ≥ 1 such that for every x ∈ S¹ the adjacent fundamental intervals at x are C₆-comparable. In case (I), the adjacent atoms are I and f^{−1}(I), and in this case by Lemma 10 these intervals are C₆-comparable. Consider case (II). Using the properties of the dynamical partition, it is easy to see that the required comparability holds (it suffices to prove it for the linear rotation R_ρ, for which it follows from the arithmetical properties of ρ).
Proposition 9. Let f be a circle homeomorphism with irrational rotation number ρ. Assume that f satisfies the cross-ratio inequality with bound C₁. Then, there is a circle homeomorphism φ: S¹ → S¹ which conjugates f to the linear rotation R_ρ. Furthermore, φ is quasisymmetric if ρ is of bounded type. If f has at least one critical point of polynomial type, then φ is quasisymmetric if and only if ρ is of bounded type. | 2,756.6 | 2014-03-20T00:00:00.000 | [
"Mathematics"
] |
Ethical dilemmas posed by mobile health and machine learning in psychiatry research
Abstract The application of digital technology to psychiatry research is rapidly leading to new discoveries and capabilities in the field of mobile health. However, the increase in opportunities to passively collect vast amounts of detailed information on study participants coupled with advances in statistical techniques that enable machine learning models to process such information has raised novel ethical dilemmas regarding researchers’ duties to: (i) monitor adverse events and intervene accordingly; (ii) obtain fully informed, voluntary consent; (iii) protect the privacy of participants; and (iv) increase the transparency of powerful, machine learning models to ensure they can be applied ethically and fairly in psychiatric care. This review highlights emerging ethical challenges and unresolved ethical questions in mobile health research and provides recommendations on how mobile health researchers can address these issues in practice. Ultimately, the hope is that this review will facilitate continued discussion on how to achieve best practice in mobile health research within psychiatry.
Introduction
Mental illness affects approximately one in five persons in any given year and is the leading cause of disability worldwide. [1][2][3] Nevertheless, there are approximately nine specialty care providers for those with psychiatric conditions per 100 000 persons across the globe, 4 whereas approximately 17 600 per 100 000 experience a common mental illness, 1 which suggests the current system is unable to deal with everyone needing treatment. In recognizing this fundamental issue of limited access, researchers have increasingly turned to technology to eliminate barriers to care. 5 Indeed, approximately half of the world's population has access to the internet and the average number of mobile or cell phone network subscriptions is greater than one per person globally. 6,7 Consequently, technology-based treatments may offer innovative ways of closing gaps in access.
Mobile devices can provide unprecedented amounts of intensive, longitudinal data on movement intensity and duration, psychomotor disturbance, social interactions, concentration, sleep duration and quality, information-seeking behaviour and affective states. 8 Moreover, the pattern-recognition capabilities of machine learning algorithms can be applied to these data to classify individuals by health-relevant characteristics (using, for example, defined "biomarkers" or "digital phenotypes") 9,10 and to predict clinical outcomes, such as a diagnosis or risk level. 11 These classifications and predictions can then be employed to guide health-care assessments and to inform the delivery of individually tailored interventions. However, the application of machine learning approaches to individuals who may be members of vulnerable psychiatric population groups raises a variety of ethical dilemmas. 12
Ethical models and principles
Ethical models and principles, particularly those applied specifically to psychiatry, are central to our discussion of mobile health and machine learning in psychiatry research. 13 Though ethical decision-making in health care is a vast field, 14 we selected three well-known ethical models for consideration because of their relevance to decision-making in mobile health and machine learning: utilitarianism, Kantian ethics and principlism.
In utilitarianism (a form of consequentialism), actions that produce good consequences and benefit the largest number of people are prioritized, even if individuals' privacy and autonomy must be sacrificed. 15 This model may be applicable to several topics relevant to machine learning and mobile health, perhaps most obviously to balancing the right to privacy against the advancement of science. The Kantian model places less emphasis on the outcome of an action; rather, moral rules and ideals and rational principles are considered to be of the utmost importance in ethical decision-making. 16 As applied to psychiatry research, this model might be especially relevant to researchers' responsibility to act and monitor data collection. This approach contrasts with other ethical decision-making perspectives that emphasize the personal rights of individuals. 13 These two models are particularly relevant to certain issues in machine learning and mobile health research, such as obtaining informed consent for monitoring and for accessing various data sources (e.g. smartphones, electronic health records and social media).
Finally, principlism is an ethical model that encompasses principles such as: (i) autonomy, which is defined as a patient's right to choose their own course of action (e.g. with regard to interventions); (ii) beneficence (i.e. ensuring the patients being treated benefit); (iii) nonmaleficence (i.e. ensuring
no unreasonable harm to patients); (iv) justice (e.g. fairness of, and access to, clinical practice); (v) confidentiality (i.e. ensuring the parties involved keep freely provided information confidential); and (vi) privacy (i.e. freedom from intrusion into personal matters). 14,17,18 Each of these principles applies to the issues discussed below. In particular, we highlight the plethora of confidentiality and privacy issues that arise when the predictive algorithms applied in machine learning make use of patients' data. Furthermore, autonomy, beneficence and nonmaleficence are relevant to the process of informed consent as well as to researchers' responsibility to monitor data and possibly intervene to prevent harm to patients.
Monitoring and intervening during data collection
Asking participants about sensitive topics (e.g. psychiatric symptoms) and assessing passive data streams (e.g. photographs on a smartphone) may result in researchers having information about negative health outcomes, such as worsening mental status, mood or physical health (i.e. adverse events). Researchers may have an ethical obligation to monitor the emergence of adverse events and potentially to intervene to mitigate negative outcomes. For example, in studies in which suicidal ideation and intent are being monitored, any indication of imminent risk may warrant taking safety measures. Researchers should consider the following when monitoring for adverse events: (i) the feasibility of monitoring and intervening given the resources and expertise available; (ii) validity (i.e. the possibility that monitoring or interventions may interfere with the behaviours under observation); and (iii) loss of privacy (e.g. breaking confidentiality when calling emergency services because of the suspected imminent risk of serious harm to oneself or others). The vastly increased frequency of data collection in daily life, coupled with reduced personal contact with participants in mobile health assessment studies, makes it more difficult to address these concerns than it would be in a traditional laboratory or clinic-based study.
Feasibility
The drastic increase in the availability of information with no specific time frame or location has made it less straightforward and feasible to monitor risks and to intervene. For example, an individual may indicate the intention to end his or her life imminently, but researchers may be unaware of the individual's location or be unable to make contact. Also, there is a lack of broad agreement about the threshold for initiating an intervention in response. Does the risk have to be simply high, or high and imminent? Should the intervention require a full assessment or be based only on study data? Should it ever be triggered by passively collected data (e.g. the content of text messages) or by predictive algorithms that are still under development? In some cases, institutional review boards advise collecting data only when the ability to monitor and intervene is viable; for example, in an anonymous online study, researchers may be advised not to ask for information about high-risk indicators (e.g. suicidal intent) or to collect open data streams (e.g. photographs) that might lead to an ethical obligation to intervene. However, this approach will limit researchers' ability to learn about critical psychiatric phenomena as they occur in ecologically valid (i.e. real-world) settings and time frames.
Validity
A key advantage of mobile health studies compared with laboratory or clinic-based studies is their greater ecological validity. However, close monitoring (and possible intervention) may alter participants' responses, thereby compromising the validity of their data or increasing the risk they will withdraw from the study. For example, knowing that a study staff member may contact them in response to certain answers to survey questions could increase the likelihood that individuals who welcome such contact would give those answers. Others may be deterred from giving those same answers to avoid contact or being sent to hospital. Understanding how monitoring affects data validity is an important area of future research and is vital for developing guidelines on such issues.
Privacy
There are several ethical questions about the use of study participants' data to monitor, and intervene during, adverse events. For example, is it permissible to combine multiple streams of data to inform risk assessment (e.g. using physiological sensor data to assess reported health events) or to guide interventions (e.g. using smartphone geolocation data to deploy emergency services based on a self-reported response that indicates an imminent risk)? As in other contexts, currently there are no guidelines governing the trade-off between participants' safety and their privacy, confidentiality and autonomy, especially when an intervention may occur without anyone speaking to the participant in advance. At the very least, researchers are encouraged to fully inform participants during the initial consent process about how their data will be monitored and used to guide emergency interventions.
The right to privacy versus the advancement of science
Privacy considerations are especially germane to data collection that involves passive monitoring of participants' daily lives, such as geolocation and actigraphy (i.e. movement) data. Since these data streams are collected without participants' active engagement, there is a higher likelihood that they will lack full awareness of how such information can be transformed into a highly idiosyncratic and complex picture of a participant's behaviour, which could potentially reveal an individual's identity and actions. The ethical concerns that may arise from the tension between the advancement of science using mobile health technology and participants' rights over their personal data are particularly relevant to: (i) informed consent, transparency and voluntary participation; and (ii) the protection of participants' data (which could affect their security and safety).
Informed consent
Informed consent is a key ethical requirement in research and has been defined as "the process by which a fully informed user participates in decisions about his or her personal data." 19 In obtaining consent, the following principles must be taken into account: (i) disclosure, whereby researchers clearly and thoroughly inform prospective participants of the potential risks and benefits of participating in the study;
(ii) agreement, whereby participants are asked to accept or decline participation; (iii) comprehension, wherein participants must demonstrate full and detailed understanding of the study; (iv) competence (i.e. the participant must have the mental and physical ability to provide consent); and (v) voluntariness (i.e. participants consent of their own volition). 17,18 Mobile health research using machine learning presents unique challenges in adhering to these principles. Recently, several recommendations have been made in response to the growing challenge of obtaining informed consent in the modern digital world. 20,21 One of the most difficult principles to address is comprehension because consent is often requested online. 22 Currently, digital platforms frequently obtain consent via terms-of-service agreements, which are written in incomprehensible legalese and are rarely even read. 23,24 Researchers have an ethical responsibility to minimize the risk of this occurring by disclosing all relevant study information in a comprehensible manner. To enhance comprehension, the overall consenting process should be thorough, engaging and accessible. Giving consent in person or through a video or phone call is preferable, but is often not feasible for large-scale, mobile health projects. Researchers should carefully consider the potential trade-offs between how consent is obtained and maximizing access to study benefits or interventions, particularly in low-resource areas where individuals may have few options for good-quality care of their psychiatric conditions. When consent is obtained via an online platform, the process should highlight key information and prevent participants from clicking through without reading. 25 Engagement can be increased by using interactive screens and video or audio content and by summarizing sections in clear, concise language. 23,26 Participants' comprehension and competence can both be assessed using short quizzes or games, and a live chat feature can allow participants to clarify their understanding. 21 The principle of voluntariness implies that engagement in research is noncoercive and of the participant's own free will and that participants have been presented with an explicit opportunity to decline participation. These criteria can be enhanced by giving participants the opportunity to consent to different data collection streams (e.g. to opt into daily diaries but not geolocation data) or to different research modules (e.g. to opt into the cardiovascular health module but not the sexual dysfunction module). 21 Although this approach maximizes voluntariness and the autonomy of each participant, it may come at the cost of sacrificing data quality and, thereby, predictive accuracy. For instance, if participants opt out of entire modules, it could be more difficult to accurately determine their health risk levels and to deliver optimized, just-in-time interventions, which may rely on algorithms that require multiple data streams (e.g. the cross-referencing of geolocation and accelerometer data that could predict drug use in a high-risk environment). Therefore, in addition to disclosing the risks of opting in to all data streams, researchers and health providers should help participants to make informed decisions by fully revealing the personal benefits of opting in and the societal benefits of each research module.
In addition, obtaining ongoing consent is recommended: participants should repeatedly revisit the terms of the study protocol throughout its duration to ensure their continued comprehension, competence and voluntary consent. 27
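To make the stream-level opt-in and ongoing-consent ideas concrete, the following minimal sketch shows one way a consent record could gate data collection per stream and track when consent was last reviewed. All names and fields are hypothetical illustrations, not part of any cited platform or regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical data-stream identifiers; a real study would define its own.
STREAMS = {"daily_diary", "geolocation", "accelerometer"}

@dataclass
class ConsentRecord:
    participant_id: str
    # Per-stream opt-in flags; every stream defaults to opted out.
    opted_in: dict = field(default_factory=lambda: {s: False for s in STREAMS})
    # Timestamp of the last consent review, supporting ongoing consent.
    last_reviewed: datetime = field(default_factory=datetime.utcnow)

    def opt_in(self, stream: str) -> None:
        if stream not in STREAMS:
            raise ValueError(f"unknown stream: {stream}")
        self.opted_in[stream] = True
        self.last_reviewed = datetime.utcnow()

    def allows(self, stream: str) -> bool:
        # Data may be collected only from streams explicitly opted into.
        return self.opted_in.get(stream, False)

record = ConsentRecord("P001")
record.opt_in("daily_diary")             # participant accepts daily diaries
assert record.allows("daily_diary")
assert not record.allows("geolocation")  # geolocation stays off by default
```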
Protection of participants' data
Ethical considerations also extend into the realm of protecting participants' data. Since mobile health research involves the passive collection of data that are not typically associated with health outcomes (e.g. screen time) and may fall outside long-standing regulatory protections, researchers must apply the highest ethical standards in protecting such intensive and potentially sensitive data. Moreover, highly detailed information is often collected from multiple sources, and combining data streams could produce information that identifies individuals and could be used in unauthorized ways (thereby affecting an individual's employment opportunities or eligibility for insurance) or commercialized for targeted advertisements. In extreme circumstances, unauthorized use of such highly granular data could put a participant's safety at risk if accessed by ill-intentioned actors.
Some recent efforts to establish regulations to protect personal data are the European Union's General Data Protection Regulation and California's Consumer Privacy Act, both of which give people more access to, and autonomy over, their data, require greater transparency on data use and stipulate tighter oversight to ensure the protection and security of data. 28,29 By following the recommendations of these regulatory initiatives, research teams can adopt dissociable roles for the handling and processing of intensive mobile health data. For example, one individual could be designated to manage identifying information, whereas another could handle unidentifiable aggregated data and conduct analyses. This would enhance data security by minimizing the proliferation of sensitive information. An additional security step could involve further dissociation of roles: face-to-face interactions could be carried out by yet another individual who is blinded to highly granular personal information (e.g. home addresses), thereby mitigating risks to participants' safety.
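As a minimal sketch of the role dissociation described above (assuming a simple two-table split; the record fields are invented for illustration), identifying information and de-identified measurements can be linked only through a random pseudonym held by the person managing identities:

```python
import secrets

# Hypothetical raw records combining identity and sensitive measurements.
raw_records = [
    {"name": "Alice", "address": "1 Main St", "screen_time_min": 312},
    {"name": "Bob", "address": "2 Oak Ave", "screen_time_min": 145},
]

identity_table = {}   # held only by the person managing identifying data
research_table = []   # de-identified data handled by the analyst

for rec in raw_records:
    pseudonym = secrets.token_hex(8)  # random key linking the two tables
    identity_table[pseudonym] = {"name": rec["name"], "address": rec["address"]}
    research_table.append({"id": pseudonym,
                           "screen_time_min": rec["screen_time_min"]})

# The analyst sees only pseudonymous measurements:
print(research_table)
```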
Shifts towards research involving intensive, mobile health data and open access data-sharing provide unprecedented opportunities for growth in translational research. To guard against the unintended consequences of these advances, researchers should consider adopting novel models for participants' consent, data protection and engagement in the research process. 30,31 The recommendations outlined here point to a participant-centric model that could maximize protection for study participants while supporting research aimed at improving psychiatric outcomes.
Machine learning models: performance versus interpretability
Machine learning is being increasingly applied in psychiatry for diagnosis, treatment selection and clinical administration. 32,33 However, its future is affected by a key ethical dilemma associated with the trade-off between the performance and the interpretability of machine learning models. Interpretability relates to the ease of deciphering how a set of inputs to a model (e.g. patient characteristics and medical history) result in a particular output or prediction (e.g. a diagnosis or risk assessment). More complex models (e.g. random forests and neural networks) often have greater accuracy, but lower interpretability than simpler models (e.g. naïve Bayes classifiers). 34 From a clinical perspective, models with low interpretability could raise ethical challenges because it may be difficult to understand how input variables contribute to the model's predictions. Moreover, given that a model's predictions could have substantial clinical consequences, such as hospitalization or the administration of medication, its accuracy is vital.
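The performance-interpretability trade-off can be illustrated with a toy experiment on synthetic data; this is an illustrative sketch, not an analysis from the article, and it uses logistic regression as the simple, inspectable model in place of the naïve Bayes classifier mentioned above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for patient features; no real clinical data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
complex_ = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("logistic regression:", accuracy_score(y_te, simple.predict(X_te)))
print("random forest:      ", accuracy_score(y_te, complex_.predict(X_te)))

# The linear model exposes one coefficient per input, so a reviewer can see
# how each feature pushes the prediction; the forest's 200 trees do not
# decompose as readily.
print("first 5 coefficients:", simple.coef_[0][:5])
```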
Machine learning models are predominantly conceptualized as support tools and not as replacements for clinicians. 35,36 However, it is not clear whether or how these models should be incorporated into clinical practice. For example, a prognostic model with high predictive accuracy, but low interpretability might result in a clinician knowing a patient is at risk, but not what to target with an intervention. In addition, how clinicians should share information from machine learning models with patients also gives rise to ethical questions. Would patients want to know they are at risk, particularly if they cannot be told why (as factors included in machine learning models generally cannot be interpreted as having a causal impact on outcomes)? Sharing information from an uninterpretable model may adversely affect a patient's conceptualization of their own illness, cause confusion and prompt concerns about transparency. 37,38 So far, research suggests there is no clear consensus among patients on whether they would want to know this kind of information about themselves, 37 which leaves psychiatrists to balance the potential utility of a machine learning model's predictions against the risk of liability and the patient's reactions. 39

Furthermore, when a model is not interpretable, a clinician's ability to be cognizant of possible fairness issues could be limited. 40 In machine learning, fairness encompasses concerns about how data-driven approaches can reflect and perpetuate biases rooted in social inequality and discrimination. 41,42 A model's predictions can vary systematically across demographic groups if, for example, the data being sampled reflect societal inequalities (i.e. historical bias) or if the sampling methods result in the underrepresentation of certain groups (i.e. representation bias). 43 Consider a machine learning model trained using the electronic health records of medical visits; 44 this model might not be able to accurately predict psychiatric conditions in immigrant populations that avoid interacting with the health-care system. 45 Additionally, clinician bias in International Classification of Diseases codes or clinical notes can introduce variations in the inputs to a machine learning model that, in turn, bias the model's predictions for minority groups. 46 Moreover, with less interpretable models, it can be more challenging to detect, track and rectify these different sources of bias.
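A first step toward detecting representation bias is simply to stratify a model's performance by group; the sketch below simulates an underrepresented group whose predictions are systematically worse (all numbers are fabricated for illustration):

```python
import numpy as np

# Hypothetical arrays: true labels, model predictions, and a group label
# (e.g., a demographic attribute) for each participant.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()
group = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])  # B underrepresented

# Simulate worse predictions for the underrepresented group.
flip = (group == "B") & (rng.random(1000) < 0.3)
y_pred[flip] = 1 - y_pred[flip]

for g in ["A", "B"]:
    mask = group == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: n={mask.sum()}, accuracy={acc:.2f}")
```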
Although numerous ways of addressing issues of effectiveness and fairness in machine learning are emerging, 47 these are often based on one-time analyses of a single data set. At the forefront of machine learning today, the trend is to allow models to change iteratively over time with each new piece of incoming data. However, the practice of employing continually adaptive algorithms raises questions about how often algorithms should be updated and when reassessment is warranted. Recently, the United States Food and Drug Administration proposed new regulations for monitoring changes in adaptive algorithms as they continuously learn from real-world data. 48 These regulations require manufacturers to prespecify the changes they anticipate because of online learning and the protocols in place for addressing risks that might result from changes to an algorithm's operations. Assessments of potential risk consider the degree to which an algorithm contributes to the psychiatric decision (e.g. guiding treatment versus assigning a diagnosis) and the severity of the patient's condition (e.g. identifying individuals at risk of developing a psychiatric disorder in the future, versus identifying those at acute risk of suicide). The emphasis is on transparency: manufacturers should tell end-users about the changes occurring in algorithm performance over time and provide transparent information about algorithm processes in a way that enables clinicians and patients to engage in meaningful risk assessments of the machine learning model. Less interpretable models constrain transparency and thus limit potential contributions from all important stakeholders. Instead of focusing on how machine learning can be used in clinical care within psychiatry, the priority might be first to consider whether predictions from a specific machine learning model are appropriate for informing decisions about a particular intervention. 49
Conclusions
This review highlights the wide range of ethical issues faced by psychiatry researchers in the digital age. Although mobile health and machine learning have the potential to facilitate great advances in, and close access barriers to, psychiatric research and care globally, they also give rise to new ethical questions concerning, for example: (i) the responsibility to monitor naturally occurring adverse events and intervene accordingly; (ii) the need to guard privacy rights and ensure informed consent while conducting scientific research; and (iii) the importance of increasing the transparency of powerful machine learning models to ensure they can be applied ethically and fairly in clinical decision-making. Ultimately, the issues need to be thoughtfully considered by several stakeholders, including regulatory agencies, clinicians, participant-advocacy groups and ethicists, as well as researchers. As the number of mobile health studies increases and mobile technologies evolve, 50 these questions will only grow in importance. This review also offers recommendations on how mobile health researchers can address these issues in practice; ultimately, we hope it will foster continued discussion of how to define adequate research methods for mobile health in psychiatry.
Case Report: Histological and Histomorphometrical Results of a 3-D Printed Biphasic Calcium Phosphate Ceramic 7 Years After Insertion in a Human Maxillary Alveolar Ridge
Introduction: Dental implant placement can be challenging when insufficient bone volume is present and bone augmentation procedures are indicated. The purpose was to assess clinically and histologically a specimen of a 30% HA-60% β-TCP BCP 3D-printed scaffold after 7 years. Case Description: The patient underwent bone regeneration of the maxillary buccal plate with a 3D-printed biphasic HA block in 2013. After 7 years, a specimen of the regenerated bone was harvested and processed for microCT and histomorphometrical analyses. Results: The microarchitecture study performed by microCT on the test biopsy showed that the biomaterial volume had decreased by more than 23% and that newly formed bone volume represented more than 57% of the overall mineralized tissue. Compared with unloaded controls or peri-dental bone, the Test-sample appeared much more mineralized and bulky. Histological evaluation showed complete integration of the scaffold and signs of particle degradation. The percentages of bone, biomaterial and soft tissues were 59.2%, 25.6%, and 15.2%, respectively. Under polarized light microscopy, the biomaterial was surrounded by lamellar bone. These results indicate that, while unloaded jaws mimicked the typical osteoporotic microarchitecture after 1 year without loading, the BCP helped to preserve a correct microarchitecture after 7 years. Conclusions: BCP 3D-printed scaffolds represent a suitable solution for bone regeneration: they can lead to straightforward and less time-consuming surgery, and to bone preservation.
INTRODUCTION
Dental implant placement can be challenging when an insufficient bone volume is present at the recipient site (Araújo and Lindhe, 2005). Autogenous bone has been described as the gold standard in bone regeneration techniques but, due to its limitations (limited intraoral sources, tendency to rapid and partial resorption, and additional surgery with increased morbidity; Yamamichi et al., 2008; Scarano et al., 2011; Iezzi et al., 2012), allografts and xenografts have been developed and proposed as suitable alternatives: they are theoretically available in limitless amounts and in different dimensions and profiles, and can be customized or combined with growth factors, hormones, drugs, and stem cells (Piattelli et al., 1996a; Pettinicchio et al., 2012; Mangano et al., 2015a; Paré et al., 2020).
Different bone substitute materials have been tested, but it remains unknown which graft material can be considered the best (Mazor et al., 2009; Iezzi et al., 2012; Pettinicchio et al., 2012; Danesh-Sani et al., 2016). Biphasic calcium phosphate ceramics (BCPs) have been reported to have high biocompatibility and a capability to enhance cell viability and proliferation (Castilho et al., 2014; Asa'ad et al., 2016; Zeng et al., 2020). With the improvement of computer-aided design/computer-aided manufacturing (CAD/CAM) technologies, it has become feasible to analyze the bone deficiency of a patient on a 3D-CT scan and to create bone grafts that fit perfectly into the receiving site (Mangano et al., 2015b; Luongo et al., 2016; Raymond et al., 2018). Several techniques have been used to produce three-dimensional scaffolds, e.g., inkjet printing, stereolithography, fused deposition modeling, and selective laser sintering (Bose et al., 2013; Hwang et al., 2017; Liu et al., 2019; Chung et al., 2020). These techniques allow the creation of solid constructs with excellent pore interconnectivity, high biocompatibility and space-maintaining capabilities and, for bone regeneration procedures, they seem able to provide greater osteoconductivity (Carrel et al., 2016; Hwang et al., 2017; Raymond et al., 2018; Kim et al., 2020).
The purpose of the present study was to assess clinically, histologically and by high-resolution X-ray tomography a specimen of a 30% hydroxyapatite (HA) and 70% tricalcium phosphate (TCP) BCP 3D-printed scaffold, harvested after 7 years of healing.
Case Description
The Ethical Committee of the Hospital of Varese, Italy approved the study protocol (N° 826 of 03/10/2013). In 2013, the patient requested fixed prosthetic rehabilitation due to the lack of the second premolar and first molar of the right upper jaw. As there was a lack of bone support, and the patient refused a sinus lift, it was decided to insert a dental implant in zone 1.5 with simultaneous bone regeneration of the atrophic buccal wall. The patient, who signed a written informed consent form, underwent implant therapy with bone regeneration of the maxillary buccal plate to replace the second premolar in 2013. The horizontal bone augmentation procedure was performed using 3D-printed biphasic HA blocks, which were placed on the bone wall and stabilized by sutures. In CBCT 1, the 3D-printed HA graft can be seen, characterized by its particular predefined porous structure. As shown in X-ray and CBCT 2, 4 months after regeneration the prosthetic rehabilitation was performed with a bridge from 1.4 to 1.7. After 7 years, during which the patient had no clinical follow-up, the patient returned with serious periodontal problems affecting the first upper right premolar (1.4) and the second upper right molar (1.7), as shown in X-ray and CBCT 3. Therefore, the patient underwent another implant surgery to replace the first premolar in the regenerated region, and a core of regenerated bone was obtained with a trephine.
Scaffold Fabrication
In this study, the ceramic scaffold was produced by the direct rapid prototyping technique of dispense-plotting (Deisinger et al., 2008). The biomaterial was produced by Biomed Center (Bayreuth, Germany) following the systematic approach to the biological evaluation of medical devices, as part of the risk management process described in ISO 10993-1:2018 and in accordance with ISO 14971 and ISO 13175-3 ("Implants for surgery - Calcium phosphates - Part 3: Hydroxyapatite and beta-tricalcium phosphate bone substitutes"), as shown in the flowchart. The chemical composition of the 3D-printed biphasic HA is manufactured under a highly controlled process. A computer-generated scaffold model with a cylinder-shaped outer geometry was designed using 3D-CAD software. The size of the scaffold prototype was adjusted to compensate for the shrinkage of the ceramic material in the later sintering process. Physical rods consisting of a paste-like aqueous ceramic slurry were extruded out of a container through a jet and deposited using an industrial robot (GLT, Pforzheim, Germany) to build up the green bodies. In this study, HA and TCP powders (Merck, Germany) were combined to obtain a biphasic powder blend with a HA/TCP weight proportion of 30/70. Thermal treatment of the raw HA powder at 900 °C for 1 h and the addition of a compatible binder/dispersant system of organic additives (10.5 wt% relative to the amount of ceramic powder) gave the aqueous biphasic ceramic slurry its specific rheological behavior. The rod deposition was ordered in the x, y, and z directions to build 3D scaffolds layer by layer on a deposition platform. Rotating the direction of rod deposition by 60° from layer to layer produced a 3D network with an interconnecting pore arrangement. The assemblies built of ceramic slurry were dried at room temperature and then sintered at 1,250 °C for 1 h. The double packaging and labeling process was carried out in clean rooms (classified as ISO 6). Sterilization of the product was performed by gamma irradiation. Identification and traceability of the devices were also guaranteed.
High-Resolution Tomography
MicroCT experiments were performed in two sessions: (1) at a laboratory-based microCT device (CISMIN Center, Polytechnic University of Marche, Ancona, Italy), obtaining morphometric information on the microarchitecture of the overall mineralized bone, of the newly formed bone and of the residual biomaterial; (2) at the SYRMEP microCT beamline of the ELETTRA Synchrotron Radiation Facility (Basovizza, Trieste, Italy), obtaining quantitative information on osteocyte lacunae size and distribution in the newly formed bone. For laboratory-based microCT, a Skyscan 1174 (SkyScan-Bruker, Antwerp, Belgium) tomographic acquisition was set with the following parameters: voltage: 50 kV; current: 800 µA; pixel size: 6.5 µm; rotation step over 180°: 0.1°; exposure time per projection: 0.1 s; filter: 1 mm of Al. The absorption projection images (8-bit TIFF) were reconstructed using the NRecon software (SkyScan-Bruker, Antwerp, Belgium) to obtain a set of cross-sectional slices (8-bit BMP), with ring artifact and beam hardening corrections. For the synchrotron-based microCT acquisition, the scans were performed using the following parameters: energy: white beam with peak energy at ∼19 keV; voxel size: 890 nm (isotropic); rotation step over 180°: 0.1°; specimen-detector distance: ∼100 mm. Due to the coherence of the synchrotron source, the recorded radiographs included phase contrast signals. The method was based on the discrimination between the absorption index β and the refractive index decrement δ of the complex index of refraction n = 1 − δ + iβ in the tissues of the biopsy. The reconstruction was performed using Paganin's method (Paganin et al., 2002), together with the usual filtered back projection (FBP) algorithm. In Paganin's method, the phase was retrieved by assuming a linear correlation between β and δ. The δ/β ratio, in the present experimental protocol, was set to 5.
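For reference, a standard statement of Paganin's single-distance phase retrieval (the exact notation varies between sources) recovers the projected thickness t from one propagation-based image I recorded at distance z:

$$ t(\mathbf{r}_\perp) = -\frac{1}{\mu}\,\ln\!\left(\mathcal{F}^{-1}\!\left[\frac{\mathcal{F}\{I(\mathbf{r}_\perp, z)/I_0\}}{1 + (z\delta/\mu)\,|\mathbf{k}_\perp|^2}\right]\right), \qquad \mu = \frac{4\pi\beta}{\lambda}, $$

where $\mathcal{F}$ denotes the 2D Fourier transform and $\mathbf{k}_\perp$ the spatial frequency; the assumed δ/β ratio (here 5) enters through the zδ/μ term.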
The commercial software VG Studio MAX 1.2 (Volume Graphics, Heidelberg, Germany) was used to create 3D images and visualize the 3D phase distribution. X-ray contrast variations within the samples appear as different peaks on the gray-level scale, corresponding to the different phases. The volume of each phase was obtained by multiplying the volume of a voxel by the number of voxels underlying the peak associated with the relevant phase. The Mixture Modeling algorithm (NIH ImageJ plugin) was employed to threshold the histograms. Thresholded slices were used to automatically separate the new bone phase from the scaffold phase. The analyzed subvolumes were 3D portions completely contained within the sample bulk.
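The voxel-counting step described above reduces to thresholding the gray-level histogram and multiplying voxel counts by the voxel volume. A minimal sketch follows, with an arbitrary random volume and invented thresholds standing in for the Mixture Modeling output:

```python
import numpy as np

# Hypothetical reconstructed subvolume: gray levels in [0, 255], with
# background, bone, and scaffold occupying different histogram peaks.
rng = np.random.default_rng(1)
volume = rng.integers(0, 256, size=(100, 100, 100))

voxel_side_um = 6.5             # laboratory microCT pixel size from the text
voxel_vol = voxel_side_um ** 3  # volume of one voxel in µm^3

# Thresholds between histogram peaks (arbitrary here; the study derived
# them with the Mixture Modeling algorithm).
bone_mask = (volume >= 90) & (volume < 170)
scaffold_mask = volume >= 170

bone_volume = bone_mask.sum() * voxel_vol
scaffold_volume = scaffold_mask.sum() * voxel_vol
total = volume.size * voxel_vol
print(f"BV/TV  = {100 * bone_volume / total:.1f}%")
print(f"ScV/TV = {100 * scaffold_volume / total:.1f}%")
```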
The microarchitecture investigation was centered on the Parfitt structural indices (Parfitt et al., 1987): the following morphometric parameters were evaluated for the entire mineralized tissue: specific specimen volume (SV/TV, expressed as a percentage), specific specimen surface (SS/SV, per millimeter), strut thickness (STh, expressed in micrometers), strut number (SNr, per millimeter), and strut spacing (SSp, expressed in micrometers). Since bone orientation varies with mechanical loading, information on the possible presence of preferential orientation(s) was extracted (Harrigan and Mann, 1984) by calculating the degree of anisotropy index (Tb.DA). Tb.DA was investigated with the BoneJ plugin (Doube et al., 2010) of the ImageJ software (Abramoff et al., 2004; Schneider et al., 2012; Rasband, 2019): it varies between 0 (perfect isotropy) and 1 (strong anisotropy). Finally, trabecular connectivity density (Tb.Conn.D) was calculated: it supplies an overall quantitative evaluation, with greater values for better-connected structures and lower values for poorly connected ones. For the regenerated bone, the same quantitative descriptors previously applied to the full mineralized tissue were used in order to quantify: overall bone volume (BV, mm³), overall bone surface (BS, mm²), bone volume to total volume ratio (BV/TV, expressed as a percentage), bone surface to bone volume ratio (BS/BV, per millimeter), bone thickness (BTh, expressed in micrometers), bone number (BNr, per millimeter), and bone spacing (BSp, expressed in micrometers). The kinetics of scaffold dissolution was also examined using the same quantitative descriptors [i.e., overall scaffold volume (ScV, mm³), overall scaffold surface (ScS, mm²), scaffold volume to total volume ratio (ScV/TV, expressed as a percentage), scaffold surface to scaffold volume ratio (ScS/ScV, per millimeter), scaffold thickness (ScTh, expressed in micrometers), scaffold number (ScNr, per millimeter), and scaffold spacing (ScSp, expressed in micrometers)].
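In the 0-to-1 convention quoted above, the degree of anisotropy is commonly computed from the mean intercept length (MIL) ellipsoid as

$$ \mathrm{DA} = 1 - \frac{\lambda_{\min}}{\lambda_{\max}}, $$

where $\lambda_{\min}$ and $\lambda_{\max}$ are the shortest and longest radii of the MIL ellipsoid; a perfectly isotropic structure gives DA = 0 and a strongly oriented one approaches DA = 1. (This is the convention used by BoneJ; the plugin's exact estimator may differ in detail.)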
Synchrotron-based imaging provided information on the morphometric properties of the osteocyte lacunar network, including the mean lacunar thickness (Lc.Th), the mean lacunar volume (Lc.V), and the lacunar density (number of lacunae per total volume, Lc.Nr/TV).
Histology
The biopsy was fixed in 10% buffered formalin and processed (Precise 1 Automated System; Assing, Rome, Italy) to obtain thin ground sections. The specimen was dehydrated in an ascending series of alcohol solutions and embedded in glycol-methacrylate resin (Technovit 7200 VLC; Kulzer, Wehrheim, Germany). After polymerization, the specimen was sectioned along its longitudinal axis with a high-precision diamond disk to about 150 µm and ground down to about 30 µm with a specifically designed grinding machine. Each slice was stained with acid fuchsin and toluidine blue and analyzed under a light microscope (Laborlux S, Leitz, Wetzlar, Germany) coupled to a high-resolution video camera (3CCD, JVC KY-F55B, JVC, Yokohama, Japan) and interfaced with a monitor and PC (Intel Pentium III 1200 MMX, Intel, Santa Clara, CA, USA). This optical system was connected to a digitizing pad (Matrix Vision GmbH, Oppenweiler, Germany) and a histomorphometry software package with image-capturing capabilities (Image-Pro Plus 4.5, Media Cybernetics Inc., Immagini & Computer Snc, Milano, Italy). A single well-trained examiner (GI), who was not involved in the surgical treatment, assessed the histological results. The following outcome measures were evaluated: percentages of newly formed bone, marrow spaces and residual graft particles. Birefringence was measured as a sign of transverse collagen orientation using polarized light. Collagen fibers were observed by placing the thin bone sections under an Axiolab light microscope (Laborlux S, Leitz, Wetzlar, Germany) equipped with two linear polarizers and two quarter-wave plates set to transmit circularly polarized light. Collagen fibers aligned transverse to the direction of light propagation (parallel to the specimen slice plane) appeared bright due to refraction of the transmitted light, while collagen fibers aligned along the axis of light propagation (perpendicular to the specimen slice plane) appeared dark because no refraction occurred.
Scaffold Characterization
The sintered dispense-plotted assemblies had a typical mesh-like organization with rod diameters of 300 ± 30 µm and pore sizes between the rods of about 370 ± 25 µm. From the geometrical density of the sintered scaffolds, a total porosity of about 60% was estimated. The relative bulk density of the sintered specimens was assessed at 99% of theoretical density by pycnometry. Two main material phases of the sintered ceramic were identified by semi-quantitative XRD measurements: 30% HA and 60% β-TCP, plus a small peak of α-TCP (70% TCP in total).
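The porosity estimate follows from the usual relation between the geometrical (bulk) density of the scaffold and the theoretical density of the fully dense ceramic:

$$ P = \left(1 - \frac{\rho_{\mathrm{geom}}}{\rho_{\mathrm{th}}}\right) \times 100\%, $$

so a measured ratio $\rho_{\mathrm{geom}}/\rho_{\mathrm{th}} \approx 0.40$ corresponds to the reported total porosity of about 60%. (The paper does not spell out this formula; it is the standard definition.)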
High-Resolution X-Ray Tomography
MicroCT images of representative subvolumes of the Test-sample (i.e., the maxilla biopsy grafted with the BCP and retrieved after 7 years) are shown in Figure 1. In Figure 1A, all tissues except mineralized bone and residual scaffold have been made virtually transparent, while in Figure 1B the same subvolume is shown with the newly formed bone also made transparent, for a better visualization of the residual scaffold, not fully resorbed after 7 years in vivo. A transversal section and the 3D distribution of the osteocyte lacunae in a representative subvolume are reported in Figures 1C,D, respectively. Numerous subvolumes, collected in different areas and completely included in the biopsy, were chosen, producing the microarchitecture data reported in Tables 1, 2A.
The study of the microarchitecture in the test maxillary biopsy (Test-sample retrieved after 7 years in vivo) is detailed in Table 1: the full mineralized structure (S), the newly formed bone (B), and the scaffold residuals (Sc) were considered. A comparison was made with the BCP scaffold as produced [i.e., before the in vivo test (Ctr-Sc)]. After 7 years, against a comparable number of struts, an increase of almost 80% in specific volume and of more than 100% in average strut thickness was observed, together with a decrease of almost 54% in specific surface and of over 80% in average spacing between struts. Furthermore, after 7 years in vivo, a reduction of the biomaterial volume of more than 23% was observed, and the newly formed bone volume was more than 57% of the overall mineralized volume. In this context, it is widely accepted in the literature that jawbones are perfectly healed 6 months after tooth extraction in healthy patients (Guralnick, 1968; Jahangiri et al., 1998). Based on the data shown above, the Test-sample also had to be considered healed. However, this Test-sample did not participate in mastication for 7 years; thus, it was particularly interesting to study possible alterations with respect to the physiological conditions of the peri-dental bone (Pd-Ctr) and to unloaded controls (UnL-Ctr) (i.e., bone biopsies spontaneously healed in 12 months after tooth extraction but not participating in mastication). This comparative study, shown in Tables 2A,B, was supported by the data of a recent study (Iezzi et al., 2020). Table 2A shows this comparison in terms of quantitative microarchitecture: the Test-sample turned out to be much more mineralized and bulky not only compared to UnL-Ctr, with an increase of the mineralized volume of 121%, but also compared to the peri-dental physiological context (Pd-Ctr), with an increase of over 61%. Interestingly, the degree of anisotropy (DA) of the Test-sample resembled that of the peri-dental site and was much less oriented than in the UnL-Ctr samples. The increase in mineralized volume of the BCP-based Test-sample was correlated to the study of bone architecture at the length scale of the osteocyte lacunar network (Table 2B, which reports the three-dimensional morphometric investigation of the osteocyte lacunar network in the test maxilla compared with peri-dental bone and unloaded bone). The same subvolumes investigated for the microarchitecture data were also studied for the 3D morphometric analysis of the osteocyte lacunae: considering the standard deviations, comparable values were found in Test and Control sites (both Pd-Ctr and UnL-Ctr) when evaluating Lac.V, Lac.Th, and Lac.Nr. However, observation of the pure mean values revealed the same lacunar density (Lac.Nr) in the Test-sample and the UnL-Ctr samples, but an increased mean Lac.Nr in the physiological context of the peri-dental site (Pd-Ctr), in agreement with previous observations (Iezzi et al., 2020).
Histological Results
After microCT testing, the biopsy was available for histological evaluation. At low magnification, the sample revealed complete integration of the scaffold; only in the most peripheral portion was a small amount of soft tissue present. Indeed, the residual biomaterial block, constituted by interconnected pores, was filled with bone. This portion was close to a thin layer of cortical bone with very small marrow spaces (at the bottom of the sample) (Figure 2A). At high magnification, the biomaterial was well incorporated into the mature bone, both in areas close to the cortical bone and in areas far from it. At the bone-biomaterial particle interface, the particles showed a lower density compared to their central portion (Figure 2B). No gaps were detected at the bone-particle interface, and the bone was always in intimate contact with the particles. The porous structure of the biomaterial was partially modified, and the shape of the particles revealed signs of degradation. Moreover, in one field, close to the residual biomaterial, a multinucleated giant cell was observed, showing that the process of biomaterial resorption happened slowly over time (Figure 2C). In the small marrow spaces, some blood vessels were present, and in a few fields, foci of bone remodeling with osteoblastic activity were observed (Figure 2D). No inflammatory cells were present. The percentages of bone, residual biomaterial and soft tissues were 59.2%, 25.6%, and 15.2%, respectively.
Polarized Light Observations
The same fields of the samples were examined under polarized light and compared to the light microscopic images in order to study the quality of the bone and the orientation of the collagen fibers. In all cases, the biomaterial block was surrounded by lamellar bone with parallel-oriented collagen fibers (Figures 2E,F); only in small areas were they randomly oriented.
DISCUSSION
The purpose of the present study was to assess the healing and resorption process of a BCP 3D-printed bone substitute and the nature and amount of regenerated bone. The newly formed tissues were evaluated by an innovative experimental approach based on histological and X-ray high-resolution tomography (microCT) analyses. MicroCT has been widely shown to be a powerful tool for scaffold characterization (Landi et al., 2000; John and Wenz, 2004; Iezzi et al., 2012, 2020; Giuliani et al., 2018a,b,c), providing not only a 3D image of a scaffold but also qualitative and quantitative information on its structure (Renghini et al., 2013; Giuliani et al., 2014, 2016). It is possible, starting from CBCT files, to create a 3D prototype of the patient's maxilla/mandible by transferring the files to specific reconstruction software (Mangano et al., 2015b; Luongo et al., 2016). Powerful CAD software can design a custom-made bone graft directly on this 3D model (Figliuzzi et al., 2013; Mangano et al., 2015b; Luongo et al., 2016; Chung et al., 2020). The file of the 3D-designed scaffold is sent to a computer-numeric-control (CNC) machine, which mills the custom-made bone graft from the chosen material (Figliuzzi et al., 2013; Mangano et al., 2015b; Luongo et al., 2016). Finally, the surgeon can easily adapt the customized scaffold to the surgical site, performing a straightforward and less time-consuming surgical procedure with reduced discomfort for the patient (Figliuzzi et al., 2013; Mangano et al., 2015b; Luongo et al., 2016; Kim et al., 2020). Micro- and macro-porous biphasic calcium phosphates (BCPs) have been widely recommended and characterized in oral surgery practice (Piattelli et al., 1996b; Mangano et al., 2013a, 2015b, 2019; Giuliani et al., 2014, 2016; Kim et al., 2020). They are produced by combining HA and β-TCP in various composition ratios (HA/β-TCP ratios) and represent the most important BCP ceramics for dental and medical applications (Piattelli et al., 1996b; Mangano et al., 2013a, 2015b, 2019). In the literature, successful bone regeneration using biphasic calcium phosphate materials, both granules and blocks, has been reported in several clinical applications for maxillary sinus elevation (Mangano et al., 2013a; Giuliani et al., 2014; Ohayon, 2014). However, most of these studies are based on a single time point (6 months), which does not allow an accurate assessment of the kinetics of bone growth in the long term and thus prevents a detailed comparison between different scaffold morphologies (Scarano et al., 2000; Giuliani et al., 2016). Moreover, most of the existing studies report on 60% HA and 40% TCP, which is characterized by two types of porosity: macroporosity (pores with diameters in the 300-600 µm range) guides the colonization of the ceramic by osteogenic cells, while microporosity (pores with diameters <10 µm) permits the circulation of biological fluids (Iezzi et al., 2012; Mangano et al., 2019). TCP dissolution creates more space for new bone formation, while the HA maintains its role as a scaffold (Mangano et al., 2015b). 3D printing (3DP) offers several advantages over other solid freeform fabrication (SFF) techniques for scaffold production:
1. 3DP can make scaffolds with high consistency and precise structural anisotropy.
2. 3DP does not involve high temperatures, strong chemicals, or support structures.
3. The high constructing speed of the print head makes the mass production of scaffolds feasible.
4. It is possible to include biological mediators in the scaffolds if the binder is water.
Besides its chemical composition, one of the key parameters of a 3D scaffold is its internal configuration. Pore size is directly associated with bone formation, since it offers surface for cell adhesion and space for bone ingrowth; pore interconnection provides the pathway for cell distribution/migration and permits efficient in vivo blood vessel development, suitable for bone tissue neo-formation and remodeling. Studies on non-human primates have shown bone formation by bioactive biomimetic matrix scaffolds (Ripamonti et al., 2008), whose geometry is a series of recurring concavities that mimics the remodeling cycle of primate osteonal bone.
Recent studies have shown that microCT is a powerful tool for studying not only the microarchitecture of the jaw (Mangano et al., 2013b; Giuliani et al., 2016, 2018b), but also its osteocyte lacunar morphology and density (Giuliani et al., 2018b,c; Iezzi et al., 2020). In this context, an increased specific volume and trabecular thickness, together with a decreased specific surface and trabecular spacing, were observed in the BCP-based Test-sample with respect to the unloaded control and the peri-dental sites.
FIGURE 2 | (A) (Acid fuchsin-Toluidine blue 12X). (B) At higher-power magnification, the biomaterial particles (P) were in tight contact with the mature bone (B). At the bone-biomaterial particle interface, the particles showed a lower density (black arrows) compared to their central portion (Acid fuchsin-Toluidine blue 40X). (C) Close to the residual biomaterial, which revealed signs of resorption (black arrows), a multinucleated giant cell was observed (MC). (D) In the small marrow spaces (MS), some blood vessels (V) and signs of bone remodeling were present (black arrows) (Acid fuchsin-Toluidine blue 200X and 100X). (E) Mature lamellar bone (LB) with small osteocyte lacunae (black arrows) and many secondary osteons (O). (F) Histological section under polarized light. The collagen fibers of the lamellar bone (LB) were oriented in parallel in many fields, including close to the biomaterial particles (P) (Acid fuchsin-Toluidine blue 40X).
CONCLUSIONS
Within the limitations of this study, based on a single patient, the results indicate that, while the usual unloaded jaw sites mimicked a typical osteoporotic microarchitecture, the BCP-based Test-sample preserved a correct microarchitecture even after 7 years without masticatory loading. Moreover, our investigation of the mean lacunar data indicated that the unloading of the present specimen for 7 years did not affect the mean volume and size of the osteocyte lacunae, although a lower lacunar density was found with respect to the peri-dental biopsies, confirming previous data (Iezzi et al., 2020). More human studies, with larger numbers of patients and longer follow-up, should be conducted to confirm the data presented in this paper.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethical Committee of the Hospital of Varese, Italy (protocol N° 826 of 03/10/2013). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
"Medicine",
"Materials Science",
"Biology"
] |
CircVAPA promotes small cell lung cancer progression by modulating the miR-377-3p and miR-494-3p/IGF1R/AKT axis
Background Multiple lines of evidence have demonstrated that circular RNAs (circRNAs) play oncogenic or tumor-suppressive roles in various human cancers. Nevertheless, the biological functions of circRNAs in small cell lung cancer (SCLC) are still elusive. Methods CircVAPA (annotated as hsa_circ_0006990) was identified by mining the circRNA profiling dataset of six paired SCLC tissues and the RNA-seq data of serum samples from 36 SCLC patients and 118 healthy controls. The circVAPA expression level was evaluated using quantitative real-time PCR in SCLC cells and tissues. Cell viability, colony formation, cell cycle and apoptosis assays and in vivo tumorigenesis were used to reveal the biological roles of circVAPA. The underlying mechanism of circVAPA was investigated by Western blot, RNA pull-down, RNA immunoprecipitation, dual-luciferase reporter assays and rescue experiments. Results We revealed that circVAPA, derived from exons 2-4 of the vesicle-associated membrane protein-associated protein A (VAPA) gene, exhibited higher expression levels in SCLC cell lines, clinical tissues, and serum from SCLC patients than in controls, and facilitated SCLC progression in vitro and in vivo. Mechanistically, circVAPA activated the phosphoinositide 3-kinase (PI3K)/protein kinase B (AKT) signaling pathway by modulating the miR-377-3p and miR-494-3p/insulin-like growth factor 1 receptor (IGF1R) axis to accelerate SCLC progression. Furthermore, circVAPA depletion markedly enhanced the inhibitory effects of BMS-536924, an IGF1R kinase inhibitor, in cellular and xenograft mouse models. Conclusions CircVAPA promotes SCLC progression via the miR-377-3p and miR-494-3p/IGF1R/AKT axis. We hope to develop clinical protocols combining circVAPA inhibition with BMS-536924 treatment for SCLC with circVAPA upregulation. Supplementary Information The online version contains supplementary material available at 10.1186/s12943-022-01595-9.
It is therefore critical to explore the underlying mechanisms of SCLC progression and to identify potential biomarkers and therapeutic targets in SCLC.
To clarify the functions of circRNAs in SCLC tumorigenesis, bioinformatics analysis of differentially expressed circRNAs in SCLC tissues and in serum from SCLC patients identified circVAPA as an upregulated circular RNA. Loss- and gain-of-function experiments revealed that circVAPA accelerated cell cycle progression, cell proliferation, and colony formation in SCLC. Mechanistically, we found that circVAPA served as a ceRNA for miR-377-3p and miR-494-3p, weakening their inhibitory effects on IGF1R mRNA expression and thus promoting SCLC progression by activating the PI3K/AKT pathway.
Clinical SCLC samples
All SCLC clinical samples and the corresponding non-tumor tissues were collected from Anhui Provincial Hospital. Written informed consent was obtained from all patients for this study. All fresh samples were immediately frozen in liquid nitrogen after surgical removal and stored in liquid nitrogen for further investigation.
RNA preparation and quantitative real-time PCR
The nuclear and cytoplasmic fractions were isolated as described previously [20]. Total RNA from whole-cell lysates or from the cytoplasmic and nuclear fractions was extracted using TRIzol (Thermo Scientific) according to the manufacturer's instructions. Complementary DNA (cDNA) was synthesized using the Transcriptor First Strand cDNA Synthesis Kit (Roche). 18S rRNA was used for miRNA template normalization, and β-actin was used as an internal standard for circVAPA and mRNAs. Real-time quantitative PCR (RT-qPCR) was performed using ChamQ SYBR qPCR Master Mix (Vazyme) on a Roche LightCycler 96 Real-Time PCR System. Oligonucleotide sequences for primers used in RT-qPCR are listed in Table S5.
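The text does not state the quantification model explicitly, but relative RT-qPCR levels normalized to an internal standard of this kind are conventionally computed with the 2^(-ΔΔCt) method:

$$ \Delta C_t = C_t^{\text{target}} - C_t^{\text{reference}}, \qquad \text{relative expression} = 2^{-\left(\Delta C_t^{\text{sample}} - \Delta C_t^{\text{control}}\right)}, $$

with β-actin (for circVAPA and mRNAs) or 18S rRNA (for miRNAs) as the reference; this is a standard assumption, not a detail confirmed by the authors.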
Fluorescence in situ hybridization (FISH)
The Cy3-labeled probe against the back-spliced junction in circVAPA was synthesized by RiboBio (Guangzhou, China). FISH was performed using a FISH kit (RiboBio) following the manufacturer's guidelines. The images were visualized using an Olympus SpinSR10 confocal microscope.
Quantification of RNA copy number per cell
Quantification of RNA copy number per cell was carried out as previously described with some modifications [35].
The DNA fragments corresponding to circVAPA, IGF1R, miR-377-3p and miR-494-3p were amplified from cDNA, and known amounts of the purified products were used to plot standard curves by real-time PCR. R-squared values for Spearman's correlation coefficients and P values were calculated by Spearman's correlation test. Total RNA was extracted from 10⁵ DMS273 and H82 cells, respectively, and cDNA was subsequently synthesized. The copy number per cell in each cell line was calculated from the specific number of cells and the Ct value using the standard curve.
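The standard-curve calculation can be sketched as follows; every number here is illustrative (the study's actual dilution series and Ct values are not given in the text):

```python
import numpy as np

# Standard curve: serial dilutions of the purified PCR product with known
# copy numbers, each measured by real-time PCR (illustrative values).
copies_std = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
ct_std = np.array([30.1, 26.8, 23.4, 20.0, 16.7])

# Ct is linear in log10(copies): Ct = m * log10(copies) + b
m, b = np.polyfit(np.log10(copies_std), ct_std, 1)

def copies_from_ct(ct: float) -> float:
    """Invert the standard curve to recover the total copy number."""
    return 10 ** ((ct - b) / m)

n_cells = 1e5      # RNA was extracted from 10^5 cells
ct_sample = 14.4   # illustrative Ct measured for circVAPA
total_copies = copies_from_ct(ct_sample)
print(f"~{total_copies / n_cells:.0f} copies per cell")  # ~480 with these numbers
```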
Plasmid construction and cell transfection
The short hairpin RNA (shRNA) oligonucleotides targeting the junction site of circVAPA were inserted into the pLKO.1 vector (Sigma). The constructs were then packaged into lentiviruses, which were used to infect SCLC cells. The cells were subsequently selected with puromycin for one week, and the surviving cells were regarded as stable circVAPA knockdown cells. For circVAPA overexpression, the second, third and fourth exons of the VAPA gene and the endogenous flanking sequence, including the complementary Alu element pairs, were inserted into the pcDNA3 backbone vector. Transfection was carried out using Effectene Transfection Reagent (QIAGEN) according to the manufacturer's protocol. The circVAPA level was assessed using RT-qPCR. Oligonucleotide sequences for primers used in plasmid construction, short interfering RNAs (siRNAs), and shRNAs are listed in Table S5. The pGL-3 basic luciferase reporter vector and pRL-TK Renilla luciferase vector were purchased from Promega.
Cell viability assay
SCLC cells (3 × 10³/well) were seeded into 96-well plates. After transfection or drug treatment for 48 h, the cells were analyzed using the CellTiter-Glo luminescent assay according to the manufacturer's instructions [36]. A multi-label plate reader (EnVision, PerkinElmer) was used to detect the luminescence signals.
Cell cycle and apoptosis analysis
SCLC cells were cultured in 6-well plates at a density of 2 × 10⁵ cells per well. For cell cycle analysis, 48 h after transfection, SCLC cells were fixed in 80% ethanol at −20 °C overnight, followed by staining with PI/RNase staining buffer (BD Biosciences). Cell cycle distribution was measured by flow cytometry, and the cell-cycle profiles were further analyzed using ModFit software (Verity Software House). For the cell apoptosis assay, apoptotic cells were determined as previously described [36]: cells were stained with FITC Annexin V and PI using the FITC Annexin V Apoptosis Detection Kit (BD Pharmingen) according to the manufacturer's instructions.
Cell colony formation assay
For adherent cells, 1.5 × 10³ DMS273 cells were plated, 48 h after transfection, into 6-well plates in triplicate for each condition and then cultured for 3 weeks. The colonies were fixed with methanol for 30 min, followed by staining with 1.5% crystal violet for 10 min at room temperature. For suspension cells, 48 h after transfection, 10³ cells in 1 ml of RPMI-1640 containing 10% (v/v) FBS and 0.33% (w/v) agarose were overlaid onto bottom agar consisting of 1 ml of RPMI-1640 containing 10% (v/v) FBS and 0.5% (w/v) agarose in a 6-well culture plate. The cells were then cultured for 3 weeks. The colonies were photographed and analyzed with ImageJ.
RNA pull-down assay with a biotinylated circVAPA probe
RNA pull-down was conducted as previously described [35]. Briefly, we designed a biotin-labeled 30-nt probe against the back-spliced junction of circVAPA to specifically pull down circVAPA and its intracellular RNA-RNA complexes. A biotin-labeled probe with a scrambled sequence served as the negative control. 10⁷ cells were cross-linked in ice-cold PBS buffer with 1% formaldehyde for 10 min. After removal of the PBS buffer, the cells were lysed in RNA immunoprecipitation (RIP) buffer on ice for 30 min. After sonication and centrifugation, the cell supernatant was harvested and divided into two equal parts for subsequent RNA pull-down. The biotin-labeled and control probes were incubated with the respective cell lysates for 4 h at 4 °C with gentle rotation. Identically blocked M280 Streptavidin magnetic Dynabeads (Invitrogen) were added to the above lysates and rotated for a further 4 h at 4 °C. After washing with RIP buffer and with RIP buffer supplemented with 500 mM NaCl, the bound RNA was isolated using TRIzol and analyzed by RT-qPCR.
RNA immunoprecipitation
10⁷ cells were cross-linked in ice-cold PBS buffer with 1% formaldehyde for 10 min. They were then harvested and lysed in RIP lysis buffer, incubated with Dynabeads protein G (Invitrogen) conjugated with anti-IgG (CST, #2729) or anti-AGO2 (Sigma, SAB4200085), and rotated at 4 °C overnight. The immunoprecipitated RNAs were extracted with TRIzol reagent and further detected by RT-qPCR with specific primers.
Small cell lung cancer xenograft mouse models
Animal experiments were carried out according to a protocol approved by the Ethics Committee of Hefei Institutes of Physical Science, Chinese Academy of Sciences [36]. Four-week-old nude mice were injected subcutaneously, on both the left and right flanks, with 10⁷ DMS273 cells stably transfected with sh-circVAPA or sh-NC, suspended in an equal volume of Matrigel (n = 5 per group). When tumor volume reached 100-200 mm³, 0.5% carboxymethyl cellulose sodium (CMC-Na) or BMS-536924 (100 mg/kg) was administered daily by gavage for 16 consecutive days. The width and length of the tumors were measured every day for 5 weeks, and tumor size was calculated according to the formula: volume (mm³) = (length × width²)/2. Thirty days after injection, the mice were sacrificed, and the tumors were harvested.
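For reference, the stated volume formula in code form (a trivial sketch):

```python
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation used in the study: V = (L * W^2) / 2."""
    return length_mm * width_mm ** 2 / 2

# Example: a tumor measuring 10 mm by 7 mm
print(tumor_volume(10, 7))  # 245.0 mm^3
```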
CircRNA RNase R and Actinomycin D treatments
For RNase R treatment, 2 μg of total RNA was incubated with or without 20 U of RNase R at 37 °C for 30 min. For actinomycin D treatment, 2 mg/ml actinomycin D was added to the culture medium to block RNA transcription, and cells were harvested at the indicated time points. After treatment, RT-qPCR was used to assess the expression levels of circVAPA and VAPA mRNA.
Statistical analysis
Statistical analysis was carried out using GraphPad Prism 6.0.1 (GraphPad Software). Data are presented as the mean ± SD of at least three independent experiments. Differences between groups were analyzed using Student's t-test. A P-value of < 0.05 was considered significant.
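A minimal sketch of the stated analysis (mean ± SD of triplicates compared by a two-sided Student's t-test; the values are invented):

```python
import numpy as np
from scipy import stats

# Illustrative triplicate measurements for two conditions.
control = np.array([1.00, 0.95, 1.08])
knockdown = np.array([0.52, 0.47, 0.60])

t, p = stats.ttest_ind(control, knockdown)  # two-sided Student's t-test
print(f"control:   {control.mean():.2f} ± {control.std(ddof=1):.2f}")
print(f"knockdown: {knockdown.mean():.2f} ± {knockdown.std(ddof=1):.2f}")
print(f"P = {p:.4f} ({'significant' if p < 0.05 else 'ns'} at alpha = 0.05)")
```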
Identification of circVAPA in SCLC
Integrative analysis of previously reported upregulated circRNAs, based on circRNA profiling of six paired SCLC tissues and the RNA-seq data of serum samples from 36 SCLC patients and 118 healthy controls [37,38], identified circVAPA as a significantly upregulated circRNA (Fig. 1A, B and Tables S1, S2, S3). We then analyzed the expression of circVAPA in both lung cancer cell lines and human primary SCLC tissues. Endogenous circVAPA was significantly elevated in 3 paired SCLC tissues compared to the corresponding non-tumor controls (paraSCLC) (Fig. 1C). Similarly, circVAPA exhibited remarkably higher expression in SCLC cell lines than in NSCLC cell lines (Fig. 1D), suggesting that circVAPA is a significantly upregulated circRNA in SCLC meriting further investigation. Two SCLC cell lines (DMS273 and H82) were chosen for subsequent experiments because they had the highest circVAPA expression levels among the six SCLC cell lines tested (Fig. 1D). CircVAPA, annotated as hsa_circ_0006990 in circBase (http://www.circbase.org/) and 338 nucleotides (nt) in length, is back-spliced from exons 2-4 of the VAMP-associated protein A (VAPA) gene (Fig. 1E), which is located on human chromosome 18p11.22 [39,40]. The putative back-spliced junction fragment of circVAPA was verified by PCR amplification with divergent primers from complementary DNA (cDNA) of SCLC cell lines and confirmed by Sanger sequencing (Fig. 1E). An RNase R exonuclease assay examined by RT-qPCR verified that circVAPA was resistant to digestion (Fig. 1F, G, and Fig. S1A), consistent with the characteristics of circRNAs [41,42]. To further evaluate the stability of circVAPA, an actinomycin D (an inhibitor of transcription) treatment assay revealed that circVAPA was more stable than VAPA mRNA in SCLC cells (Fig. 1H, I). The function of non-coding RNA, including circRNA, is closely related to its subcellular localization pattern [9,11,43]. A FISH assay with a probe targeting the back-spliced junction of circVAPA and RT-qPCR analysis of nuclear and cytoplasmic RNAs revealed the predominantly cytoplasmic enrichment of circVAPA in both DMS273 and H82 cells (Fig. 1J, K; GAPDH is mainly expressed in the cytoplasm and U6 in the nucleus; in the FISH images, nuclei were stained blue with DAPI and circVAPA was stained red with Cy3; scale bar, 20 μm). Finally, ~770 and ~500 circVAPA copies per cell were determined in DMS273 and H82 cells, respectively (Fig. S1B, C). Taking the above results together, our findings demonstrated that circVAPA is a cytoplasmic circRNA upregulated in SCLC. (All data are presented as the mean ± SD; *P < 0.05; **P < 0.01; ***P < 0.001 by two-tailed Student's t-test; three independent assays were performed.)
CircVAPA promotes SCLC progression in vitro
The aberrant expression of circVAPA in SCLC tissues prompted further research into the role of circVAPA in SCLC progression. RNA interference is a practical approach to investigating the biological functions of a non-coding RNA (ncRNA) of interest. Two independent siRNAs targeting the back-spliced junction site of circVAPA effectively silenced circVAPA in SCLC cells, whereas VAPA mRNA showed no significant changes (Fig. 2A and Fig. S2A). The CellTiter-Glo luminescent assay revealed that circVAPA knockdown with either siRNA decreased SCLC cell viability (Fig. 2B). Subsequently, siRNA-mediated circVAPA inhibition resulted in a reduction in colony formation of SCLC cells (Fig. 2C). Flow cytometry analysis demonstrated that the depletion of circVAPA led to cell cycle G0/G1 arrest in SCLC cells and increased the proportion of apoptotic SCLC cells (Fig. 2D, E and Fig. S2B). Meanwhile, the p21 and cleaved PARP protein levels examined by Western blot were robustly elevated upon circVAPA knockdown in SCLC cells (Fig. 2F).
Fig. 2 | Silencing circVAPA suppresses cell viability, induces apoptosis of SCLC cells, and inhibits cell cycle progression in vitro. (a) RT-qPCR analysis of circVAPA and VAPA mRNA expression in SCLC cells treated with the corresponding siRNA. SCR, siRNA with scrambled sequences; si-circVAPA 1# and si-circVAPA 2#, two siRNAs specifically against the junction site of circVAPA. (b) Cell viability assessed by the CellTiter-Glo assay. (c) Colony formation assay (DMS273) and soft agar colony formation assay (H82) used to assess cell survival in SCLC cells transfected with the indicated siRNAs. (d) Apoptosis rate analyzed by flow cytometry after depleting circVAPA in SCLC cells. (e) Effects on cell cycle progression analyzed by flow cytometry after downregulation of circVAPA. (f) Expression of the apoptosis-related protein cleaved PARP (C.PARP) and the cycle-related protein p21 detected by Western blot in SCLC cells transfected with the indicated siRNA; β-actin was used as an internal reference. (All data are presented as the mean ± SD; ns, no significance; *P < 0.05; **P < 0.01; ***P < 0.001 by two-tailed Student's t-test; three independent assays were performed.)
Furthermore, we generated a circVAPA overexpression construct (OE-circVAPA) containing its endogenous flanking sequences, including the complementary Alu element pairs, to study the gain of function of ectopic circVAPA. As depicted in Fig. 3A, OE-circVAPA dramatically increased circVAPA expression while leaving VAPA mRNA levels unchanged in SCLC cells. In contrast to the effect of circVAPA silencing, overexpression of circVAPA contributed to increases in cell viability, colony formation, and cell cycle progression, and to decreases in the proportion of apoptotic cells and the protein levels of p21 and cleaved PARP in SCLC cells (Fig. 3B-F and Fig. S2C). Collectively, these results indicate that circVAPA promotes SCLC progression in vitro.
Fig. 3 | (b) Cell viability assessed by the CellTiter-Glo assay. (c) Colony formation assay (DMS273) and soft agar colony formation assay (H82) used to assess cell survival in SCLC cells transfected with the indicated plasmid. (d) Apoptosis rate analyzed by flow cytometry after overexpressing circVAPA in SCLC cells. (e) Effects on cell cycle progression analyzed by flow cytometry after circVAPA overexpression. (f) Expression of the apoptosis-related protein cleaved PARP (C.PARP) and the cycle-related protein p21 detected by Western blot; β-actin was used as an internal reference. (All data are presented as the mean ± SD; ns, no significance; *P < 0.05; **P < 0.01; ***P < 0.001 by two-tailed Student's t-test; three independent assays were performed.)
CircVAPA functions as a sponge for miR-377-3p and miR-494-3p
Up to now, the best-characterized mechanism for cytoplasmic circRNAs is to sequester miRNAs and thereby regulate target gene expression [11,44]. Considering that circVAPA is preferentially localized in the cytoplasm, we speculated that circVAPA participates in SCLC progression through a ceRNA mechanism. Since Ago2 is an essential mediator of circRNA-miRNA interactions [44,45], Ago2 RNA immunoprecipitation (RIP) was conducted to validate the binding of circVAPA in DMS273 cells (Fig. 4A). The RIP assay confirmed the direct binding of Ago2 to circVAPA and circHIPK3 (a positive control), but not to EIciPAIP2 (a negative control) (Fig. 4B) [46,47]. We then employed CircInteractome (https://circinteractome.nia.nih.gov/), a web tool for exploring interactions between circRNAs and miRNAs [48], to predict the putative miRNA binding sites of circVAPA. As illustrated in Fig. 4C, 14 putative miRNA binding sites were predicted in the circVAPA sequence, among which there are two potential binding sites for miR-494-3p. Afterward, a biotin-labeled oligonucleotide probe antisense to the junction site of circVAPA was synthesized and used in RNA pull-down assays to further substantiate the possible miRNA-circVAPA interactions (Fig. 4D). Compared to the control probe, the antisense probe effectively and precisely captured endogenous circVAPA and co-pulled down miR-377-3p and miR-494-3p, but none of the other 12 predicted miRNAs (Fig. 4E). Moreover, RT-qPCR analysis of Ago2 RIP demonstrated that circVAPA was much more enriched after overexpression of either miR-377-3p or miR-494-3p with the corresponding mimic in DMS273 cells (Fig. 4F). Furthermore, we constructed luciferase reporter plasmids in which either the linear circVAPA sequence or the sequence with mutated putative binding sites for miR-377-3p or miR-494-3p was fused to the 3' UTR of luciferase. Dual-luciferase reporter assays verified the direct binding of circVAPA to miR-377-3p/miR-494-3p in 293T cells (Fig. 4G). Notably, the two putative miR-494-3p binding sites in circVAPA are both required for the interaction (Fig. 4G). Upon siRNA-mediated circVAPA knockdown, the expression levels of both miR-377-3p and miR-494-3p were markedly increased in DMS273 and H82 cells, while circVAPA overexpression caused decreased levels of miR-377-3p and miR-494-3p (Fig. S3A). Importantly, we showed that suppressing miR-377-3p or miR-494-3p in circVAPA-depleted cells rescued the inhibitory effects of circVAPA knockdown on cell viability and colony formation in DMS273 and H82 cells (Fig. 4H, I and Fig. S3G, H). Conversely, miR-377-3p or miR-494-3p overexpression eliminated the promotive effects of circVAPA overexpression on cell viability and colony formation in DMS273 and H82 cells (Fig. 4J, K and Fig. S3I, J). These results indicate that circVAPA serves as a molecular sponge for miR-377-3p and miR-494-3p in SCLC cells.
CircVAPA facilitates SCLC proliferation by regulating IGF1R in vivo and in vitro
BMS-536924, a small-molecule inhibitor targeting IGF1R, has been confirmed to suppress IGF1R phosphorylation and block IGF1R-mediated activation of the AKT signaling cascade [33]. Addition of BMS-536924 attenuated the expression of p-AKT and p-S6RP protein, but this negative regulation of the AKT signaling cascade was diminished upon circVAPA overexpression (Fig. 6D). Moreover, we established a stable DMS273 cell line carrying a lentiviral shRNA against circVAPA and confirmed effective knockdown of circVAPA (Fig. S6E). BMS-536924 treatment or circVAPA inhibition alone had a moderate effect in reducing AKT signaling, whereas the combination of both exhibited the maximal repressive effect (Fig. 7A and Fig. S6B). In support of the western blot results, BMS-536924 alone or circVAPA silencing alone appreciably reduced the cell viability and colony formation of DMS273 and H82 cells, while the combination of BMS-536924 treatment and circVAPA depletion achieved the maximal inhibitory effects on both (Fig. 7B, C and Fig. S6C, D).
To explore the biological functions of circVAPA in SCLC in vivo, we subcutaneously injected circVAPA-knockdown and control DMS273 cells into nude mice. Cells with stable circVAPA knockdown formed tumors of significantly smaller size, volume, and weight than the controls (Fig. 7D-F and Fig. S6E). IHC staining demonstrated that the levels of Ki67, IGF1R, and p-AKT were significantly decreased in tumors derived from cells with stable circVAPA knockdown compared with the controls (Fig. 7G and Fig. S6F). The IGF1R inhibitor BMS-536924 further enhanced the suppressive effects of circVAPA silencing on cell viability, colony formation, p-AKT, and p-S6RP in vitro and on tumor size, volume, and weight in vivo, suggesting that BMS-536924 and circVAPA depletion might achieve a synergistic effect in the treatment of SCLC (Fig. 7A-G and Fig. S6A-D, F). These results indicate that circVAPA promotes SCLC proliferation by targeting IGF1R in vivo.
Discussion
SCLC is an aggressive malignancy with high mortality and poor prognosis [2,6]. Despite significant improvements in chemotherapy efficacy, the clinical outcome of SCLC patients remains poor, mainly due to recurrence and drug resistance [1,2,6]. Therefore, it is essential to identify novel biomarkers and effective therapeutic targets for SCLC. Emerging evidence indicates that circRNAs are dysregulated in diverse human cancers and that these aberrantly expressed circRNAs may be associated with the oncogenesis and progression of multiple cancers [10,11]. Nevertheless, research on the role and molecular mechanism of circRNAs in SCLC is still in its infancy.
Previous studies have revealed that circVAPA plays an oncogenic role in colorectal and breast cancer [52,53]. Li et al. discovered that circVAPA facilitates colorectal cancer progression by sponging miR-101 [52]. Consistently, our RNA pull-down with the probe against the back-spliced junction of circVAPA showed enrichment of miR-101, although miR-101 did not affect the IGF1R/PI3K/AKT signaling pathway in SCLC (Fig. S3D-F). Zhou's team reported that miR-130a-5p suppresses breast cancer cell migration and invasion and that circVAPA serves as a sponge for miR-130a-5p [53]. However, no detailed studies on the role of circVAPA in SCLC had been performed. Through a series of molecular, cellular, and biochemical experiments, we propose a working model (Fig. 7H) in which circVAPA promotes SCLC progression in vitro and in vivo by modulating the miR-377-3p and miR-494-3p/IGF1R/AKT axis, expanding the knowledge about circRNAs in SCLC.
As miRNAs exert their regulatory functions by targeting downstream mRNAs, we explored the mRNAs downstream of miR-377-3p/miR-494-3p. miR-377 and miR-494 have been linked to human cancer and exert important roles through their respective target mRNAs [54,55]. For example, Li et al. found that miR-377 expression is significantly downregulated in esophageal squamous cell carcinoma (ESCC) and is positively correlated with ESCC patient survival [54]. Moreover, miR-377 inhibits the initiation and progression of ESCC through negative regulation of CD133 and VEGF [54]. Additionally, miR-494 suppresses gastrointestinal stromal tumor (GIST) cell proliferation by targeting KIT, a critical regulatory protein in the development and progression of GIST [55]. In this study, we explored the common downstream targets of miR-377-3p/miR-494-3p using the miRWalk and ENCORI prediction tools [49,50]. Of note, IGF1R was predicted to be a potential mRNA target downstream of miR-377-3p/miR-494-3p. A dual-luciferase reporter assay based on the putative binding sites of miR-377-3p/miR-494-3p on IGF1R then verified that IGF1R is the common target of miR-377-3p/miR-494-3p.
Numerous circRNAs are significantly associated with clinicopathological characteristics of cancer through regulation of the PI3K/AKT signaling pathway [51]. Given that IGF1R plays vital roles in PI3K-AKT signaling cascades [29,30,51], we speculated that circVAPA might act through the PI3K-AKT signaling cascade and therefore investigated whether IGF1R and its downstream PI3K-AKT signaling could be regulated by altering circVAPA. The effect of circVAPA knockdown on IGF1R and the PI3K-AKT signaling cascade in SCLC cells could be reversed by co-transfection of miR-377-3p/miR-494-3p inhibitors. Conversely, the impact of circVAPA overexpression on IGF1R and the PI3K-AKT signaling cascade in SCLC cells could be rescued by co-transfection of miR-377-3p/miR-494-3p mimics or by IGF1R inhibition. These in vitro experiments revealed that circVAPA might act as a molecular sponge that relieves the suppressive effects of miR-377-3p/miR-494-3p on their downstream target IGF1R.
The IGF1/IGF1R signaling axis has been implicated in the tumorigenesis and development of multiple malignancies, and IGF1R inhibitors have emerged as potential anticancer agents [27,28]. We showed that overexpressing circVAPA could rescue the reduction in IGF1R activity and in PI3K-AKT signaling caused by BMS-536924 stimulation in SCLC. Moreover, BMS-536924 blocked IGF1R activity and its downstream signaling cascade, and this negative regulation was further enhanced by knocking down circVAPA in vitro. Furthermore, the combination of circVAPA inhibition and BMS-536924 exhibited better therapeutic efficacy in vivo than circVAPA silencing or BMS-536924 alone.
In conclusion, our study demonstrates that circVAPA may serve as an oncogenic circRNA that promotes the progression of SCLC. Mechanistically, circVAPA acts as a sponge for miR-377-3p/miR-494-3p to elevate IGF1R expression and activate the PI3K/AKT signaling pathway. Additionally, the combination of circVAPA inhibition and BMS-536924 displays a more potent antitumor effect in SCLC. We hope to develop clinical protocols combining circVAPA inhibition with BMS-536924 for treating SCLC with circVAPA upregulation.
Conclusions
In summary, our work may provide novel insights into the mechanisms involved in SCLC progression, as well as a promising biomarker for SCLC. We advocate that the circVAPA/miR-377-3p and miR-494-3p/IGF1R/AKT axis may serve as a potent therapeutic target in SCLC.
Fig. 7 CircVAPA facilitates SCLC proliferation through regulating IGF1R in vivo and in vitro. a Western blot analysis of the effect of the stable SCLC cell line with circVAPA knockdown, or the control, with or without the IGF1R inhibitor (BMS-536924) on AKT and its downstream protein expression. b-c Cell viability (b) and colony formation (c) assays of the stable SCLC cell line with circVAPA knockdown, or the control, with or without the IGF1R inhibitor (BMS-536924). d-f Therapeutic efficacy of circVAPA depletion and the IGF1R inhibitor (BMS-536924) as single agents or in combination in vivo (n = 5 per group): tumor weights (d), tumor volume curves (e), and photographs (f) of xenograft tumors. g Immunohistochemistry analysis of IGF1R and p-AKT in tumors. Scale bar, 50 μm. h Model of the circVAPA/miR-377-3p & miR-494-3p/IGF1R/AKT axis. Vehicle, negative control cells for circVAPA silencing; sh-circVAPA, stable cell line with lentiviral shRNA against circVAPA; IGF1Ri, addition of the IGF1R inhibitor (BMS-536924). (All data are presented as the mean ± SD; ns, no significance; *P < 0.05; **P < 0.01; ***P < 0.001 by two-tailed Student's t-test). Three independent assays were performed.
"Biology",
"Medicine"
] |
Mechanical Engineering of Leg Joints of Anthropomorphic Robot
The problem of design engineering of anthropomorphic robot legs is considered. An overview of existing anthropomorphic robots and an analysis of the servomechanisms and bearing parts involved in the assembly of robot legs are presented. We propose an option for constructing the legs of the robot Antares, currently under development. A two-motor layout used in the knee ensures higher joint power along with independent interaction with the neighboring upper and lower leg joints when bending. To reduce the electrical load on the main battery of the robot, the upper legs are provided with a mounting pad for additional batteries powering the servos. Direct control of the servos is carried out through sub-controllers responsible for all six motors installed in the articular joints of the robot.
Introduction
Among the problems that developers of mobile general-purpose and special-purpose robots face nowadays are the robot's ability to traverse rugged terrain, autonomous movement, and control of kinetic equipment [1]. The lever-hinge system of human and animal mobility, created by nature, is the best adapted to the natural earth's surface and is suitable for use in the movement of an anthropomorphic robot [2].
Because of the lack of a unified methodology and software for engineering lever-hinge systems for anthropomorphic robots, developers are forced to create their own software in the design process of each individual robot [2]-[4].
The aim of this article is to analyze the existing solutions for constructing lever-hinge mechanisms of the lower extremities (legs) of anthropomorphic robots and to develop a rational leg structure for the robot Antares.
One of the simplest structures of a biped robot is described in [4]. It is made of two-millimeter aluminum sheet, includes six servos operated by an EyeBot controller, and weighs 1.11 kg. When walking, the robot reaches a speed of 120 m/h at a maximum angle of 60 degrees between the hips. A similar six-servo architecture is used in [5] to study the operating angles of the knee, ankle, and hip joints.
The anthropomorphic robot of the HanSaRam series, which has regularly participated in the FIRA league since 2000, is discussed in [6]. The HanSaRam-VIII (HSR-VIII) robot has 28 servos, weighs 5.5 kg, and can move at speeds of up to 12 cm/s. In [7], the anthropomorphic robot Lola, with 7 degrees of freedom per leg, weighs 55 kg at 180 cm height. The problems of the robot's stability after stopping, as well as the gradual contact of foot parts with the surface when walking, are discussed. Elastic materials in the toe and heel of the robot foot reduce the impact force on the robot structure when touching the surface.
For moving on complicated uneven surfaces impassable for tracked or wheeled robots, more sophisticated non-anthropomorphic structures are also being developed, with one [8], six [9], [10], or a large number [11] of lower extremities.
Based on the conducted analysis of anthropomorphic robot structures, the robot Poppy of the French company INRIA Flowers and the Darwin-OP robot of the company Trossen Robotics [12] were determined to be the closest analogues of the robot Antares under development. Let us consider these robots in more detail.
Modular robot design helps the researcher change the movement of any robot limb by isolating the desired limb from the rest of the body, almost without affecting performance. The structure is specially designed for the installation of additional sensors and connection cables. In addition, such a design facilitates periodic robot maintenance. However, the center of mass of Poppy is located in the solar plexus, which adversely affects load distribution. The robot becomes unstable and cannot move independently (only with human assistance).
Poppy has the same number of degrees of freedom as Antares in the pelvic region but distributes them differently, which makes it less mobile in the knee and ankle joints. The leg joints have only 3 motors (one in each of the ankle, knee, and hip joints). This adversely affects the overall mobility of the robot, judging by the video footage and design files of the robot available from the developers.
The robot Darwin-OP is a platform intended for research and development within the framework of the educational process. Darwin-OP has high performance and dynamic characteristics and a wide range of sensors. The robot communicates with people using loudspeakers, microphones, cameras, tactile sensors, LEDs, and hand gestures. It possesses 20 Dynamixel actuators that ensure free movement of the limbs with a given accuracy and strength margin, as the gears are made of metal. The center of mass is located in the center of the pelvis, which ensures the correct distribution of load and inertia during walking, especially in the extremities. The modular design of Darwin-OP helps the researcher change the movement of any limb. The structure also allows the installation of additional sensors.
Design of the leg joint of the anthropomorphic robot Antares
Designing Antares included several steps associated with the development of the joints of the legs, arms, torso, and head. First priority was given to designing the leg joint for the following reasons: the high complexity of the layout of the joint parts; the necessity of this joint for robot movement in space; and the complexity of calculating the joint unit due to the assumed highest load on its parts relative to all other joints.
The overall leg structure includes the pelvic joint and two identical leg joints consisting of simpler units: the attachment point to the pelvis, the hip, knee, lower leg, ankle, and foot. The joints of the robot pelvis and legs are developed in accordance with the proportions of the human body, adjusted for the assumed height. Their total length is 510.7 mm. The length of the ankle and hip joints is 20 cm.
Since in the kinematics of the robot these joints act as a lever arm, this constructive solution is reasonable.The difference in length is achieved by connecting lower leg joint to the ankle joint which is, in turn, connected with the foot, while the upper leg joint is linked to the hip joint that is connected to the point of attachment to the hip unit.
Auxiliary batteries (an installation site is provided) will be installed in the upper legs to reduce the electrical load on the main battery of Antares, as well as to control actuator power, which will help avoid power supply problems. To save the processing power of the main controller and the computer located in the torso, auxiliary controllers will also be installed in the upper leg joints. These controllers are responsible for the operation of all six motors of the robotic leg joints. This design complicates the calculation of the kinematics of robot motion but enables more complex movements.
Figure 1. A two-motor knee
Through the use of the two-motor layout in the knee (Fig. 1), a separate leg joint is obtained which interacts with the neighboring upper and lower leg joints, allowing them to be independent of each other when bending (Fig. 2). In addition, the two-motor knee simplifies servo selection, as the load in this type of knee is divided between two separate motors. Another advantage of this unit is that it facilitates the design engineering of the above-mentioned joints of the upper and lower legs.
Interconnected plate lines, linked by screw couplings and cross plates, make up the basis of the leg structure of Antares (Fig. 3). For parts manufacturing, 2, 4, and 10 mm thick aluminum sheets were used; 6-mm-thick aluminum rods were applied for the crossties. In the tibial joint, a broader transverse plate is used in order to achieve a sufficiently reliable structure. This was necessary so that the ankle could be used as efficiently as possible, which required free use of the internal space of the ankle joint. The screw coupling in this case serves not only as a stiffening rib but also as an arresting stop, so that the ankle joint cannot be overdriven and damage other components during operation. From 10 mm thick aluminum sheets, one type of part is made: a special bearing plate used for the assembly of the hip and ankle joints.
For the Dynamixel MX-64 motors used in the leg joints, flanges were made that link separate joint parts and components to provide mobility and stability. The flanges are located in a special socket on the motor housing and the bearing and are secured by a special cover that prevents the collapse of the structure during motion and under vibration. An obligatory requirement for the bearing is a height of 3.5 mm, to maintain the centering of the axial arrangement of the motors in the overall structure, which is important during robot movement. The pelvic mechanism is located in the lower part of the torso of Antares and is designed for axial leg rotation as well as for accommodating the main battery. The construction includes two bushings, gears with a gear ratio of 1:1, two Dynamixel MX-28 motors, two 2-mm-thick plates, and 8u14 flange bearings (Fig. 4). The choice of less powerful servos, compared with those used in the construction of the other leg joints, is justified by the fact that high motor power is not required for axial rotation. Pairwise motor connection saves space in the construction and avoids excessive massiveness, which is necessary to ensure the mobility and flexibility of the assembled joints. In addition, the construction of the hip and ankle joints is designed for the highest possible reproduction of the functionality of the human ankle and hip joints, which resemble a spherical joint with a limited angle of rotation. Thus, to reproduce the human joint structure, it was decided to introduce into these joints two cylindrical hinges with mutually perpendicular axes, shown in Fig. 5.
This solution is applied because, at the moment, it is not possible to replicate the structure of a spherical joint and make it fully controllable for an anthropomorphic mechanism while remaining sufficiently reliable and compact, relatively inexpensive, and free of constant maintenance. Achieving all of this would complicate both the construction itself and the control of the robot. The implemented solution, in turn, simplifies the structure of the robot ankle without compromising its capabilities.
Experimental results and discussion
The leg construction provides large steering angles for the motors, which ensures greater flexibility and pliability of the joint compared with the mentioned analogues. The robot can easily do the splits, raise the leg straight or at an angle, and bend it at the knee while keeping the foot parallel to the floor if necessary. A variety of squats is possible, rather than the specific predetermined movements seen in the analogues, including touching the floor with the pelvic mechanism without the threat of damaging its joints. The pelvic mechanism provides axial rotation of the robot's lower limbs, which has a positive effect on its maneuverability and allows it to turn with a minimum turning radius, rotate in the necessary direction on the spot, and perform a wider range of leg movements than those available to humans.
The pelvic mechanics allows the robot to rotate each leg 360° around its axis; however, at this stage we limited it to 270°, since there was no need for such a large range of rotation. The angle of flexion at the hip joint is 120°, and the maximum extension angle is 45°. The side lunge of the leg is +90°. As in humans, it is impossible to fully bring the leg back, as the lower limbs will touch each other; the maximum bringing back of the leg is 55°, the legs being brought back until they almost touch, which limits the range of the stroke. The side lunge of the leg is mechanically limited to an angle of 135°, but it was decided to set the limit at 90°. This is because the robot should resemble the structure of a human, and this angle allows the robot to do the splits; there is no need for a greater range of leg positions in the hip joint.
The knee joint comprises two servos combined into one unit. This design solution increases the mobility and strength torque of the knee and allows the actuator to bend the knee at an angle of 162°. The use of two actuators in the knee joint requires constant synchronization of the motors relative to each other, since one actuator cannot provide such a large displacement angle in a joint without a significant loss in power and mobility of the structure. As a result, the motors are located perpendicular to the frontal plane of the robot.
The ankle design provides eversion (displacement of the foot inwards to the sagittal plane) as well as inversion (shifting of the foot outwards from the sagittal plane). Changing the generalized coordinate of the ankle joint in roll represents a rotation of the foot relative to the upper limb from the neutral position. The angle of the foot position changes: eversion is 90° and inversion is 90°. Moving the toe upwards relative to the neutral position is taken as a positive angle, and downwards as a negative angle. In this case, the pitch angle of the foot ranges from +88° to −180°.
Apart from the pelvic mechanism, which is responsible for the axial rotation of the legs and is a separate unit, Dynamixel MX-64 actuators are applied in the leg structure. Each Dynamixel has a unique ID for connection to the common data line, supports TTL, RS485, and other connections, and can be connected to a common control bus; LED or emergency shutdown (torque-off) functions may be set to trigger at predetermined values of temperature, current, and voltage. These actuators can be adjusted to move more smoothly. Dynamixel servos can be controlled from a PC or a microcontroller, which is a great advantage in the development of prototypes.
Without the battery and the controller installed in the hip joint, the estimated total weight of the metal frame, servos, and flange connections is 1.07 kg, of which 756 g is accounted for by the Dynamixel MX-64 servos. The construction of the two leg joints and the pelvic mechanism weighs 2.44 kg.
Table 1 compares the angles of deflection of the leg positions of a human and of the developed prototype of the anthropomorphic robot Antares. Based on this table, we can conclude that the various actions typical of humans can be executed by the robot, as the ranges of human position angles lie within the ranges of the robot's angles.
It should be noted that, except for the range of the foot roll, all other angles are artificially limited in order to avoid unnecessary contact between parts lying in a single plane during motion. Since the robot is a robotic platform intended for scientific research and development within the framework of the educational process, its modular nature can help the researcher change the movement of any limb by isolating the desired limb from the rest of the body, almost without affecting performance. The structure is specially designed for the installation of additional sensors and connection cables. In addition, such a design facilitates periodic robot maintenance.
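For illustration, the joint ranges quoted above can be collected into a limits table with a clamp helper. This is our own sketch: the joint names and sign conventions are assumptions, not part of the Antares control software.

```python
# Illustrative sketch of the mechanical joint ranges described in the text,
# with a clamp helper for commanded angles. Names and (min, max) sign
# conventions are ours, chosen to match the prose above.
JOINT_LIMITS_DEG = {
    "hip_axial_rotation": (-135.0, 135.0),  # 270° total, limited from 360°
    "hip_flexion":        (-45.0, 120.0),   # extension 45°, flexion 120°
    "hip_side_lunge":     (0.0, 90.0),      # limited from 135°, enough for splits
    "knee_flexion":       (0.0, 162.0),     # two-motor knee
    "foot_pitch":         (-180.0, 88.0),   # toe up is positive
    "foot_roll":          (-90.0, 90.0),    # eversion/inversion
}

def clamp_joint(name: str, angle_deg: float) -> float:
    """Clip a commanded angle to the mechanical range of the named joint."""
    lo, hi = JOINT_LIMITS_DEG[name]
    return max(lo, min(hi, angle_deg))

print(clamp_joint("knee_flexion", 175.0))  # -> 162.0
```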
Load weight balancing in the legs of the robot, taking into account the maximum supposed weight of 8 kg, follows the formula

$$ F = \frac{P}{2}, $$

where F is the force applied to the lower leg assembly and P is the weight carried by the lower legs; the denominator is 2 since the total weight is distributed over the two legs. Fig. 7 shows the torque supplied to the motor shaft of the ankle in the process of raising the legs. The graph shows that the ankle motor experiences maximum torque when the robot is in a sitting position. The stress in the construction of the lower and upper leg, with a total robot weight of 8 kg, is 7.44 × 10⁵ N/m².
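A quick numerical check of the static load formula is sketched below; the load-bearing cross-sectional area is a hypothetical value used only to illustrate a stress comparison against the reported figure.

```python
# Static load check: with a maximum supposed robot weight of 8 kg shared by
# two legs, F = P / 2. The cross-section area is a hypothetical placeholder
# for illustrating a stress estimate against the reported 7.44e5 N/m^2.
G = 9.81                 # gravitational acceleration, m/s^2
mass_kg = 8.0
P = mass_kg * G          # total weight, N
F = P / 2                # force per lower leg assembly, N
print(f"Per-leg load F = {F:.1f} N")

area_m2 = 5.0e-5         # hypothetical load-bearing cross-section, m^2
stress = F / area_m2
print(f"Illustrative stress = {stress:.3g} N/m^2 (paper reports 7.44e5 N/m^2)")
```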
Conclusion
The conducted analysis showed the existence of anthropomorphic robot models from 30 cm to 180 cm high with different numbers of degrees of freedom and kinematic schemes. It was concluded that the closest analogues of the robot Antares under development are the Poppy and Darwin-OP robots. A two-motor layout, used in the knee, ensures higher joint power along with independent interaction with the neighboring upper and lower leg joints when bending. The electrical load on the main battery of Antares is reduced by the use of auxiliary batteries installed in the upper legs and powering the servos. Direct servo control is also performed by the auxiliary controllers responsible for the operation of all six motors of the leg joints. Studies of the prototype design have demonstrated that individual components and parts have a more than ten-fold safety margin. The developed robot is intended for the creation of assistive technologies via human-robot interaction based on multimodal interfaces in a cyberphysical intelligent environment.
The study was performed through a grant of the Russian Science Foundation (project no. 16-19-00044).
Figure 2. Functional capabilities of the two-motor knee
Parts made from 2 mm thick aluminum sheet are the basis of the joints of the lower and upper legs, the foot, and the pelvic attachment. The joint parts made from 2 mm sheet are structurally designed for power loads and pressure from the top. Parts made from 4-mm-thick aluminum sheets are used as stiffening ribs intended for torsional loads. From these sheets we made the transverse upper and lower leg struts and the mounting plates of the hip (4 mm thick) and foot. They are also intended to locate internal components, such as an auxiliary battery and the actuator controller of the entire leg joint. Along with the transverse plates, aluminum bars are used as stiffening ribs, but only as elements of structural reinforcement.
Figure 3. Upper and lower leg structures
Figure 4. Pelvic mechanism
The hip and ankle joints are formed by pairwise motor connection with metal inserts; extra-strong plastic or aluminum is used in the motor housing in order to withstand the physical loads on the housing during movement (Fig. 5).
Fig. 6 shows the results of changing the generalized coordinate of the robot foot. Movement is carried out in full compliance with the design feature that limits the pitch and roll angles.
Figure 6. Turning angles of the leg
Table 1. The comparison of the angles of the human and the robot Antares

| Angle range | Human | Robot Antares |
|---|---|---|
| Rotation of the leg in the hip | from −45° to 45° | from −45° to 45° |
| Side lunge of the leg in the hip | from 0° to 45° | from 0° to 90° |
| Relative bringing back of the leg in the hip | from 0° to 30° | from 0° to 55° |
Figure 7. Torque supplied to the motor shaft
"Engineering"
] |
Fabrication and Experimental Validation of a Sensitive and Robust Tactile Sensing Array with a Micro-Structured Porous Dielectric Layer
The development of pressure sensors with high sensitivity and stable robustness over a broad range is indispensable for the future progress of electronic skin applicable to the detection of normal and shear pressures of various dynamic human motions. Herein, we present a flexible capacitive tactile sensing array that incorporates a porous dielectric layer with micro-patterned structures on the surface to enable the sensitive detection of normal and shear pressures. The proposed sensing array showed great pressure-sensing performance in the experiments, with a broad sensing range from several kPa to 150 kPa of normal pressure and 20 kPa of shear pressure. Sensitivities of 0.54%/kPa at 10 kPa and below, 0.45%/kPa between 10 kPa and 80 kPa, and 0.12%/kPa at 80 kPa and above were achieved for normal pressures. For shear pressures, sensitivities up to 1.14%/kPa and 1.08%/kPa in the x and y directions, respectively, below 10 kPa, and 0.73%/kPa and 0.75%/kPa above 10 kPa were also validated. The performance of the finger-attached sensing array was also demonstrated, validating it as a potential electronic skin for all kinds of wearable devices, including prosthetic hands, surgical robots, and other pressure-monitoring systems.
Good flexibility, high sensitivity, a wide measuring range, fast responsiveness, and stable robustness are significant requirements for practical applications of tactile sensing devices. For capacitive sensors, sensitivity is determined by numerous factors, with dielectric permittivity being a chief one. Compared with commonly applied polydimethylsiloxane (PDMS) and other flexible composites, air has the lowest permittivity as a dielectric material and, thus, provides the highest sensitivity for the same structure. This concept has therefore been widely applied in numerous studies during the last decade; the proposed capacitive sensing elements showed decent sensitivity since there was nothing but air between every two electrodes. Such capacitive sensing elements were usually combined in particular patterns to realize normal and shear force measurement, and cross-shaped walls or similar constructions were implemented between the sensing elements to avoid possible coupling interference [25][26][27]. However, using air as a dielectric layer rapidly drives the tactile sensing element into a nonlinear measuring range, since the two electrodes may easily come into contact even under small forces, and if the applied forces increase further, the sensing capacitors can saturate.
To achieve a long measurement range while still maintaining high sensitivity, great efforts have been made, and forming the dielectric layer into micro-structured surface patterns is one effective approach. Liang et al. [12] presented a flexible capacitive tactile sensor array embedded with a truncated PDMS pyramid array as the dielectric layer. The truncated pyramid array was easily deformed under tiny forces, leading to high sensitivity under small forces and a large measurement range. Boutry et al. [13] arranged pyramid microstructures along nature-inspired phyllotaxis spirals; an e-skin that mimicked the interlocked dermis-epidermis interface in human skin provided increased sensitivity and excellent cycling stability. Cho et al. [16] presented a flexible capacitive pressure sensor that incorporated micro-patterned pyramidal ionic gels to enable ultrasensitive pressure detection with a broad sensing range from a few pascals to 50 kPa. The only shortcoming of such constructions is that the molds for the micro-structures are usually fabricated by photolithography, which is time-consuming and costly.
Thereafter, other reported approaches enhanced sensor sensitivity by forming porous PDMS rather than patterning microstructures on the surface of the dielectric layer [28][29][30][31]. The particle-template method is a convenient and simple strategy to fabricate a highly deformable PDMS dielectric layer with excellent reproducibility and repeatability. Tang et al. [28] fabricated a new capacitive pressure sensor based on a porous CCTO-PDMS membrane; a thin membrane of 40 µm thickness was made using the doctor blade method, which can be applied in very small scenarios. Kim et al. [29] proposed a simple fabrication process for a highly sensitive capacitive pressure sensor using a porous dielectric layer with cone-shaped patterns, prepared by microwave irradiation of an emulsion consisting of a sacrificial solvent and a pre-cured PDMS solution; the cone-shaped patterns on the surface further enhance the sensitivity.
In this study, we present the fabrication and experimental validation of a sensitive and robust tactile sensing array based on a micro-structured porous PDMS dielectric layer. The porous structures were prepared using perfluorotributylamine (C12F27N, MACKLIN, CHINA) as the sacrificial solvent mixed with pre-cured PDMS, chosen for its non-conductivity and thermal and chemical stability. A cylindrical pillar was patterned in the center of every unit sensing element as a substitute for cross-shaped walls, enabling full decoupling of applied normal and shear forces. A sensing array was successfully fabricated and attached to human fingers, and evaluations of sensitivity, repeatability, and response time all validated the proposed tactile sensor as a potential electronic skin.
Design and Method of Three-Axial Tactile Sensing
The proposed electronic skin includes four layers: a polyimide (PI) sensing film with bottom electrodes, a micro-structured PDMS dielectric layer, a PI sensing film with top electrodes, and a surface layer with pen-cap-like bumps. The electronic skin is made of soft PI films and PDMS, which have been proven to have excellent temperature and chemical stability and mechanical durability. Every four top electrodes and the opposing bottom electrodes form four sensing capacitors, namely S1, S2, S3, and S4, as shown in Figure 1a, making up an individual unit sensing element.
A cylindrical pillar is patterned in the center of every unit sensing element and separates the top and bottom electrodes, leaving air in the narrow gaps as the majority of the dielectric layer. Such a micro-structured surface of the dielectric layer makes every unit sensing element particularly sensitive to applied forces, whose normal and shear components produce symmetrical effects on the four sensing capacitors, allowing full decoupling of the multi-axial components in further calculations. As mentioned above, using air alone as the dielectric layer would rapidly saturate the proposed sensor element; therefore, the PDMS constituting the dielectric layer is fabricated with a porous composition, which ensures that the dielectric layer deforms easily under a small force but is not completely compressed under a large force. Thus, the sensor array achieves high sensitivity under small forces as well as a large measurement range.
The surface of the sensing array consists of a number of pen-cap-like bumps for reliable traction on objects. When normal or shear forces are applied to the surface, as shown in Figure 1b,c, the contacted bumps are compressed or deflected. The deflections of the bumps result in capacitance variations, which correspond to the applied normal and shear forces. Since the distance between the center points of two sensing electrodes in the diagonal direction is approximately 3.54 mm, the bump diameter is set at 3.5 mm for reasonable density while maintaining adequate capacitance at each electrode. We fixed the bump height at 3.5 mm for sufficient shear load sensitivity; taller bumps increase the sensitivity but decrease the linearity.
Preparation of the Micro-Structured Porous PDMS Dielectric Layer
Figure 2 illustrates the fabrication process of the porous PDMS dielectric layer. C12F27N was used as the sacrificial solvent during the fabrication process for its non-conductivity and thermal and chemical stability. Firstly, the PDMS was prepared by mixing a base gel and a curing agent (Sylgard 184, Dow Chemical Co., Midland, MI, USA) in a weight ratio of 15:1, as shown in Figure 2a, and C12F27N was dispersed in the pre-PDMS solution to fabricate the PDMS-C12F27N emulsion. Increasing the C12F27N ratio of the emulsion leads to a more pronounced porous structure and, thus, better sensitivity. However, we found that the dispersed C12F27N phase separated from the continuous phase when its concentration was above roughly 40% in our tests; therefore, we expected 30 vol.% of C12F27N to be a suitable concentration for a stable emulsion in the following preparation process. A stable emulsion was obtained after sufficient stirring for 5 min at 2000 rpm. The prepared mixture was poured into an aluminum mold with dimensions of 40 mm × 40 mm × 0.5 mm and placed in a vacuum desiccator for a further degassing step of 30 min at 30 °C, as illustrated in Figure 2b; possible residual air bubbles were evacuated in this step. Thereafter, as shown in Figure 2c,d, the mixture was cured on a hot plate at 80 °C for 2 h, and the cured PDMS was then washed for 6 h in deionized water (DI water) in an ultrasonic cleaner to dissolve the C12F27N. Finally, as shown in Figure 2e, the porous PDMS dielectric layer was dried at 60 °C for 1 h to remove the moisture. A scanning electron microscope image (HITACHI SU8010) of the porous dielectric layer shows that the pores are well formed and of uniform size (Figure 2e).
Fabrication of Capacitive Tactile Sensor
The fabrication process of the proposed tactile sensor array is illustrated in Figure 3. Both the top and bottom electrodes were generated using the FPC method. A double-sided board was selected in the design, and the electrodes and signal wires were fabricated on different layers to minimize the influence of parasitic capacitance. For the double-sided board, the base film was a 12.5 µm-thick PI film, and two 18 µm-thick copper layers were bonded to the separate sides of the base film by a 13 µm-thick adhesive layer, which made the thickness of the raw double-sided board 74.5 µm. When fabricating the sensing electrodes using the FPC method, there is still a risk of micro-cracks, open welding, and other defects arising from unreasonable design or manufacturing, but with a much lower probability than with the magnetron sputtering method. After drilling and plating through-holes, the electrodes and signal wires on the two copper layers were electrically connected. Photoresist dry film was then pasted on the surfaces of the two copper layers, and the designed electrode and signal wire patterns were transferred to the dry film under ultraviolet exposure, leaving the hollowed, transparent part over the desired copper retention area. After washing with a developer such as sodium carbonate, the uncured dry film was detached, fully exposing the undesired copper area. After developing, the exposed part of the copper layer was removed by etching solution, leaving the protected pattern. The remaining dry film was stripped from the board with a strong alkaline solution, making it suitable for the subsequent solder mask printing, electroless nickel immersion gold (ENIG), and other finishing processes. Counting the thickness of the cover layers, the full thickness of the FPC board was eventually 110 µm. Another advantage of the FPC method, along with the low risk of defects, is that no extra transfer interface is required: the signal wires on the FPC board can be connected directly to the subsequent capacitance scanning circuit.
PDMS mixture was poured onto an aluminum mold and spin-coated at 450 rpm for 30 s to fabricate the designed surface layer, as plotted in Figure 3a. The weight ratio of base gel to curing agent was 10:1, which resulted in a higher Young's modulus than that of the dielectric layer. Remaining air bubbles were removed during a 20 min degassing step in the vacuum desiccator. A smooth acrylic sheet was then laminated on top of the poured PDMS, and the redundant PDMS was squeezed out by applying a light force on the acrylic sheet by hand. After curing the PDMS, the top PI sensing film was firmly bonded to the bottom surface of the surface layer with silicone glue (Cemedine 8008) before peeling off from the PDMS mold, as shown in Figure 3b. Lastly, the dried porous dielectric layer and the bottom PI sensing film were aligned and bonded to the top PI sensing film with the surface layer using a three-axial manual stage, as in Figure 3c,d, which ensured precise and firm bonding of the different layers of the proposed tactile sensing array.
Figure 4a shows the fabricated sensor array, which was composed of 8 × 8 unit sensing elements with overall dimensions of 40 mm × 40 mm. The distance between the central points of two adjacent units showed that the spatial resolution was 5 mm. The fabricated sensor array featured high flexibility and was easily bent by hand, as shown in Figure 4b.
Experimental Setup
Figure 5a illustrates the basic composition of the test bench, in which the sensing array was first fixed to a glass wafer and then mounted on a three-dimensional force sensor (ME K3D120) with a resolution of up to 0.01 mN and a relative linearity error down to 0.2% FS. A 3D-printed (Nova, bena5) loading bar made of white standard resin was mounted on a three-axial manual stage (Zolix-AK25A-6520); its front tip was designed as a small cylinder with a diameter of 4 mm. To apply normal and shear forces on the bumps simultaneously, the bottom surface of the front tip was fabricated as a hemispherical concave with the same shape as the bump, as shown in Figure 5b. In this case, the loading bar always makes full contact with the top surface of the bump of the unit sensing element. Along with the movement of the stage in the x, y, and z directions, the tip applies normal and shear forces to the unit sensing element simultaneously.
The proposed tactile sensing array was designed with 8 × 8 unit sensing elements, i.e., 256 capacitors to be detected in a single measuring sequence, which is usually conducted by scanning detection in a common setup, as shown in Figure 5c. The fabricated tactile sensing array in Figure 4 was directly connected to the scanning circuit via a commoditized 16-pin FPC connector (0.5K-AS-16PWB). During a full measurement sequence, all the sensing capacitors are scanned in turn; in every clock cycle, only one analog switch in a row and one in a column are closed, so that one specified capacitor is selected while the remaining capacitors are shielded by the virtual ground. Capacitance values were measured by a capacitance-to-digital converter (AD7745), whose update rate ranges from 10 Hz to 90 Hz. In this experiment, the sampling rate of the AD7745 was set to 50 Hz, achieving higher measuring efficiency.
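The timing of this scanning scheme can be checked with a short sketch; the loop structure below is our own illustration of the one-switch-per-clock selection, not the firmware of the scanning circuit.

```python
# Back-of-the-envelope sketch of the scanning scheme described above. With
# 8 x 8 unit elements x 4 capacitors = 256 channels, read one per conversion
# slot of the AD7745 at its 50 Hz update rate, a full-array refresh takes
# 256 / 50 = 5.12 s, matching the measuring cycle quoted later in the text.
ROWS, COLS, CAPS_PER_UNIT = 8, 8, 4
SAMPLE_RATE_HZ = 50

n_channels = ROWS * COLS * CAPS_PER_UNIT
print(f"{n_channels} channels -> {n_channels / SAMPLE_RATE_HZ:.2f} s per full scan")

# One (row, column, capacitor) switch pair closed per conversion slot:
for r in range(ROWS):
    for c in range(COLS):
        for k in range(CAPS_PER_UNIT):
            pass  # close switches (r, c, k), wait one AD7745 conversion, read out
```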
Sensing Array Calibration
Sensitivity, commonly defined as S = (∆C/C0)/p × 100%, was calibrated first on the proposed test bench, where p represents the applied normal or shear pressure, while ∆C and C0 are the capacitance variation of the measured sensor and the initial capacitance value without external forces, respectively. Considering that all unit sensing elements of the sensor array share the same pattern, the unit sensing element in row 4 and column 4 was selected as the calibration object. The capacitance, known as ε0εrAs/g, is not in a close linear relationship to the applied external forces, where εr denotes the dielectric constant of the porous PDMS and ε0 that of pure air, while As and g represent the sensing area and gap distance, respectively. Therefore, as plotted in Figure 6a, the average sensitivities over three intervals ranging from 1 kPa to 150 kPa in the z direction were calculated to evaluate the sensing array performance under normal forces. The results imply that the sensitivity decreased as the applied pressure increased: a sensitivity of up to 0.54%/kPa was obtained when the pressure was below 10 kPa, decreasing to 0.12%/kPa at pressures of 80 kPa and above.
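As a minimal sketch of how this definition is applied to calibration samples, the snippet below evaluates S at a few (pressure, capacitance) points; the numbers are hypothetical placeholders standing in for the Figure 6 data.

```python
# Minimal sketch of the sensitivity definition S = (ΔC/C0)/p × 100%,
# evaluated at individual calibration points. C0 and the (pressure,
# capacitance) pairs are hypothetical placeholders, not measured data.
import numpy as np

C0 = 1.00                                          # pF, assumed baseline
pressure_kpa = np.array([5.0, 40.0, 100.0])
capacitance_pf = np.array([1.027, 1.170, 1.292])   # hypothetical readings

S = (capacitance_pf - C0) / C0 / pressure_kpa * 100.0   # %/kPa at each point
for p, s in zip(pressure_kpa, S):
    print(f"{p:6.1f} kPa -> S = {s:.2f} %/kPa")
```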
Shear forces are relatively low compared with normal forces in most cases; therefore, as shown in Figure 6b,c, the average sensitivities over every 10 kPa, ranging from 0 to 20 kPa in the x and y directions, were calculated to evaluate the sensing array performance under shear forces. Owing to the pen-cap-like bumps on the surface of the sensing array, shear-force sensitivities of up to 1.14%/kPa and 1.08%/kPa were obtained in the x and y directions, respectively, when the pressure was below 10 kPa, decreasing slightly to 0.73%/kPa and 0.75%/kPa at shear pressures over 10 kPa, a performance considered better than that for normal forces.
An existing three-axial capacitive tactile sensor using a PDMS truncated-pyramid dielectric layer featured sensitivities of 0.93 and 0.92%/kPa in the x and y directions at shear pressures below 31.25 kPa, and 1.08%/kPa for normal pressures below 31.25 kPa and 0.123%/kPa between 31.25 kPa and 250 kPa [12]. Compared with that device, the developed sensing array provided higher sensitivity only for shear pressures, which we consider acceptable at the present stage given that the entire fabrication process is simple and inexpensive. Another reported capacitive sensor applied in an electronic skin system using a porous dielectric layer featured a sensitivity of 2.3%/kPa for normal pressures only below 20 kPa [30], but the thickness of that dielectric layer was 4.5 mm in the presented tests, nine times thicker than in our proposed sensing array. Overall, the experiments demonstrated that the proposed tactile sensing array can provide good sensitivity for gentle touch or soft contact forces together with a large measuring range, making it suitable for possible robotics and prosthetic hand applications.
In addition, fast responses were observed over a broad range of normal pressures (4 kPa, 40 kPa, and 120 kPa) applied in a stepwise manner by repeated loading for 3 s and unloading for 3 s, as shown in Figure 7a. Stable output signals were acquired over three repeated cycles, confirming that the proposed sensing array is capable of operating under a diverse pressure regime with sufficient repeatability, which is mainly attributed to the excellent elasticity of the porous structure and the full restoration of the PDMS walls after buckling under compressive pressure. During repeated loading, the unit sensing element always reacted within the minimum measurement interval: a response time of less than 20 ms was measured, as shown in Figure 7b.
However, during repeated unloading, as plotted in Figure 7c, the recovery time of the unit sensing element gradually extended from 40 to 60 ms as the loading pressure increased, which is attributed to the hysteresis of the dielectric layer. It can therefore be concluded that the unit sensing element responds quickly to external pressure and recovers well to its initial capacitance value after the external pressure is unloaded. For the entire tactile sensing array, the capacitances of each unit sensing element were measured in sequence; a complete measuring cycle lasted 5.12 s, and the slight recovery delay had a negligible impact on identifying the overall pressure distribution in real time. Nevertheless, further efforts are still needed to improve the dynamic response speed as well as the measuring speed of the scanning circuit for a faster response of the tactile sensing array.
Three-Axial Force Sensing
The capacitance variations of the proposed tactile sensing array are mainly determined by gap-distance changes, which are closely related to the applied forces. When a normal force is applied on the bumps of the surface layer, the contacted bumps are compressed in the z direction, and the gap distances of the four sensing capacitors attain the same variation, defined as g_z. When a shear force in the x direction is loaded, the contacted bumps tilt in that direction: the gap distances of S2 and S3 decrease by the same variation g_x, while those of S1 and S4 increase by the same amount. When a shear force in the y direction is loaded, the gap distances of S1 and S2 increase by the same variation g_y, while those of S3 and S4 decrease. Once the gap variations induced by forces in the various directions are clarified, the applied external force can easily be decomposed into its normal component F_z and shear components F_x and F_y simultaneously. To this end, the relationships between capacitance variations and gap-distance changes are established as follows:

1/C_S1 − 1/C_S10 = Δg_1/(ε_0 ε_r A_S) = (−g_z + g_x + g_y)/(ε_0 ε_r A_S)
1/C_S2 − 1/C_S20 = Δg_2/(ε_0 ε_r A_S) = (−g_z − g_x + g_y)/(ε_0 ε_r A_S)
1/C_S3 − 1/C_S30 = Δg_3/(ε_0 ε_r A_S) = (−g_z − g_x − g_y)/(ε_0 ε_r A_S)
1/C_S4 − 1/C_S40 = Δg_4/(ε_0 ε_r A_S) = (−g_z + g_x − g_y)/(ε_0 ε_r A_S)    (1)

where Δg_i denotes the gap-distance change of each capacitor in a unit sensing element, and C_Si0 and C_Si are the initial and current capacitances of the unit sensing element, respectively.
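As a minimal illustration of Eq. (1) and its inversion, the following Python sketch recovers (g_x, g_y, g_z) from the four measured capacitances. It assumes the sign convention above; the relative permittivity, electrode area, and any example values are illustrative placeholders rather than parameters from the paper.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def gap_changes(C, C0, eps_r, area):
    """Gap-distance change of each of the four sensing capacitors, from Eq. (1):
    delta_g_i = (1/C_i - 1/C_i0) * eps0 * eps_r * A_S."""
    C, C0 = np.asarray(C, float), np.asarray(C0, float)
    return (1.0 / C - 1.0 / C0) * EPS0 * eps_r * area

def decompose_gaps(dg):
    """Invert Eq. (1) to recover the force-induced gap variations.

    dg = (dg1, dg2, dg3, dg4) with dg1 = -gz+gx+gy, dg2 = -gz-gx+gy,
    dg3 = -gz-gx-gy, dg4 = -gz+gx-gy."""
    dg1, dg2, dg3, dg4 = dg
    g_z = -(dg1 + dg2 + dg3 + dg4) / 4.0
    g_x = (dg1 - dg2 - dg3 + dg4) / 4.0
    g_y = (dg1 + dg2 - dg3 - dg4) / 4.0
    return g_x, g_y, g_z
```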
When the capacitance variations of every unit sensing element are detected, the gap-distance changes of the four sensing capacitors can easily be calculated from Equation (1). Inverting Equation (1) then yields the gap variations g_m (m = x, y, z) produced by the normal and shear forces F_m (m = x, y, z):

g_z = −(Δg_1 + Δg_2 + Δg_3 + Δg_4)/4
g_x = (Δg_1 − Δg_2 − Δg_3 + Δg_4)/4    (2)
g_y = (Δg_1 + Δg_2 − Δg_3 − Δg_4)/4

In this study, the relationships between the applied forces and the gap variations were defined from the data acquired during the sensing-array calibration, owing to the nonlinearity between forces and capacitance changes shown in Figure 6. Specific relations between F_m and g_m were therefore obtained by polynomial fitting. The R-square values of the fitted curves for F_x, F_y, and F_z were 0.998, 0.999, and 0.995, respectively, which implies that three-axial external forces can be precisely detected when accurately measured capacitance variations are provided.
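The calibration step can be sketched as follows: fit a polynomial F_m(g_m) to calibration data and report R². The polynomial degree and the synthetic data below are assumptions for illustration only, not the paper's calibration values.

```python
import numpy as np

def fit_force_gap(g, F, degree=3):
    """Polynomial fit of force versus gap variation; returns (coeffs, R-square)."""
    coeffs = np.polyfit(g, F, degree)
    residuals = F - np.polyval(coeffs, g)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((F - F.mean())**2)
    return coeffs, r2

# Hypothetical calibration data: a mildly nonlinear force-gap relation plus noise.
rng = np.random.default_rng(0)
g = np.linspace(0.0, 50e-6, 40)                                # gap variation (m)
F = 2.0e4 * g + 3.0e8 * g**2 + rng.normal(0.0, 1e-3, g.size)   # force (N)

coeffs, r2 = fit_force_gap(g, F)
print(f"R-square = {r2:.3f}")
```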
Distributed Contact Force Sensing
Accurately detecting the pressure distribution is a critical capability for measuring fine movements with an electronic skin. Compared to a single unit sensing element, the full sensing array is arranged as a matrix, similar to biological skin, and thus can more effectively identify the overall pressure distribution. Based on the proposed scanning circuits, the three-axial force components (F_x, F_y, and F_z) measured by the full sensing array are transmitted to the host computer and displayed in real time. For demonstration, a flat-surfaced plastic box filled with blades, weighing 45 g, was placed on the sensing array as shown in Figure 8a, and its weight was distributed over the contact areas. The measuring results in Figure 8b demonstrate that the sensing array could accurately map the distribution of the capacitance responses corresponding to the shape and weight of the object, which would be critical in a real grasping situation.
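A sketch of the sequential scan over the array follows. The reader callback, array size, and calibration polynomials are placeholders standing in for the multiplexed scanning circuit and the calibration data of Figure 6; the gap helpers come from the sketch above.

```python
import numpy as np

def scan_array(read_caps, C0, eps_r, area, cal, shape=(8, 8)):
    """Read every unit sensing element in sequence and assemble force maps.

    read_caps(i, j) -> four capacitances (C_S1..C_S4) of element (i, j);
    C0[i, j]        -> the four initial capacitances of that element;
    cal             -> polynomial coefficients {'x': cx, 'y': cy, 'z': cz}.
    Uses gap_changes() and decompose_gaps() defined earlier.
    """
    Fx, Fy, Fz = (np.zeros(shape) for _ in range(3))
    for i in range(shape[0]):
        for j in range(shape[1]):
            dg = gap_changes(read_caps(i, j), C0[i, j], eps_r, area)
            gx, gy, gz = decompose_gaps(dg)
            Fx[i, j] = np.polyval(cal['x'], gx)
            Fy[i, j] = np.polyval(cal['y'], gy)
            Fz[i, j] = np.polyval(cal['z'], gz)
    return Fx, Fy, Fz   # per-element three-axial force distribution
```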
Furthermore, we also demonstrated the measurement of dynamic pressures to apply the tactile sensing array as an electronic skin. The proposed tactile sensing array was first attached to two fingers of one author's hand using double-sided tape, which then picked up an empty beverage can and a full one under the same measuring circumstances, as shown in Figure 9a,b, respectively. The real-time distribution of the three-axial force components over the full measurement regime of the proposed sensing array was well illustrated. Each unit sensing element responded well to the dynamic motions, and the maintenance of, increase in, and decrease in the applied external forces were all well detected in real time as capacitance variations. For the empty beverage can, the weight was negligible; thus, picking it up and releasing it caused minimal differences in the detected shear force components F_x and F_y. However, when the can was full, a higher grasping force was required for steady holding.
Therefore, compared with picking up the empty can, a larger normal force component F_z was detected; meanwhile, because of the weight of the full can, a positive value of F_x was detected, as the results in Figure 9b show. These demonstrations indicate that the proposed tactile sensing array is a qualified electronic skin that can be applied to all kinds of medical instruments requiring real-time pressure monitoring.
Conclusions
In summary, we demonstrated that a porous dielectric layer, fabricated from a self-made PDMS-C12F27N emulsion, was useful for sensing normal and shear pressures with high sensitivities over a broad range, up to 150 kPa of normal pressure and 20 kPa of shear pressure. The proposed capacitive tactile sensing array exhibited sensitivities of 0.54%/kPa at and below 10 kPa, 0.45%/kPa between 10 kPa and 80 kPa, and 0.12%/kPa at and above 80 kPa of normal pressure. Meanwhile, for shear pressures, our sensing array displayed sensitivities up to 1.14%/kPa and 1.08%/kPa in the X and Y directions, respectively, at and below 10 kPa, decreasing slightly to 0.73%/kPa and 0.75%/kPa when the shear pressure exceeded 10 kPa. The fast capacitance response to applied normal pressure and the sufficient repeatability over multiple pressure cycles were attributed to the excellent elastic property of the porous dielectric layer, resulting from its high air-PDMS ratio and the full restoration of the PDMS walls. Furthermore, distributed contact force sensing tests validated that our robust, highly sensitive, broad-pressure-range tactile sensing array is suitable for the efficient detection of a variety of pressure sources under different circumstances, including, but not limited to, finger touching, hand grasping, human impulses, breathing, and other gentle human motions, which merits further study.
"Chemistry",
"Engineering"
] |
Optomechanical circuits for nanomechanical continuous variable quantum state processing
We propose and analyze a nanomechanical architecture where light is used to perform linear quantum operations on a set of many vibrational modes. Suitable amplitude modulation of a single laser beam is shown to generate squeezing, entanglement, and state-transfer between modes that are selected according to their mechanical oscillation frequency. Current optomechanical devices based on photonic crystals may provide a platform for realizing this scheme.
The field of cavity optomechanics studies the interaction between light and nanomechanical motion, with promising prospects in fundamental tests of quantum physics, ultrasensitive detection, and applications in quantum information processing (see [1] for a review). One particularly promising platform consists of "optomechanical crystals", with strongly localized optical and vibrational modes implemented in a photonic crystal structure [2]. So far, several interesting possibilities have been pointed out that would make use of multi-mode setups that can be designed on this basis. For example, suitably engineered setups may coherently convert phonons to photons [3] and collective nonlinear dynamics might be observed in optomechanical arrays [4]. Moreover, optomechanical systems in general have been demonstrated to allow writing quantum information from the light field into the long-lived mechanical modes [5][6][7]. The recent success in ground state laser-cooling [8] has now opened the door to coherent quantum dynamics in optomechanical systems.
In this paper, we propose a general scheme for continuous-variable quantum state processing [9] utilizing the vibrational modes of such structures. We will show how entanglement and state transfer operations can be applied selectively to pairs of modes, by suitable intensity modulation of a single incoming laser beam. We will discuss the limitations for entanglement generation and transfer fidelity, and show how to engineer the mechanical frequency spectrum and pick suitable designs to address these challenges.
Model. - We will first restrict our attention to a single optical mode coupled to many mechanical modes, such that the following standard optomechanical Hamiltonian describes the photon field â, the phonons b̂_l of the different localized vibrational modes (l = 1, 2, . . . , N), and their mutual coupling:

Ĥ = −ℏΔ â†â + Σ_l ℏΩ_l b̂_l†b̂_l − Σ_l ℏ g_0^(l) â†â (b̂_l + b̂_l†).    (1)

Here we are working in a frame rotating at the laser frequency, with the detuning given by Δ = ω_L − ω_cav. We omitted explicitly writing down the laser driving and the coupling to the photon and phonon baths, with damping rates κ and Γ, respectively, although these will of course be taken care of in our treatment. The bare (single-photon) coupling constants g_0^(l) depend on the overlap between the optical and mechanical mode functions. They are generally on the order of ω_cav x_ZPF/L, where L is an effective optical cavity length that reaches down to wavelength dimensions in photonic crystal cavities, and where x_ZPF = (ℏ/2m_l Ω_l)^{1/2} is the mechanical zero-point amplitude of the respective mode (see Fig. 1 for an illustration of a setup). After going through the standard procedure of splitting off the coherent optical amplitude induced by the laser, â = α + δâ, and omitting terms quadratic in δâ (valid for strong drive), we recover the linearized optomechanical coupling,

Ĥ_int = −Σ_l ℏ g_l (δâ + δâ†)(b̂_l + b̂_l†).    (2)

Here the dressed couplings g_l = g_0^(l) α can be tuned via the laser intensity, i.e. the circulating photon number: |α| = √n̄_phot, where we have taken α to be real-valued without loss of generality. We can now eliminate the driven cavity field (noting that δâ is in its ground state) by second-order perturbation theory. Provided we work at large detuning, |Δ| ≫ Ω_l, κ, we retain a fully coherent light-induced interaction between the mechanical modes,

Ĥ_int = ℏ Σ_{l,k} J_{lk} X̂_l X̂_k,    (3)

where X̂_l ≡ b̂_l + b̂_l† is the mechanical displacement in units of x_ZPF. Eq. (3) may be viewed as a "collective optical spring" effect, coupling all the mechanical displacements. The couplings J_{lk} = g_l g_k/2Δ can be changed in situ either via the laser intensity or the detuning. Note that if multiple optical modes are driven, the corresponding coupling constants will add.
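To make the scaling explicit, a short sketch evaluates the coupling matrix J_lk = g_l g_k/2Δ from the bare couplings and the drive. The numerical values are merely order-of-magnitude placeholders inspired by the estimates quoted later in the text, not parameters of any specific device.

```python
import numpy as np

def coupling_matrix(g0, n_phot, delta):
    """Light-induced phonon-phonon couplings J_lk = g_l * g_k / (2*Delta),
    with dressed couplings g_l = g0_l * sqrt(n_phot) (angular frequencies)."""
    g = np.asarray(g0) * np.sqrt(n_phot)
    return np.outer(g, g) / (2.0 * delta)

# Three mechanical modes, g0/2pi ~ 1 MHz, ~2000 circulating photons, and a
# detuning |Delta| = 10 * Omega with Omega/2pi ~ 4 GHz (placeholder numbers).
g0 = 2 * np.pi * np.array([1.0e6, 0.9e6, 1.1e6])
J = coupling_matrix(g0, n_phot=2000, delta=2 * np.pi * 40e9)
print(J / (2 * np.pi))  # pairwise couplings in Hz, here of order 10 kHz
```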
In general, the couplings in Eq. (3) will induce quantum state transfer between mutually resonant mechanical modes, and entanglement at low temperatures (usually with the help of optomechanical laser cooling). These phenomena have been analyzed in a variety of schemes [10][11][12][13][14] so far, typically with two mechanical modes of interest.
General scheme. - However, here we have in mind a multi-mode situation for continuous-variable quantum information processing. To this end, we are interested in an efficient approach to selectively couple arbitrary pairs of modes, both for entanglement and for state transfer. There are several desiderata for a suitable optomechanical architecture of this style: (i) the couplings should be switchable in a time-dependent manner; (ii) one should be able to easily select pairs for operations; (iii) preferably, only one laser (or a limited number) should be involved; (iv) operation speeds should be large enough to overcome the effects of decay and decoherence; (v) one should be able to scale to a reasonably large number of modes.
Static couplings as in Eq. (3) could be used for selective pairwise operations if one were able to shift locally the mechanical mode frequency, to bring into resonance only the two respective modes. In principle, this is doable via the optical spring effect, but would require local addressing with independent laser beams. This could prove challenging in a micron-scale photonic crystal architecture, severely hampering scalability.
Instead, we propose to employ frequency-selective operations, by modulating the laser intensity (and thus J) in a time-dependent fashion. Entanglement generation by parametric driving has been analyzed recently in various contexts, including entanglement using superconducting circuits [15], trapped ions [16], general studies of entanglement in sets of harmonic oscillators [17][18][19], optomechanical state transfer and entanglement between the motion of a trapped atom and a mechanical oscillator [20] and entanglement between mechanical and radiation modes [21]. Parametric driving can also lead to mechanical squeezing in optomechanical systems [22].
Let us consider two modes (1 and 2) for the moment, where the coupling is (ℏ/2) J(t)(X̂_1 + X̂_2)². Assuming a 100% amplitude-modulated laser drive beam, the coupling takes the form J(t) = J(1 + cos 2ωt). The resulting time-dependent light-induced mechanical coupling can be broken down into several contributions, whose relative importance is determined by the drive frequency ω. The static terms, J(X̂_1 + X̂_2)², shift the oscillator frequencies by δΩ_j = 2J. In addition, they give rise to an off-resonant coupling (ineffective for |Ω_1 − Ω_2| ≫ J, but with growing influence for |Ω_1 − Ω_2| ∼ J). On the other hand, the oscillating terms contain the frequency-selective interactions, for which there are three important cases. A mechanical beamsplitter (state-transfer) interaction is selected for a laser drive modulation frequency ω = (Ω_1 − Ω_2)/2. After transforming the full Hamiltonian into the interaction picture with respect to Ω_1 and Ω_2, the resonant part then reads Ĥ_b.s. = ℏJ(b̂_2†b̂_1 + b̂_1†b̂_2). In contrast, for ω = (Ω_1 + Ω_2)/2, we obtain a two-mode squeezing (nondegenerate parametric amplifier) Hamiltonian, Ĥ_t.m.s. = ℏJ(b̂_1†b̂_2† + b̂_1b̂_2), which can lead to efficient entanglement between the modes. Finally, ω = Ω_j selects the squeezing interaction for a given mode, Ĥ_sq = (ℏJ/2)(b̂_j² + b̂_j†²). These laser-tunable, frequency-selective mechanical interactions are the basic ingredients for the architecture that we will develop and analyze here.
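The frequency bookkeeping of these three cases can be captured in a few lines; this sketch simply tabulates, for a given set of mechanical frequencies, which modulation frequency ω selects which operation (the mode values are placeholders).

```python
def modulation_frequencies(Omega):
    """Modulation frequencies selecting each operation for J(t) = J*(1 + cos(2*w*t)):
    w = |Oi - Oj|/2 -> beam splitter (transfer), w = (Oi + Oj)/2 -> two-mode
    squeezing (entangle), w = Oj -> single-mode squeezing."""
    table = {}
    for j, Oj in enumerate(Omega):
        table[("squeeze", j)] = Oj
        for k in range(j + 1, len(Omega)):
            table[("transfer", j, k)] = abs(Oj - Omega[k]) / 2
            table[("entangle", j, k)] = (Oj + Omega[k]) / 2
    return table

# Example: three modes at 4.00, 4.10, 4.25 GHz (placeholder frequencies).
for op, w in sorted(modulation_frequencies([4.00, 4.10, 4.25]).items(),
                    key=lambda item: item[1]):
    print(op, f"{w:.3f} GHz")
```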
Limiting factors. - We now start to address the important constraining factors limiting the fidelity of these operations, both for the two-mode and ultimately the multi-mode case. Full simulations incorporating all these effects will be discussed further below. At higher drive powers (as needed for fast operations), the frequency-time uncertainty implies that the different processes discussed above need not be exactly resonant any more, with an allowable spread |δω| ≲ J. For example, the parametric instabilities occur for |ω − (Ω_i + Ω_j)/2| < J. At higher driving strengths, once these intervals start to overlap for different processes, selectivity is lost and the process fidelity suffers. On the other hand, at low operation speeds quantum dissipation and thermal fluctuations will limit the fidelity. This dilemma is the essential problem faced by a multi-mode setup, and we will discuss possible schemes to address it further below. The schematic situation for three modes is illustrated in Fig. 2.
In order to analyze quantitatively the full effects of decoherence and dissipation, we employ a Lindblad master equation to evolve the joint state of the mechanical modes under the influence of the light-induced time-dependent driving. The evolution of any expectation value ⟨Â⟩ derived from the master equation is governed by

d⟨Â⟩/dt = (i/ℏ)⟨[Ĥ, Â]⟩ + Σ_l Γ(n̄ + 1)⟨b̂_l†Âb̂_l − ½{b̂_l†b̂_l, Â}⟩ + Σ_l Γn̄⟨b̂_lÂb̂_l† − ½{b̂_lb̂_l†, Â}⟩,

where Ĥ already contains the effective interaction (3). For the quadratic Hamiltonian studied here, the equations for correlators, such as ⟨b̂_i†b̂_j⟩, remain closed, and these (together with the averages ⟨b̂_j⟩) are sufficient to describe the time-evolution of the Gaussian quantum states that will be produced in the course of the dynamics.
In order to analyze selective entanglement, we evaluate the logarithmic negativity as a measure of the entanglement of any two given modes (A and B),

E_N = log ||ρ̂_AB^{T_A}||_1,

where ρ̂_AB is the state of these two modes, and the partial transpose T_A acts on A only. For Gaussian states, E_N can be calculated by obtaining the symplectic eigenvalues of the covariance matrix of the two modes' positions and momenta [23].
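As a concrete illustration of this machinery, the sketch below integrates the closed moment equations for two modes driven on the two-mode-squeezing resonance, Ĥ = ℏJ(b̂_1†b̂_2† + b̂_1b̂_2), with equal damping Γ and bath occupation n̄, and evaluates E_N from the resulting covariance matrix (vacuum variance 1/2, E_N in bits). This is a two-mode toy model under stated assumptions: the third mode and all off-resonant terms are neglected, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def moments(t, y, J, Gamma, nbar):
    """y = [n1, n2, Re<b1 b2>, Im<b1 b2>]; closed under the quadratic dynamics:
    d<n_i>/dt   = -2 J Im<b1 b2> - Gamma (<n_i> - nbar)
    d<b1 b2>/dt = -i J (n1 + n2 + 1) - Gamma <b1 b2>."""
    n1, n2, cr, ci = y
    return [-2 * J * ci - Gamma * (n1 - nbar),
            -2 * J * ci - Gamma * (n2 - nbar),
            -Gamma * cr,
            -J * (n1 + n2 + 1) - Gamma * ci]

def log_negativity(n1, n2, c):
    """E_N of the two-mode Gaussian state from its second moments."""
    A = (n1 + 0.5) * np.eye(2)
    B = (n2 + 0.5) * np.eye(2)
    C = np.array([[c.real, c.imag], [c.imag, -c.real]])
    V = np.block([[A, C], [C.T, B]])
    dtil = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    nu2 = (dtil - np.sqrt(max(dtil**2 - 4 * np.linalg.det(V), 0.0))) / 2
    return max(0.0, -np.log2(2 * np.sqrt(max(nu2, 0.0))))

J, Gamma, nbar = 1.0, 0.05, 1.0   # units of J; J > Gamma*nbar (above threshold)
sol = solve_ivp(moments, (0, 5.6), [0, 0, 0, 0], args=(J, Gamma, nbar), rtol=1e-8)
n1, n2, cr, ci = sol.y[:, -1]     # laser-precooled initial state assumed
print("E_N =", log_negativity(n1, n2, cr + 1j * ci))
```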
In Fig. 3 we show the results of numerical simulations for a situation with three vibrational modes, two of which are to be entangled in the presence of the third one. The entanglement first grows and then saturates at later times, while the phonon number continues to grow exponentially. The plots show the entanglement evaluated at a fixed late time (t = 5.6/J), as a function of parameters. One clearly sees the features predicted above, i.e. the unwanted overlap between different entanglement processes at higher driving strengths (Fig. 3a,b). Increasing the vibrational frequency spacing suppresses these unwanted effects (Fig. 3b). The dependence on the driving strength itself is displayed in more detail in Fig. 3c. There, the threshold J = Γn̄ for entanglement generation at finite temperature is evident, as is the loss of entanglement at large J. Finally, Fig. 3d,e shows the dependence on temperature and mechanical quality factor. It indicates that this scheme should be feasible for realistic experimental parameters (see below). Note that the light-induced dissipation [24] effectively adds to the intrinsic decoherence rate Γn̄ the rate Γ_opt^φ ≈ g_0²α²κ/Δ² = 2J(κ/Δ). This is suppressed by a factor κ/Δ, which can in principle be made arbitrarily small for larger detuning (at the expense of a higher circulating photon number α² to keep the same J). For the realistic experimental parameters quoted below, we have κ/Δ = 1/80, such that we have been able to neglect the effects of Γ_opt^φ.
Larger arrays. - We now turn to the situation with an array of many modes. It is clear that having evenly spaced mechanical frequencies is impossible without taking further precautions, because the state transfers between adjacent modes would then all be addressed at the same modulation frequency. In fact, there seems to be no layout that allows for the selection of arbitrary pairs, avoids resonance overlap, and does not require a frequency interval that grows exponentially with the number N of modes. While one may still realize small arrays in this way, in the limit of large N another approach is needed.
The scheme (Fig. 4) that solves this challenge involves an auxiliary mode at Ω_aux, removed in frequency from the array of "memory" modes, which may now have evenly spaced frequencies in an interval [Ω_min, Ω_max]. All pairwise operations take place between a selected memory mode and the auxiliary mode. The state-transfer resonances then lie in the band [(Ω_aux − Ω_max)/2, (Ω_aux − Ω_min)/2], and entanglement is addressed within [(Ω_min + Ω_aux)/2, (Ω_max + Ω_aux)/2]. To make this work, one needs to fulfill the mild constraint 2Ω_max − Ω_min < Ω_aux < 3Ω_min. State transfer between two memory modes is now performed in three steps (swap 1-aux, swap aux-2, and swap aux-1), as is entanglement (swap 1-aux, entangle aux-2, and swap aux-1). Note that this overhead does not grow with the number of memory modes. Fig. 5 shows the state transfer between an auxiliary mode (originally prepared in a squeezed state) and one of the memory modes. This scheme can potentially be expanded, with the auxiliary modes grouped into arrays (Fig. 4c), and several such 2D blocks could again be connected via further "higher-order" auxiliary modes, in a hierarchical fashion.
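The layout constraint and the resulting operation bands are easy to check numerically; this sketch uses arbitrary placeholder frequencies.

```python
def aux_layout(Omega_min, Omega_max, Omega_aux):
    """Check 2*Omega_max - Omega_min < Omega_aux < 3*Omega_min and return the
    modulation-frequency bands for state transfer and entanglement between a
    memory mode and the auxiliary mode."""
    ok = 2 * Omega_max - Omega_min < Omega_aux < 3 * Omega_min
    transfer = ((Omega_aux - Omega_max) / 2, (Omega_aux - Omega_min) / 2)
    entangle = ((Omega_min + Omega_aux) / 2, (Omega_max + Omega_aux) / 2)
    return ok, transfer, entangle

# Placeholder GHz values; the constraint here reads 5.0 < 9.0 < 12.0.
print(aux_layout(4.0, 4.5, 9.0))
```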
Implementation. - Regarding the experimental implementation, in principle any optomechanical system with several long-lived mechanical modes can be used as a starting point. One promising platform is based on photonic crystals ("optomechanical crystals"), as introduced by Painter et al. [2]. These would be very well suited for the scheme presented here, due to their design flexibility, particularly for two-dimensional structures, the all-integrated approach, and the very large optomechanical coupling strength. Given the coupling strength currently achieved there [8], g_0/2π ∼ 1 MHz, as well as a detuning of Δ/Ω = 10 and around 2000 photons circulating inside the cavity (a number reached in recent experiments), we can estimate the induced coupling to approach the damping rate, J ∼ Γ. This corresponds to the threshold for coherent operations, provided one were to cool the bath down to k_B T_bath < ℏΩ. This is in principle doable (at 20 mK), but will likely run into practical difficulties due to re-heating of the structure via spurious photon absorption or other effects. Otherwise, at finite bath temperatures corresponding to a thermal occupation n̄ ∼ k_B T_bath/ℏΩ, the light intensity must be increased by a factor n̄, towards J ≳ Γn̄, to speed up operations and thereby fight thermal decoherence. In that case, the vibrational ground state would be prepared at the start of the pulse sequence via laser cooling, as demonstrated in [8].
In these devices, several localized vibrational and optical modes can be produced at engineered defects in an otherwise periodic array of holes cut into a free-standing substrate (e.g., made of silicon). Evanescent optical and vibrational waves connect adjacent modes (via photon and phonon tunneling, respectively). The typical photon tunnel coupling for modes spaced apart by several lattice constants is in the range of several THz [4]. Thus, hybridized optical modes will form, one of which can be selected via the laser driving frequency as the active common optical mode (the others remaining idle). At the same time, the vibrational modes' frequencies can either be designed to be different or to be equal, in which case delocalized hybridized mechanical modes are produced.
Recently it was shown that a 'snowflake' crystal made of connected triangles (honeycomb lattice) possesses a simultaneous photonic and phononic (pseudo-)bandgap and thus supports waveguides (line defects) and localized defect modes with optomechanical interaction [3]. Placing point defects (heavier triangles/thicker bridges) in the middle of such a crystal structure, a tight-binding analysis indicates that the desired mechanical frequency spectrum (Fig. 4) can be generated in principle. In any given system there will be limits to the design of the mechanical spectrum. We briefly mention another option for improving the operational fidelity: pulse shaping and optimal control. Essentially, one wants to make sure that the Fourier transform of the coupling J(t) (or, equivalently, of the time-dependent laser intensity) does not contain spectral weight at any of the resonances ω_ij^± = (Ω_i ± Ω_j)/2, except for the selected one. This implies that the pulse duration must be larger than the inverse of the smallest spacing of such resonances. Optimal control techniques (as in [17]) could be employed to numerically search for the optimal pulse shape.
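The pulse-shaping idea can be illustrated by comparing the spectral leakage of a rectangular and a smoothly windowed modulation envelope at a neighboring, unselected resonance; the frequencies and durations below are arbitrary.

```python
import numpy as np

w_sel, w_near = 2 * np.pi * 5.00e6, 2 * np.pi * 5.20e6  # selected / unwanted resonance
T = 2.0e-4                                  # pulse duration >> 1/|w_near - w_sel|
t = np.arange(0.0, T, 1.0e-9)

for name, env in [("rectangular", np.ones_like(t)),
                  ("Hann", np.sin(np.pi * t / T) ** 2)]:
    J_t = env * np.cos(w_sel * t)           # modulated coupling pulse J(t)
    freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
    spec = np.abs(np.fft.rfft(J_t))
    leak = spec[np.argmin(np.abs(freqs - w_near))] / spec.max()
    print(f"{name:11s} relative weight at unwanted resonance: {leak:.1e}")
```

A smooth envelope strongly suppresses the sidelobes that a rectangular pulse places on nearby resonances, at the cost of a somewhat longer effective pulse.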
Finally, one essential ingredient of any such architecture will be read-out. Some time ago, we pointed out [25] how to perform a quantum-non-demolition readout of the quadratures of mechanical motion in an optomechanical setup. A laser beam impinging at the optical resonance (detuning Δ = 0) is amplitude-modulated at the mechanical frequency Ω_j of one of the modes. The reflected light carries information about only one quadrature, e^{iφ}b̂_j + e^{−iφ}b̂_j†. Its phase φ is selected by the phase of the amplitude modulation, while the measurement backaction perturbs solely the other quadrature. In that way, by repeated measurements, all the joint correlators of positions and momenta of the mechanical modes may be read out, e.g. in order to verify the fidelity of the operations discussed above. If desired, taking measurement statistics for continuously varied quadrature phases would also allow full quantum-state tomography of the set of vibrational modes, and thereby ultimately process tomography.
Conclusions. - The scheme described here would enable coherent scalable nanomechanical state processing in optomechanical arrays. It can form the basis for generating arbitrary entangled mechanical Gaussian multi-mode states. An interesting application would be to investigate the decoherence of such states due to the correlated quantum noise acting on the nanomechanical modes. Moreover, recent experiments have shown in principle how arbitrary states can be written from the light field into the mechanics [5][6][7]. These could then be manipulated by the interactions described here. Alternatively, for very strong coupling g_0 > κ, non-Gaussian mechanical states [26] could be produced, and the induced nonlinear interactions (see e.g. [27,28]) could potentially open the door to universal quantum computation with continuous variables [9] in these systems.
We acknowledge an ERC Starting Grant, the DFG Emmy Noether program, and DARPA ORCHID for funding.
"Physics",
"Engineering"
] |
Identifying Uncertainties in Stellar Evolution Models Using the Open Cluster M67
Stellar age estimates are often calculated by interpolating a star's properties in a grid of models. However, different model grids will give different ages for the same star. We used the open cluster M67 to compare four different model grids: DSEP, GARSTEC, MIST, and YREC. Across all model grids, age estimates for main sequence stars were consistently higher than the accepted age of M67, while age estimates for red giant stars were lower. We compared model-generated age and mass values to external constraints as an additional test of the reliability of each model grid. For stars near solar age and metallicity, we recommend using the DSEP model grid to estimate the ages of main sequence stars and the GARSTEC model grid for red giant stars.
INTRODUCTION
Stellar ages used in fields such as galactic archaeology and exoplanet evolution are commonly determined by fitting a star to a model grid. A stellar model grid is a set of evolutionary tracks generated by a modeling code at a range of initial masses and metallicities. These tracks predict physical parameters (e.g. luminosity and temperature) of a star as a function of age, initial mass, and initial composition. Therefore, given a model grid and sufficient observational constraints, the age and mass of a star can be inferred. However, the assumptions and calibrations that go into creating a stellar model grid can cause significant differences between different grids' age estimates of a star, sometimes more than 30% (Tayar et al. 2022).
Open star clusters are commonly used to check the accuracy of stellar model grids. In particular, the cluster M67 is often used to calibrate stellar models (Choi et al. 2016) because M67 is a well-studied, nearby old open cluster that is approximately 4.0 Gyr old and is near solar metallicity ([Fe/H] = 0.00 ± 0.05) (Myers et al. 2022).
The age of M67 has been calculated many times. Most commonly, the age of a cluster is determined by plotting the stars on a color-magnitude diagram and fitting an isochrone to the main sequence turnoff. For M67, Victoria-Regina isochrones give an age estimate of 3.6-4.6 Gyr, with an average of 4.0 Gyr (VandenBerg & Stetson 2004). Sandquist et al. (2021) used MIST, PADOVA, and BASTI isochrones constrained by the eclipsing binary WOCS 11028 to get an age of 3.5-4.0 Gyr. Stello et al. (2016) used asteroseismology of red giants to obtain an age estimate of 3.46 ± 0.13 Gyr.
Model grids can be used to estimate the ages of individual M67 stars, essentially treating them as field stars, which are stars that do not belong to a cluster. With perfect models and data, this would return the same age for each star in M67. Systematic discrepancies in the age estimates can reveal flaws in the model grid or input data. We did this for the model grids DSEP, GARSTEC, MIST, and YREC, with model parameters as presented in Tayar et al. (2022).
The age of M67 has been previously estimated with some of the model grids used in this work.
• In Magic et al. (2010), GARSTEC's age estimate for M67 was determined to be 4.2 Gyr assuming Z/X = 0.0165 and 4.5 Gyr assuming Z/X = 0.0230.
For the remaining 311 stars, we used Kiauhoku (Claytor et al. 2020) to fit each star to the model grids based on log(g), [M/H], and T_eff. We discarded stars that did not fall within the parameter space covered by the model grids. APOGEE DR17 log(g) values are less reliable for low-mass dwarfs, so we discarded stars with log(g) > 4.5. This left 141, 140, 143, and 140 age and mass estimates for DSEP, GARSTEC, MIST, and YREC, respectively.
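The grid-fitting step can be sketched generically as interpolation of age over (T_eff, log(g), [M/H]). The toy grid and age relation below are stand-ins for a real evolutionary-track grid and for what Kiauhoku does in far more detail; stars falling outside the grid return NaN and are discarded.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Toy model grid over (Teff [K], logg, [M/H]) with a made-up age relation;
# a real DSEP/GARSTEC/MIST/YREC grid would supply these points from tracks.
teff, logg, feh = np.meshgrid(np.linspace(5600, 6100, 6),
                              np.linspace(3.8, 4.5, 6),
                              np.linspace(-0.2, 0.2, 5), indexing="ij")
age = 4.0 + (4.4 - logg) * 12.0 - (teff - 5800.0) / 400.0 + 2.0 * feh

age_of = LinearNDInterpolator(
    np.column_stack([teff.ravel(), logg.ravel(), feh.ravel()]), age.ravel())

star = (5820.0, 4.32, 0.03)  # APOGEE-like Teff, log(g), [M/H] for one star
print("interpolated age:", age_of(*star), "Gyr")  # NaN => outside grid, discard
```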
RESULTS
The average model-generated age of M67 is 4.72, 4.85, 4.84, and 5.74 Gyr for DSEP, GARSTEC, MIST, and YREC, respectively. In Figure 1e, the x-axis is divided into main sequence (MS), subgiant, and red giant evolutionary phases. In Figure 1a-d, the isochrones are close together on the MS, so a small difference in log(g) between two stars results in a large difference in their estimated ages. We compared the ages inferred from the models as a function of log(g) (Figure 1e). There is a wide spread of MS age estimates, with a model-wide average of 5.03 Gyr. The model-wide average for red giant stars is 2.64 Gyr, and red giant ages from YREC, DSEP, and MIST are lower than the accepted age of M67, while GARSTEC age estimates are more accurate.
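The phase-by-phase averages can be computed with a simple split on log(g); the boundary values below are illustrative stand-ins for the dashed lines in Figure 1e, and the example data are fabricated.

```python
import numpy as np

def phase_averages(logg, ages, ms_cut=3.9, rg_cut=3.5):
    """Mean model age per evolutionary phase, split on surface gravity."""
    logg, ages = np.asarray(logg), np.asarray(ages)
    phases = {"main sequence": logg >= ms_cut,
              "subgiant": (logg < ms_cut) & (logg >= rg_cut),
              "red giant": logg < rg_cut}
    return {name: float(ages[sel].mean()) for name, sel in phases.items() if sel.any()}

# Tiny fabricated example: MS ages scatter high, red giant ages come out young.
logg = np.array([4.4, 4.3, 4.2, 3.7, 3.6, 3.0, 2.8, 2.5])
ages = np.array([5.5, 4.8, 5.1, 4.2, 4.0, 2.9, 2.6, 2.4])
print(phase_averages(logg, ages))
```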
In Figure 1f, the mass estimates from the models are compared to asteroseismic mass estimates from Stello et al. (2016). While there were no shared stars between our data and Stello et al. (2016), we assume that all red giants in a cluster that have evolved as single stars will have similar masses. Of the four model grids, the GARSTEC mass estimates for red giants are closest to the Stello et al. (2016) values. Combined with GARSTEC's reasonable red giant age estimates, this indicates that the GARSTEC-generated model grid is a good choice for estimating parameters of red giant stars near solar age and metallicity.
CONCLUSIONS
When using a model grid to estimate the age of a star, we find that the accuracy of each model grid varies based on the star's metallicity and evolutionary phase. For stars near solar age and metallicity, the GARSTEC model grid is recommended for estimating the age and mass of red giant stars. For main sequence stars, the DSEP model grid was found to be most accurate. More generally, our findings highlight the need for careful verification and calibration of models in the regime in which they are going to be used.
We acknowledge support from the National Science Foundation under grant No. 2243878 through the University of Florida 2023 REU. We used data from SDSS (https://www.sdss.org/collaboration/citing-sdss/).
Figure 1
Figure 1. Panels a, b, c, and d are Kiel diagrams of the open cluster M67. Isochrones generated at [M/H] = 0.00 are shown for 4, 5, and 6 Gyr. Stars from M67 were treated as field stars to estimate their ages, which are shown by the colorbar. Panel e shows age as a function of surface gravity for all four models. Vertical dashed lines indicate the approximate boundaries between main sequence, subgiant, and red giant stars. The age estimates for each model are fitted with LOWESS non-parametric smooth curves. Notably, the GARSTEC red giant age estimates are near the predicted age of M67, and all models have similar behavior on the subgiant branch. In panel f, model-generated mass estimates for M67 red giants are compared to Stello et al. (2016) asteroseismic mass estimates for red giants.
"Physics"
] |
Point-driven modern Chladni figures with symmetry breaking
Point-driven modern Chladni figures subject to symmetry breaking are systematically unveiled by developing a theoretical model and making experimental confirmations in orthotropic brass. Square plates are employed in the exploration, based on the property that the orientation-dependent elastic anisotropy can be controlled by cutting the sides at a rotation angle with respect to the characteristic axes of the brass. Experimental results reveal that the orientation symmetry breaking not only causes a redistribution of resonant frequencies but also induces additional resonant modes. More intriguingly, the driving position in some of the new resonant modes can turn into a nodal point, whereas this position is always an anti-node in the isotropic case. The theoretical model is developed analytically by including a dimensionless parameter to account for the orientation symmetry-breaking effect in a generalized way. It is numerically verified that all experimental resonant frequencies and Chladni patterns can be well reconstructed with the developed model. The good agreement between theoretical calculations and experimental observations confirms the feasibility of using the developed model to analyze modern Chladni experiments with orientation symmetry breaking. The developed model is believed to offer a powerful tool for building an important database of plate resonant modes for applications in controlling the collective motions of micro-objects.
It is discovered that the driving position in some new resonant modes turns into a nodal point, whereas this position is always an antinode for isotropic plates. The peculiar morphology of the new resonant modes with a nodal-point driving position originates from the antiphase superposition of nearly degenerate eigenstates, which can only happen in systems with broken symmetry 22. By including a dimensionless parameter to account for the orientation symmetry-breaking effect in a generalized way, a theoretical model is developed analytically to reconstruct all experimental observations. The numerical reconstructions verify that all experimental resonant frequency spectra and Chladni figures can be satisfactorily described by the developed model. The good agreement between theoretical calculations and experimental results confirms the feasibility of using the developed model to efficiently analyze the vibrating modes and to effectively determine critical elastic parameters of anisotropic plates, to the benefit of various practical applications.
Results
Modelling modern Chladni systems with symmetry breaking by orthotropic plates. The theoretical foundations for orthotropic systems are considered first to offer more general concepts for modern Chladni figures subject to orientation symmetry breaking. Note that in addition to orientation symmetry, plate systems possess translational symmetry, which would also significantly affect the resonant modes if it were broken. However, studying the effect of translational symmetry breaking on plate resonance is beyond the scope of this work, since the plates used in the experiments are considered to be uniform. The governing equation for the vibration mode ψ of an orthotropic thin plate with two characteristic axes is given by the anisotropic Kirchhoff-Love equation 23

D_x ∂⁴ψ/∂x⁴ + 2D_xy ∂⁴ψ/∂x²∂y² + D_y ∂⁴ψ/∂y⁴ = ρ h ω² ψ,    (1)

where D_{x,y} = E_{x,y} h³/[12(1 − ν_xy ν_yx)] is the flexural rigidity, D_xy is the effective cross rigidity, and E_{x,y} is the Young's modulus along the x or y characteristic direction. Note that the condition of symmetry of stiffnesses for orthotropic plates ensures ν_xy E_y = ν_yx E_x. Due to the orthotropic property, bending waves inside the plate have different acoustic speeds along different propagation directions. To determine the orientation-dependent dispersion relation of the orthotropic plate, the plane-wave solution ψ(x, y) = ψ_0 e^{iK_P cosθ·x} e^{iK_P sinθ·y}, with amplitude ψ_0 and propagating wave number K_P(θ) along an arbitrary direction at rotation angle θ to one of the characteristic axes, can be considered. Substituting the plane-wave solution into Eq. (1), the orientation-dependent dispersion relation of the orthotropic plate under the infinite-plate approximation is found to be

ρ h ω² = K_P⁴(θ) [D_x cos⁴θ + 2D_xy cos²θ sin²θ + D_y sin⁴θ],    (2)

K_P(θ) = {ρ h ω² / [D_x cos⁴θ + 2D_xy cos²θ sin²θ + D_y sin⁴θ]}^{1/4}.    (3)

Equations (2) and (3) clearly show that bending waves propagating along the directions θ and θ + π/2 with respect to the orthotropic characteristic axes correspond, in general, to different acoustic speeds, except for the case θ = π/4. Based on this property, the orientation symmetry breaking induced by the elastic anisotropy can be flexibly adjusted by cutting orthotropic plates into squares with their sides along different angles θ with respect to the characteristic axes. More specifically, for an orthotropic square plate with sides along the θ and θ + π/2 directions, a quantitative measure of the orientation symmetry breaking of the system is the ratio between the propagating wave numbers K_P(θ) and K_P(θ + π/2), parametrized by a dimensionless symmetry-breaking parameter δ(θ) that models the magnitude of the elastic anisotropy and can be reversely evaluated from the measured wave-number ratio (Eqs (4) and (5)). Once θ is specified, the anisotropic Kirchhoff-Love equation given by Eq. (1) can subsequently be solved to find the eigenmodes and eigenvalues for constructing the response wave function of the system. However, solving the vibration of free-edge plates has long been a tough problem, even for the seemingly simple isotropic square systems 22. Hence some critical assumptions are required to obtain an analytical expression approximating the vibration wave function. Typically, the anisotropy of the orthotropic plate is dominated more by the different Young's moduli along the two characteristic axes than by the Poisson effect. Besides, the in-plane shear effect is comparatively small for vibrating thin plates. Therefore, the cross term coupling the x and y directions in Eq. (1) may be neglected.
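As a numerical illustration of this orientation dependence, the sketch below evaluates an effective stiffness E(θ) of the standard orthotropic form and a symmetry-breaking measure built from the amplitude ratio of the side wave numbers, δ(θ) = |K_P(θ) − K_P(θ+π/2)|/[K_P(θ) + K_P(θ+π/2)]. This particular definition of δ is an assumption (the paper's exact convention is not reproduced here), chosen because, with the brass constants quoted later, it yields δ ≈ 0.020, 0.011, and 0 for θ = 0, π/6, and π/4, close to the fitted values of 0.022, 0.014, and 0.

```python
import numpy as np

# Elastic constants of brass quoted in the text (GPa); H stands in for the
# cross-stiffness combination nu_xy*E_y + 2G entering the dispersion relation.
E_x, E_y, H = 107.7, 126.5, 80.3

def E_eff(theta):
    """Orientation-dependent effective stiffness of an orthotropic thin plate."""
    c2, s2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    return E_x * c2**2 + 2.0 * H * c2 * s2 + E_y * s2**2

def delta(theta):
    """Symmetry-breaking measure from the side wave numbers, using
    K_P ~ E_eff**(-1/4) at fixed driving frequency (assumed convention)."""
    k1, k2 = E_eff(theta) ** -0.25, E_eff(theta + np.pi / 2) ** -0.25
    return abs(k1 - k2) / (k1 + k2)

for th, name in [(0.0, "0"), (np.pi / 6, "pi/6"), (np.pi / 4, "pi/4")]:
    print(f"theta = {name:5s} -> delta ~ {delta(th):.3f}")
```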
Consequently, the vibrating modes of orthotropic plates can be approximated by directly considering the overall anisotropic properties through the dimensionless symmetry-breaking parameter δ,

D [(1 + δ) ∂²/∂x² + (1 − δ) ∂²/∂y²]² ψ(x, y) = ρ h ω² ψ(x, y),    (6)

where the effective wave number K = (ρhω²/D)^{1/4} includes contributions from the x- and y-propagations as

K² = (1 + δ) k_x² + (1 − δ) k_y².    (7)

Even though Eq. (6) still cannot be solved analytically for a square plate with free edges, its mode functions were confirmed by Rayleigh 22 to be nicely approximated by the eigenfunctions of a free-boundary membrane, as long as the wavelength of the bending wave is far larger than the thickness of the plate 24. Neglecting the cross term and once again assuming the x- and y-coordinates of the system to be separable, the eigenmodes ψ_{n1,n2}(x, y) and eigenvalues K_{n1,n2} of the orthotropic square plate occupying the region 0 ≤ x, y ≤ a under the free-edge condition can be approximately given by

ψ_{n1,n2}(x, y) = cos(n1 π x/a) cos(n2 π y/a), K_{n1,n2} = (π/a) [(1 + δ) n1² + (1 − δ) n2²]^{1/2},    (8)

where n1 and n2 are respectively the mode indices along the x and y coordinates of the plate. Using the approximated eigenmodes and eigenvalues, the vibrating wave function of a point-driven square plate subject to orientation symmetry breaking, driven at position (x′, y′), can be generalized from previous work 16 as

Ψ(x, y; ω) = Σ_{n1,n2} C_{n1,n2}(ω) ψ_{n1,n2}(x, y),    (9)

with weighting coefficients C_{n1,n2}(ω) (Eq. (10)) that are proportional to the eigenmode amplitude ψ_{n1,n2}(x′, y′) at the driving point and are resonantly enhanced when the driving frequency approaches the corresponding eigenfrequency. The acoustic power transferred to the vibrating plate has been confirmed to be directly proportional to the number of effectively participating eigenstates N_eff in the response wave function 24. It is worth noting that N_eff for the vibrating plate is similar to the concept of the acoustic density of states, whose increment has been proved to play an important role in the enhancement of acoustic emission 25. Since entropy is a logarithmic measure of the number of eigenmodes with significant participation probability in the coherent superposition forming the response wave function, the N_eff spectrum of the vibrating plate can be related to the entropy S as

N_eff = e^S, S = −Σ_{n1,n2} p_{n1,n2} ln p_{n1,n2},    (11)

where p_{n1,n2} is the normalized participation probability obtained from the weighting coefficient function in Eq. (10). A more detailed discussion of calculating the entropy corresponding to a given driving wave number from the weighting coefficient function in Eq. (10) is provided in the Methods section. In order to compare with the results for the isotropic plate in previous works more directly, the argument of the expansion coefficient function in Eq. (10) has been changed from the frequency ω to the wave number k by simply using the dispersion relation ω = C·k², where the coefficient C can be evaluated from Eqs (2 and 3) once the orientation angle θ is determined. By specifying the local maxima of the N_eff spectrum under different δ parameters, the redistribution of the resonant peaks of vibrating plates with orientation symmetry breaking can be analyzed. Figure 2 shows the calculated N_eff(k) for square plates with symmetry-breaking parameters δ of 0, 0.02, and 0.05. The calculated N_eff(k) spectra behave as oscillatory functions whose peak positions correspond to the resonant wave numbers at which the acoustic power-transfer efficiency of the system reaches local maxima. The validity of determining the resonant peak positions by the maximum N_eff (or the maximum entropy) may be understood via the concept of energy equipartition in statistical mechanics: the more eigenstates participate in the total energy configuration, the higher the energy the system can possess, since each eigenstate can offer the same energy contribution.
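A compact numerical sketch of the N_eff spectrum follows. It assumes Lorentzian-type weighting coefficients C_{n1,n2}(k) ∝ ψ_{n1,n2}(x′, y′)/(k² − K²_{n1,n2} + iγ), a specific form that goes beyond what is reproduced above, together with the free-membrane eigenmodes of Eq. (8) and a center-driven plate; the damping γ and mode truncation are arbitrary.

```python
import numpy as np

a, gamma, N = 1.0, 2.0, 40    # side length, damping, mode cutoff (all assumed)
xp = yp = 0.5 * a             # center-driven plate

n1, n2 = np.meshgrid(np.arange(N + 1), np.arange(N + 1), indexing="ij")
psi_d = np.cos(n1 * np.pi * xp / a) * np.cos(n2 * np.pi * yp / a)  # drive overlap

def n_eff(k, delta=0.0):
    """Effective number of participating eigenmodes, N_eff = exp(S)."""
    K2 = (np.pi / a) ** 2 * ((1 + delta) * n1**2 + (1 - delta) * n2**2)
    p = np.abs(psi_d / (k**2 - K2 + 1j * gamma)) ** 2
    p = (p / p.sum()).ravel()
    p = p[p > 1e-15]
    return np.exp(-np.sum(p * np.log(p)))   # S = -sum p ln p, N_eff = e^S

ks = np.linspace(5.0, 40.0, 400)
for d in (0.0, 0.02, 0.05):
    spec = np.array([n_eff(k, d) for k in ks])   # peaks = resonant wave numbers
    print(f"delta = {d}: largest N_eff = {spec.max():.1f}")
```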
This so-called maximum entropy principle has been widely confirmed to be feasible and reliable for predicting the collective behavior of multimode systems, such as maximum emission in lasers 26, self-organization in complex systems 27, wave-function localization in disordered systems 28, and phase transitions in open quantum systems 29. From the results of the redistributed N_eff spectra, some resonant modes (marked by blue downward arrows in Fig. 2) are found to be so robust that their resonant peak positions remain almost unchanged as the symmetry-breaking parameter δ increases. These nearly unaffected peak positions correspond to relatively larger N_eff (larger density of states), whose positions are mainly determined by the energy-level distribution of the plate. Since a perturbation like the orientation symmetry breaking is insufficiently strong to shift the positions of clustered energy levels of the system considerably, the number of participating eigenstates in the robust modes decreases only slightly as δ increases. In addition, because the dominant participating eigenmodes in the coherent superposition still have relatively high participation probability under orientation symmetry breaking, the morphologies of the robust modes can be conjectured to remain almost the same. A detailed analysis of the dependence of the morphology variation on the eigenstate composition of the resonant modes is discussed later. On the other hand, some new resonant peaks emerge at the positions of local minima of the isotropic case as the symmetry-breaking parameter increases. Once δ is sufficiently large, N_eff for the new resonant modes can even exceed that of the original resonant modes of the isotropic case, so that they become locally dominant states, as seen at marks (iv) and (v) in Fig. 2.
To examine the influence of orientation symmetry breaking on the wave patterns, the resonant wave functions of vibrating plates corresponding to the driving wave numbers marked by (i)-(v) in Fig. 2, under symmetry-breaking parameters δ = 0, 0.02, and 0.05, were calculated with Eqs (9-11) and are shown in Fig. 3. Consistent with the aforementioned discussion, the overall structures of the mode patterns for the robust modes (i)-(iii) remain nearly unchanged and only deform slightly along one coordinate axis as the symmetry-breaking parameter increases. The numerical results validate the fact that the larger the N_eff of the coherent superposition, the more stable the structure of the resonant mode against perturbation. In contrast, the new resonant modes (iv) and (v) with non-zero δ show totally different morphologies in comparison with the isotropic cases. Unlike typical plate wave functions, which have a presumable antinode at the driving point because this position serves as the main excitation source for the plate vibration, it is intriguing that the driving position turns into a nodal point in some new resonant modes under orientation symmetry breaking 16.
In order to analyze the morphology transition with increasing elastic anisotropy more quantitatively, the eigenmode compositions given by the weighting coefficients C_{n1,n2}(k) for the cases of the robust resonant mode (k/a = 29.662) and the new resonant mode (k/a = 33.832) are further analyzed (Fig. 4a). For the robust mode, the number of eigenstates with significant participation probability decreases slightly as δ increases, in agreement with the results in Fig. 2. However, the slight decrement of N_eff for the robust mode does not influence the global morphology of the wave pattern, because the significantly participating eigenmodes still contribute strongly to the coherent superposition. The slightly deformed wave patterns along one of the coordinate directions for the robust mode can be explained by the enlarged magnitude differences of the weighting coefficients of the participating eigenmodes ψ_{n1,n2} and ψ_{n2,n1} as the symmetry-breaking parameter δ increases. Nevertheless, most of the dominant eigenstates in the robust modes clearly remain in-phase in the superposition no matter how much the symmetry-breaking parameter increases. On the contrary, the participating eigenmodes ψ_{n1,n2} and ψ_{n2,n1} of the new resonant mode show an abrupt change from in-phase to antiphase superposition once there is non-zero symmetry breaking. Antiphase superposition of eigenmodes is known to be the main cause of wave patterns with a nodal point at the fixed or driving position for free-edge plates 22. However, the relationship between symmetry breaking and the presence of antiphase superposition in plate systems has seldom been discussed with an explicit model so far. Using the developed analytical expression for the resonant mode, the antiphase superposition induced by orientation symmetry breaking can easily be explained with the conceptual diagram of Fig. 4b. Without symmetry breaking (K_{n1,n2} = K_{n2,n1}), all degenerate eigenmodes ψ_{n1,n2} and ψ_{n2,n1} are in-phase, corresponding to either positive or negative weights in the superposition, no matter whether the driving wave number is larger or smaller than the closest eigenvalue K_{n1,n2}. In contrast, once the symmetry breaking causes the degenerate levels to split, antiphase superposition naturally appears as long as the driving wave number lies between the split eigenvalues K_{n1,n2} and K_{n2,n1}. From the other viewpoint, it is the degenerate level splitting that leads to the emergence of a new local maximum in the N_eff spectrum and thereby forms the new resonant mode. Next, the modern Chladni experiment on vibrating orthotropic plates is performed to confirm the developed theory.
Figure 4: (a) The main contributing eigenmodes for the robust mode are found to be in-phase no matter how δ changes, while the dominant eigenmodes for the new resonant mode are clearly antiphase once δ becomes non-zero. (b) Conceptual diagram explaining the origin of antiphase superposition from symmetry breaking: once two degenerate eigenstates have been split by orientation symmetry breaking, antiphase superposition naturally occurs when the driving wave number is tuned to lie between the split levels K_{n1,n2} and K_{n2,n1}.
Experimental verification by orthotropic brass plates. Because of its appropriate stiffness and elastic properties 21, orthotropic brass, which plays an important role in industry and musical-instrument manufacturing, was utilized for the modern Chladni experiment.
To create thin plates with different elastic anisotropy, corresponding to different symmetry-breaking parameters δ, a brass sheet with a thickness of 0.8 mm was cut into three squares with side length a = 280 mm and with their sides along cutting directions at rotation angles θ of 0, π/6, and π/4 with respect to one of the characteristic axes of the brass (Fig. 5a). All brass plates were fixed and driven at the square center, as seen in the experimental setup for modern Chladni figures (Fig. 5b). The solid black lines in Figs 6a-8a show the experimental frequency spectra of the driving efficiency of power delivery η for the brass plates with θ = π/4, π/6, and 0, respectively. According to the orientation-dependent dispersion relation given by Eq. (3), it can easily be deduced that the symmetry-breaking parameter increases as the cutting angle θ deviates from θ = π/4, i.e. δ(0) > δ(π/6) > δ(π/4). Consistent with the previous theoretical discussion, several new resonant peaks can clearly be found in the frequency spectra as the symmetry-breaking parameter increases (Figs 6a-8a). Subsequently, the Chladni nodal-line patterns corresponding to the resonant modes of the vibrating brass plates were recorded using the traditional method. The first row of Fig. 6b shows the experimental Chladni figures corresponding to the resonant peaks (i)-(x) in Fig. 6a for the brass plate with θ = π/4. These resonant modes belong exactly to the robust modes, whose resonant peak positions are almost unchanged by the symmetry breaking (Figs 6a-8a). Besides, the resonant Chladni figures of these robust modes for the case θ = π/4 present highly symmetric morphologies quite similar to the results for the isotropic square plate 24. The comparatively few resonant peaks in the frequency spectrum and the high-symmetry nodal-line patterns of the resonant modes imply that the brass square plate with θ = π/4 can certainly be viewed as an isotropic system with δ = 0. The first rows of Figs 7b and 8b show the Chladni figures corresponding to some new resonant modes, marked by (i)-(vi) in Figs 7a and 8a. All these new resonant modes induced by orientation symmetry breaking indeed reveal nodal patterns in which the driving position is a nodal point, as predicted by theory. Moreover, some Chladni figures of the new resonant modes present deformed morphologies that break the reflection symmetry with respect to the square diagonals when the symmetry-breaking parameter increases even further (see i, iv, and vi in Fig. 8b).
Reconstructing experimental resonant modes with the developed model. To validate its feasibility for analyzing modern Chladni systems with orientation symmetry breaking, the developed model is subsequently exploited to reconstruct all the experimental observations. Theoretically, the driving efficiency of power delivery η of the vibrating plate can be expressed as the square of the ratio of the reaction amplitude αΨ(x′, y′; ω) to the driving amplitude Q, which can be explicitly derived as 16

η(ω) = |αΨ(x′, y′; ω)/Q|².

Note that the damping coefficient γ and coupling factor α, which are respectively associated with the widths and positions of the resonant peaks, can be directly determined by best fitting the numerical calculation to the experimental results. By fine-tuning the symmetry-breaking parameter δ in the calculation, the overall structures of the experimental frequency spectra are nicely matched by the numerical results. The best fits between the experimental and numerical spectra in Figs 6a-8a correspond to symmetry-breaking parameters δ of 0, 0.014, and 0.022 for the brass plates with cutting angles θ of π/4, π/6, and 0, respectively. Using Eqs (9 and 10) with the driving frequencies at the peak positions marked in Figs 6a-8a, the corresponding Chladni figures for the brass plates can be reconstructed by evaluating the inverse of the wave-pattern intensity |Ψ(x, y; ω)|², as seen in the second rows of Figs 6b-8b and as sketched below. Even though slight differences can be found in the detailed structures, due to the linear approximation in the theory and manufacturing imperfections of the plates, the global morphologies of the experimental Chladni figures are satisfactorily reconstructed by the numerical patterns of the current model. The good agreement between the numerical reconstructions and the experimental results once again verifies the applicability of the developed model for approximating the resonant behavior of vibrating thin plates subject to orientation symmetry breaking.
Figure 6: The high similarity between these results and those in ref. 24 for the aluminum plate implies that the square brass plate with cutting angle θ = π/4 can be regarded as an isotropic system, as theoretically predicted.
Finally, the orientation-dependent symmetry-breaking parameter for the brass plate given by Eq. (3) is further calculated with the elastic constants 21 E_x = 107.7 GPa, E_y = 126.5 GPa, and ν_xy E_y + 2G = 80.3 GPa, and compared with the results from the reconstructions (Fig. 9). The high consistency between the reconstruction parameters and the results calculated from elastic theory further verifies that the developed model can be a powerful tool, to be combined with the numerical modal-expansion method 30, for analyzing the anisotropic elastic constants of orthotropic plates more efficiently.
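A minimal sketch of the pattern reconstruction follows, under the same assumed Lorentzian weighting and Eq. (8) eigenmodes as in the N_eff sketch above: the inverse intensity 1/|Ψ|² is large along the nodal lines where the sand accumulates. The grid size, damping, and mode cutoff are arbitrary.

```python
import numpy as np

def chladni_pattern(k, delta=0.0, a=1.0, gamma=2.0, N=40, grid=200):
    """Approximate Chladni figure at driving wave number k: bright ridges of
    1/|Psi|^2 trace the nodal lines of the center-driven response wave."""
    x = np.linspace(0.0, a, grid)
    X, Y = np.meshgrid(x, x, indexing="ij")
    Psi = np.zeros((grid, grid), dtype=complex)
    for m in range(N + 1):
        for n in range(N + 1):
            w_drive = np.cos(m * np.pi / 2) * np.cos(n * np.pi / 2)  # center drive
            if abs(w_drive) < 1e-12:
                continue                    # odd modes have a node at the center
            K2 = (np.pi / a) ** 2 * ((1 + delta) * m**2 + (1 - delta) * n**2)
            Psi += (w_drive / (k**2 - K2 + 1j * gamma)) \
                   * np.cos(m * np.pi * X / a) * np.cos(n * np.pi * Y / a)
    return 1.0 / (np.abs(Psi) ** 2 + 1e-9)

pattern = chladni_pattern(k=29.662, delta=0.02)  # wave number quoted in the text
```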
Discussion
In this study, point-driven modern Chladni systems subject to the orientation symmetry breaking effect have been explored in depth, both theoretically and experimentally. By cutting an orthotropic brass sheet into squares with their sides at different rotation angles with respect to the characteristic axes, vibrating plates with different elastic anisotropy have been systematically explored. It has been confirmed that the resonant spectra reveal an explicit redistribution and the occurrence of new resonant modes under the orientation symmetry breaking effect, which leads to degenerate level splitting in the orthotropic plates. More intriguingly, the driving position in some new resonant modes has been found to turn into a nodal point, whereas this position is always an antinode in isotropic plates. Using the analytical model developed by including a dimensionless parameter that accounts for the orientation symmetry breaking of the plate in a generalized manner, the formation of the peculiar morphologies of the new resonant modes from antiphase superposition has been unambiguously resolved. Furthermore, the developed model has been used to reconstruct all experimental observations of the resonant spectra and resonant Chladni figures subject to orientation symmetry breaking with high consistency. The good agreement between the theoretical reconstructions and the experimental results not only proves the feasibility of the developed model for describing point-driven Chladni systems with orientation symmetry breaking but also provides a powerful analytical tool for determining important elastic constants of orthotropic plates in a more time-saving way.
Methods
Response wave function of modern Chladni plates. According to ref. 16 , the response wave function of a point-driven thin plate is governed by an inhomogeneous flexural-wave equation, where ∇⁴ is the bi-harmonic operator, D is the flexural rigidity, ρ is the mass density of the plate, h is the plate thickness, m_d and m_p are respectively the masses of the driving oscillator and the thin plate, Q is the amplitude of the driving oscillator, and α ∈ [0, 1] is the dimensionless coupling factor describing the coupling strength between the plate and the driving oscillator. The response wave function is expanded with the complete set of eigenfunctions ψ_n(x, y) and eigenvalues ω_n given by the homogeneous equation. Considering a coherent state composed of N eigenmodes with equal probabilities, i.e. p_n = 1/N, the information entropy evaluates to S = ln N, whose exponential form gives the number of effectively participating eigenmodes.
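As a small illustration of the entropy measure just defined, the snippet below computes exp(S) from a set of mode coefficients; for N equally weighted modes it returns N, consistent with S = ln N.

```python
import numpy as np

def effective_mode_number(coeffs):
    """Number of effectively participating eigenmodes, exp(S), with
    S = -sum p_n ln p_n and p_n the normalized mode probabilities."""
    p = np.abs(np.asarray(coeffs)) ** 2
    p = p / p.sum()
    S = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return np.exp(S)

# Equal weights over N modes give exp(S) = N, as stated in the text.
print(effective_mode_number(np.ones(8)))   # -> 8.0
```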
Measurement of resonant spectra and resonant Chladni figures of thin plates. The setup and
processes for measuring modern Chladni figures at resonance are the same as those described in refs 16,24 . To prepare thin-plate systems with different elastic anisotropy corresponding to different symmetry-breaking parameters, a brass sheet with a thickness of 0.8 mm was cut into squares with a side length of 280 mm and with cutting angles θ of 0, π/6, and π/4 with respect to the characteristic axes of the orthotropic brass (Fig. 5a). The center of each thin plate was fixed with a screw supporter driven by an electronically controlled mechanical oscillator with a sinusoidal wave of variable frequency. The electronic control system consists of a function generator, whose signal is amplified to excite the mechanical oscillation, and a digital galvanometer connected in series with the oscillator to probe the effective driving power of the whole plate system (Fig. 5b). The frequency response of the measured driving power P(ω) for the total vibrating system (thin plate and mechanical oscillator) can be analyzed to characterize the resonant spectrum of modern Chladni systems 16 . Subsequently, the resonant Chladni figures can be recorded at the resonant frequencies resolved from the driving-efficiency spectrum, using the traditional sprinkling-sand method.
Data availability statement. All data generated or analyzed during this study are included in this published article. | 5,824 | 2018-07-18T00:00:00.000 | ["Physics", "Materials Science", "Engineering"] |
N-independent Localized Krylov Bogoliubov-de Gennes Method: Ultra-fast Numerical Approach to Large-scale Inhomogeneous Superconductors
We propose an ultra-fast numerical approach to large-scale inhomogeneous superconductors, which we call the Localized Krylov Bogoliubov-de Gennes method (LK-BdG). In the LK-BdG method, the computational complexity of the local Green's function, which is used to calculate the local density of states and the mean-fields, does $not$ depend on the system size $N$. The calculation cost of self-consistent calculations is ${\cal O}(N)$, which enables us to open a new avenue for treating extremely large systems with millions of lattice sites. To show the power of the LK-BdG method, we demonstrate a self-consistent calculation on a 143806-site Penrose quasicrystal lattice with a vortex and a calculation on a 1016064-site two-dimensional nearest-neighbor square-lattice tight-binding model with many vortices. We also demonstrate that it takes less than 30 seconds with one CPU core to calculate the local density of states over the whole energy range in a 100-million-site tight-binding model.
Introduction. The mean-field approach through the Bogoliubov-de Gennes (BdG) equations is one of the most convenient and efficient ways to describe inhomogeneous superconductivity. In the past two decades, studies of quasiparticle excitations in superconducting systems with junctions or vortices have become more important, since these systems can serve as platforms for topological quantum computing using Majorana quasiparticles [1][2][3]. Recently, Majorana zero modes have been observed in many systems, such as the iron-based superconductor FeTexSe1-x [4,5]. Since one needs a real-space formulation for systems with junctions or vortices, the numerical simulation becomes computationally involved. Although there are alternative approaches to inhomogeneous superconductivity, such as the quasiclassical Eilenberger theory or Ginzburg-Landau methods, these methods cannot treat discretized quantum modes like Majorana zero modes. In addition to topological materials, there are many interesting systems to be solved, such as high-Tc superconductors, for which the superconducting coherence length is of the order of the Fermi wavelength, or nanoscale superconductors, for which the superconducting coherence length is comparable to the system size. Therefore, the need for a fully quantum-mechanical approach has become imperative.
It is very hard to diagonalize the Hamiltonian in large inhomogeneous systems, since the computational complexity to diagonalize the Hamiltonian matrix is O(N 3 ). Here, N is the matrix size. In the last decade, various kinds of numerical approaches to solve the BdG equations for inhomogeneous systems have been developed [6][7][8][9]. The computational complexities of these approaches are O(N 2 ) for self-consistent calculations and O(N ) for calculating the local quantities like the local density of states (LDOS), respectively. However, if one wants to treat inhomogeneous systems with internal degrees of freedom (e.g. the iron-based superconductors are multiband systems), the computational complexity becomes huge even with the use of supercomputing systems with thousands of CPU cores. Therefore, it is still hard to treat large realistic inhomogeneous systems.
In this letter, by focusing on the fact that the one-particle local Green's function is constructed locally in real space, we propose an ultra-fast numerical approach to large-scale inhomogeneous superconductors, which we call the Localized Krylov Bogoliubov-de Gennes (LK-BdG) method. We show that the vectors in the Krylov subspace used to calculate the Green's function are localized. The computational complexities in the LK-BdG method are O(N) for self-consistent calculations and O(1) for calculating local quantities, respectively. To show the power of the LK-BdG method, we demonstrate a self-consistent calculation on the s-wave superconducting 143806-site Penrose quasicrystal lattice with a vortex and a calculation on a 1016064-site two-dimensional s-wave nearest-neighbor square-lattice tight-binding model with many vortices. Finally, a summary is given.
Model. The Bogoliubov-de Gennes equations describe the behavior of electrons and holes in superconductors, which are coupled to mean-fields. A general BdG Hamiltonian is given as $H = \Psi^{\dagger}\hat{H}\Psi/2$. The column vector Ψ is composed of the N fermionic annihilation operators $c_i$ and creation operators $c_i^{\dagger}$ ($i = 1, 2, \cdots, N$). The subscript i in $c_i$ or $c_i^{\dagger}$ denotes a quantum index depending on spatial site, spin, orbital, etc.; for simplicity, we regard i as a spatial site index. The Hamiltonian matrix $\hat{H}$ is a $2N \times 2N$ Hermitian matrix of the form
$$\hat{H} = \begin{pmatrix} \hat{H}_N & \hat{\Delta} \\ \hat{\Delta}^{\dagger} & -\hat{H}_N^{T} \end{pmatrix},$$
where $\hat{H}_N$ is the Hamiltonian matrix in the normal state and $\hat{\Delta}$ is the superconducting order parameter. Without diagonalizing the BdG Hamiltonian directly, we can calculate physical observables and mean-fields with the use of the one-particle Green's function $\hat{G}(z) = (z\hat{I} - \hat{H})^{-1}$. The important quantity is the difference of the retarded and advanced Green's function matrices, $\hat{d}(\omega) = \hat{G}^R(\omega) - \hat{G}^A(\omega)$. For example, the LDOS at site i and the mean-field $\langle c_i c_j \rangle$ are expressed through matrix elements of $\hat{d}(\omega)$ taken with the $2N$-component unit vectors $e(i)$ and $h(i)$, defined by $[e(i)]_k = \delta_{k,i}$ and $[h(i)]_k = \delta_{k,N+i}$ [6,7]. Localized Krylov subspace. We focus on the fact that the vectors e(i) and h(i) are localized in real space. We introduce the order-m Krylov subspace generated by the Hamiltonian matrix $\hat{H}$ and a unit vector b (= e(i) or h(i)),
$$\mathcal{K}_m(\hat{H}, b) = \operatorname{span}\{b, \hat{H}b, \hat{H}^2 b, \ldots, \hat{H}^{m-1}b\}.$$
We call $\mathcal{K}_m(\hat{H}, b)$ the localized Krylov subspace, since its m vectors are localized, as follows. The elements of the second vector are $[\hat{H}e(i)]_k = \hat{H}_{ki}$. If the Hamiltonian matrix $\hat{H}$ is sparse, only a few of these elements are nonzero; for example, in the two-dimensional square-lattice tight-binding model with nearest-neighbor hopping, there are only four such elements in the normal-state Hamiltonian matrix. The elements of the third vector are $[\hat{H}^2 e(i)]_k = \sum_l \hat{H}_{kl}\hat{H}_{li}$, where the summation index l is restricted by the sparseness of the Hamiltonian. Thus, the number of nonzero elements of the m-th vector scales as $\sim m^d$, where d is the dimension of the system. With the use of the above discussion, we can reduce the computational complexity of the matrix-vector product, since the number of indices in the summation does not depend on the system size N, as shown in Fig. 1 (and illustrated numerically in the sketch below). The first vector $h_0$, defined in Eq. (2), is localized in hole space. Although the amplitude of the vectors spreads between electron and hole spaces through the superconducting order-parameter matrix $\hat{\Delta}$, the amplitudes of $h_1$, $h_2$, $h_3$ remain localized (see Fig. 1). We note that this property is useless for finding eigenvalues and eigenvectors with Krylov-subspace-based methods such as the Lanczos or Arnoldi methods, since the eigenvectors are usually not localized, so that one must take m so large that the elements of the m-th vector are all finite. Chebyshev polynomial method. In the Chebyshev polynomial method, we generate vectors by the recurrence formula $q_{n+1} = 2\hat{K}q_n - q_{n-1}$, with $q_0 = b$, $q_1 = \hat{K}b$ and $\hat{K} = (\hat{H} - b\hat{I})/a$ [6,7]. The n generated vectors lie in the Krylov subspace $\mathcal{K}_{n+1}(\hat{H}, b)$. The mean-field $\langle c_i c_j \rangle$ is expressed as a sum over Chebyshev moments with weights $w_n = (1 + \delta_{n0})\pi/2$ and the Fermi function $f(x) = 1/(1 + \exp(-x/T))$. We found that the mean-field $\langle c_i c_j \rangle$ can be calculated with good enough accuracy in a localized Krylov subspace whose order m is much smaller than the dimension of the matrix, provided the index j is not far from the index i in real space.
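The locality argument above can be checked numerically. The sketch below assembles a sparse BdG matrix for a 2D nearest-neighbor tight-binding lattice with a uniform s-wave gap and counts the nonzero entries of Ĥ^m e(i): the support grows by one lattice shell per multiplication, independent of N. The lattice size and parameter values are arbitrary illustrations.

```python
import numpy as np
import scipy.sparse as sp

# Sketch: sparse BdG matrix H = [[H_N, D], [D^dag, -H_N^T]] for a 2D
# nearest-neighbor tight-binding lattice with a uniform s-wave gap.
L = 41                             # lattice is L x L, N = L*L sites (assumed size)
N = L * L
t, mu, delta0 = 1.0, -1.5, 0.5     # illustrative parameters

def site(ix, iy):
    return ix * L + iy

rows, cols, vals = [], [], []
for ix in range(L):
    for iy in range(L):
        i = site(ix, iy)
        rows.append(i); cols.append(i); vals.append(-mu)
        for dx, dy in ((1, 0), (0, 1)):
            jx, jy = ix + dx, iy + dy
            if jx < L and jy < L:
                j = site(jx, jy)
                rows += [i, j]; cols += [j, i]; vals += [-t, -t]
HN = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))
D = delta0 * sp.identity(N)
H = sp.bmat([[HN, D], [D.conj().T, -HN.T]], format="csr")

# Locality of the Krylov vectors: starting from the electron unit vector
# e(i), the support of H^m e(i) spreads by one lattice shell per
# multiplication, independent of the system size N.
b = np.zeros(2 * N); b[site(L // 2, L // 2)] = 1.0
v = b.copy()
for m in range(5):
    print(f"m = {m}: nonzeros = {np.count_nonzero(np.abs(v) > 1e-14)}")
    v = H @ v
```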
Therefore, the cost of the Chebyshev polynomial method with the localized Krylov subspace for calculating elements of the matrix $\hat{d}(\omega)$ does not depend on the matrix dimension N. We note that Furukawa and Motome have shown that the free energy in Monte Carlo simulations can be calculated with a similar localized Krylov subspace in fermion systems coupled to classical degrees of freedom [12]. Following their paper, we can introduce a truncated localized Krylov subspace, which accelerates BdG simulations even more effectively, as discussed later.
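A minimal sketch of the Chebyshev recurrence described above follows; it accumulates the moments w^T T_n(K̂) q₀ and includes the amplitude-truncation step (threshold ε) introduced in the truncated-subspace paragraph below. The dense random test matrix and all parameter values are placeholders: in the actual method Ĥ is sparse and local, which is what makes the truncation effective.

```python
import numpy as np

# Chebyshev recurrence q_{n+1} = 2*K q_n - q_{n-1}, with K = (H - b*I)/a
# rescaled so its spectrum lies in [-1, 1]; moments mu_n = w^T T_n(K) q_0.
rng = np.random.default_rng(0)
M = 400
A = rng.standard_normal((M, M))
H = (A + A.T) / 2
a, b = 1.2 * np.linalg.norm(H, 2), 0.0   # rescaling so that |eig(K)| < 1
K = (H - b * np.eye(M)) / a

def chebyshev_moments(K, start, weight, n_c=200, eps=1e-6):
    q_prev = start.copy()
    q_curr = K @ start
    moments = [weight @ q_prev, weight @ q_curr]
    for _ in range(2, n_c):
        q_next = 2.0 * (K @ q_curr) - q_prev
        q_next[np.abs(q_next) < eps] = 0.0   # truncation step (threshold eps)
        moments.append(weight @ q_next)
        q_prev, q_curr = q_curr, q_next
    return np.array(moments)

e0 = np.zeros(M); e0[0] = 1.0
mu = chebyshev_moments(K, e0, e0)
print(mu[:5])
```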
Lanczos method for a Green's function. Another Krylov-subspace-based method for calculating a Green's function is the Lanczos method [13]. The diagonal elements of the Green's function $[\hat{G}(z)]_{ii}$ ($i \le N$) can be calculated by the continued-fraction expansion
$$[\hat{G}(z)]_{ii} = \cfrac{1}{z - a_0 - \cfrac{b_1^2}{z - a_1 - \cfrac{b_2^2}{z - a_2 - \cdots}}},$$
where the coefficients $a_n$ and $b_n^2$ are obtained from the recurrence $j_{n+1} = \hat{H}j_n - a_n j_n - b_n^2 j_{n-1}$, with $a_n = j_n^T \hat{H} j_n / j_n^T j_n$ and $b_n^2 = j_n^T j_n / j_{n-1}^T j_{n-1}$, supplemented by $b_0^2 = 0$, $j_{-1} = 0$ and $j_0 = e(i)$. In the Lanczos method for a Green's function, the n-th order Krylov subspace is $\mathcal{K}_n(\hat{H}, e(i))$, which is localized. We note that the Lanczos method with the localized Krylov subspace has been used in the field of order-N first-principles calculations [14,15]. In this letter, we adopt the Lanczos method to calculate the LDOS, since this quantity is given by the diagonal elements of the Green's function.
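The continued-fraction evaluation can be sketched in a few lines. The snippet below runs a normalized Lanczos recurrence, evaluates the continued fraction bottom-up for a diagonal element [Ĝ(z)]₀₀, and compares it with direct matrix inversion on a small random real-symmetric test matrix. The matrix, z, and the iteration count are illustrative, and no reorthogonalization is performed, which is adequate for a smeared Green's function.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200
A = rng.standard_normal((M, M))
H = (A + A.T) / 2

def lanczos_green(H, v0, z, n_iter=100):
    a_coef, b2_coef = [], []
    j_prev = np.zeros_like(v0)
    j_curr = v0 / np.linalg.norm(v0)
    beta = 0.0
    for _ in range(n_iter):
        w = H @ j_curr
        alpha = j_curr @ w
        a_coef.append(alpha)
        w = w - alpha * j_curr - beta * j_prev
        beta = np.linalg.norm(w)
        b2_coef.append(beta ** 2)
        j_prev, j_curr = j_curr, w / beta
    # Continued fraction evaluated from the bottom up:
    # G = 1 / (z - a0 - b1^2 / (z - a1 - b2^2 / ...)).
    g = 0.0 + 0.0j
    for a_n, b2_n in zip(reversed(a_coef), reversed(b2_coef)):
        g = 1.0 / (z - a_n - b2_n * g)
    return g

e0 = np.zeros(M); e0[0] = 1.0
z = 0.3 + 0.05j
print(lanczos_green(H, e0, z))
print(np.linalg.inv(z * np.eye(M) - H)[0, 0])   # direct reference value
```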
Truncated localized Krylov subspace. Following the paper by Furukawa and Motome [12], we introduce a truncated matrix-vector product: by introducing a threshold ε and discarding vector elements whose magnitude falls below it, we can restrict the spatial range over which the calculations are carried out. The details are discussed in their paper [12]. In this letter, we adopt ε = 10⁻⁶. Demonstration I: quasicrystal with the Penrose lattice. We demonstrate a self-consistent calculation for quasicrystalline superconductors to show the power of the LK-BdG method. Recently, superconductivity of quasicrystals was discovered in Al-Zn-Mg quasicrystalline alloys [16]. Sakai and Arita have studied possible superconductivity on a Penrose-tiling structure, which is a prototype of quasicrystalline structures [10,11]. They found that there exists a superconducting state with spatially extended Cooper pairs in the attractive Hubbard model. This Penrose-tiling structure, which we call the Penrose lattice, is a good demonstration case for the LK-BdG method, since an inhomogeneous superconducting order parameter naturally appears. We consider the tight-binding model on the Penrose lattice proposed in the previous papers [10,11] (see also the supplemental materials [17]). We introduce a vortex located at the center. We calculate the on-site s-wave superconducting order parameter with the interaction U = −3t and the chemical potential µ = −1t at zero temperature. The Chebyshev cutoff parameter is n_c = 200, which is sufficient to obtain the mean-fields. The renormalized parameters a and b for the Hamiltonian matrix are a = 10t and b = 0, respectively. In Fig. 2, we show the self-consistent solution on the 143806-site Penrose lattice (see also Fig. S1 in the supplemental materials [17]). By comparing the 143806-site and 21106-site systems, we show that the central region can be regarded as bulk, since the LDOS around the center does not depend on the lattice size, as shown in Fig. 2(d). While the level spacing of vortex bound states in conventional systems is characterized by Δ²/E_F, with E_F the Fermi energy [20], the energy levels in the Penrose lattice are close to zero. The vortex core is small, as shown in Fig. 2(a). In the conventional theory of vortex bound states, the minimum energy level in a small vortex core is high, owing to the quantum confinement of quasiparticles; note that this energy level depends on the position of the vortex center [18]. Because there is no Fermi wavelength, owing to the absence of translational symmetry, this suggests that interesting vortex physics exists in quasicrystalline superconductors.
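For readers who want a runnable baseline, the following sketch performs the self-consistent BdG loop with dense diagonalization on a small square lattice; this is exactly the O(N³) step that the LK-BdG method replaces with localized-Krylov evaluation of the mean-fields. U = −3t and µ = −1t follow the Penrose demonstration above, but the square geometry, the lattice size, and the zero-temperature gap-equation form used here are simplifying assumptions for illustration.

```python
import numpy as np

# Minimal self-consistency sketch (dense diagonalization, T = 0).
L, t, U, mu = 12, 1.0, -3.0, -1.0
N = L * L

HN = np.zeros((N, N))
for ix in range(L):
    for iy in range(L):
        i = ix * L + iy
        HN[i, i] = -mu
        if ix + 1 < L:
            j = (ix + 1) * L + iy
            HN[i, j] = HN[j, i] = -t
        if iy + 1 < L:
            j = ix * L + iy + 1
            HN[i, j] = HN[j, i] = -t

delta = 0.1 * np.ones(N)                    # initial guess for Delta_i
for it in range(100):
    H = np.block([[HN, np.diag(delta)],
                  [np.diag(delta), -HN]])   # HN is real symmetric here
    w, v = np.linalg.eigh(H)
    u_comp, v_comp = v[:N, :], v[N:, :]
    pos = w > 0
    # Zero-temperature gap equation: Delta_i = |U| * sum_{E_n>0} u_in v_in
    new_delta = np.abs(U) * np.sum(u_comp[:, pos] * v_comp[:, pos], axis=1)
    diff = np.max(np.abs(new_delta - delta))
    delta = new_delta
    if diff < 1e-8:
        break
print(f"iterations: {it + 1}, residual: {diff:.1e}, mean gap: {delta.mean():.3f}")
```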
Let us show the system size dependence of the computational complexity for self-consistent calculations. Figure 3 shows that the computational complexity is O(N ). For example, the elapsed time for one iteration step in 375971-site (143806-site) Penrose lattice with 240 CPU cores on the supercomputing system SGI ICE X at the Japan Atomic Energy Agency [19] is about 1265 (440) seconds.
Demonstration II: Ultra-large 2D tight-binding model. We demonstrate that the LK-BdG method can treat an ultra-large 2D tight-binding model. We consider a 2D N_x × N_y square-lattice nearest-neighbor tight-binding model with many vortices at zero temperature. We calculate the s-wave order parameter with U = −2.4t, µ = −1.5t, n_c = 200, a = 10t, b = 0. We include the Peierls phase to introduce vortices perpendicular to the system [7,21]. The symmetric gauge A(r) = (1/2)H × r with H = (0, 0, H_z) is used. Here, we consider m vortices in the system (H_z = mφ₀/(N_x N_y)) and the type-II limit (magnetic penetration depth λ → ∞). As the initial guess for the superconducting order parameter, we introduce vortices in a Penrose-tiling pattern, whose phase singularities are located at the vertices of the Penrose tiling. Although this vortex configuration is not the true ground state, we can obtain a similar vortex configuration as a metastable state if the vortex-vortex distance is long enough in a type-II-limit superconductor. After 120 iteration steps of solving the gap equations [22], we obtain the superconducting gap distribution in the 1008 × 1008 2D tight-binding model. As shown in Fig. 4, the LK-BdG method can calculate superconducting mean-fields and the LDOS in large systems with many vortices.
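The Peierls phases for the symmetric gauge can be computed as in the sketch below, which uses the straight-line (midpoint) rule θ_ij = (2π/φ₀) A((r_i + r_j)/2) · (r_i − r_j); for A(r) = (1/2)H × r this reduces to (π H_z/φ₀)(x_j y_i − x_i y_j). The unit normalization of φ₀ and the sample field value are illustrative assumptions.

```python
import numpy as np

PHI0 = 1.0  # flux quantum in lattice units (illustrative normalization)

def peierls_phase(ri, rj, Hz, phi0=PHI0):
    """Phase attached to the hopping j -> i in the symmetric gauge
    A(r) = (1/2) H x r, with the straight-line (midpoint) rule."""
    xi, yi = ri
    xj, yj = rj
    # For A = (-Hz*y/2, Hz*x/2, 0) the line integral reduces to
    # (Hz/2) * (xj*yi - xi*yj).
    return (2.0 * np.pi / phi0) * 0.5 * Hz * (xj * yi - xi * yj)

# The hopping amplitude then becomes t_ij = t * exp(1j * theta_ij).
t = 1.0
ri, rj = (3.0, 4.0), (4.0, 4.0)
t_ij = t * np.exp(1j * peierls_phase(ri, rj, Hz=2 * np.pi * 1e-3))
print(t_ij)
```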
To compare with a previous method, we measure the elapsed time for calculating the LDOS with a single vortex located at the center. For simplicity, we consider a single vortex whose coherence length is 10 (in units of the lattice spacing). We consider 4000 energy meshes from −2.5t to 5.5t. The number of Lanczos iterations is 400 and the smearing factor for the LDOS is 5 × 10⁻². The calculations with a single CPU core are done on a laptop PC (MacBook Pro, 13-inch, 2018, with a 2.7 GHz Intel Core i7 CPU with 4 cores). As shown in Fig. 5, the computational complexity of our method with the localized Krylov subspace does not depend on N, where N is the dimension of the Hamiltonian matrix. We confirm that the elapsed time for the 5000 × 5000 lattice system, whose matrix dimension is 5 × 10⁷, is only about 26 seconds on the laptop PC. On the other hand, the complexity of the previous Lanczos-based method is O(N), as shown in Fig. 5. Summary. We proposed the LK-BdG method, an ultra-fast numerical approach to large-scale inhomogeneous superconductors, by focusing on the fact that the vectors in the Krylov subspace for the Green's function are localized. In a self-consistent calculation, the computational complexity is O(N). We also showed that the computational complexity for calculating the local density of states does not depend on the system size N. The LK-BdG method opens a new avenue for treating extremely large systems with millions of lattice sites.
PENROSE TILING
We show the Penrose lattice that we used in Fig. S1. We regard each vertex of the rhombuses as a site and put an electron hopping t between two sites connected by an edge of the rhombuses. The Hamiltonian is expressed as
$$H = -t \sum_{\langle i,j \rangle \sigma} c_{i\sigma}^{\dagger} c_{j\sigma} - \mu \sum_{i\sigma} n_{i\sigma} + U \sum_i n_{i\uparrow} n_{i\downarrow},$$
where $n_{i\sigma} = c_{i\sigma}^{\dagger} c_{i\sigma}$. For simplicity, we neglect the Hartree mean-fields. The site-dependent superconducting order parameter is $\Delta_i = U \langle c_{i\downarrow} c_{i\uparrow} \rangle$. The self-consistent solution of the superconducting mean-fields is shown in Fig. S2. | 3,753.4 | 2020-01-08T00:00:00.000 | ["Physics"] |
Electrochemical Behavior of Al-B4C Metal Matrix Composites in NaCl Solution
Aluminum based metal matrix composites (MMCs) have received considerable attention in the automotive, aerospace and nuclear industries. One of the main challenges in using Al-based MMCs is the influence of the reinforcement particles on the corrosion resistance. In the present study, the corrosion behavior of Al-B4C MMCs in a 3.5 wt.% NaCl solution was investigated using potentiodynamic polarization (PDP) and electrochemical impedance spectroscopy (EIS) techniques. Results indicated that the corrosion resistance of the composites decreased when increasing the B4C volume fraction. The Al-B4C composite was susceptible to pitting corrosion, and two types of pits were observed on the composite surface. The corrosion mechanism of the composite in the NaCl solution was primarily controlled by oxygen diffusion in the solution. In addition, the galvanic couples that formed between the Al matrix and the B4C particles could also be responsible for the lower corrosion resistance of the composites.
Introduction
Aluminum based metal matrix composites (MMCs) have received considerable attention in the automotive, aerospace and nuclear industries due to their light weight, as well as their superior thermal conductivity, high stiffness and hardness [1][2][3]. The common reinforcements added to commercial MMCs are silicon carbide (SiC), alumina (Al2O3) and boron carbide (B4C). Compared to traditional SiC and Al2O3 reinforcements, B4C possesses numerous advantages, specifically a density (2.51 g·cm⁻³) [4] that is significantly lower than that of SiC or Al2O3, an extremely high hardness (HV = 30 GPa) and wear resistance, a remarkable chemical inertness [4][5][6][7] and a special neutron absorption capacity [8]. These features make B4C an excellent reinforcement for high performance MMCs. Its applications include hard disc substrates, brakes with a high wear resistance and armor plates with a high ballistic performance [9,10]. In recent years, owing to the special neutron-capture ability of the isotope ¹⁰B, Al-B4C MMCs have been increasingly used as neutron shielding materials in storage containers for spent nuclear fuel in the nuclear industry [11][12][13][14].
One of the main challenges in using Al-based MMCs is the influence of the reinforcement particles on the corrosion resistance [15][16][17][18][19]. Because adding reinforcement particles interrupts the continuity of the aluminum matrix and its protective surface oxide films, the number of sites where corrosion can initiate increases, making the composite more susceptible to corrosion [20,21]. Singh et al. [22] studied the influence of SiC particle additions on the corrosion behavior of 2014 Al-Cu alloy in 3.5 wt.% NaCl solution; they found that the addition of 25 wt.% SiCp to the base alloy decreases the corrosion resistance considerably. Zhu and Hihara [23] investigated the influence of alumina fiber on the corrosion initiation and propagation of the Al-2 wt.% Cu-T6 metal matrix composite, showing that the MMC exhibited inferior corrosion resistance compared to its monolithic matrix alloy. Bhat et al. [24] investigated the corrosion behavior of the 6061 Al-SiCp composite and its base alloy in seawater using the potentiodynamic polarization technique. It was found that the composite corroded faster than its base alloy and that composite corrosion was mainly confined to the interface, as opposed to the uniform corrosion observed for the base alloy. Sun et al. [25] also studied the corrosion behavior of 6061 Al-SiCp MMCs in a NaCl solution. From the observation that the degree of pitting rose with increasing SiC content, it is presumed that the pitting corrosion depends on the local SiC distribution and the surface film integrity. Roepstorff et al. [26] reported that the corrosion resistance of metal matrix composites can be affected by three processes: (1) galvanic coupling of the metal and reinforcement; (2) crevice attack at the metal/reinforcement interface; and (3) preferred localized attack on possible reaction products between the metal and the ceramic.
In contrast to the many research works dedicated to the corrosion behavior of Al-SiC composites, few studies have focused on the corrosion of Al-B 4 C composites. Ding and Hihara [27] investigated the effect of B 4 C particles on the corrosion behavior of 6092-T6 Al MMCs with 20 vol.% B 4 C in a 0.5 M Na 2 SO 4 solution at room temperature. Corrosion initiation and propagation are related to the formation of microcrevices, the localized acidification and alkalization of the solution, and to aluminum containing amphoteric oxides. Katkar et al. [28] evaluated the effect of the reinforced B 4 C particle content in AA6061 on the formation of a passive film in sea water. They reported that the passive film formed on B 4 C particle-reinforced AA6061 alloy because there was a shift in the corrosion potential toward the positive direction compared to the base alloy. In our previous study [29], the Al-B 4 C composite was less corrosion resistant than the base alloy in the NaCl, K 2 SO 4 and H 3 BO 3 solutions. In another study [30], the B 4 C particles exhibited a cathodic character relative to the aluminum alloy in the K 2 SO 4 solution, meaning that the B 4 C particles could form galvanic couples with the peripheral aluminum matrix in the composite.
To have a deep understanding of the corrosion behavior and corrosion mechanism of AA1100-B 4 C metal matrix composites in a 3.5 wt.% NaCl solution, the present study was carried out in open-to-air and deoxygenated conditions. Electrochemical techniques, including potentiodynamic polarization (PDP), electrochemical impedance spectroscopy (EIS) and zero resistance ammetry (ZRA) were used. Besides, the effect of the B 4 C particle volume fraction on the corrosion behavior of Al-B 4 C composites was investigated, and the surface morphology of the composite before and after corrosion was characterized using an optical stereoscope and a scanning electron microscope (SEM).
Preparation of Samples and Electrolytes
The investigated composites were AA1100-16 vol.% B4C and AA1100-30 vol.% B4C. Both composites were supplied by Rio Tinto Alcan (Saguenay, QC, Canada) via an ingot metallurgy route [3,9]. The average particle size of the boron carbide in the composites is 17 µm, and the matrix is a standard AA1100 aluminum alloy except for the Ti content: approximately 1.0-2.5 wt.% titanium was added to both Al-B4C composites during the composite fabrication process to reduce the interfacial reactions between the B4C and the liquid aluminum [31]. The DC cast ingots were preheated and hot-rolled with multiple passes of cross-rolling to the final 4.3 mm thick sheets. To study the effect of the B4C particles on the corrosion behavior of the composite, an AA1100 alloy without B4C was used as the base alloy. The chemical composition of the AA1100 base alloy is listed in Table 1. Samples were cut into small pieces (20 mm by 20 mm) and, unless otherwise stated, sanded with a 3M Scotch-Brite™ (3M, Saint Paul, MN, USA) MMM69412 surface conditioning disc (5 inches in diameter, extra-fine surface finish) before being degreased with acetone and rinsed with nanopure water (15.2 MΩ·cm). Finally, all specimens were dried with clean compressed air. Analytical reagent grade NaCl was used to obtain the 3.5 wt.% NaCl electrolyte.
Electrochemical Measurements
The potentiostat employed in the present study was a Reference 600 instrument (Gamry Instruments, Warminster, PA, USA). The electrochemical investigations were performed using a 300 cm³ EG&G PAR flat cell (London Scientific, London, ON, Canada) with an Ag/AgCl electrode (4 M KCl filling solution) as the reference electrode and a platinum mesh as the counter electrode (CE). All potentials given in this article are referred to the Ag/AgCl electrode. The corrosion cell had a 1 cm² orifice as the working surface. Magnetic stirring was employed at the bottom of the cell to increase the mass transfer at the electrode surface.
The potentiodynamic polarization tests were performed in open-to-air and deoxygenated conditions. The deoxygenation process began 1 h before the measurements by purging argon into the solution and continued until the end of the experiment. A potential scan was taken from 250 mV below the Eocp to the potential at which a current density of 1 mA·cm⁻² was recorded, at a scan rate of 1 mV·s⁻¹. The EIS curves were obtained by applying a sinusoidal perturbation voltage of 10 mV rms around the Eocp in the 100 kHz to 10 mHz frequency range. The detailed polarization and impedance measurements were described in a previous study [15]. Prior to the polarization and impedance tests, all samples were immersed in the 3.5 wt.% NaCl solution for one hour to ensure a steady open circuit potential (Eocp). During the galvanic corrosion test, the variations in the galvanic current and potential of the B4C wafer and the AA1100 base alloy were recorded continuously for 24 h. In all cases, the tests were duplicated to ensure the reproducibility of the results.
Metallographic Examination
The composite surface morphology was characterized with an optical stereoscope and a scanning electron microscope (SEM, Hitachi SU-70, Hitachi Instruments, Schaumburg, IL, USA) equipped with an energy dispersive spectrometer (EDS). To understand the surface morphology of the composite before and after corrosion, the samples used for the metallographic analysis were polished to a 0.05 µm fine finish.
Microstructure of Al-B 4 C Composites
The microstructure of the AA1100-16 vol.% B 4 C composite is illustrated in Figure 1. In general, the B 4 C particles were distributed uniformly in the Al matrix, and two common reaction-induced intermetallic phases were observed in the composite and randomly dispersed in the Al matrix: AlB 2 (brown, block-like phase) and Al 3 BC (grey phase) [32]. When fabricating the Al-B 4 C composites, the liquid aluminum reacted with B 4 C particles and produced AlB 2 and Al 3 BC intermetallic particles [3,9]. To limit the reaction between the B 4 C particles and the liquid aluminum, 1.0~2.5 wt.% Ti was added to the composites. Afterward, a thin but dense TiB 2 layer (a third reaction product) was formed in situ at the Al/B 4 C interfaces, isolating the B 4 C particles from the liquid aluminum [9,12]. Consequently, all B 4 C surfaces were surrounded with a TiB 2 layer, as observed from the SEM micrographs and the X-ray elemental map in Figure 2. The microstructure of the AA1100-30 vol.% B 4 C composite was very similar to the AA1100-16 vol.% B 4 C composite, except for the increased B 4 C amount.
Potentiodynamic Polarization
The electrochemical behavior of the Al-B4C composites and the effect of the B4C particle content on corrosion were investigated using potentiodynamic polarization and electrochemical impedance spectroscopy (EIS). Figure 3a displays the polarization curves of the composites with different B4C contents. The corrosion current density (jcorr) and corrosion potential (Ecorr) are obtained at the intersection of the extrapolated cathodic polarization branch and the horizontal line through Ecorr, as shown in Figure 3b. As seen in Figure 3a, it is difficult to find a linear region near Ecorr on the anodic polarization branch of the AA1100 base alloy, whereas the cathodic branch shows a long, well-defined linear region extending over more than 100 mV; therefore, as suggested by McCafferty [33], the cathodic-branch extrapolation method was used. All fittings were performed over the linear part of the cathodic branch, spanning more than 50 mV. The corrosion current density, corrosion potential and cathodic Tafel slope values are summarized in Table 2.
The jcorr increases from 0.35 to 11.21 µA·cm⁻² when the B4C volume fraction increases from 0 to 30 vol.%, indicating that the corrosion resistance of the composites decreases significantly with increasing B4C volume fraction. The corrosion potential shifts in the positive direction, but the shift does not continue as the B4C content increases further. According to the mixed potential theory [28], the potential of the composite is expected to shift in the noble direction when increasing the B4C level in the composite. However, increasing the B4C volume fraction also increases the discontinuity of the protective surface oxide films, making the Al-30 vol.% B4C composite more vulnerable to chloride ions and generating a less noble potential. The anodic part of the polarization curve of the base alloy AA1100 reveals an oscillation region beyond which the current density increases quickly, indicating the onset of pitting corrosion. For the composites, however, jcorr increases steeply even at low overpotential, implying that pitting is more easily provoked in the composites than in the base alloy.
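The cathodic Tafel extrapolation described above can be expressed compactly in code. The sketch below fits the linear region of E versus log₁₀|j| on a synthetic cathodic branch and extrapolates back to Ecorr to recover jcorr; the synthetic curve, the assumed Tafel slope, and the fitting window stand in for real potentiostat data and are not values from this study.

```python
import numpy as np

# Idealized cathodic branch: |j| = jcorr * 10^(-(E - Ecorr)/beta_c).
E = np.linspace(-1.0, -0.5, 200)             # potential (V vs. Ag/AgCl)
Ecorr, jcorr, beta_c = -0.70, 3.0e-6, 0.120  # assumed "true" values (V/decade)
j = -jcorr * 10 ** (-(E - Ecorr) / beta_c)   # cathodic current density (A/cm^2)

# Fit over a window well below Ecorr (here 50-150 mV cathodic of Ecorr).
mask = (E < Ecorr - 0.05) & (E > Ecorr - 0.15)
slope, intercept = np.polyfit(np.log10(np.abs(j[mask])), E[mask], 1)

# Extrapolate the fitted line back to Ecorr to recover jcorr.
log_jcorr = (Ecorr - intercept) / slope
print(f"fitted Tafel slope: {abs(slope) * 1e3:.0f} mV/decade, "
      f"jcorr ~ {10 ** log_jcorr:.2e} A/cm^2")
```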
Electrochemical Impedance Spectroscopy (EIS)
To further confirm the polarization results, EIS measurements were carried out for the composites and the base alloy in 3.5 wt.% NaCl solution. Prior to the EIS measurement, the variation of Eocp as a function of time was measured; the graph is shown in Figure 4. The Eocp stabilized at −710 ± 5 mV, −575 ± 3 mV and −607 ± 5 mV for AA1100, AA1100-16 vol.% B4C and AA1100-30 vol.% B4C, respectively. The impedance spectra, as complex impedance (Nyquist) plots and Bode impedance magnitude plots, are displayed in Figure 5. The EIS spectra show a common characteristic: capacitive semicircles in the high and medium frequency range that are related to the aluminum oxide layer and the electrolyte [28]. The biggest high-frequency semicircle is observed for the base alloy, and the diameter of the semicircle decreases when increasing the B4C volume fraction in the composite. This observation confirms that incorporating B4C particles into the aluminum alloy breaks the continuity of the oxide layer, decreasing its corrosion resistance. The base alloy also has an additional capacitive semicircle in the low frequency range that may be associated with charge transfer across the alloy-electrolyte interface [34,35]. The Al-B4C composites, however, show an inductive loop with a reduced charge transfer resistance at low frequencies, revealing the occurrence of pitting on the composite surface.
The Bode impedance magnitude plot is displayed in Figure 5b. The impedance of the composite at the low frequency range decreases when increasing the B4C content. Because the material shows resistive behavior at low frequencies (0.01~0.1 Hz), the impedance at low frequencies could be considered as the resistance. The corrosion resistance of the composite decreases when increasing the B4C content, which validates the previous polarization results.
Similar EIS spectra were obtained for the B4C-reinforced AA6061 alloy in sea water; those data were interpreted using the equivalent circuits shown in Figure 6 [28]. In the first equivalent circuit, Rs is the solution resistance, Rox is the aluminum oxide layer resistance and Rct is the charge-transfer resistance of the alloy. CPE1 is the capacitance between the electrolyte and the alloy, and CPE2 is the capacitance at the interface of the alloy and the oxide layer. The equivalent circuit shown in Figure 6b is used to interpret the EIS spectra of the composites. This equivalent circuit contains two independent loops. The first corresponds to the HF capacitive loop and is described by CPE1 (double layer capacitance of the oxide layer-electrolyte interface) and Rox (charge-transfer resistance of the oxide layer). The second represents the LF inductive loop and is described by RL (inductance resistance), L (inductance) and CPE2 (capacitance of the pit-electrolyte interface).
Because the working electrode deviates from ideal capacitive behavior, owing to surface roughness, heterogeneities, anion adsorption, non-uniform potential and current profiles, etc. [36], constant phase elements (CPE) were employed in place of pure capacitances in the equivalent circuits.
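To make the circuit interpretation concrete, the sketch below evaluates the model impedance of both equivalent circuits over the measured frequency window, with each CPE modeled as Z = 1/(Q(jω)ⁿ). The resistances echo the Table 3 values for the base alloy and the 30 vol.% composite, while Rs, the CPE parameters and L are assumptions; the exact wiring is a plausible reading of the circuit description, not a fitted model.

```python
import numpy as np

omega = 2 * np.pi * np.logspace(-2, 5, 200)   # 10 mHz .. 100 kHz
jw = 1j * omega

def z_cpe(Q, n):
    # Constant phase element: Z = 1 / (Q * (j*omega)^n)
    return 1.0 / (Q * jw ** n)

def parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

# Base alloy: Rs + CPE1 || (Rox + CPE2 || Rct)  (nested Randles circuit)
Rs = 20.0                                     # assumed solution resistance
Rox_a, Rct = 23.74e3, 54.26e3                 # base-alloy values from Table 3
Z_alloy = Rs + parallel(z_cpe(1e-5, 0.9),
                        Rox_a + parallel(z_cpe(2e-5, 0.85), Rct))

# Composite: Rs + (CPE1 || Rox) + (CPE2 || (RL + j*omega*L))
Rox_c, RL, L = 0.89e3, 0.96e3, 5.0e3          # RL, Rox from Table 3; L assumed
Z_comp = Rs + parallel(z_cpe(1e-5, 0.9), Rox_c) \
            + parallel(z_cpe(2e-5, 0.85), RL + jw * L)

print(np.abs(Z_alloy[0]), np.abs(Z_comp[0]))  # low-frequency |Z| comparison
```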
All parameters derived from the equivalent circuits are summarized in Table 3. The oxide layer resistance Rox decreases from 23.74 to 0.89 kΩ·cm², and the Rct or RL value decreases from 54.26 to 0.96 kΩ·cm², when the B4C content increases from 0 to 30 vol.%. These results confirm the polarization finding that the corrosion resistance of the composite decreases when increasing the B4C content.

Table 3. Electrochemical parameters derived from the equivalent circuits in Figure 6.
Corrosion Mechanism Investigation
To study the corrosion mechanism of the composite, polarization tests were also conducted in deaerated conditions. The polarization curves are shown in Figure 7 and the derived parameters are listed in Table 2. Compared with the polarization curves obtained in the open-to-air condition, a passive region followed by a well-defined pitting potential, at a current density of approximately 1 µA·cm⁻², is observed in the anodic branch for both the base alloy and the composites.
Besides, it is found that the three materials have almost the same Ecorr, which means that it does not vary with the B4C content as observed in the open-to-air condition. However, it is more negative than that obtained in the open-to-air condition for each material. As seen from Table 2, Ecorr decreased from −0.70 to −0.93 V Ag/AgCl for the base alloy, from −0.55 to −0.92 V Ag/AgCl for the composite with 16 vol.% B4C and from −0.59 to −0.93 V Ag/AgCl for the composite with 30 vol.% B4C, respectively. More importantly, the corrosion current density jcorr is considerably lower than that obtained in the open-to-air condition, i.e., jcorr is 0.99 µA·cm⁻² for the composite with 16 vol.% B4C and 2.00 µA·cm⁻² for the composite with 30 vol.% B4C. This significant difference in current density shows that the corrosion kinetics of the base alloy (AA1100) and the Al-B4C composites in the NaCl solution are limited by the oxygen reduction reaction [33].
In addition, differences in the cathodic parts of the polarization curves were also observed. As seen from Figure 3, in the open-to-air condition the current density remains constant or increases only slightly as the potential moves along the cathodic branch; this behavior was not observed in the deaerated condition. Similar behavior was observed by Singh et al. [30] for 2014-SiCp composites and by Dikici et al. [31] for SiO2- and Fe/TiO2-coated A380-SiC composites in aerated 3.5 wt.% NaCl solution. They regarded this electrochemical behavior as an indicator of a corrosion mechanism controlled by oxygen diffusion.
Galvanic Current Measurement
Because Al-B4C composites contain junctions of two electrochemically dissimilar materials, galvanic corrosion between the Al alloy (matrix) and the reinforcing B4C particles may occur, degrading the corrosion resistance. As it is technically difficult to conduct a galvanic coupling test using small B4C particles, a hot-pressed 99.5% purity B4C wafer produced by Ceradyne, Inc. was used. During the galvanic corrosion test, the base alloy AA1100 was used as working electrode #1 (W1) and the B4C wafer as working electrode #2 (W2). A positive galvanic current means that working electrode #1 acts as an anode; otherwise, it acts as a cathode. The measured galvanic current and galvanic potential are shown in Figure 8. The galvanic current during the test is always positive, and the stable galvanic current is approximately 11.2 µA. This result suggests that the Al matrix and the B4C particles in the composite can form galvanic couples in NaCl solution, with the Al matrix acting as the anode and dissolving. A similar result was obtained by Schneider et al. [37] when they studied the galvanic corrosion between pure nickel and sintered SiC in 3.5 wt.% NaCl solution, i.e., the SiC ceramic particles are the cathodic sites of the couple. Abenojar et al. [16] also found that incorporated amorphous Fe/B particles act as a cathode and form a strong galvanic couple with the aluminum matrix.

Figure 8. Galvanic current and galvanic potential measured between AA1100 and the B4C wafer in 3.5 wt.% NaCl solution.
Pitting Morphology
To identify the initiation sites for pitting, the AA1100-16 vol.% B4C composite sample was polished to 0.05 µm and polarized to 1 mA·cm⁻². Figure 9 shows that pitting initiated at two types of sites: (1) the Al-B4C interfaces and (2) sites away from the B4C particles, in the Al matrix where intermetallic phases appeared. Pits at the Al-B4C interfaces have an irregular shape, whereas pits in the Al matrix are generally large and hemispherical. Similar hemispherical pits were observed on AA5083 in aerated chloride solutions by Abelle et al. [38] and Katkar et al. [28], who attributed them to the simple detachment of the cathodic precipitates under gravity. As mentioned in Section 3.1, many small intermetallic particles (AlB2 and Al3BC) are dispersed in the Al-B4C microstructure. These intermetallic particles might be cathodic relative to the surrounding Al matrix; due to the galvanic effect, the Al matrix around these intermetallic phases dissolves, detaching the particles.
The formation of pits at the Al-B4C interfaces may occur for three reasons: (1) defects exist at the interface between aluminum and B4C where Cl⁻ can easily penetrate and attack the Al matrix; (2) the protective TiB2 layer at the Al/B4C interfaces could be preferentially attacked by chloride ions [39]; (3) the galvanic coupling effect between the Al matrix and the B4C particles. Consequently, the Al matrix at the interfaces dissolves, and pits form around the B4C particles. Once a pit has formed, the local chemical environment is substantially more aggressive than the bulk solution, and the matrix is therefore corroded more severely. Additionally, with more B4C particles in the composite, the cathodic-to-anodic area ratio (A_C/A_A) increases owing to the cathodic character of the B4C particles, and a large A_C/A_A value is detrimental to the Al matrix. Hamdy et al. [40] confirmed the occurrence of galvanic and pitting attacks when they studied the corrosion resistance of an ALCOA peak-aged AA6092-SiC 17.5p composite in 3.5 wt.% NaCl. Figure 10 shows the surface appearance of the three materials (the base alloy, AA1100-16 vol.% B4C and AA1100-30 vol.% B4C) after immersion in 3.5 wt.% NaCl solution for 10 days. The degree of corrosion of the test area increases when increasing the B4C volume fraction. To evaluate the corrosion of the materials with different B4C levels, cross sections from the three samples were examined; they are displayed in Figure 11.
Uneven layers of corrosion products formed on the outermost surfaces of the base alloy and the composites. X-ray elemental maps revealed that these layers consist of aluminum oxide and/or hydroxides. The average corrosion product thickness (corrosion products inside the pits were not included in this calculation) was obtained from 20 measurements collected from the SEM images and is presented in Figure 12. The thickness of the corroded layer increases linearly with the B4C content: it is approximately 1.3 µm in the base alloy and 25.8 µm in the AA1100-16 vol.% B4C composite, nearly 20 times that of the base alloy. When the B4C content increases to 30 vol.%, the thickness reaches 58.7 µm, more than 40 times that of the base alloy. These observations indicate that adding B4C particles to the Al alloy reduces its corrosion resistance in NaCl solution, in accord with the polarization and impedance measurements. Moreover, Figure 11 shows that only the composites suffer severe pitting, which confirms that the Al-B4C composite is more sensitive to pitting than the base alloy in the NaCl solution.

Figure 12. Relationship of corrosion product thickness and B4C volume fraction after 10 days of exposure in 3.5 wt.% NaCl solution.
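As a quick plausibility check of the linear trend stated above, the three quoted thickness values can be regressed against the B4C volume fraction; the data points below are the ones given in the text, while the fit itself is only illustrative.

```python
import numpy as np

# Linear fit of corroded-layer thickness vs. B4C volume fraction using the
# three values quoted above (0, 16 and 30 vol.%; 1.3, 25.8 and 58.7 um).
frac = np.array([0.0, 16.0, 30.0])        # B4C volume fraction (vol.%)
thick = np.array([1.3, 25.8, 58.7])       # corroded layer thickness (um)

slope, intercept = np.polyfit(frac, thick, 1)
pred = slope * frac + intercept
r2 = 1 - np.sum((thick - pred) ** 2) / np.sum((thick - thick.mean()) ** 2)
print(f"slope: {slope:.2f} um per vol.%, R^2 = {r2:.3f}")
```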
When increasing the B 4 C content to 30 vol.%, the thickness reaches 58.7 µm, which is more than 40 times that of the base alloy. These observations indicate that adding B 4 C particles to the Al alloy can reduce the corrosion resistance in NaCl solution, which accords with the polarization and impedance measurements. Moreover, Figure 11 shows that only the composites suffer severe pitting, which confirms that the Al-B 4 C composite is more sensitive toward pitting than the base alloy in the NaCl solution.
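The near-linear scaling of corroded-layer thickness with B4C content described above can be checked with a quick least-squares fit of the three reported points; a minimal sketch in Python, using only the thickness values quoted in the text (the fit itself is our illustration, not part of the original analysis):

import numpy as np

# Corroded-layer thicknesses from the cross-section analysis (Figure 12)
vol_frac = np.array([0.0, 16.0, 30.0])   # B4C content, vol.%
thickness = np.array([1.3, 25.8, 58.7])  # corroded-layer thickness, um

# First-order least-squares fit: thickness ~ slope * vol_frac + intercept
slope, intercept = np.polyfit(vol_frac, thickness, 1)
r = np.corrcoef(vol_frac, thickness)[0, 1]
print(f"slope = {slope:.2f} um per vol.%, intercept = {intercept:.2f} um")
print(f"Pearson r = {r:.3f}")  # r close to 1 supports the linearity claim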
Conclusions

(1) The polarization and impedance results show that the Al-B4C composites are less corrosion-resistant than the base Al alloy and that the corrosion resistance of the composites decreases when increasing the B4C particle volume fraction. The cross-sectional images demonstrate that the thickness of corrosion products increases linearly with the B4C volume fraction.

(2) Al-B4C composites are susceptible to pitting corrosion in the NaCl solution. Two types of pits are observed on the composite surface after polarization in the NaCl solution: (1) pits with an irregular shape that preferentially initiate at the Al/B4C interfaces and (2) hemispherical pits that initiate in the Al matrix where the intermetallic particles appeared.

(3) The corrosion of Al-B4C composites in 3.5 wt.% NaCl solution is mainly controlled by oxygen reduction in the solution. Moreover, the galvanic couples formed between the B4C particles and the Al matrix are also responsible for the low corrosion resistance. | 9,017.4 | 2015-09-01T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Investigating the Intrinsic Anisotropy of VO 2 (101) Thin Films Using Linearly Polarized Resonant Photoemission Spectroscopy
VO2 is one of the most studied vanadium oxides because it undergoes a reversible metal-insulator transition (MIT) upon heating, with a critical temperature of around 340 K. One of the most overlooked aspects of VO2 is the band anisotropy of the metallic phase, in which the Fermi level is crossed by two bands, π* and d||. They are oriented perpendicularly, one with respect to the other, hence generating anisotropy. One of the parameters tuning the MIT properties is the unbalance of the electron populations of the π* and d|| bands, which arises from their different energy positions with respect to the Fermi level. In systems with reduced dimensionality, the electron population disproportion differs from the bulk, leading to a different anisotropy. Investigating such a system with a band-selective spectroscopic tool is mandatory. In this manuscript, we show the results of the investigation of a single crystalline 8 nm VO2/TiO2(101) film. We report on the effectiveness of linearly polarized resonant photoemission (ResPES) as a band-selective technique probing the intrinsic anisotropy of VO2.
Introduction
Vanadium oxides are an extremely interesting class of materials that are mostly studied for their reversible metal-to-insulator transition (MIT) [1][2][3][4][5][6][7]. Among them, one of the most studied is VO2. The transition temperature is close to room temperature (about 340 K in bulk VO2), and a large jump in resistivity [8,9] makes this material extremely appealing for applications such as energy saving [10][11][12] and new-generation electronic devices based on electron correlation and ultra-fast switching [13][14][15]. The nature of the MIT itself has been the subject of a long debate because of the simultaneous presence of a structural phase transition and a MIT (VO2 passes from a monoclinic insulator to a metallic rutile phase upon heating), rousing discussion about whether the VO2 MIT should be classified as a Mott or a Peierls transition [2,[16][17][18][19]. Despite the ongoing debate, experimental evidence has shown that the VO2 MIT can be controlled by the interplay among electronic, orbital, and lattice degrees of freedom [9,18,[20][21][22][23][24]. The MIT has attracted great interest because VO2 belongs to a class of transition-metal complexes known as correlated intermediate valent systems, which also includes lanthanides [25][26][27]. In these systems, the electron wavefunctions in the metal and ligand orbitals near the Fermi level possess similar energies but have a small-to-zero overlap due to the Coulomb repulsion. Even with a small overlap, electrons have similar probabilities of being found on one or the other site because of the configuration interaction between multiple possible electronic configurations, in which the redox state of the metal is different [28,29].
In this scenario, strain has emerged as a powerful method to control the MIT, and strained VO2 films have been deeply studied [9,23,[30][31][32][33][34][35][36]. Since strain modifies interatomic distances, changing the vanadium-oxygen overlap, the orbital energy hierarchy needs to be considered and, in the last analysis, so does the bands' population at the Fermi level (FL).
Moreover, the metallic phase of VO2 is strongly anisotropic [36][37][38][39]. This is an often overlooked aspect that has to be taken into account for a complete understanding of the MIT mechanism.
The metallic phase of VO2 has tetragonal lattice symmetry with main axes a, b, and c (see Figure 1). In this phase, the FL is crossed by the π* and d|| bands [40]. The π* band is generated by the overlap of antibonding O 2p electrons with V 3d electrons lying in the a-b plane, while the d|| band is composed of unpaired V 3d electrons and is oriented along the c axis. The intrinsic anisotropy of metallic VO2 arises from the orientation of π* and d||, which are oriented perpendicularly, one with respect to the other [5,23,40].
The π* and d|| bands are not equally occupied, with the π* band typically appearing less populated with respect to the d|| band. This unbalance is generated by their different energy positions with respect to the FL, as explained by Goodenough in his pivotal work [40]. The disproportion in electron population is a key parameter to control the MIT since it is proportional to the electron correlation experienced by the V 3d electrons [23,31,41,42]. This unbalanced electron population can be tuned by strain [23,41], thus influencing the band anisotropy of VO2. Although discriminating the individual contributions of the π* and d|| bands is very important in VO2, isolating the contribution from one of these bands is hard when employing standard spectroscopic techniques. Using a band-selective and chemically selective method is, thus, of paramount importance to obtain insights into the MIT mechanism.
In this manuscript, we report our investigation of the intrinsic band structure anisotropy of a single crystalline 8 nm VO2/TiO2(101) film using linearly polarized resonant photoemission spectroscopy (ResPES), in order to maximally enhance the orbital selectivity and the photoemission yield.
As we demonstrated in our previous work, this technique allows us to study the orbital contributions to the MIT in VO2 thin films [33,34,41,43]. Combining this approach with linear polarization, we obtained an ideal band-selective probe for the intrinsic anisotropy of VO2. Orienting the electric field vector E of the incident photon beam along the c-axis of the metallic phase (cr), the contribution from the d|| band to the ResPES spectra is maximal. On the other hand, when E is parallel to ar (thus perpendicular to cr), the sensitivity to the π* band is maximal [44]. Our experimental results show how the use of linearly polarized ResPES is critical for studying the anisotropy of VO2 and the contribution of the different bands populating the FL to the MIT.
Experimental
The 8 nm thick VO2 film was deposited on a clean TiO2(101) substrate using RF-plasma-assisted oxide MBE. The base pressure in the deposition chamber was <4 × 10⁻⁹ mbar. The film thickness was controlled by monitoring the deposition time, using a growth rate of 0.1 Å/s. The substrate was kept at a temperature of 550 °C during the deposition. More detailed information on the epitaxial film preparation has been reported elsewhere [9,45].
The sample was characterized using X-ray diffraction (XRD) and resistivity measurements. XRD measurements were performed using a PANalytical X'Pert Pro diffractometer (Cu-Kα wavelength). The resistivity measurements were carried out using the van der Pauw method.
The ResPES and X-ray absorption (XAS) measurements were performed at the NFFA APE-HE beamline at the Elettra synchrotron radiation facility [46]. XAS measurements were acquired in total electron yield (TEY) mode at the V L3 edge, at RT and at about 353 K. The light polarization was set to horizontal for all the measurements. ResPES measurements were taken with a Scienta R3000 hemispherical electron energy analyzer, spanning the photon energy across the V L3 edge. More information about the ResPES technique can be found in Appendix A. The energy resolution was 0.1 eV for XAS and 200 meV for ResPES. The photoelectron binding energies were calibrated with respect to the Fermi level of a gold reference foil. The sample surface was treated with mild annealing (120 °C for 50 min) in UHV conditions in order to remove part of the contaminants. A higher temperature was not used, to reduce the risk of altering the film stoichiometry, and sputtering was avoided to minimize the risk of inadvertently reducing the film thickness. These limitations left some residual contaminants on the sample surface, which, nevertheless, were not enough to compromise our measurements.
The orientation of the metallic VO2 lattice parameters for the ResPES and XAS acquisitions is reported in Figure 1.
In order to align the electric field vector of the incident photon beam along the two main axes, cr and ar, the sample was rotated to an angle α = ±45°. In this configuration, we calculated (using the lattice parameters reported in JCPDS no. 76-0675) an effective angle of about 13° between the electric field vector and the a and c axes. This did not significantly influence our band sensitivity, since the sensitivity depends on the squared cosine of the angle between the target band orientation and the incident electric field [47]. In our case, cos²(13°) ≈ 0.95, which means that our measurements had 95% of the band sensitivity of the ideal case in which the incident electric field and the orbital orientation are perfectly aligned.
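As a quick sanity check of the quoted sensitivity, the cos² projection factor can be recomputed directly; a minimal sketch, assuming only the 13° misalignment angle stated above:

import math

# Misalignment between the incident electric field and the target band axis
misalignment_deg = 13.0

# Band sensitivity scales with the squared cosine of the misalignment angle [47]
sensitivity = math.cos(math.radians(misalignment_deg)) ** 2
print(f"relative band sensitivity: {sensitivity:.3f}")  # ~0.949, i.e. ~95%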
Results and Discussion
The quality of the 8 nm thick VO2 film was checked using XRD and resistivity measurements, which are reported in Figure 2. The resistivity hysteresis showed a large jump of about three orders of magnitude, pointing out the good quality of the film. The critical temperature Tc and the width of the transition ∆T were 321 K and 8.2 K, respectively, in good agreement with similar films investigated in the literature [35,48]. Tc was calculated as the average of the temperatures of the two minima of the derivative of the resistivity curve (one for the heating and one for the cooling branch).

The XRD patterns of the insulating and metallic phases are reported in the right panel of Figure 2. The sharp TiO2(101) peak is clearly visible at ~36.1°. The signal from the VO2 film is wide and centered around ~37.35° in both phases. The shape of the VO2 peak points out that the residual strain imposed by the substrate heavily affected the VO2 lattice, in qualitative agreement with similar measurements performed on VO2/TiO2(101) thin films [35]. Since the XRD peak was so wide, it was hard to locate its position precisely. For the metallic phase, we were able to locate the peak at about ~37.35°, corresponding to d101 = 1.269 Å. The value reported in JCPDS no. 76-0675 for the (101) interplanar distance of bulk VO2 is d101 = 1.277 Å, revealing the presence of a compressive strain of ~0.8% along the (101) direction. In addition, the XRD patterns revealed that the film is single crystalline, since no other VO2 peaks could be observed.
The band anisotropy was investigated using ResPES (acquired with the photon energy tuned to the maximum of the V L3 edge) and by orienting the sample in order to obtain E || cr and E ⊥ cr (E || ar). With these two configurations, we maximized the signal coming from the d|| and π* bands, respectively. The ResPES spectra are reported in the left panel of Figure 3 and are qualitatively similar to previous ResPES works [43,49]. All spectra are characterized by a photoemission peak centered around 7 eV, generated by the overlap of V 3d and O 2p electrons, and by a structure centered around 1.7 eV coming from the unpaired V 3d electrons [50]. The only exception is the off-resonance spectrum, for which the V 3d unpaired signal was so weak that it fell below our detection threshold, most likely due to the residual contaminants on the sample surface, a consequence of the ex-situ nature of our experiment and of the experimental constraints on cleaning the sample surface. This strongly suggests the necessity of accessing the resonant condition to enhance the signal coming from the V 3d electrons and, thus, properly study the bands around the FL. In the monoclinic insulating phase, the spectra were acquired with E || am (am being the a-axis of the monoclinic insulator lattice). am and cr are directed along the same spatial direction, so in the insulating phase we were also maximizing the signal coming from the d|| band.
The metallic spectra acquired on resonance showed a two-peaked structure corresponding to the V 3d unpaired electrons.

Figure 3. ResPES spectra acquired off resonance and on resonance (as depicted in the right panel) for different sample orientations. The spectra were normalized to the O 2p peak and vertically shifted for clarity. In the insulating phase, the lattice of VO2 has monoclinic symmetry and the electric field E is parallel to the a axis (am). The insulating monoclinic a axis and the metallic rutile c axis are oriented along the same direction; in the insulating phase, therefore, we probed mostly the d|| band.

These two peaks were generated by two different screening channels in the valence band of VO2 [51]. The feature centered around 1.7 eV is called the ligand hole (L) and accounts for the local screening of the photo-induced hole by an O 2p electron. The peak at about 0.4-0.5 eV is named the coherent hole (C) and is generated by the non-local screening of the photo-hole by V 3d electrons that are free to move in the solid; it is, therefore, a final-state effect.

It is evident that C was more intense when the electric field was oriented parallel to the cr axis; this suggests that d|| was more populated than π*, pointing out the anisotropic electron population of these bands. To quantify the anisotropy in the sample, we adopted the same approach proposed in reference [41] to calculate the ratio between the screening lengths in the insulating and metallic phases for both electric field orientations. For a ResPES spectrum, the relation between intensity and screening length is the following [52][53][54]:

I ∝ I0 λ, (1)

where I is the ResPES spectrum intensity, I0 is the incident photon flux, and λ is the screening length. The ratio λm/λi (where λm is the screening length for the metallic phase and λi for the insulating phase) could, therefore, be calculated by integrating the ResPES spectra between 0 and 2 eV. The selection of the 0-2 eV energy interval to calculate λ is arbitrary. The only two requirements we followed were to include both L and C in the integrated region and to minimize the tail coming from the O 2p feature centered at 7 eV. The numerical results did not change significantly if we slightly changed the interval of integration. The results for λm/λi are reported in Table 1. λm was calculated with E || cr and E ⊥ cr, while λi was calculated using the insulating spectrum reported in the right panel of Figure 3 with E || am. The difference between the two sample orientations is evident: when E || cr, λm/λi was ~20% smaller than for E ⊥ cr. This is a direct probe of the intrinsic anisotropy of the VO2 band structure and a measurement of the different screening capabilities of the d|| and π* bands.
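A minimal numerical sketch of the ratio extraction described above; the spectra here are synthetic Gaussian stand-ins for flux-normalized ResPES data (the feature positions follow the text, but the shapes and amplitudes are illustrative assumptions):

import numpy as np

def screening_integral(energy, intensity, lo=0.0, hi=2.0):
    # By Eq. (1), the flux-normalized intensity integrated over the
    # L and C features is a proxy for the screening length lambda.
    mask = (energy >= lo) & (energy <= hi)
    return np.trapz(intensity[mask], energy[mask])

# Synthetic stand-ins for flux-normalized spectra (real inputs would come
# from the analyzer): an L feature at ~1.7 eV and a C feature at ~0.45 eV.
energy = np.linspace(0.0, 10.0, 1001)
ligand = np.exp(-((energy - 1.7) / 0.4) ** 2)
coherent = 0.6 * np.exp(-((energy - 0.45) / 0.2) ** 2)
intensity_m = ligand + coherent  # metallic phase: L + C
intensity_i = ligand             # insulating phase: L only

ratio = screening_integral(energy, intensity_m) / screening_integral(energy, intensity_i)
print(f"lambda_m / lambda_i = {ratio:.2f}")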
These preliminary results show the importance of using a band-selective probe to study a problem as complex as the VO2 MIT, and that this approach is fundamental for accurately taking the anisotropy degree of freedom into account.
Conclusions
We have shown our preliminary results obtained by studying a single crystalline VO2/TiO2(101) film of 8 nm thickness. This film is of extremely good quality: the resistivity measurements show a hysteretic behavior with a critical temperature of 321 K, a transition width of 8.2 K, and a jump in resistivity of ~3 orders of magnitude, while XRD measurements show that the residual compressive strain imposed by the epitaxial growth is ~0.8%. We investigated the intrinsic band anisotropy of metallic VO2 using ResPES at the V L3 edge with linearly polarized light. The combined access to a resonantly enhanced photoelectron yield and to band selectivity allowed us to discriminate between the d|| and π* band contributions at the FL. This unique approach allowed our experimental investigation to obtain new insight into the VO2 band structure with respect to previous studies [4,20]. We observed that the ratio between the screening lengths of the metallic and insulating phases (λm/λi) for the d|| and π* bands differs by about 20%, pointing out a strongly anisotropic screening capability in VO2. Since the difference in electron population between the d|| and π* bands is a key parameter controlling the MIT properties, our preliminary results highlight the importance of using a resonantly enhanced, band-selective spectroscopic tool to study complex systems such as VO2.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Resonant Photoemission
ResPES is a branch of variable-energy photoemission spectroscopy. Using synchrotron facilities, it is possible to vary the photon energy across a photo-absorption resonance of the system, thus exploiting cross-section effects to enhance specific photoemission features. ResPES is realized when the photon energy is tuned to match the absorption edge of one of the elements present at the surface of the sample. In this case, two pathways leading to the same final state are possible: direct photoemission and the Auger-like emission that follows the relaxation of a photo-excited state. An interference process takes place and enhances the spectral intensity of the resonating states as a function of the photon energy.
Tuning the photon energy across the V L3 edge, a 2p electron is excited into the empty 3d band. This excited state decays via an Auger-like process, with the concurrent emission of a photo-electron. This process is represented in Figure A1.
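The interference between the direct and Auger-mediated channels is commonly described by a Fano lineshape. As an illustration only (the Fano formula and the parameter values below are our assumptions and are not given in the text), the resonant enhancement can be sketched as:

import numpy as np

def fano(E, E_res, gamma, q):
    # Canonical Fano profile: sigma(eps) ~ (q + eps)^2 / (1 + eps^2),
    # with eps the detuning from the resonance in units of gamma/2.
    eps = (E - E_res) / (gamma / 2.0)
    return (q + eps) ** 2 / (1.0 + eps ** 2)

E = np.linspace(510.0, 530.0, 400)                # photon energy, eV
profile = fano(E, E_res=519.0, gamma=2.0, q=2.5)  # illustrative parameters
print(f"resonant enhancement peaks at {E[np.argmax(profile)]:.1f} eV")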
| 5,887 | 2023-04-26T00:00:00.000 | [
"Physics"
] |
Data-driven Analysis of the Cost-Performance Trade-off of Reconfigurable Intelligent Surfaces in a Production Network
This paper presents a comprehensive study on the deployment of Reconfigurable Intelligent Surfaces (RIS) in urban environments with poor radio coverage. We focus on the city of London, a large metropolis where radio network planning presents unique challenges due to diverse geographical and structural features. Using crowd-sourced datasets, we analyze the Reference Signal Received Power (RSRP) from end-user devices to understand the existing radio coverage landscape of a major Mobile Network Operator (MNO). Our study identifies areas with poor coverage and proposes the deployment of RIS to enhance signal strength and coverage. We selected a set of potential sites for RIS deployment and, combining data from the MNO, data extracted from a real RIS prototype, and a ray-tracing tool, we analyzed the gains of this novel technology with respect to deploying more conventional technologies in terms of RSRP, coverage, and cost-efficiency. To the best of our knowledge, this is the first data-driven analysis of the cost-efficiency of RIS technology in production urban networks. Our findings provide compelling evidence about the potential of RIS as a cost-efficient solution for enhancing radio coverage in complex urban mobile networks. More specifically, our results indicate that large-scale RIS technology, when applied in real-world urban mobile network scenarios, can achieve 72% of the coverage gains attainable by deploying additional cells with only 22% of their Total Cost of Ownership (TCO) over a 5-year timespan. Consequently, RIS technology offers around 3x higher cost-efficiency than other more conventional coverage-enhancing technologies.
INTRODUCTION
Reconfigurable Intelligent Surfaces (RIS) have recently emerged as a promising technology for next-generation mobile systems. These structures are known for their ability to reflect radio signals while altering some of their features, such as phase, which enables passive beamforming gains without the need for expensive and energy-consuming baseband processors or signal amplifiers.
Where RIS technology is really expected to outperform conventional Base Station (BS) technology (e.g., small cells, active relays, etc.) is in energy efficiency. Their predominantly passive nature means they consume much less power, an aspect that is particularly valuable in outdoor environments, where maintaining power sources for base stations can be both logistically challenging and expensive. Another defining advantage of RIS is its minimal infrastructure needs, making it a cost-effective solution for outdoor mobile dead zones. This reduced demand for infrastructure, combined with their energy efficiency, underscores RIS's potential to reshape the future of mobile systems in a cost-effective way.
However, this technology is still in its nascent stage, with no RIS devices available in the market yet and only a few prototypes discussed in the literature [28,30,37]. Hence, the cost-performance trade-off of this new technology in large-scale outdoor mobile network deployments remains an open question. To fill this gap, this paper investigates the integration of realistic RIS technology into a production mobile network in an outdoor urban environment.
We focus our study on a densely populated urban area within London that suffers poor radio coverage, which we empirically identify through performance measurements and user feedback reports from a commercial mobile network operator (MNO). Installing new radio equipment is an expensive process and one that must be carefully planned to optimally meet the capacity and coverage needs of the end users. Hence, we build a RIS model, developed from data collected from a real, cost-effective RIS prototype, and integrate this data-driven model into Wireless InSite [31], a state-of-the-art 3D ray-tracing tool widely used in the research community for analyzing site-specific radio wave propagation and wireless communication systems [14,17,38]. This step is consistent with the approach that radio planning teams within commercial MNOs follow when planning to deploy new radio carriers and study "what-if" scenarios [7,20]. With this, we examine the potential of large-scale RIS technology to improve coverage in the identified areas. We also conduct a comprehensive cost analysis, estimating the Capital Expenditure (CAPEX), the Operational Expenditure (OPEX), and the Total Cost of Ownership (TCO) associated with deploying RIS technology on a large scale. These costs are then compared with those of conventional BS technology, allowing us to evaluate the cost-efficiency of RIS technology in improving coverage in areas currently underserved by the incumbent MNO cells.
Existing literature has significantly enriched our understanding of RIS technology, offering valuable insights into optimal placement, configuration, and theoretical models [4,15,16,18,26,29]. However, these studies predominantly focus on indoor or small scenarios and are based on idealized models. A brief review of this and other related literature can be found in §6. Our paper is, to the best of our knowledge, the first data-driven study to delve into the practical implications of deploying real-world RIS technology in production mobile networks, thereby offering a novel perspective on the potential of RIS in shaping next-generation mobile systems. Our findings suggest that, in a real urban context, large-scale RIS can achieve 72% of the coverage gains that additional MNO cells may obtain. Remarkably, this is accomplished with just 22% of the TCO over a five-year period. As a result, our work finds compelling evidence that RIS may provide ∼3x higher cost-efficiency than conventional technologies that require costly infrastructure and energy-consuming electronics.
The structure of this paper is as follows. §2 provides background information on RIS, introduces a real RIS prototype, and conducts a cost analysis. §3 utilizes datasets to identify coverage issues in real urban scenarios, validates a ray-tracing tool for analysis, and presents deployment options for large-scale RIS structures. Subsequently, §4 compares the cost-efficiency of RIS technology in providing coverage gains in these scenarios with other, more expensive but high-performing, alternatives. §6 reviews the related literature, and §7 concludes the paper.

Fig. 2. RIS prototype (from [32]).
RECONFIGURABLE INTELLIGENT SURFACES
In this section, we first provide a brief introduction to Reconfigurable Intelligent Surfaces (§2.1); we then introduce a real prototype and build a data-driven model (§2.2); and we conclude the section with a comparative cost analysis of RIS technology and more conventional BS technologies (§2.3).
A Primer on Reconfigurable Intelligent Surfaces
Reconfigurable Intelligent Surfaces (RIS) are engineered structures that can modify the way radio waves behave when they hit the surface. By changing its configuration, an RIS can control the direction, strength, polarization, and other properties of the reflected radio waves. An RIS is designed to be as passive as possible in terms of power consumption; in fact, no RF chains are involved, nor any amplification or digital signal processors. In the literature, this kind of RIS is commonly called "passive" because the signal is neither amplified nor regenerated before retransmission, although a small amount of power is needed for operation; an active RIS, on the other hand, can improve the signal before retransmission at the cost of high power consumption [39]. RIS can be implemented using a variety of different types of surfaces, spanning from very sophisticated metasurfaces to arrays of antennas used as reflectors. A metasurface is made of metamaterials, a type of material engineered to have properties that are not found in naturally occurring materials. They are typically composed of arrays of small metallic or dielectric elements whose behavior can be externally controlled. They are complex to build but can unlock various features [23]. The alternative is to utilize an array of passive reflecting elements, such as small metallic patches or dielectric rods, to reflect impinging radio waves in a specific direction. Such a property is usually achieved by adapting conventional beamforming techniques, where the individual reflecting elements passively apply (different) phase shifts to the reflected signals. In this way, the multiple reflections interfere constructively in the desired direction, while they cancel each other out in other directions [11].
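As an illustration of the passive phase-shift beamforming just described, the per-element reflection phases for steering an incident plane wave can be computed with the standard phase-gradient rule; a minimal sketch, assuming half-wavelength element spacing, a 2.1 GHz carrier, 3-bit phase quantization, and arbitrary example angles (none of these values are prescribed by this paragraph):

import numpy as np

C = 3e8            # speed of light, m/s
FREQ = 2.1e9       # assumed sub-6GHz carrier, Hz
LAM = C / FREQ     # wavelength, m
D = LAM / 2        # assumed half-wavelength element spacing, m

def ris_phase_profile(n_elems, theta_in_deg, theta_out_deg, bits=3):
    # Per-element reflection phases steering an incident plane wave from
    # theta_in to theta_out (1D cut), quantized to 2**bits discrete levels.
    n = np.arange(n_elems)
    k = 2 * np.pi / LAM
    phase = -k * D * n * (np.sin(np.radians(theta_out_deg)) - np.sin(np.radians(theta_in_deg)))
    step = 2 * np.pi / (2 ** bits)
    return np.round(np.mod(phase, 2 * np.pi) / step) * step

print(ris_phase_profile(10, theta_in_deg=-33, theta_out_deg=20))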
RIS technology enables smart environments [10] where the wireless channel is yet another knob subject to optimization. This contrasts with the conventional view of treating the channel as a given (or estimated) parameter. Smart environments will be crucial for next-generation mobile systems and can improve the reliability of communication systems by increasing path diversity between BSs and user devices (UEs), as depicted in Fig. 1. An example of a specific RIS application is improving the coverage of cellular networks in hard-to-reach areas like underground tunnels or inside buildings. Another example is using RIS in radar systems to improve the accuracy and resolution of location information. Lastly, from a security perspective, using RIS to focus radio waves toward specific indoor locations can enhance the security of private networks. By adjusting the signal strength to be stronger in the desired locations and weaker in non-essential areas, it can be made more difficult for malicious individuals to gain unauthorized access to the network by intercepting the signals in unwanted areas [5]. We focus on the first of these use cases in this paper: coverage assistance in complex urban scenarios.
Empirical Modeling of a Reconfigurable Intelligent Surface Prototype
Despite the prospects mentioned above, RIS is not a mature technology; commercial off-the-shelf solutions are lagging, and only a few prototypes have been implemented by researchers in the RIS community. In the following, we build a realistic model of RIS technology that we can use in our analysis at scale. To this end, we use the dataset provided by [32], with measurements of an inexpensive RIS prototype. We next briefly summarize the RIS design and the measurements provided in the dataset. The interested reader can find more details in [32].
The RIS system consists of multiple boards, each of which provides a 10x10 array of patch antenna elements. Each antenna element operates at a sub-6GHz carrier frequency with a bandwidth of 100 MHz and reflects impinging signals with a phase shift controlled by a 3-bit RF switch. Every RF switch is in turn configured by a microcontroller unit (MCU) that, supported by a grid of buses, can access every RF switch on the board to set the desired phase shift on each antenna element. The reconfiguration time of the RIS board is approximately 35 ms, and its consumption (mostly due to the MCU) is 60 mW. Fig. 2 depicts a photograph of the prototype.
A dataset with measurements collected in an anechoic chamber is provided by [32]. An anechoic chamber is a controlled environment isolated from external electromagnetic interference and with minimal internal reflections. Therefore, the channel between the transmitter and the receiver only consists of a direct line-of-sight (LoS) link. It is important to note that maintaining an LoS channel is crucial for this purpose as, otherwise, it may be challenging to distinguish between the contribution reflected by the RIS and other multipath scattered signal components.
To collect this data, a RIS board was placed at one end of the room, on a rotating table attached to an antenna (TX) that transmits OFDM-modulated signals, as shown in Fig. 3. This setup allows setting the angle of arrival (AoA) of the LoS link between the TX and the RIS and between the RIS and a receiving antenna (RX), which is placed at the other end of the room and demodulates those signals. The TX and RX are implemented using two horn antennas that operate within a frequency range of 1-8 GHz and show a gain of 13.5 dBi, as well as a voltage standing wave ratio (VSWR) of approximately 1 at the operating frequency of the RIS. The TX is positioned at a distance of 1.1 m from the first top-left element of the RIS, with fixed azimuth and elevation angles of 90° and −33°, respectively. In turn, the RX is located in front of the RIS with an azimuth angle of 90° and an elevation angle of 3°, positioned 6.3 m away from the top-left antenna element. The rotating table and the RIS configuration are controlled by an off-the-shelf computer outside the room.
The signal sent to the TX is generated by a dual-channel transceiver, specifically the USRP model B210, which can provide continuous RF coverage between 70 MHz and 6 GHz. On the RX side, another USRP B210 is used to sample and decode the incoming signals. Both USRPs utilize the srsRAN software, an open-source SDR 4G/5G suite from Software Radio Systems (SRS) capable of processing 3GPP-compliant OFDM signals. The TX-side USRP is specifically employed to generate a continuous stream of OFDM QPSK-modulated symbols with a bandwidth of 5 MHz, a transmission power of -30 dBm per subcarrier, and a numerology that meets the requirements of the 3GPP specifications. Meanwhile, the RX-side USRP measures the received power of the reference signal (RSRP), averaged across the signal bandwidth.
The dataset includes measurements with a pre-defined codebook of RIS configurations. Each configuration is designed to orient the primary beam of the RIS reflection pattern toward a specific direction. For each value, which corresponds to an equal rotation angle of the table, the RIS board iterates through all the configurations in the codebook, and RSRP power samples are collected. In total, the dataset contains 6.5M samples.
As the channel within the anechoic chamber remains quasi-static, we conclude that the primary source of noise affecting the RSRP measurements in the dataset stems from imperfections either in the electronic components utilized in the RIS or in the constituent parts of the chamber. To enhance the quality of the data, we employed a Savitzky-Golay filter, a widely used method for smoothing data and performing calculations based on noisy input data. Nevertheless, such imperfections are inherent in inexpensive RIS technologies and are usually ignored in the RIS literature, which relies upon idealized RIS models. Hence, building a data-driven 3D reflection model is key to making a realistic analysis of the impact of real-world RIS in production mobile networks, which is our goal.
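A minimal sketch of such a smoothing step, applying SciPy's Savitzky-Golay filter to a synthetic noisy RSRP sweep; the window length and polynomial order are illustrative assumptions, not the values used by the authors:

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
# Hypothetical noisy RSRP sweep (dBm) across table rotation angles
angles = np.linspace(-60, 60, 241)
rsrp = -70 + 10 * np.cos(np.radians(3 * angles)) + rng.normal(0, 1.5, angles.size)

# Savitzky-Golay: local least-squares polynomial fit over a sliding window
rsrp_smooth = savgol_filter(rsrp, window_length=21, polyorder=3)
print(f"residual std after smoothing: {np.std(rsrp - rsrp_smooth):.2f} dB")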
To build this model, we first re-create, using the available data from that measurement campaign, 2D reflection patterns for all the different RIS configurations in the dataset. In order to recreate 3D reflection patterns, it is crucial to have data from two 2D planes that are orthogonal to each other. In our specific case, as the relative difference in elevation between the TX, RIS, and RX is constant, we can only rely on the azimuth plane (with a fixed elevation). Nevertheless, due to the squared geometry of the prototype, we can take advantage of the symmetry between the azimuth and elevation planes in the reflection patterns for interpolation. As a result, we are able to construct 3D reflection patterns for all the configurations in the RIS prototype, as exemplified in Fig. 4. This information is essential to assess realistic (imperfect) RIS technologies at scale, as we will present later.
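A sketch of the symmetry-based reconstruction described above, under a simplifying separable-pattern assumption (the authors' exact interpolation procedure is not detailed here); g_cut stands in for a measured azimuth cut of the reflection gain:

import numpy as np

def pattern_3d(g_cut, az_deg, el_deg):
    # Separable 3D gain from a single measured azimuth cut, reusing the
    # same cut for elevation by the square-array symmetry argument above.
    support = np.linspace(-90, 90, g_cut.size)
    A = np.interp(az_deg, support, g_cut)
    E = np.interp(el_deg, support, g_cut)
    return np.outer(E, A)  # gain over the (elevation, azimuth) grid

g_cut = np.sinc(np.linspace(-3, 3, 181)) ** 2  # stand-in for a measured cut
grid = pattern_3d(g_cut, np.linspace(-90, 90, 181), np.linspace(-90, 90, 91))
print(grid.shape)  # (91, 181)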
Cost Analysis
Finally, we delve into the costs associated with implementing RIS technology for coverage support in outdoor scenarios, compared to the deployment of additional BS technologies. This analysis considers two RIS scales, with 40x40 and 80x80 antenna elements, respectively. Note that, at sub-6GHz, these are large-scale structures of 8.18 m² and 32.72 m², respectively.
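The quoted footprints can be roughly reproduced from the element counts; a minimal sketch, assuming half-wavelength element spacing at a 2.1 GHz carrier (an assumption on our part, consistent with the sub-6GHz operation mentioned above):

C = 3e8
FREQ = 2.1e9             # assumed Band 1-like carrier, Hz
SPACING = C / FREQ / 2   # assumed half-wavelength element pitch, ~7.1 cm

for n in (40, 80):
    side = n * SPACING   # side length of the square RIS, m
    print(f"{n}x{n} RIS: {side:.2f} m per side, {side * side:.2f} m^2")
# prints ~8.2 m^2 and ~32.7 m^2, close to the footprints quoted above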
The costs that an MNO must bear can be categorized into capital expenditures (CAPEX) and operating expenditures (OPEX). CAPEX encompasses the one-time costs required to deploy a solution, including acquiring the assets and putting them into operation. Conversely, OPEX aggregates the costs associated with running the solution, such as maintenance, electricity, cooling, etc.
When it comes to solutions for coverage support, CAPEX can be broken down into equipment (antennas, baseband processors, etc.), the cost of the tower where the solution will be mounted, if needed, in compliance with the safety regulations of the country (with the EU as a reference), installation costs such as the manpower to execute the deployment, and connection costs associated with fiber links for backhauling when required. As for OPEX, these expenses can be divided into rent (leasing of the space where the solution is deployed), operation and management (O&M), connection costs incurred by operating the backhaul network, if required, and electricity costs incurred by baseband processors and signal amplifiers. To provide a clearer picture, we compare the costs associated with RIS deployments with the cost of conventional BS technologies: pico-cells, micro-cells, and common massive active antennas with 128 antenna elements. Table 1 dissects these costs, where OPEX values are given per year. Given the complexity of the task and the scarcity of cost models for BSs in the literature, we include the values for CAPEX and OPEX from the very few available sources. The numbers in bold are the values we label as most reliable; the other values in the same table cell may come from older documents and/or previous generations (e.g., 3G), but could be of interest to the reader. Our internal connections inside the MNO also helped us to assess the reliability of the costs from the different sources. For the RIS, instead, only one cost model is present in the literature [32]; the bold numbers therefore refer to that model, but we also include our new estimation based on our recent experience. Fig. 5 shows the Total Cost of Ownership (TCO), which sums CAPEX and OPEX over a timespan that ranges between one and five years.
In terms of CAPEX, for BS technologies, the connection costs play a significant role, with an average estimate of €33K as suggested in [33], followed by the tower costs at €20K (for micro and massive antenna cells), the installation costs at €10K (€15K as stated in [21,27]), and the equipment costs at €5K as reported in [24] (€3K, €12K, and €20K as respectively declared in [21,27,33]). Hence, we also compare the costs of a micro-cell with integrated access and backhaul support (IAB), which uses wireless backhaul to mitigate some of these costs, especially connection costs. In contrast to these solutions, a RIS is lightweight and easy to mount almost anywhere; therefore, a tower is not really necessary (standard surfaces like billboards or walls suffice), and only a simple control channel is required for configuration, inducing minimal connection costs [33]. Based on the cost analysis presented in [32] for the same RIS technology that we employ in this paper, the equipment costs are estimated at €12.8K for a large-scale 80x80 structure, and roughly 25% of that for a 40x40 structure. Based on our recent experience, we believe that the price for the electronics proposed in [32] can easily be reduced to €1.2, yielding a total price for a unit cell of €1.35, and final prices of €2.1K and €8.5K for a 40x40 RIS and an 80x80 RIS, respectively. It is worth noting that these RIS costs are overly conservative, being based on prices for electronic components available at conventional retailers. Mass RIS production may reduce these numbers substantially.
Concerning OPEX, conversely, the main contributor is energy consumption, which can reach peak power values spanning between 6 KW and 9 KW for micro and massive BS technology [9]. Given the conservative approach adopted in our analysis, we used the lower bound, 6 KW, for the TCO estimation. Note that, though a massive-antenna cell has a much larger number of antenna elements, the overall transmission power, which is distributed among all the available antenna elements, may be the same (and it is the same in our analysis later). Considering the electricity price in the Euro area in 2019 (pre-COVID era), around 0.12 €/KWh [13], such consumption translates into €6.3K-€9.5K yearly. In comparison, the electricity bill of a pico-cell is negligible [6] at around €2.5K, as stated in [24]; O&M expenditures are estimated at around €1K, as reported in [24]; and backhauling at €1K, as stated in [33] (€5K for both pico and micro in [21]). As with CAPEX, IAB can save 100% of the connection costs and around 50% of the electricity costs (the part associated with backhauling). For RIS, the energy consumption is very small, around 60 mW [32], and the control channel is extremely simple, which translates into a negligible electricity bill and negligible connection costs. To estimate the costs associated with renting space to deploy large-scale RIS structures, we used market values for renting billboards [3]. Finally, we expect RIS O&M costs to be considerably lower than those of a micro-cell. However, in the absence of concrete data, we adopt a conservative approach and consider these costs to be on par.
When we aggregate the CAPEX and OPEX over a five-year period, an 80x80 RIS can potentially result in a TCO that is 78% lower than a micro- or massive-antenna cell (56% if IAB is supported) and 57% lower than a pico-cell. A 40x40 RIS further increases those savings, to 88.5% and 77%, respectively. These substantial cost savings render RIS technology especially attractive for large-scale coverage extension. In the subsequent sections of this paper, we will delve into the performance of RIS in wireless coverage extension use cases and compare its cost-efficiency with other alternatives.
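A minimal sketch of the TCO arithmetic behind such comparisons, TCO = CAPEX + years x OPEX, using placeholder figures assembled from the discussion above rather than the exact per-technology totals of Table 1:

# Illustrative one-time CAPEX and yearly OPEX per technology, in kEUR,
# assembled from the discussion above (placeholders, not Table 1 itself)
costs = {
    "micro-cell": {"capex": 33 + 20 + 10 + 5, "opex": 6.3 + 1 + 1},
    "pico-cell":  {"capex": 33 + 10 + 5,      "opex": 2.5 + 1 + 1},
    "RIS 80x80":  {"capex": 8.5,              "opex": 2.0},  # rent + O&M
    "RIS 40x40":  {"capex": 2.1,              "opex": 2.0},
}

def tco(tech, years):
    # Total Cost of Ownership = CAPEX + years * OPEX
    return costs[tech]["capex"] + years * costs[tech]["opex"]

for tech in costs:
    print(f"{tech:>10}: 5-year TCO = EUR {tco(tech, 5):.1f}K")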
DATASETS, TOOLS, AND RIS DEPLOYMENTS
In this section, we first discuss the radio network topology of a major MNO in the UK. We then show that by analyzing radio coverage measurements collected from end-user devices connecting to this MNO, we can identify coverage gaps in its radio network, which impact the service of end-users. To address these gaps, we subsequently validate the effectiveness of a state-of-the-art ray-tracing tool, which enables us to evaluate the deployment of RIS in a representative target area to enhance the overall performance and coverage of the network.
Datasets
For our study, we rely on real-world datasets that we collected from a large commercial mobile network operating in the UK, with a major market share. We detail these datasets next.
Radio Network Topology. For our case study on the integration of RIS in outdoor mobile networks, we consider the topology of a production radio network owned by a commercial mobile network in the UK. To keep up with the ever-increasing traffic demand, and to meet end-user service expectations, the main approach operators choose today is the deployment of new radio channels to enhance their coverage within specific areas (e.g., hot spots where traffic demand is soaring). Enhancing radio coverage usually means that the operators install expensive physical hardware in strategic locations, which they then optimally configure in order to integrate within their respective radio access networks.
Our topology dataset captures the geographical location of all the radio cell sites the operator uses, the different radio sectors (i.e., carriers) deployed at each site, and their respective configuration. We confirm that the operator's main goal in terms of radio deployments is maximizing population coverage, thus prioritizing deployments in areas with the highest population density (see Fig. 6a). This strategy is more obvious in the case of the ongoing 5G roll-out, where early deployment focused first on densely populated major metropolitan areas.
For the remainder of this paper, we focus on the mobile radio deployment the operator owns in London. Radio network planning within this type of large metropolis is a non-trivial task, since different geographical areas present different signal propagation patterns, bringing the challenge of tailoring the deployment to the location characteristics. Additionally, network engineers must account for user mobility, interference, load balancing, handovers, outage, and congestion management, all translating into configurations that are not easily updated afterward.
Radio Coverage Measurements.
In an effort to continuously improve the quality of service, the operator monitors the radio coverage from the end-user perspective through crowd-sourced objective measurements, combined with periodic surveys of the customer base to capture their subjective experience. According to insights from periodic surveys shared by the operator, radio coverage (or the lack thereof) is often invoked as a root cause by subscribers who report low quality of experience. For our study, we focus on two commercial crowd-sourced datasets that the operator provided; substantiating these with quality-of-experience measurements falls beyond the scope of this work. The two datasets are similar in that they capture radio signal strength metrics from the end-user device via code embedded in popular apps that run on the end-user equipment. In particular, we focus our analysis on the Reference Signal Received Power (RSRP). This metric represents the average of the reference signal power across a specified bandwidth (in number of Resource Elements). It is a critical parameter that a User Equipment (UE) needs to measure for tasks such as cell selection, reselection, and handover in cellular communication systems. Dataset1 includes the median RSRP per tile unit over a 100x100m grid covering the areas of interest (namely, London and all of the UK). The median RSRP value per tile is derived from all the measurements captured within each tile during November 2022.
Dataset2 includes individual measurement samples of the RSRP metric from end-user devices, collected during November 2022. The dataset includes more than 600,000 samples, each tagged with geographical coordinates and the corresponding radio sector identity.
These two datasets allow us to capture the coverage the operator provides over the entire country, and to further zoom in on London (see Fig. 6b). In both datasets, we capture the wide variation of the RSRP across different geographies. We focus our analysis on London, which represents the main hub of innovation for the operator due to its high population density and increasing service demand. For ease of presentation, in the rest of this paper we only focus on a specific target area within the city of London, which we select to demonstrate the impact of deploying RIS for coverage improvement of the production network. Nevertheless, our datasets and reach allow us to run a similar study in virtually any other area within the UK.
Ethical considerations. The data collection and retention at network middle-boxes and elements are in accordance with the terms and conditions of the MNO and the local regulations. All datasets we use in this work are covered by NDAs prohibiting any re-sharing with 3rd parties, even for research purposes. Further, the raw data has been reviewed and validated by the operator with respect to GDPR compliance (e.g., no identifier can be associated with a person), and data processing only extracts aggregated user information at the postcode level. No personal and/or contract information was available for this study, and none of the authors of this paper participated in the extraction and/or encryption of the raw data.
Target area for RIS deployment study
We based our selection of suitable areas with poor radio coverage in the city of London on several factors, including the vicinity to mobile cells, which is a requirement for RIS operation [22], and the two datasets containing analytics on users' device feedback provided by the telco operator. We filtered the two datasets for RSRP below -100 dBm, which we define as bad coverage. This threshold is strongly dependent on the type of area and is usually determined with drive tests [36]. Since we are working in a residential area, we choose -100 dBm as the RSRP threshold below which end-users experience service degradation and issue complaints to the operator. Fig. 7 summarizes the coverage of the datasets for November 2022.
From the identified set of areas with poor coverage, we selected an area of 980m x 900m, depicted in Fig. 7, for further study in this paper. Nonetheless, our analysis can be extended to all the other areas we have pinpointed. We divide the space into a grid, where each sector, or tile, is a square of 0.01 km² in area. If we analyze the datasets specifically for this selected region, it becomes clear from the CDF of the RSRP in Fig. 6c that this area suffers from poor coverage in 52% and 37% of the cases for Dataset1 and Dataset2, respectively.
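A minimal sketch of this tile-level screening, assuming a hypothetical pandas DataFrame with per-measurement columns tile_id and rsrp_dbm (the column names are illustrative; the -100 dBm threshold is the one defined above):

import pandas as pd

THRESHOLD_DBM = -100  # RSRP below this is labeled as bad coverage

def poor_coverage_tiles(samples: pd.DataFrame) -> pd.Series:
    # Median RSRP per tile, keeping only tiles below the threshold
    median_rsrp = samples.groupby("tile_id")["rsrp_dbm"].median()
    return median_rsrp[median_rsrp < THRESHOLD_DBM]

# Toy example; real inputs would be the crowd-sourced samples of Dataset2
samples = pd.DataFrame({
    "tile_id": ["A", "A", "B", "B", "C"],
    "rsrp_dbm": [-95, -98, -104, -110, -90],
})
print(poor_coverage_tiles(samples))  # only tile B is flagged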
Validation of ray-tracing tool
Fig. 8 illustrates a 3D model of the neighborhood we selected for this study, which we constructed from OpenStreetMap data. The model highlights the locations of two cell sites (CS1 and CS2) from the MNO under analysis. CS1 features six cells at a height of 15.5 meters, while CS2 comprises three cells at a height of 20 meters. Each cell is equipped with a 4G 60°-sector antenna operating in Band 1 with a transmission power of 40 dBm. The figure also shows potential locations for the deployment of RIS technology, which we discuss later in more detail.
As mentioned above, we employ a ray-tracing tool called Wireless InSite [31] to evaluate the coverage enhancements achieved by RIS technology in this area. An essential initial step, therefore, is to verify the suitability of the tool for our analysis. To this end, in Fig. 9 we examine the coverage provided by the nine cells according to the ray-tracing tool, and we compare these simulated results with the empirical data from the aforementioned datasets, which we depict as grey and white squares in the figure. The ray tracer provides mean RSRP samples at a granularity of 100 m², which we depict as colored circles in Fig. 9. To validate the tool, we compare the RSRP samples from the ray tracer with the overlapping empirical samples from the datasets. Fig. 10 shows a median error of 2.1 and 4.8 dB with respect to Dataset1 and Dataset2, respectively, which we deem sufficiently small to rely on the ray-tracing solution for our analysis.
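A sketch of this validation step follows, assuming both sources have been aggregated onto the same grid and tagged with a common tile identifier; the column names and values are hypothetical placeholders.

```python
import pandas as pd

# Ray-traced and empirical RSRP per tile (toy values for illustration).
simulated = pd.DataFrame({"tile_id": [1, 2, 3], "rsrp_sim": [-95.0, -104.2, -99.8]})
empirical = pd.DataFrame({"tile_id": [1, 2, 4], "rsrp_meas": [-97.5, -101.0, -110.0]})

# Only overlapping tiles (present in both sources) are compared.
overlap = simulated.merge(empirical, on="tile_id")
abs_error_db = (overlap["rsrp_sim"] - overlap["rsrp_meas"]).abs()
print(f"median error: {abs_error_db.median():.1f} dB")
```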
RIS deployments
Once the tool is validated, the next step is to integrate a RIS model into it. Given the relatively recent emergence of RIS technology, it is challenging to capture inherent deficiencies, such as unexpected side lobes, arising from inexpensive electronics. Therefore, instead of simulating an ideal reflective surface, we use the realistic data-driven model of the actual RIS prototype that we introduced in §2.2. To this end, we replicated an object with the same 3D reflection patterns that we derived in §2.2 (see Fig. 4). By employing such an experimentally-driven model, we can conduct a more realistic analysis using ray-tracing.
The RIS board evaluated in §2 comprises a 10x10 array of inexpensive antenna elements. However, such a small surface is insufficient to provide beamforming gains at the scale of the area we are examining [11]. Fortunately, the chosen RIS prototype supports the stacking of multiple boards to create larger structures [32], enabling us to model larger-scale RIS structures in our ray-tracing tool, as depicted in Fig. 11. In our analysis, we select square arrays of RIS boards of varying sizes, which allows us to build RIS structures ranging from 20x20 to 80x80 antennas. This approach allows us to assess the dimensions and costs required to deploy this technology in real outdoor environments.
The remaining question is where to deploy RIS technology. Three requirements must be met: (i) there must be a good line-of-sight wireless link between an incumbent cell and the RIS structure; (ii) the power that may be harvested by the RIS must be high enough to produce meaningful beamforming gains (note that a purely passive RIS is unable to amplify signals); and (iii) the RIS should be close to poor-coverage areas in order to be helpful. Given the highly heterogeneous nature of the urban environments under analysis, devising a systematic placement procedure is inherently challenging. Consequently, we identified potential RIS sites by locating relatively tall buildings near areas with poor coverage, which helps meet the requirements mentioned above.
As evident from Fig. 9, the incumbent MNO cell sites (represented as red pins on the map) primarily target the main streets and the eastern side of the neighborhood. This area includes a large park and a block of widely spaced houses. The open space is ideal for radio communication, a fact corroborated by the two datasets. However, coverage issues become apparent on the western side, where most areas experience an RSRP lower than -100 dBm, as indicated by both our datasets and the ray-tracing tool. Therefore, we focus our potential RIS deployments in this part of the neighborhood. This area comprises a cluster of low-rise houses surrounded by just a few buildings that are 30 meters or taller. We select 16 locations for potential RIS deployment that meet the aforementioned requirements, as shown in Figs. 8 and 9. These deployment sites are compatible with the sites the MNO would rent to deploy their own equipment. Table 2 presents the distance between each potential RIS site and the two MNO cell sites. The closest RIS deployment is 66 meters from a cell site, and the furthest is 659 meters away. To the best of our knowledge, at the time of writing, no other work in the literature has assessed realistic RIS deployments at this scale.
For each potential RIS site, we deploy RIS structures of varying sizes and study the amount of power each can harvest for reflection. Fig. 12 displays these values for four different potential sites (A, E, I, M). The first observation is that power exhibits a logarithmic behavior with the size of the RIS, a phenomenon well-documented in the literature [34]. The second observation is that the environment plays a crucial role. For instance, sites I and M, which are at similar distances from a CS, experience significantly different power levels. To gain further insights, we show in Fig. 13 the amount of power that may be harvested (for reflection) by the largest RIS structure as a function of the distance to the closest CS. Sites A, B, and H, which receive substantial power from the southernmost CS, can harvest 20 dBm of power. However, site D, despite being over 600 meters away from the closest CS, can leverage its elevated height (38 meters) to harvest 11 dBm, which is more than the same RIS at site C, only 260 meters away from the closest MNO cell. Some other locations, such as C, M, and J, do not have a clear line-of-sight to an MNO cell, and the power they receive is mainly due to secondary paths, explaining the limited amount of power they can harvest. This irregular pattern underscores the difficulty of devising a systematic methodology for deploying RIS in urban scenarios.
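The logarithmic scaling can be illustrated with a simple aperture argument. The sketch below is an idealized free-space model, not the paper's ray-traced result: the power a passive surface intercepts scales with its physical aperture, i.e., with the number of elements, so doubling the side of a square RIS adds roughly 10·log10(4) ≈ 6 dB. The reference power value is a hypothetical placeholder.

```python
import numpy as np

def harvested_power_dbm(n_side: int, p_ref_dbm: float, n_ref: int = 20) -> float:
    """Idealized harvested power for an n_side x n_side RIS, referenced to an
    n_ref x n_ref RIS harvesting p_ref_dbm (assumed value, for illustration)."""
    return p_ref_dbm + 10 * np.log10((n_side / n_ref) ** 2)

for n in (20, 40, 80):
    print(n, f"{harvested_power_dbm(n, p_ref_dbm=8.0):.1f} dBm")
```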
COST-PERFORMANCE TRADE-OFFS ANALYSIS
We next analyze the potential of RIS technology to enhance wireless coverage in a cost-efficient way. To avoid clutter in our presentation, in this section we concentrate on eight potential RIS sites, namely C, M, F, I, E, P, H, and A. We selected these sites and ranked them in ascending order based on the power they receive from the best CS, as illustrated in Fig. 13.
We evaluate the network performance gains when deploying at these sites the large-scale RIS boards we previously analyzed in §2.3: a 40x40 RIS and an 80x80 RIS. To provide a comparative perspective, we also evaluate the RSRP, coverage, and cost-effectiveness improvements when deploying three different BS technologies at the same sites instead of RIS technology:
• Pico: An inexpensive pico-cell with 20 dBm transmission power.
• Micro: An active antenna transmitting at 40 dBm, identical to the antennas of the micro-cells incumbent in the mobile network.
• Massive: An active array of 128 antenna elements transmitting at 40 dBm.
RSRP gains
We begin our analysis by investigating the power boost that each of the five previously mentioned solutions can provide at the selected sites. For each potential RIS site, we formulate a "scenario" by selecting a nearby point experiencing poor RSRP, and then adjust the selected solution to optimize power at that location. For example, Fig. 14 presents three distinct scenarios for RIS site A (depicted in horizontal subplots). Following this, we measure the gains in terms of RSRP within an area of 125x110 meters surrounding that point. Fig. 14 showcases the impact of all the solutions (displayed in vertical subplots), including the baseline scenario with no RIS or additional BS for comparison (represented in the left-most column of subplots). From this example, it is evident that the RIS significantly improves RSRP in all scenarios. More notably, the largest RIS (80x80) delivers RSRP gains that are almost indistinguishable from those provided by a full-fledged BS.
To delve deeper across all the other sites, Fig. 15 presents the distribution of RSRP gains (in dB) over the baseline case for each of the solutions mentioned above and for three different scenarios at each RIS site. The bottom and top edges of each box in the plot indicate the lower and upper quartiles of the RSRP gains, respectively, while the line in the middle represents the median RSRP gain. The whiskers depict the extreme points in the distribution.
From the figure, we can observe that even at sites with lower amounts of harvestable power, the RIS has a substantial impact, particularly the 80x80 RIS. For instance, the largest-scale RIS provides a median RSRP gain of 12.6 and 25.2 dB at sites C and M, respectively, and a median gain exceeding 40 dB at sites E and P. It is also evident that BS technology, comprising energy-hungry active RF chains, offers higher RSRP gains than RIS in general. However, it is important to remember that these gains come at a significantly higher cost (we analyze their cost-efficiency later in this section). Interestingly, at some sites the performance of an 80x80 RIS surpasses that of a pico cell (e.g., 6.5 dB higher gains on average across sites E, F, I, and P, and more than 10x gains at sites A and H). Though micro and massive-cell antennas provide higher RSRP gains than RIS, the 80x80 RIS attains 65.1% and 61.6% of the median gains achieved by these two benchmarks, respectively, across all sites on average, and reaches 84-100% of their gains at sites like H and A. Unlike the active RF chains used by conventional radio technologies, which generate and amplify radio signals, the performance of a RIS is heavily reliant on the amount of power it can harvest from incumbent MNO cell sites. This relationship is illustrated in Fig. 16, which displays the peak RSRP experienced in each site area when different technologies are employed for coverage extension. Indeed, the performance achieved with BS technology (pico, micro, massive-cell antennas) is practically independent of the site. In contrast, RIS technology exhibits a performance that is strongly correlated with the amount of harvestable power, which is shown in Fig. 17. Interestingly, though the distance between the RIS site and the MNO cell sites plays a role in the amount of power a RIS can harvest, it is not the most critical aspect in highly heterogeneous urban environments, as we can note when comparing Fig. 17 and Fig. 13, which renders simple mathematical models insufficient for this type of analysis.
As previously mentioned, the RSRP gains achieved by BS technology compared to RIS technology come with associated costs. To gain insight into this, we present in Fig. 18 the ratio of RSRP gains to TCO over 5 years, a metric that we refer to as cost-gain efficiency, for each of the eight sites under consideration. To cover more cost-effective variants of BS technology (recall §2.3), we also compare solutions with integrated access and backhaul (IAB). However, because micro-cells provide similar RSRP gains to massive antennas, as shown before, we only consider IAB support in the former to avoid clutter in the figure. The figure illustrates that the large-scale 80x80 RIS significantly outperforms conventional BS technology, achieving over 3x higher efficiency on average. Even when Integrated Access and Backhaul (IAB) is supported, an 80x80 RIS still delivers 74% higher efficiency. Notably, the 40x40 RIS further amplifies these efficiency gains, doubling the efficiency of an IAB-capable micro cell on average. Perhaps surprisingly, the efficiency gap between RIS and its benchmarks has a very weak correlation with the power the RIS can harvest, or with the distance between the RIS site and the MNO cell sites. For instance, site P provides over 20 dB more power to the RIS than site F, and a RIS at site P approximately doubles the efficiency achieved at site F. Conversely, site H provides almost 30 dB higher power to the RIS than site M, but yields 56% less efficiency. These observations highlight the importance of using data-driven, realistic evaluation tools when analyzing real-world scenarios: relying on simplified models can lead to significantly erroneous results.
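For clarity, the cost-gain efficiency metric reduces to a simple ratio. The sketch below illustrates its computation; all gain and TCO numbers are placeholders, not the paper's cost figures.

```python
def cost_gain_efficiency(median_gain_db: float, tco_5yr: float) -> float:
    """RSRP gain (dB) per unit of 5-year total cost of ownership."""
    return median_gain_db / tco_5yr

# Hypothetical (gain dB, 5-year TCO) pairs for two solutions at one site.
solutions = {
    "RIS 80x80": (25.0, 10_000.0),
    "micro": (35.0, 45_000.0),
}
for name, (gain, tco) in solutions.items():
    print(name, f"{cost_gain_efficiency(gain, tco) * 1000:.2f} dB per kEUR")
```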
Coverage gains
For an MNO, ensuring broad coverage across the largest possible area is usually a high-priority objective. As such, meeting specific performance targets in as many locations as possible (such as an RSRP threshold of -100 dBm in our case) typically takes precedence over achieving raw power gains. Hence, we next study the additional area where the desired coverage is achieved using all the discussed technologies (expressed as a percentage relative to the baseline coverage provided by the incumbent cell sites alone). These results are illustrated in Fig. 19.
On average, BS technology can extend coverage by around 52%, 80%, and 90% for pico, micro, and massive antennas, respectively, at sites C, M, and I. However, at site H, the increments are only 2%, 10%, and 11%, respectively, while at site A, the enhancements are 10%, 25%, and 30%, respectively. Despite RIS technology not achieving these specific performance figures, it still provides considerable coverage improvements at most sites. Notably, although a 40x40 RIS yields no gains at site C, it manages to achieve between 30% and 91% of the gains of a pico-cell at various sites (M, F, E, I, P). Remarkably, it surpasses pico-cell performance by 36% and 133% at sites A and H, respectively, due to its superior beamforming capabilities. Conversely, an 80x80 RIS outperforms a pico-cell antenna at all sites except C and M, tripling or even quadrupling the coverage area at certain sites, such as A and H, respectively. When compared to a micro or a massive-cell antenna, an 80x80 RIS reaches 72% and 68% of the coverage area gains of these benchmarks, respectively, on average across all sites.
As remarked before, RIS technology can boost performance while significantly curbing costs. We verify this by determining the ratio of area coverage gains (expressed as a percentage) to their associated costs, a measure we refer to as cost-coverage efficiency. These findings are depicted in Fig. 20 for each site considered in this study.
Except for site C, RIS technology provides higher cost-coverage efficiency than its benchmarks across all sites. At site C, the 40x40 RIS fails to enhance coverage, thereby delivering 0% cost-coverage efficiency, while the 80x80 RIS, despite offering 29% greater cost-efficiency than a standard micro-cell, reaches only 65% of the cost-efficiency of an IAB-capable micro-cell antenna. The main reasons why site C performs so poorly, especially in the 40x40 RIS case, are its low elevation and suboptimal building orientation towards the base station; this underscores how crucial correct positioning is for a RIS to function at its best. Remarkable gains are recorded at other sites. For example, the 80x80 RIS doubles the cost-efficiency of both micro and massive active antennas at sites M and F, triples it at site I, and quadruples it at sites E, P, H, and A. Compared with an IAB-supported micro-cell antenna, the 80x80 RIS doubles the performance at sites E, P, H, and A, and delivers between 20% and 50% higher cost-efficiency at sites M, F, and I. Interestingly, at certain sites, such as I, E, and P, a 40x40 RIS surpasses its larger 80x80 counterpart. Across all sites, RIS technology presents cost-efficiency gains that vary from approximately 50% more than an IAB-supported micro-cell antenna to four times the cost-efficiency of a conventional micro-cell or massive active antenna.
In conclusion, our findings strongly advocate for RIS as a cost-effective solution for expanding coverage in real-world urban mobile networks. Even with non-ideal RIS models, such as the data-driven approach we explored in this paper, RIS technology typically outperforms the cost-effectiveness of alternatives based on active antennas. This is achieved through two key factors: (i) the use of affordable electronic components with minimal energy consumption, and (ii) the vast array of beamforming-capable antenna elements provided by RIS technology, which enables radio and coverage enhancements that closely compete with those of conventional BS technologies relying upon active RF chains. However, our results also highlight that while RIS technology generally offers higher efficiency in terms of RSRP gains per unit cost at all analyzed sites, these gains may be insufficient to meet coverage targets. Indeed, we found that certain RIS scales (e.g., a 40x40 RIS) may not yield any coverage enhancement at some sites (6% of the sites in our study), or may offer lower cost-coverage efficiency than BS technology at others (10% in our study). This underscores the need for appropriate data-driven methods, like those employed in this paper, to accurately select coverage-enhancing sites and the most suitable technology to this end.
TOWARDS REAL-WORLD DEPLOYMENTS
While RIS technology is continuously evolving to address current challenges, it brings to operators' attention its immense potential, especially in terms of considerably reducing their energy footprint when deploying the next generation of communication systems. Multiple telco operators have recently shown interest in testing this technology, either through radio network planning exercises (using ray-tracing approaches) or in field trials (using prototypes in a test network) [2,25].
When evaluating 5G communication technologies, standards development organizations (SDOs) such as 3GPP [40] and ETSI [12] usually rely on a map-based hybrid channel modeling approach. In general, ray-tracing methods, such as the one we used in this paper, are useful for mobile operators to plan large-scale deployments. It is therefore not surprising that radio planning teams in telcos worldwide use them. For instance, Atoll is currently used by several large telcos in Europe, including Orange, Vodafone, and Telefonica [8], and operators such as Huawei [19] and Telefonica [1] routinely use such tools in network planning.
In our paper, we took a step further when using ray-tracing to analyze the potential of RIS technology in large-scale deployments. Instead of using ideal reflector models, we captured the imperfections of realistic RIS equipment and integrated this model into the ray-tracing framework, as explained in §2.2. We believe that this approach provides compelling evidence of the potential of RIS technology in real environments, as RIS hardware is purposely intended to be low-cost technology, which is prone to imperfections. We will further improve this approach by considering denser and more accurate empirical radio coverage measurements to calibrate the ray-tracing modules we employ. This will help us generate an even more accurate evaluation of the benefits of deploying RIS in operational networks.
In light of the results presented in this paper, our next step is to validate some of our insights through already-planned field trials with Telefonica and Telecom Italia in 2024. We will conduct these in controlled testing environments, which represents the natural next step before deployment in large-scale production RANs. These trials will help us answer multiple practical questions about the manufacturing and installation of these devices, which is not trivial. When considering RIS installation in a commercial network, finding the optimal location is important; in practical terms, the deployment is also conditioned by negotiating new deployment sites for the telcos. The planned pilots will help us gauge the distance between the optimal installation locations we identify and the locations we can actually use under the constraints of the real-world environment.
RELATED WORKS
The existing body of literature on RIS has provided valuable insights into this promising technology. For instance, the importance of RIS placement has been thoroughly discussed in [16], [18], and [15], which have theoretically explored the ideal distance between the BS and the RIS.
The authors of [4] have made a significant contribution by addressing the challenges of determining the optimal RIS placement and configuration without making unrealistic assumptions about the available Channel State Information. However, their work is confined to indoor scenarios, limiting its applicability in broader contexts, and assumes idealized RIS models. In contrast, our study extends this analysis to real-world, large-scale outdoor RIS-aided mobile networks, providing a more comprehensive understanding of the technology's potential. The research conducted in [26] introduces a novel mathematical formulation of the coverage planning problem. While their theoretical approach provides a solid foundation, our work complements it by providing empirical evidence from a production mobile network, offering a more practical perspective on the deployment of RIS technology. In [29], the authors delve into the complexity requirements of large RIS deployments, providing valuable insights into the optimal configuration of these systems. However, their work is based on simulations, whereas our study is grounded in real-world data, offering a more accurate assessment of the technology's effectiveness. Finally, [35] employs the QuaDRiGa channel model to optimize the positioning and orientation of a RIS. While their methodology is practical, it lacks the empirical validation provided by our study.
The existing literature is somewhat sparse when it comes to cost-efficiency analyses. Notable work is presented in [22], where the authors explore the costs associated with deploying RIS and ultra-dense small-cells for indoor mmWave coverage enhancement. Their findings suggest that RIS may offer cost savings, but only with sufficient small-cell densification. However, their analysis is grounded in simple propagation models and primarily applies to smaller systems.
In conclusion, while the existing literature has made significant strides in understanding RIS technology, our paper is, to the best of our knowledge, the first to analyze the cost-effectiveness of real-world, large-scale RIS technology in production mobile networks. This unique focus allows us to provide valuable insights into the practical application of RIS technology, contributing to the ongoing development of next-generation mobile systems.
CONCLUSION
RIS technology is a promising solution for next-generation mobile systems, especially in a competitive landscape where operators are striving to dramatically reduce their energy footprint and optimize the cost of running their infrastructure. To the best of our knowledge, this paper is the first to bring compelling evidence towards harvesting the huge potential of RIS technology in realistic outdoor deployments of a commercial mobile operator. In this paper, we showed that, in real-world urban mobile networks, RIS can achieve 72% of the coverage extension gains of conventional base station technologies based on active antennas, but at only 22% of the total cost of ownership over a five-year period, offering around three times higher cost-efficiency.
To quantify the benefits upon deployment in a production radio network, we aligned our approach with the methodology that operational radio planning teams follow: we combined different empirical datasets provided by a commercial radio network in the UK to evaluate coverage, we used a realistic data-driven RIS model, and we employed a state-of-the-art ray-tracing tool that we validated with real data to answer complex "what-if" questions regarding the deployment of RIS in real-world outdoor scenarios. To the best of our knowledge, our results provide the first estimation of the benefits we can expect from RIS when exploited in a commercial radio network deployment. As future work, we are planning live outdoor trials to experimentally evaluate RIS prototypes in the networks of commercial operators.
Fig. 3. Experimental setup (from [32]). Fig. 4. 3D RIS radiation pattern.
Each configuration in the codebook steers the main beam towards a unique direction in space. Specifically, the main beam is scanned within the azimuthal range of [−90°, 90°] and the elevation range of [−45°, 45°], with a step size of 3° in both cases. As a result, the codebook consists of a total of 1891 distinct configurations. The turntable is set to move within the azimuthal range of [−90°, 90°] with a step size of 3°. The angle between the surface of the RIS and the RX is denoted as θ. For each θ value, which corresponds to an equal rotation angle of the table, the RIS board iterates through all the configurations in the codebook, and RSRP power samples are collected. In total, the dataset contains 6.5M samples. As the channel within the anechoic chamber remains quasi-static, we conclude that the primary source of noise affecting the RSRP measurements in the dataset stems from imperfections in either the electronic components used in the RIS or the constituent parts of the chamber. To enhance the quality of the data, we employed a Savitzky-Golay filter, a widely used method for smoothing noisy data. Such imperfections are inherent in inexpensive RIS technologies and are usually ignored in the RIS literature, which relies upon idealized RIS models. Hence, building a data-driven 3D reflection model is key to a realistic analysis of the impact of real-world RIS in production mobile networks, which is our goal. To this end, using the available data from that measurement campaign, we first re-create 2D reflection patterns for all the different RIS configurations in the dataset. In order to recreate 3D reflection patterns, it is crucial to have data from two 2D planes that are orthogonal to each other. In our specific case, as the relative difference in elevation between the TX, RIS, and RX is constant, we can only rely on the azimuth plane (with a fixed elevation). Nevertheless, due to the square geometry of the prototype, we can take advantage of the symmetry between the azimuth and elevation planes in the reflection patterns for interpolation. As a result, we are able to construct 3D reflection patterns for all the configurations in the RIS prototype, as exemplified in Fig. 4. This information is essential to assess realistic (imperfect) RIS technologies at scale, as we present later.
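A short sketch of this post-processing follows. The codebook size can be verified from the stated scan ranges, and the smoothing step uses scipy's Savitzky-Golay filter on a synthetic stand-in trace; the window length and polynomial order are illustrative choices, not the values used in the measurement campaign.

```python
import numpy as np
from scipy.signal import savgol_filter

# Azimuth [-90, 90] and elevation [-45, 45] at 3-degree steps: 61 * 31 = 1891.
n_configs = len(np.arange(-90, 91, 3)) * len(np.arange(-45, 46, 3))
assert n_configs == 1891

# Savitzky-Golay smoothing of a noisy RSRP trace (synthetic data).
rng = np.random.default_rng(1)
rsrp_raw = -80 + 5 * np.sin(np.linspace(0, 3, 200)) + rng.normal(0, 1.5, 200)
rsrp_smooth = savgol_filter(rsrp_raw, window_length=21, polyorder=3)
```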
Fig. 5. TCO for a range of timespans for deploying a unit of conventional BS technology and RIS.
Fig. 6. We explore (a) the network deployment strategy of maximizing population coverage: the deployment density of radio sectors (i.e., radio antennas, per technology generation) the operator installs in different types of geographical areas (i.e., major/minor urban area, city, rural town, etc.) correlates with the median population density per area type, as published by the Office for National Statistics in the UK. We also show (b) the ECDF of the RSRP measurements we collect over different geographical areas (London, UK), in both Dataset1 (DAT1) and Dataset2 (DAT2). Finally, we focus on (c) the coverage within a specific target area, where we corroborate the measurements from both datasets we consider.
Fig. 8. 3D model of a London area, location of cell sites, and potential RIS sites.
Fig. 9. Baseline RSRP with the ray-tracing tool and with the empirical datasets, location of cell sites, and potential RIS sites.
Fig. 10. ECDF of the error between the empirical RSRP samples in our datasets and the samples provided by the ray-tracing tool.
Fig. 11. Multiple RIS boards stacked together to form a larger RIS structure.
Fig. 12. Power harvested by RIS structures of different scales and at different locations. Fig. 13. Power harvested by the largest RIS structure as a function of the distance to the closest cell site.
Fig. 18. Cost-gain efficiency achieved by RIS technology and conventional BS technology at different sites.
Fig. 19. Area coverage gains across eight sites with three different scenarios and solutions. Fig. 20. Cost-coverage efficiency achieved by RIS technology and conventional BS technology at different sites.
Table 1. CAPEX and OPEX for deploying a unit of conventional BS technology and RIS.
Table 2. Distance between potential RIS sites and MNO cell sites.
"Engineering",
"Computer Science",
"Environmental Science"
] |
Moving Voxel Method for Estimating Canopy Base Height from Airborne Laser Scanner Data
Abstract: Canopy base height (CBH) is a key parameter used in forest-fire modeling, particularly for crown fires. However, estimating CBH is a challenging task, because it is normally difficult to measure in the field. This has led to the use of simple estimators. For our method, the reported values were 1.74/2.40, 2.69/3.90 and 0.46/0.71, respectively, while with traditional LiDAR-based metrics they were 1.92/2.48, 3.34/5.51 and 0.44/0.65. Moreover, the use of Lorey's mean as a CBH estimator at the plot level resulted in models with better predictive value based on the leave-one-out cross-validation (LOOCV) results used to compute the RMSE_cv values.
Introduction
The last two decades have seen an increasing trend in forest fire frequency and the amount of land burned [1]. Extreme drought and the accumulation of fuels are the two major factors responsible for this increase [2][3][4][5]. For example, in the European region alone, the number of forest fires taking place annually is estimated to be 65,000, burning approximately half a million ha of forest [3]. Most of these fires (about 85%) take place in the Mediterranean region alone (mostly Portugal, Spain and Greece) [3][4][5][6][7]. Similarly, forest fires burn an average of 3.7 million ha of forest in the U.S. each year [1].
Forest fires can have a number of catastrophic consequences, including human casualties, destruction of property and forest assets, financial implications (fire suppression and post-fire rehabilitation costs), as well as ecological impacts. For example, in 2003, large forest fires in the districts of Castelo Branco, Portalegre and Santarém in Portugal led to the death of 21 people, with more than one thousand people wounded, and damages estimated at over one billion euros [6,7]. In another example, large forest fires in Greece led to the deaths of 80 people in 2007 and burned 1710 buildings. The estimated damage caused by these fires was 1.5 billion euros [8]. Forest fires also ravage boreal forests. Recently (summer 2014), the largest forest fires witnessed in four decades raged in central Sweden, leaving at least one person dead and burning around 37,000 ha of forest [9,10].
Forest fires also play an important role in global carbon dynamics [11][12][13] by releasing a large amount of carbon dioxide (CO2) gas into the atmosphere. The accumulation of CO2 in the Earth's atmosphere contributes significantly to climate change [14,15]. As forest fires can cause serious damage and have other undesirable consequences, it is important that proper proactive measures be taken in order to: (1) minimize the risk of such disasters happening; and (2) be able to predict the behavior of fires when they break out.
Minimizing the risk of forest fire disasters is commonly done through fuel treatment practices, such as thinning or prescribed fires, so as to reduce the amount of fuel accumulated over time [16]. Since less fuel will be available to burn when a fire breaks out, the intensity of the fire as well as its rate of spread will be greatly reduced [17,18]. This will not only make the task of containing the fire less challenging, but will also make the fire less destructive. In addition, being able to predict the behavior of a fire (e.g., its intensity and rate of spread) is important for fire managers, because it enables them to make informed decisions in fire suppression (e.g., mobilization and allocation of resources). To this end, fire behavior and growth simulation models, such as FARSITE (see [19]), are indispensable for fire managers. These models combine spatial and temporal information on topography, fuels and weather with existing models for surface fire, crown fire, spotting, post-frontal combustion and fire acceleration into a two-dimensional fire growth model [19].
To successfully model the behavior of forest fires or minimize their risk, however, good knowledge of the spatial distribution of fuels in a particular area is required [20]. On the one hand, fire managers need to know which areas have excessive fuel loads, so that they can arrange resources for thinning and prescribed fires; on the other hand, to benefit from fire behavior simulation tools such as FARSITE, several data layers pertaining to the fuel characteristics (metrics) of a particular area are needed [19]. Examples of these metrics include canopy bulk density (CBD), canopy height (CH) and canopy base height (CBH) [21][22][23]. CBD refers to the amount of fuel per unit volume (measured in kg/m³). CH is the highest height at which the canopy fuel density is greater than a critical threshold (normally 0.011 kg/m³), and CBH refers to the lowest height at which canopy bulk density exceeds the same threshold of 0.011 kg/m³ [21].
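A small worked example of these threshold-based definitions follows: given a vertical profile of canopy bulk density per height bin, CBH is the lowest height where CBD exceeds 0.011 kg/m³ and CH is the highest such height. The profile values are illustrative, not measured data.

```python
import numpy as np

CBD_THRESHOLD = 0.011  # kg/m^3, critical density threshold from [21]

heights_m = np.arange(0, 20, 1.0)   # lower edges of 1 m height bins
cbd_profile = np.zeros_like(heights_m)
cbd_profile[6:16] = 0.05            # a canopy layer from 6 m to 15 m (toy data)

above = np.where(cbd_profile > CBD_THRESHOLD)[0]
cbh_m = heights_m[above[0]]   # lowest bin exceeding the threshold -> 6.0 m
ch_m = heights_m[above[-1]]   # highest bin exceeding the threshold -> 15.0 m
```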
CBH thus marks the lowest height at which there is enough fuel to propagate a fire from the surface fuel layer to the canopy fuel layer, and it is therefore the most important factor in crown fire initiation [23,24]. Crown fires are special in that they spread several times faster than surface fires, and they burn more severely and with larger flames, making them more destructive and difficult to control. Additionally, they can occur in a variety of forest types [25,26]. As a consequence, there has been an increasing amount of literature on modeling CBH in recent years. Much of this literature is based on current state-of-the-art remote sensing technologies, particularly light detection and ranging (LiDAR) [27,28]. Unlike passive remote sensing technologies, such as aerial photography, LiDAR is an active remote sensing technology capable of penetrating forest canopies and providing 3D information about the canopy structure [29].
In a study conducted by [23], for example, CBH was estimated for loblolly pine forests at the plot level using both allometric equations and a software package known as CrownMass/FMAPlus. Unlike many studies, this study used Lorey's mean to estimate CBH at the plot level for a total of 50 sample plots. Lorey's mean is a weighted mean that uses tree basal area as a weighting factor; thus, bigger trees contribute more to the mean [30]. In this study, the difference in CBH estimated using the two methods was relatively small (1.5 m), with Lorey's mean giving a higher CBH estimate. In another study, [31] used a data fusion (LiDAR and imagery) approach for estimating CBH and other canopy fuel parameters. This study investigated which remote sensing dataset (LiDAR or imagery) could estimate CBH more accurately and whether the fusion of the two could produce more accurate CBH estimates. The results showed that LiDAR alone provides more accurate CBH estimates (R² = 0.78, RMSE = 1.63 m) compared to imagery (R² = 0.31, RMSE = 3.60 m), whereas the fusion of the two led to a small improvement in performance (R² = 0.84, RMSE = 1.44 m).
LiDAR was also successfully used to estimate CBH in the studies by [21,22]. In the former study, LiDAR metrics and field-measured fuel metrics were used to build regression models for predicting CBH and to develop maps of critical canopy fuel parameters, including CBH. The regression model for predicting CBH developed in this study had R² and RMSE values of 0.77 and 3.9 m, respectively. In the latter study, LiDAR data were partitioned into cells, and cluster analysis was performed on each classified vegetation cell to discriminate between understory and overstorey layers. CBH was taken to be the first percentile of the overstorey layer.
Although LiDAR has been used in many studies to estimate CBH and other critical canopy fuel parameters, two major limitations are consistently reported by these studies. First, most of the models proposed in these studies are species specific (e.g., [21,23,31]); second, many studies report challenges in measuring canopy fuel parameters in the field. The consequence of the former is that regression models built in those studies cannot be applied directly to forests with different species, i.e., they are limited to the area sampled in the respective studies and are likely to give unreliable results when used outside the sampled area. The consequence of the latter, on the other hand, is that there has been no standard way of measuring canopy fuel parameters in the field; hence, different studies adopt different approaches for measuring canopy parameters in the field, particularly CBH. Since measuring CBH accurately in the field is quite a difficult task [32], common practice has been to use the arithmetic mean or weighted (Lorey's) mean of tree crown base heights (CrBH) in a plot (e.g., [23,31,33]), because these two quantities are easy to measure or calculate.
Despite these challenges, previous studies have shown that LiDAR has a high potential to estimate crown fuel parameters with a high degree of accuracy. To this end, the standardization of field measurement practices is of great importance, owing to the role of field measurements in calibrating the regression models used to estimate CBH from LiDAR data. This paper seeks to address this challenge by proposing new LiDAR metrics for estimating CBH. The proposed metrics are derived (measured) directly from LiDAR height information. Unlike the common practice of using LiDAR height percentile information, the proposed metrics are not percentile-based. In particular, this paper aims to: (1) develop and test new LiDAR metrics for estimating CBH; and (2) use the developed metrics as independent variables in regression models to compare the different field-based estimates of CBH, namely the arithmetic mean, Lorey's mean and percentile scores.
Study Area
The study area is located about 340 m above sea level in Eastern Finland in the Koli Forest, which belongs to the Lieksa municipality (about 63°05′40″N, 29°48′31″E) (see Figure 1). The area is known for its white quartzite cliffs, steep topography and traditional landscapes. Over 70% of the region's surface area is forest land, and 20% is water. The forest is dominated by conifers (65% pine, 25% spruce, 7% birch and 3% other species). The main tree species are Scots pine (Pinus sylvestris L.) and Norway spruce (Picea abies (L.) Karst.). The region is sparsely populated, with a total area of 21,585 km² and a population of 175,000, which results in a population density of 9.8 inhabitants per square kilometer. Figure 1 shows a map of Finland and the location of the study site. The forests in the study site comprise both natural and managed forests, with varying degrees of management intensity. Conservation in the area is relatively young and was imposed less than twenty years ago. Both forest classes contain undergrowth.
Forest fires in Finland are mostly caught early, because the country is still densely populated enough and monitoring flights are frequent during the short hot season. However, as the example of neighboring Sweden in 2014 shows, strong winds and canopy fires can create devastating conditions in Finland as well.
Experimental Data
Two kinds of experimental data were used in this study, namely LiDAR data and field data. The following is a description of the data.
LiDAR Data
The LiDAR data used in this study were obtained free of charge from the National Land Survey (NLS) of Finland (www.maanmittauslaitos.fi). The data were acquired in 2014 at an average flight altitude of 2 km with a scan angle of ±20 degrees. The resulting average LiDAR pulse density was 0.5 pulses per square meter, with an offset of approximately 1.4 m between measurements. The mean height error was at most 15 cm, while the horizontal accuracy was 60 cm. The beam footprint at ground level was 50 cm in diameter.
Information recorded for each LiDAR pulse includes the class of the pulse (ground or vegetation), flight line number, time stamp of the outgoing pulse, X-, Y-and Z-coordinates, intensity and the back-scattering order of the pulse. An automatic classification of the pulses into ground and vegetation returns was performed, and the results were checked against a stereo model from aerial imagery by the NLS.
The LiDAR data used in the current article are of the same kind as those from the operational laser scanning of forests in Finland. In 2010, Finland decided to scan the entire country with airborne laser scanning in ten-year cycles. The density chosen for this is roughly 0.5 returns per square meter. In the past, the scanners mostly used single-pulse mode, but currently, multiple-pulse mode is widely adopted. As the goal of the current research is to calculate forest fire potential maps for the whole country, it has not been possible to adjust scanning parameters, such as pulse density.
Field Data
Field measurements for crown fuels were collected in April 2014 for 26 circular plots equally representing the dominant fuel types in the study area. Each plot covered an area of 256 m² (radius = 9.03 m). A survey-grade Trimble GPS receiver was used to navigate to the plots and to georeference plot centers, with each plot center position derived from an average of at least 100 Global Navigation Satellite System (GNSS) fixes.
Plot boundaries were measured using the Haglöff Vertex Laser Range Finder, and the same instrument was used to measure the height of each tree. The diameter at breast height (DBH) was measured for all trees using a diameter tape. For each tree with DBH ≥ 8 cm in a plot, the following properties were recorded: tree class, species, DBH, height, crown base height (CBH) and crown class (dominant, co-dominant, intermediate and suppressed).
CBH was taken to be the distance between the ground and the lowest live branch in the crown of a tree. Small isolated branches with leaves, separated from the main crown, were not considered as indicating the crown base height. The Haglöff Vertex was used to measure CBH. The crown class of each tree was recorded as described above. Table 1 presents a summary of the field plots used as ground data.
In addition to the CBH measurements for individual trees in each plot, five pictures were taken from the plot center: four facing the cardinal directions (N, S, E, W) and one facing the sky. These photos were taken to serve as a visual aid later when analyzing and interpreting the experimental data. Figure 2 shows examples of plot photos.
Methods
The method for estimating CBH from LiDAR data proposed in this paper is based on the idea of a moving voxel. A voxel, or volumetric pixel, is the 3D analogue of a pixel. The use of voxels in estimating CBH and other forest properties from LiDAR data has been reported in several past studies. In these studies, the emphasis has been on using voxels to characterize the vertical structure of the canopy by dividing the LiDAR data into vertical bins (voxels) and counting the number of LiDAR hits in each bin (e.g., [34,35]). This paper takes a different approach and uses a moving voxel to locate gaps in the LiDAR point cloud (and hence, in the respective forest) and then estimates the height of these gaps from the ground. This information is then used to derive LiDAR metrics, which are used as independent variables in a linear regression to estimate CBH. The main assumption behind the method is that tree crowns tend to block most of the LiDAR pulses falling on them, thus creating a partial gap underneath the crown (see Figure 3). The idea is to use a moving voxel to locate these gaps and estimate their height relative to the ground. As will be shown in the next section, the heights of these gaps correlate strongly with field-measured CBH values and are the basis for the LiDAR metrics used for estimating CBH in this paper.
To estimate CBH from LiDAR data, three main steps are performed: (1) initialization and data pre-processing; (2) searching for gaps and estimating their height (gap mapping); and (3) LiDAR metric generation and CBH estimation.
Figure 3. Illustration of how a partial gap is formed (marked with a G) below a tree crown (a) in a LiDAR point cloud (b), due to most LiDAR pulses being blocked by the crown and the absence of reflecting objects between the crown and the ground; (c) shows a partial gap formed below the canopy in a real LiDAR point cloud.
Initialization and Data Pre-Processing
The initialization and data pre-processing step sets the stage for the subsequent steps. In this step, the LiDAR data are normalized, i.e., the DTM elevation of the area is subtracted from the elevation of every point, so that each point represents height above the ground; then, all points with a height of less than 0.5 m (considered ground points) are discarded. Next, the parameters governing the operation of the method are determined and initialized. These parameters include: voxel width, voxel height, step size and point threshold. Voxel width specifies the width and length of the voxel (i.e., its base), while voxel height specifies the height of the voxel.
Step size specifies the distance (in meters) that the voxel moves horizontally (in the x-y plane), while point threshold specifies the maximum allowed number of points in a voxel for it to be considered a gap. Suitable values of these parameters for a given LiDAR point cloud are determined by experimenting with the field and LiDAR data. The values used in this study were 8 m, 2 m, 1 m and 3 points for voxel width, voxel height, step size and point threshold, respectively. The choice of voxel width is influenced by the point spacing in the LiDAR point cloud. If the width is too small relative to the point spacing, there will be too many false gaps; if the width is too large, small gaps will be missed. Similarly, voxel height and step size are chosen such that both small and large gaps are detected. Finally, if the point threshold is too small, very few gaps will be detected; if it is too high, there will be a large number of false gaps.
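A minimal sketch of this initialization follows, assuming the point cloud is an (N, 3) array of (x, y, z) coordinates and that a dtm(x, y) callable returning ground elevation is available (both are assumptions for illustration); the parameter values follow the ones used in this study.

```python
import numpy as np

VOXEL_WIDTH = 8.0      # m, width and length of the voxel base
VOXEL_HEIGHT = 2.0     # m
STEP_SIZE = 1.0        # m, horizontal move per step
POINT_THRESHOLD = 3    # max points in a voxel for it to count as a gap
MIN_HEIGHT = 0.5       # m, points below this are treated as ground

def normalize(points: np.ndarray, dtm) -> np.ndarray:
    """Convert elevations to heights above ground and drop ground points."""
    heights = points[:, 2] - dtm(points[:, 0], points[:, 1])
    normalized = np.column_stack([points[:, 0], points[:, 1], heights])
    return normalized[heights >= MIN_HEIGHT]
```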
Gap Mapping
In this step, gaps in the LiDAR point cloud are located, and their heights relative to the ground are estimated. This is achieved by the use of a moving voxel in the search space. The search space is taken to be the box enclosing the pre-processed LiDAR point cloud, with its origin at the point (Easting_min, Northing_min, 0), where Easting_min and Northing_min refer to the smallest easting and northing values in the point cloud, respectively (marked as P1 and P2, respectively, in Figure 4). Two kinds of movement are employed in the search space: (1) horizontal movement and (2) vertical movement. Horizontal voxel movement is used to detect gaps, while vertical voxel movement is used to estimate the heights of the detected gaps.
Horizontal Voxel Movement
The goal of the horizontal voxel movement is to locate gaps in the search space. Starting at the origin of the search space, the voxel is first repeatedly moved along the x-axis (easting) in steps equal to the step size. At each step, the points enclosed in the voxel are counted. A gap is detected when two conditions are met (see Figure 5): first, the number of points in the voxel is less than or equal to the point threshold and, second, the number of points above the voxel is greater than the point threshold (see Figure 5a). The latter condition ensures that the detected gaps are not due to the absence of vegetation at the corresponding locations (see Figure 5c). After a gap has been detected, the next step is to estimate its height. This is achieved using vertical voxel movement.
Vertical Voxel Movement
The aim of the vertical voxel movement is to estimate the heights of the gaps that have been detected. To estimate the height of a gap, the voxel is repeatedly moved upwards in steps equal to the voxel height until the number of points in the voxel exceeds the point threshold (see Figure 5). The height of the gap is then given by voxel height × N, where N is the number of upward steps taken.
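The sketch below combines both movements for a single voxel position; it is a simplified reading of the procedure above (not the authors' implementation), reusing the parameters from the initialization sketch. `pts` is the normalized point cloud and (x0, y0) the corner of the voxel footprint.

```python
import numpy as np

def points_in_column(pts, x0, y0, w):
    """All points whose (x, y) fall inside the w x w voxel footprint."""
    m = (pts[:, 0] >= x0) & (pts[:, 0] < x0 + w) & \
        (pts[:, 1] >= y0) & (pts[:, 1] < y0 + w)
    return pts[m]

def gap_height(pts, x0, y0, w=8.0, h=2.0, threshold=3, max_h=40.0):
    """Return the gap height at this position, or None if there is no gap."""
    col = points_in_column(pts, x0, y0, w)
    in_first = np.sum(col[:, 2] < h)   # points in the bottom voxel [0, h)
    above = np.sum(col[:, 2] >= h)     # points anywhere above it
    # Gap condition: few points in the voxel, but vegetation present above.
    if in_first > threshold or above <= threshold:
        return None
    # Vertical movement: raise the voxel until it fills with points.
    n = 1
    while n * h < max_h:
        in_voxel = np.sum((col[:, 2] >= n * h) & (col[:, 2] < (n + 1) * h))
        if in_voxel > threshold:
            return n * h  # voxel height x N
        n += 1
    return None
```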
The outcome of the gap mapping step is a gap height raster with a cell size equal to the step size and its origin (top left corner) at (Easting_min + voxel width/2, Northing_max − voxel width/2); note the starting point of the voxel in Figure 4. Figure 6 shows a portion of the gap height raster. Note that to speed up processing, the search for gaps can be confined to the space extending a few meters from the plot boundary, as shown in Figure 7. Figure 8 shows the degree of correlation between the estimated gap heights and the field-measured CBH values in 24 of the field plots. The plotted gap height values for each plot were obtained by taking the highest gap height values, one for each field-measured tree CBH value, and matching them by magnitude (such that the highest goes with the highest, etc.). Plots with ID numbers 3 and 5 do not appear in the figure, because no information could be extracted from the LiDAR data corresponding to these plots. A possible reason for this anomaly is the smaller number of trees present in these plots (see Table 1 and Figure 9) and, consequently, fewer LiDAR points. Following this anomaly, the subsequent analysis and the results reported in the following sections are based on only the 24 plots shown in Figure 8.
LiDAR Metrics Generation and CBH Estimation
In this step, the LiDAR metrics to be used in CBH estimation (the independent variables) are generated. These metrics are then combined with the field-measured CBH values to form the dataset used for estimating CBH through linear regression.
LiDAR Metric Generation
To generate the LiDAR metrics, the gap heights raster produced in the previous step is used. The points corresponding to each field-measured plot are extracted from the raster by taking all the points that satisfy (X − r)² + (Y − s)² ≤ R², where R is the radius of each plot (9.03 m); r and s are the x- and y-coordinates of the center of the plot, respectively; and X and Y are the easting and northing values of the points in the gap heights raster. This is equivalent to placing a hypothetical cylinder of the same radius as the plot on the plot, such that the axis of the cylinder passes through the plot center, and taking all the points inside the cylinder.
Because of the small step size used while generating the gap heights raster, there will be a high degree of duplication in the values extracted for each plot. Therefore, the next step is to remove duplicates from the values. To remove duplicates, the values are sorted (in either descending or ascending order) to bring equal values together into groups, and one value is picked from each group (see Figure 10). After the duplicates have been removed, the following metrics (percentiles) are computed from the remaining values in each plot: g25, g50, g75 and g90. These metrics serve as the independent variables for estimating CBH.
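A minimal sketch of this metric-generation step, assuming the gap-height raster has been flattened into coordinate and value arrays (names are placeholders):

```python
import numpy as np

def plot_metrics(gap_x, gap_y, gap_h, r_center, s_center, radius=9.03):
    """g-percentiles of the deduplicated gap heights within one plot."""
    # Circle membership test: (X - r)^2 + (Y - s)^2 <= R^2.
    inside = (gap_x - r_center) ** 2 + (gap_y - s_center) ** 2 <= radius ** 2
    # np.unique sorts and removes duplicates in one step.
    values = np.unique(gap_h[inside])
    return {f"g{p}": np.percentile(values, p) for p in (25, 50, 75, 90)}
```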
CBH Estimation
To estimate CBH, several regression models were fitted. For the purpose of model fitting, several variables representing CBH at the plot level were derived from the field data. These variables served as dependent variables in the regression models and include: (1) Lorey's mean (LOR); (2) the arithmetic mean (AVG); (3) the 40th percentile (P40); and (4) the 50th percentile (P50).
For comparison, we fitted models using both the metrics introduced in this paper and traditional percentile LiDAR metrics (the coefficient of variation (CV), percentage of first returns, maximum height, mean height and the 25th, 50th, 75th and 90th percentiles) as independent variables.
Model fitting was done in the MATLAB computing environment [36], whereby forward stepwise regression was used to automatically select variables for each model. Variable selection was based on the F-test. The minimum p-value for a variable to be removed was 0.1, while the maximum p-value for a variable to be added was set to 0.05. Leave-one-out cross-validation (LOOCV) was used to assess the predictive value of each regression model. For this purpose, the root mean squared error of cross-validation (RMSE_cv) was used.
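For readers who want to reproduce the LOOCV evaluation outside MATLAB, the sketch below shows how RMSE_cv can be obtained for an already-selected variable set (the stepwise selection itself is not reproduced here). X is assumed to hold the chosen LiDAR metrics and y the field-derived plot-level CBH values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def rmse_cv(X: np.ndarray, y: np.ndarray) -> float:
    """Root mean squared error under leave-one-out cross-validation."""
    pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    return float(np.sqrt(np.mean((y - pred) ** 2)))
```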
Results
The CBH estimation results obtained using the LiDAR metrics introduced in this paper as independent variables are shown in Table 2 and Figure 11. The results obtained using traditional percentile LiDAR metrics as independent variables, computed based on [21], are shown in Figure 12 and Table 3. In each case, the best model for each dependent variable (i.e., LOR, AVG, P40 and P50) obtained using stepwise regression is shown.
Effect of Voxel Width and Point Threshold
To study how the different dependent variables perform under various voxel dimensions and point thresholds, the effect of voxel dimensions was examined by varying the voxel width from 1 to 10 m, while keeping the point threshold and voxel height constant at three points and 2 m, respectively, and observing how the model RMSE for each dependent variable was affected. Similarly, the effect of the point threshold was studied by varying its value from 1 to 10 points. Figure 13 shows the effect of voxel width and point threshold on the model RMSE for the four dependent variables.
The results in Figure 13 show that LOR outperformed the other variables, with lower and more consistent RMSE values in both cases. In the case of voxel width (Figure 13a), the best (most consistent) RMSE values are obtained in the 5-8-m range. Outside this range, the RMSE values vary greatly among the variables. This behavior can be explained by the effects of very large and very small voxels. With a large voxel, for a given point threshold, legitimate gaps will not be detected (false negatives), while with a small voxel, illegitimate gaps will be detected (false positives). For the LiDAR data used in this study, suitable values of voxel width are in the range of 5-10 m. On the other hand, all four variables are affected in a similar manner by changes in the point threshold (Figure 13b). Suitable values for the point threshold in this case are those in the range of 3-6 points. These results further demonstrate the suitability of LOR for representing CBH at the plot level.
Discussion
Comparison of the four dependent variables (LOR, AVG, P40 and P50) showed that LOR gave the simplest model in either case (using traditional percentile LiDAR metrics or the proposed metrics) (see Tables 2 and 3). Furthermore, in both cases, the LOR-based model had the smallest RMSE_cv, which was very close to the corresponding RMSE value. This implies that, in both cases, the LOR-based models have better predictive value than models based on the other dependent variables. This observation is in agreement with the results reported in previous studies (e.g., [23]) and further supports the robustness of Lorey's mean in CBH estimation over the commonly-used arithmetic mean. However, Lorey's mean should be used with caution due to its tendency to be dominated by big trees. This implies that, in some cases, the CBH estimates obtained using Lorey's mean may be higher than the actual CBH; that is, the minimum canopy bulk density required for the propagation of a surface fire to the crown could be reached at a lower height than that estimated using Lorey's mean.
With the exception of LOR, the remaining dependent variables gave more or less similar results in both cases, with higher RMSE and less consistent RMSE_cv values being evident in the models based on the traditional percentile LiDAR metrics. This similarity can be explained by the high degree of correlation among the variables, as shown in Figure 14. This observation implies that the use of different field estimates of CBH, owing to the lack of standardized field methods for estimating CBH, does not have a profound effect on the final CBH estimation results. Despite this fact, LOR and AVG should be used with caution, because the former tends to be biased towards big trees, while the latter is susceptible to outliers. Point-cloud-based voxels can also be seen as another, relatively objective way of defining CBH. The lowest degree of correlation is seen between P40 and LOR (0.88) (Figure 14b), while the remaining variable pairs exhibit correlation values of over 0.9. With this high level of correlation, it is expected that models based on either of these pairs of variables exhibit a high degree of similarity. In this respect, the two models based on the proposed metrics that used AVG and P50 as dependent variables (Table 2) exhibited a higher degree of agreement compared to similar models using traditional percentile LiDAR metrics (see Table 3).
Performance of the models based on the traditional percentile LiDAR metrics (see Table 3) compares well with results obtained in previous studies (e.g., [21,34]), although there are significant differences in the number and type of independent variables in the models. For example, [21] used the same percentile LiDAR metrics for estimating CBH and obtained similar results (R² = 0.77, RMSE = 3.9 and RMSE_cv = 4.1), but some of their variables (the coefficient of variation (CV) and the percentage of first returns (D)) did not appear in the regression models reported in this paper. A possible explanation for this difference is the distribution (characteristics) of the LiDAR data used in the two studies, which in turn is affected by tree species and the season of data collection, among other factors. Conversely, the models based on the proposed metrics (see Table 2) gave better results on all four criteria (RMSE, RMSE_cv, R² and p-value): they had smaller RMSEs, higher R² and smaller, more consistent RMSE_cv values.
Although our results compare well with previous similar studies, the main limitation of this study is the small number of field sample plots used (24 plots), which is one possible source of model error. In contrast, previous studies have used significantly larger numbers of field sample plots (e.g., [21] (101 plots); [34] (62 plots); [23] (50 plots)). In another example, [37] used the Sparse Bayesian regression implemented in ArboLiDARTools [38] to build a linear model to estimate CBH from cumulative percentile variables of the LiDAR point cloud and validated the results with laser range-finding and a hypsometer on the ground in 250 sample plots. The RMSE of CBH estimated from LiDAR was 1.03 m.
With more sample plots, and therefore more redundancy in the training data, we anticipate better results for the current method as well. Other possible sources of error include measurement error and instrument error.
Conclusions
This paper has proposed new LiDAR-based metrics for estimating CBH. Several field-based plot-level tree CBH variables, namely Lorey's mean, the arithmetic mean and the 40th and 50th percentiles, were compared to determine whether there are any significant differences in using one variable over another.
The results showed that using Lorey's mean to estimate CBH leads to a slight improvement in accuracy compared to the other variables; no significant differences, however, were found among the rest of the variables. The use of Lorey's mean over the other variables will, however, depend on the availability of the information required to compute it, namely the diameter at breast height (DBH). This is because Lorey's mean is a weighted average with the basal area of individual trees as the weighting factor, so bigger trees contribute more to the mean. Since CBH marks the minimum height at which there is enough fuel to propagate fire from the surface fuel layer to the canopy fuel layer, the use of Lorey's mean has the potential to overestimate CBH due to the influence of bigger trees: the minimum canopy bulk density required to propagate a surface fire into the crown can be reached at a lower height than the CBH obtained using Lorey's mean. Lorey's mean should therefore be used with caution.
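For concreteness, a minimal sketch of the Lorey's mean computation described above (the units are assumptions: DBH in centimeters, CBH in meters):

```python
import numpy as np

def loreys_mean_cbh(dbh_cm, cbh_m):
    """Basal-area-weighted (Lorey's) mean CBH for one plot.

    Basal area g_i = pi/4 * DBH_i^2, so bigger trees dominate the mean,
    which is the source of the potential CBH overestimation noted above.
    """
    dbh_m = np.asarray(dbh_cm, dtype=float) / 100.0   # cm -> m
    g = np.pi / 4.0 * dbh_m ** 2                      # basal area [m^2]
    return float(np.sum(g * np.asarray(cbh_m, dtype=float)) / np.sum(g))
```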
The method for estimating CBH proposed in this paper gave better results (lower and more consistent RMSE and RMSE_cv values and higher R² values) than the traditional percentile LiDAR metrics that have been widely used in previous studies. The main advantage of this method is that the metrics used for estimating CBH are derived from the estimated heights of gaps below trees, calculated directly from the LiDAR data. The gap heights estimate the distance of the lowest tree branches from the ground and correlate strongly with field-measured CBH values of individual trees. Moreover, a by-product of the processing, a raster of gap heights, gives valuable information about the vertical structure of the forest stand below the canopy: which areas are closed (contain ladder fuels) and hence may need immediate attention (e.g., thinning), and which areas have low fuel volumes and could be modified to create a fuel break with relatively little manual labor and cost. On the other hand, the main limitation of the proposed method is that it is not suited to areas with pronounced understory layers (e.g., tropical rainforests), because it relies on detecting gaps that originate from the ground.
The method for estimating CBH from LiDAR data proposed in this paper gave better results than the use of traditional percentile LiDAR metrics; it can therefore potentially be applied to other fire-prone areas, provided that suitable parameters are determined from the LiDAR data. The main limitation of the study was that the number of sample plots used (24) was relatively small compared to similar previous studies. It would therefore be interesting to test the method further with larger numbers of sample plots in different kinds of forests and in different seasons.
"Environmental Science",
"Mathematics"
] |
DEVELOPMENT OF POETRY TEACHING MATERIALS BASED ON CREATIVE PROCESS
This study aims to address the low quality of Indonesian language learning outcomes in senior high schools (SMA), especially for the topic of poetry texts. Its results are intended to assist the government in improving the implementation of the 2013 curriculum at the high school level, one weakness of which is the limited availability of teaching materials, including poetry text teaching materials. Learning this material is expected to encourage students to express their thoughts, feelings, and ideas through beautiful, rhythmic language with literary value that does not offend others. The method used in this research is research and development, involving five literary writers who are productive in producing poetry texts. The developed teaching material was then tested with students of SMA Negeri 1 Manonjaya Tasikmalaya. The teaching material, based on the poets' creative process in producing works of poetry, is combined with the basic competencies of the curriculum and a scientific mode of presentation. Evaluated by academics and practitioners against content, presentation, language, and graphic criteria, it meets the eligibility criteria for teaching material in high school. Testing of the material showed that it encourages students to produce quality poetry texts, and that Indonesian language learning proceeds effectively toward its goals.
Introduction
The development of poetry text teaching materials in high schools is very important because, in their teens, students' ability to express ideas, thoughts, and feelings is directed at the development of creativity. The competencies mandated in the curriculum are to recognize, examine, and produce poetry texts, and the intended learning outcome is for students to produce works in the form of poetry texts. However, many poetry works made by students are still plagiarized, of low quality, or liable to offend other parties. High school students must learn to avoid writing poetry that does not reflect the attitude of a student who entrusts a moral message to readers. Good teaching materials are therefore needed that are relevant and in keeping with current developments in society. The development of relatively new teaching materials is still being carried out (Brian, 2012: 143; Du Toit, 2014: 25), including materials drawn from the field and the environment.
Poets are writers whose works have been accepted as excellent literary works of art. Poets write poetry through a creative and imaginative process, armed with an understanding of the work. Teaching material extracted and developed from the creative process carried out by poets will be able to challenge and encourage high school students to develop the ability to express ideas, feelings, and thoughts properly. Poetry text teaching material presented on the basis of the creative process (Du Toit, 2014: 25; Vass, 2001: 102) is not yet available, so the results of this study will be very useful, both for scientific development and for learning poetry texts in high school. This research starts from a study of the need for teaching materials in high school, followed by a descriptive study of the creative process carried out by poets. Using the results of a study of the 2013 Curriculum concept, of the need for teaching materials, and of the poets' creative process, a prototype of poetry text teaching materials based on the creative process was developed. The prototype was then validated by experts and practitioners, revised, and finally tested. The testing took the form of lessons given to high school students in accordance with the teaching material they were supposed to learn.
Teaching Material
Teaching materials are materials used by students to be able to learn. Teaching material is a set of information that must be absorbed by students through enjoyable learning (Iskandarwassid and Sunendar, 2011: 171). This means that in preparing teaching materials students are expected to really feel the benefits of teaching materials or materials after they learn them. Thus, teaching material is a set of learning tools or tools that contain learning materials, methods, boundaries, and ways to evaluate systematically and attractively designed in order to achieve the expected goals, namely achieving competence and subcompetence with all its complexity (Lestari, 2013 : 1).
Teaching materials should make it easier for students who have difficulty understanding learning material. They should meet students' needs, present information that contains all the material or theory to be learned, be complete enough that students no longer need to look for other sources, follow technological developments, and be easy to use. Teaching material is one of the most important parts of the learning process because it contains the information, instructions, processes, and evaluations that support learning activities toward the goal (Nag et al., 2018; Hamdani, 2011; Kusmana et al., 2019). Therefore, every element, whether instruction or exposition, presentation, use of language, or writing graphics, should be helpful and friendly to the user. Good teaching materials not only contain knowledge, but are developed with quality and on a theoretical foundation. To produce teaching materials capable of carrying out their functions and roles in effective learning, they need to be designed and developed using the latest approach.
The development of the latest teaching materials uses the Content and Language Integrated Learning (CLIL) approach (Doiz, 2014: 209-224), with the stages: (1) establishing context; (2) examining models/examples; (3) guided construction; and (4) independent construction, carried out through a scientific procedure following the 5M pattern of observing, questioning, gathering information, reasoning, and communicating (Kusmana, 2016: 9; Yani, 2014: 110). Based on CLIL, the teaching materials used to develop students' competence in producing poetry texts can be developed from examining the processes carried out by poets in producing poetry.
Poetry
Poetry text is one of the materials that teachers can use to develop students' basic competencies, and producing poetry texts is one of the learning outcomes of Indonesian subjects in high school. Poetry, according to Waluyo (2003: 1), is a literary work whose language is condensed, shortened, and given rhythm through unified sound and imaginative word choice. Poetry is a form of work that expresses the thoughts and feelings of poets imaginatively and contemplatively (Setiawan, 2017; Taisin, 2014); it can represent the writer's thoughts and feelings, expressed through language shaped by the physical and inner structure of the writer. Suminto A. Sayuti (2008: 3) states that poetry is a form of language expression that takes into account the aspects of sound within it, and that expresses the imaginative, emotional, and intellectual experience of the poet, drawn from his individual and social life and conveyed through a particular choice of technique, so that it can evoke certain experiences in the reader or audience. From this understanding we can see that poetry is created by a poet to convey a message to the reader, either implicitly or explicitly, and to fulfill the poet's inner satisfaction. In Malay, there was originally only one term, "rhyme", corresponding to poezie or gedicht; poezie (poetry) is the type of literature paired with the term prose. Suryaman (2005: 20) states that poetry is the work of emotion, imagination, thought, ideas, tone, rhythm, sensory impressions, word order, figurative words, and density, mixed with feeling and directed at the reader's attention. Poetry, then, is the expression of one's heart, whether sad or happy, and should use figures of speech so that it is interesting and the reader feels as if he has experienced what happens within the poem. Similarly, according to Pradopo (2012: 7), poetry is an expression of thought that evokes feeling and stimulates the imagination of the five senses in rhythmic wording; a poem is a recording and interpretation of important human experience, transformed into its most memorable form. Another opinion, put forward by Warsidi (2009: 22), states that poetry as literary inventiveness is a manifestation of the poet's experiences, expressed sincerely, as they are, truthfully, and with full imagination; such sincerity, richness of imagination, and distinctive language make the expressed experiences alive and captivating.
Creative Process
The creative process comprises the stages through which a quality work, distinct from other works, is produced. The work requires time and staged processing to become a creative work. Creative processes refer to the sequence of thoughts and actions that lead to creative products (Lumbart, 1994). A poetry text is one such creative product, because its creation cannot happen instantly, without a process. This is in line with Noor (2012: 230-232), who states that a poet never departs from empty space or emptiness in creating poetry.
In producing creative work, a poet carries out a process of contemplation, connecting his experience and thoughts as a reality with expressions that can also be thought and felt by others, even though they differ. Thus all experiences that befall the poet, both spiritual and physical, are described visually through creative words. As a creative work, poetry has characteristics that reflect the poet's creativity in carrying out the creative process (Anindita et al., 2017; Nag et al., 2018; Kusmana et al., 2019). The poet's creativity is a process of internalizing the reality faced or experienced and disclosing it to the reader. Good poetry is thus the result of a creative process that illustrates the thoughts and feelings of a poet in interpreting reality into a work that can be read by others. Environment and atmosphere play an important role in a poet's process of creating poetry (Noor, 2012: 262-266; Setiawan, 2017: 88-99): even though ideas can come from anywhere and at any time, writing them into creative work requires a special atmosphere.
In the poetry text there is something that can be described, either explicitly or implicitly. The picture not only presents the atmosphere, but also depicts color, weather, sound, and even smell. Viewpoints in seeing, interpreting, and describing something are aspects related to the poet's perception and subjectivity. Everything related to natural phenomena is a metaphor that can be used in expressing the experience of the soul in poetic words. Poetry works are fiction because events experienced by the poet are in words and no longer in their daily lives (Damono, 2012: 265-266;Setiawan, 2017). Therefore, literary works in the form of poetry cannot be measured to the size commonly used in everyday life. The creative work of poetry can be understood intelligently because it is written in the form of words, but it would not make sense if returned to real life. The creative work uses imagery to symbolize reality in the form of poetic words and can be enjoyed by readers. Poetry as a creative work that has the value of creativity was produced by a poet as a creative process.
Method
This research uses the research and development (R&D) method as developed by Borg & Gall (1983). The intended outcome of this study is a teaching material product that is valid and effective for use in learning (Sukmadinata, 2012). The development model used is the ADDIE model, which consists of five stages: Analysis, Design, Development, Implementation, and Evaluation (Aldoobi, 2016: 68). Accordingly, the research procedure consisted of analyzing the need for teaching materials in high school, analyzing competency standards, analyzing the results of interviews with productive poets, developing the teaching materials, validating them, and testing them. The development stage consisted of developing, validating and revising teaching materials for the sub-material of identifying poetry texts. The evaluation phase was based on limited trials to determine the effectiveness of the teaching materials in learning poetry text material.
There are two categories of research subjects, namely research subject analysis of the availability of teaching materials and the results of interviews with poets about the creative process in producing quality poetry texts, and analysis of the need for developing poetry text teaching materials that are preferred by students. From this, the research subjects used were five Indonesian poets who were productive in producing poetry texts. Meanwhile, the subject of research at the time of product validation through the prototype assessment of poetry text teaching materials based on the poet's creative process was Indonesian academics and education practitioners. Furthermore, the research subjects in conducting a prototype trial of teaching materials were students at SMA Negeri 1 Manonjaya Tasikmalaya.
The instruments used in this study were an interview guide to explore the creative process carried out by poets in producing quality poetry, analytical guidelines for analyzing poetry texts, validation guidelines to measure the validity of the teaching materials, and tests to measure learning success with the prototype poetry text teaching materials based on the creative process. The data collected from the interviews were analyzed to synthesize the process of writing poetry, while the data from the analysis of poetry texts served as a starting point for teaching poetry texts to high school students. Test data from the learning trials, used to measure the effectiveness of the teaching materials, were processed using t-tests (significance tests of two means).
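As an illustration of the significance test mentioned above, a paired t-test of pre-test and post-test scores can be run as follows; the scores here are invented placeholders, not data from the study:

```python
from scipy import stats

pre = [62, 70, 58, 65, 71, 60, 68, 64]    # illustrative pre-test scores
post = [75, 82, 70, 78, 85, 72, 80, 77]   # illustrative post-test scores

# Paired t-test of the two means; the study compares the resulting
# t-value against the critical table value.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```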
Result
Based on the results of interviews with the poets obtained information about the source of ideas in writing poetry. The source of ideas for writing poetry is based on impressive everyday events. These everyday problems are considered disturbing the poet's conscience but he can only express it through words or poetry. It is also possible that the source of the idea of the poem came from an everyday event but was quite memorable or impressive to the poet he received through the five senses. Poetry ideas can also be sourced from a life based on poetry readings and experiences.
The idea of writing poetry aside from social issues that bother poets or become something that is quite memorable for the poet himself. For example, the idea of religious poetry comes from the experience of worship since childhood. Poetry with social topics originates from social problems which deeply disturb the poet's conscience. The idea of poetry can also be sourced from the poet's empathy in the social environment he observes, so that the idea of writing poetry is lifted from everyday experience or problems from those close to the poet. Even the idea of poetry is in the form of a poet's view of the nature of life and all its contents.
The idea of writing poetry can also come from objects observed by the senses but impress the poet. The idea, for example about panoramas, songs, music, culinary, books read, films watched, or memorable experiences while traveling or while traveling. However, the idea of writing poetry can also arise because there are competition activities, so that they are adjusted to the theme of the competition. Usually, from a competition the topic or theme of the competition is determined so that the poet uses the source of ideas from the theme determined by the committee.
With regard to the creative process, the interviews showed that the poets share the view that a writer of poetry must understand the elements of poetry. Understanding literary elements is a prerequisite for writing literature, and it colors the poetry a writer produces: the elements of typography, rhyme, rhythm, imagery, diction, and language style. A poetry writer must also understand the characteristics and forms of poetry, so that when writing poetry his creativity does not stray too far from the conventions of the genre. Poetry writers must also understand the patterns for writing rhymes, gurindam, carmina, traditional verse, or free poetry, and should be able to recognize the types and forms of poetry, for example symbolic poetry and narrative poetry, and to distinguish poetry from prose.
From the understanding of the literary elements, the characteristics of poetry, the types of poetry, and the forms of poetry, the poetry writer's creativity grows in producing a work of poetry. Poems made by poet writers are inspired by the characteristics, types and forms of poetry that have been circulating so far. It is possible, novice poet writers have the creativity to work based on their analysis of the works of poetry that exist today. From this analysis poetry can be produced based on the poet's culmination of phenomena and mastery of the types, shapes and characteristics of poetic texts. Thus, in general poets write poems that are beautiful, good, and their contents are stable because they understand the nature of poetry, understand the characteristics of poetry, types and forms of poetry. However, there are also poets who when producing poetry do not depart from an understanding of the elements of poetry, but are based on the poet's intuition of the phenomena witnessed or experienced which are expressed into beautiful expressions.
The creative process undertaken by poets in writing poetry is: (1) absorbing information; (2) cultivating and pondering it; (3) obtaining or producing creative ideas; (4) reflecting creative ideas into a work; and (5) elaborating. In the initial stage, the poet absorbs information obtained both from natural (external) phenomena and from the thoughts and feelings within the poet (internal). From this information, the poet then carries out a settling (incubation) process, viewed both from his own standpoint and from that of the audience. This process depends on the sharpness of the poet in reflecting information into poetic form. When writing poetry, the poet draws on his own knowledge of poetry texts and on poetic ideas as works of art. The final stage of the creative process is editing, which is strongly determined by the poet's knowledge of the building elements of a poem and his experience in producing poetry as a creative work. The editing stage yields a quality poem as the end result of the poet's creative process.
The process of pondering or settling a poet in creating poetry is determined by intellectual ability, insight, and literary experience. In settling phenomena or thoughts and feelings associated with instincts and the sharpness of feelings of a poet in processing and pondering problems. Therefore, at this stage there are poets who in a short time can produce poetry from the creative process but there are also poets who need a longer period of time.
At the editing stage of poetry as an initial creative product, a poet uses his knowledge of the use of poetry-building elements. Knowledge of these elements can beautify poetry so that the application of diction which has a beautiful rhyme in a poem or even produce an atmosphere of creative poetry. The editing process also depends very much on the experience of the poet in producing poetry. From the experiences experienced by the poet in displaying the creative work, a beautiful poem will be produced that is also pleasant to read or presented to the public.
Based on the exposure of the poet's experience in writing poetry it can be illustrated that the creative process undertaken is: (1) capturing information, both external phenomena (outside) or thoughts and feelings of self (inside); (2) processing information to settle and incubate; (3) produce poetry texts with stimulus from poetic ideas and poetic knowledge; (4) editing based on reflection on the fulfillment of poetry-building elements so that they can be understood and enjoyed by readers. The intended creative process can be described as the following picture.
Discussion
Based on the exposure of the creative process carried out by a poet associated with basic competencies in the curriculum, teaching materials can be made that combine the two. Poetry text teaching materials whose competency development starts from knowledge to skills with its output is poetry text by students combined with creative processes. Therefore, in developing poetry text teaching materials for high school students it is necessary to consider the creative process carried out by poets. In general, poetry text teaching materials are developed based on the understanding of poetics on basic competencies that must be mastered.
The basic competencies set out in the curriculum are: (3.16) identifying the atmosphere, themes, and meanings of poems contained in published poetry anthologies or collections that are performed or read; (4.16) demonstrating (reciting or musicalizing) a poem from a poetry anthology or collection, paying attention to vocals, expression, and intonation; (3.17) analyzing the building elements of poetry; and (4.17) writing poetry with attention to its building elements. These basic competencies are combined with the creative process carried out by the poet into teaching material; the merger can be represented in a concept map. The poetry text teaching materials for high schools described in the concept map were then validated by language learning experts and by practitioners (Indonesian language teachers in high schools). Validation was based on a review of the content's compatibility with the curriculum, presentation, language, and graphics. Scores were obtained from the validators for each of the four components; the average validation score reached 96.75 out of a total of 100, meaning that the developed teaching material falls in the category of very feasible for use in learning by high school students.
From the trial of the poetry text teaching materials developed on the basis of the poets' creative process, using a pre-test/post-test design, the obtained t-value was greater than the table value. This means that the difference in the average scores achieved by students after learning poetry texts through the developed teaching materials can be regarded as statistically reliable.
The still-limited poetry text teaching materials for high school students can be enriched by teachers' efforts to develop teaching materials based on the creative process carried out by poets in producing works of poetry. Teaching materials must suit current conditions so that students can easily understand the material presented in learning. The poets' accounts of the creative process show that knowledge of poetry and literary experience both play a role in producing creative works (Kusmana et al., 2019). Poetic knowledge helps the poet in the settling of phenomena or experiences, while knowledge of the building blocks of a poem gives the poet material for reflection and for improvements at the revision stage of the work.
The creative process of each poet is different, but in general it is much the same: getting ideas from phenomena or experiences, undergoing an incubation process, producing the work with stimuli from the imagination, and finally revising by rereading, changing diction to improve rhyme, reordering lines, and checking the overall meaning until a finished creative work is obtained.
The development of poetry text teaching materials based on the creative process is an alternative way of providing teaching materials suited to the needs of students and teachers. Learning based on poetry text material has as its outcome students producing poetry texts as creative works, and the poetry produced by students is more varied than with the teaching materials contained in textbooks. The teaching material was developed in step with the creative process carried out by poets in producing poetry. Learning oriented towards student work products should be grounded in the professional process of producing such creative work, so that the stages of producing a product mirror those carried out by professionals. The basic competencies stated in the curriculum nonetheless remain the main material, as the minimum competencies students must attain.

The quality of developed teaching materials needs to be judged on the content, presentation, language, and graphics used. Across these four components, the teaching materials undergo an adjustment process against the basic framework required in textbook development. Developing teaching materials is in principle the task of a professional teacher, but not all teachers have these competencies; research results on the development of teaching materials therefore offer teachers an alternative when choosing varied teaching materials for implementing learning.

Product-oriented learning, as a pedagogical concept of genre, can increase student enthusiasm. It uses the stages of (1) building context; (2) introducing models of creative work; (3) scaffolding the production of models; and (4) producing creative work independently. These learning stages match the application of teaching materials developed from the poet's creative process in producing poetry. The same response was observed during the learning trials: students were very enthusiastic, and the resulting work products were more varied. Poetry text teaching materials enriched with the creative process align with similar learning from teaching materials developed on the basis of poets' experience in producing poetry texts (Kusmana, Jaja, and Mutiarasari, 2019). Students respond differently to learning that uses such materials than to materials taken from textbooks: they are encouraged to be more creative in the poetry texts they produce.
Conclusion
Based on the explanation and discussion of the results of this research and development, it can be concluded that the creative process carried out by the poet consists of: (1) absorbing information from the senses, from experiences, and from reflections on something that has the potential to become a poetry text; (2) processing the information until it undergoes incubation; (3) contemplating in order to create the creative work; (4) elaborating and reflecting on the creative work produced; and (5) testing and refining the resulting poem. This creative process is generally carried out by poets in producing quality poetry texts.
The teaching material developed on the basis of the poet's creative process was validated as eligible teaching material in terms of content, presentation, language, and graphics. According to the validation by education experts and practitioners (experienced Indonesian language teachers), the developed material has more varied content and can motivate students to produce quality literary works. The presentation component was judged more varied and capable of arousing students' literary competence, both oral and written. The language used was judged very well matched to the abilities and comprehension of high school students, so the material is easily understood. Likewise, the graphic component received a good rating: it presents photos or pictures of poets, and even includes examples of poetry readings that students can download to their own devices and open after school.
The application of the poetry text teaching material developed from the poet's creative process to high school students proved effective. Classroom learning with the experimental teaching materials was better than learning in control classes using the teaching materials available in textbooks. The outputs of the experimental class were more varied, and the poems written by these students had greater literary quality and value than the poetry produced by students in the control class. Students' responses to learning with the creative-process-based poetry text teaching materials were very positive and even motivated students to continue developing their creativity.
"Education",
"Linguistics"
] |
Multi-task Deep Learning of Myocardial Blood Flow and Cardiovascular Risk Traits from PET Myocardial Perfusion Imaging
Background Advanced cardiac imaging with positron emission tomography (PET) is a powerful tool for the evaluation of known or suspected cardiovascular disease. Deep learning (DL) offers the possibility to abstract highly complex patterns to optimize classification and prediction tasks. Methods and Results We utilized DL models with a multi-task learning approach to identify an impaired myocardial flow reserve (MFR <2.0 ml/g/min) as well as to classify cardiovascular risk traits (factors), namely sex, diabetes, arterial hypertension, dyslipidemia and smoking at the individual-patient level from PET myocardial perfusion polar maps using transfer learning. Performance was assessed on a hold-out test set through the area under receiver operating curve (AUC). DL achieved the highest AUC of 0.94 [0.87-0.98] in classifying an impaired MFR in reserve perfusion polar maps. Fine-tuned DL for the classification of cardiovascular risk factors yielded the highest performance in the identification of sex from stress polar maps (AUC = 0.81 [0.73, 0.88]). Identification of smoking achieved an AUC = 0.71 [0.58, 0.85] from the analysis of rest polar maps. The identification of dyslipidemia and arterial hypertension showed poor performance and was not statistically significant. Conclusion Multi-task DL for the evaluation of quantitative PET myocardial perfusion polar maps is able to identify an impaired MFR as well as cardiovascular risk traits such as sex, smoking and possibly diabetes at the individual-patient level. Supplementary Information The online version contains supplementary material available at 10.1007/s12350-022-02920-x.
INTRODUCTION
Advanced medical imaging has boosted our capacity to diagnose both subclinical and clinical cardiovascular pathology without the constant need for invasive procedures. It has improved disease characterization and has proven helpful for prognostic evaluation. In the last decades, state-of-the-art imaging has increased its temporal and spatial resolution at a pace influenced by that of computational development (Moore's law) offering a stream of data of which processing and interpretation may overwhelm the analytical workflows of both researchers and clinicians. 1 Yet, it is suspected that the information contained in the images resulting from techniques such as coronary computed tomography angiography and positron emission tomography (PET) may not be fully harnessed through conventional analyses, which currently translates image attributes into simple and univariate proxies (e.g. calcium score for the former and summed stress score for the latter). Such biomarkers, albeit pragmatic and certainly interpretable, may omit a substantial proportion of the information contained in the images. As such, developments in imaging quality may have only marginally enhanced our understanding of the dynamics of cardiovascular disease.
Deep learning (DL) corresponds to a series of machine learning algorithms based on (convolutional) neural networks and has revolutionized image recognition in various fields of knowledge. DL can boost performance in image analysis through artificial learning of complex high-dimensional patterns in large datasets, 2 which are then used to optimize classification tasks. DL has already delivered exciting breakthrough proofs of concept when applied to several pathological conditions, including coronary artery disease (CAD) as studied through SPECT. [3][4][5][6][7] Furthermore, it has been suggested that DL analysis of standardized medical imaging, such as retinal images, may allow the characterization of chronic diseases that signify added cardiovascular risk through comorbidity. 8 Presently, studies on the implementation of DL for the identification of myocardial ischemia in PET imaging are lacking, and it is unknown whether DL analysis of myocardial perfusion images may provide insights into patterns associated with the presence of cardiovascular risk traits. Hence, the present report evaluated the performance of DL in the identification of an impaired myocardial flow reserve (MFR) and cardiovascular risk traits, and explored complex DL-derived patterns associated with such factors in quantitative PET myocardial perfusion polar maps at the individual patient level.
Study Population
From the population referred to quantitative PET myocardial perfusion imaging due to suspected myocardial ischemia between 2015 and 2017 at the department of nuclear medicine of the Northwest Clinics, Alkmaar, The Netherlands, the data of 1,185 patients was retrospectively collected and included in the present analysis. Patients with prior myocardial infarction (MI) or revascularization (either through PCI or CABG) were excluded from the present study.
All patients provided written informed consent for the use of their anonymous data for scientific purposes. In addition to the standard imaging protocol and clinical management, no measurements or actions affecting the patient were performed. The study was approved by the institutional research department and performed in accordance with the Declaration of Helsinki. The approval of the local ethical committee for the present study was not necessary since the study does not fall within the scope of the Dutch Medical Research Involving Human Subjects Act (section 1.b WMO, 26th February 1998).
Clinical Data
Demographic (sex and age) and cardiovascular risk traits (hypertension, dyslipidemia, smoking and type 2 diabetes mellitus) were extracted from the electronic file system.
PET Data Acquisition and Quantitative Perfusion Analysis
Every patient underwent a two-phase (rest and adenosine stress) PET scan with 13N-ammonia as the perfusion radiotracer, produced by Cyclotron Noordwest BV. All image data were acquired in list mode on a Siemens Biograph-16 TruePoint TrueV PET/CT (Siemens Healthcare, Knoxville, USA) with an axial field of view of 21.6 cm. This 3D system consists of a 16-slice CT and a PET scanner with four rings of lutetium oxyorthosilicate (LSO) detectors. Patients were instructed to fast overnight and to avoid the consumption of methylxanthines and caffeine-containing beverages or medications for 24 hours before the study. The details of the acquisition-reconstruction protocol have been published previously. 9 Based on the dynamic subsets, left ventricular contours were assigned automatically using the Syngo MBF software (Siemens Medical Solutions, Berlin, Germany) with minimal observer intervention where appropriate. Using a previously described 2-compartment kinetic model for this tracer, values of stress MBF, rest MBF and myocardial flow reserve (MFR) were computed from the resulting time-activity curves and color-coded on the polar map with a standard scale for each sample. 10 An impaired MFR was defined as <2.0 in at least one of the 17 segments of the American Heart Association / American College of Cardiology standardized myocardial segmentation model.
Image Analysis
Data flow and processing Data were randomly divided into a development (training and validation) set and a test set which consisted of 90% and 10% of the total sample, respectively. Training and validation of the deep learning (DL) models were performed on the development set and a 5-fold cross-validation was employed to tune the hyperparameters of the DL models. The optimized models were evaluated on the test set, with data from individuals that had not been seen by the model during the training and validation process. Figure 1 depicts the implemented workflow.
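A minimal sketch of this split in Python (scikit-learn stands in for whatever tooling was actually used; `patient_ids` is a hypothetical array of the included patients):

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

patient_ids = np.arange(944)  # placeholder for the included patients

# 90/10 development/test split; the test set is held out untouched.
dev_ids, test_ids = train_test_split(patient_ids, test_size=0.10,
                                     random_state=42)

# 5-fold cross-validation on the development set for hyperparameter tuning.
for fold, (tr, va) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=42).split(dev_ids)):
    train_ids, val_ids = dev_ids[tr], dev_ids[va]
    # ... train the model on train_ids, evaluate hyperparameters on val_ids
```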
The quantitative myocardial perfusion polar maps, namely the rest, stress and reserve polar maps derived from the PET scan, were extracted in RGB color code (228 × 228 pixels) and resized to 224 × 224 pixels, which corresponds to the expected input dimension of the pretrained DL models. We developed separate classification models from either individual polar maps or the stack of all three (rest, stress and reserve) obtained by concatenation.
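A sketch of this preprocessing with torchvision (the file names are illustrative):

```python
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((224, 224)),   # 228x228 polar map -> 224x224 input
    transforms.ToTensor(),           # RGB image -> 3x224x224 float tensor
])

maps = [to_tensor(Image.open(p).convert("RGB"))
        for p in ("rest.png", "stress.png", "reserve.png")]
stacked = torch.cat(maps, dim=0)     # 9x224x224 stacked rest/stress/reserve
```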
Deep learning model architecture We employed a modified ResNet-50 architecture that takes the perfusion polar maps of each patient as input to predict mean segmental myocardial perfusion and, in separate models, to identify (predict) an impaired MFR (<2.0) and the binary cardiovascular risk factors sex, positive smoking status, hypertension, dyslipidemia and diabetes mellitus through fine-tuning (see Multi-task learning below).
Briefly, DL ResNet models are feedforward convolutional neural networks with "shortcut connections" between earlier layers and layers further down the network, called skip connections. ResNet models are organized into groups of layers surrounded by the beginning and end of a skip connection, called residual blocks, and variants of ResNet models are created by varying the number of such blocks (Figure 2). In the current study, we modified the last layer of the 50-layer ResNet-50 network to generate 19 output features, of which 17 were used to predict the mean MFR and the remaining two were used for the aforementioned binary classifications. In the case of stacked polar maps, the input layer of the models was modified accordingly.
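The described layer surgery can be sketched in PyTorch as follows (the exact modification in the study may differ; note that replacing the first convolution discards its pretrained 3-channel weights):

```python
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(pretrained=True)

# 19 output features: 17 segmental mean-MFR regressions + 2 classification logits.
model.fc = nn.Linear(model.fc.in_features, 19)

# Stacked-input variant: widen the first convolution from 3 to 9 channels.
model.conv1 = nn.Conv2d(9, 64, kernel_size=7, stride=2, padding=3, bias=False)
```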
Multi-task learning To restrict the learning context for improved generalization, multi-task learning was employed such that each model learned to regress the mean MFR while simultaneously identifying an impaired MFR or individual cardiovascular risk factors (traits) (Figure 2). More specifically, the regression task of mean MFR guided the DL models to recognize the polar map in the context of the standardized 17-segment model; the models thus learned to master the classification task conditioned on the 17-segment model. Cross-entropy loss was selected for the classification task and mean squared error loss for the regression task. The total loss was a weighted sum of the two losses, with λ ∈ [0,1] the hyperparameter to be optimized in cross-validation.
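One plausible parameterization of this weighted loss (the paper specifies a weighted sum with λ optimized in cross-validation, but not its exact form):

```python
import torch.nn.functional as F

def multitask_loss(outputs, segmental_mfr, label, lam):
    """outputs: N x 19 network output; the first 17 features regress
    segmental mean MFR, the last 2 are the binary classification logits."""
    regression_loss = F.mse_loss(outputs[:, :17], segmental_mfr)
    classification_loss = F.cross_entropy(outputs[:, 17:], label)
    return lam * classification_loss + (1.0 - lam) * regression_loss
```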
Transfer learning A two-step transfer learning strategy was applied as follows: a model with parameters pre-trained on the ImageNet dataset was first fine-tuned to recognize the characteristics of polar maps via identification of impaired regional MFR, and further tuned to classify individual cardiovascular risk factors. The model parameters were optimized through back-propagation, using a variant of the adaptive stochastic gradient-based optimization algorithm Adam 11 with decoupled weight decay regularization. 12 Considering the large number of parameters of ResNet-50 and the relatively small size of the development dataset, we optimized only the parameters of the last 3 layers of ResNet-50 for the binary classifications. To further avoid overfitting to the training data, we applied data augmentation techniques, including limited rescaling (10%), rotation (±10°) and random dropout of pixels. All DL experiments were implemented in PyTorch 1.4.0. 13

Attention heat maps To explore and discuss patterns corresponding to the inherent relationships between the polar maps and the cardiovascular risk factors identified by DL, we generated attention heat maps for each risk factor using individuals from the test set. Given a predicted label (presence or absence of a specific risk factor), an attention heat map visualizes the relative importance (attribution) of the pixels of the input image towards the label predicted by the DL model. We applied two different attribution approaches to generate the attention heatmaps: a perturbation-based occlusion sensitivity method 14 using a square patch of 30 × 30 pixels, and the gradient-based method GradCAM 15 implemented using Captum, 16 an open-source Python library for model interpretability. Briefly, in the perturbation-based method the image is systematically and partially occluded by sliding a black square along the image to examine how the model would (re-)classify it; areas whose occlusion changes the classification to a greater degree are considered important. In the gradient-based method, the importance of input neurons (pixels) is assigned based on the gradient information flowing into the last convolutional layer of the network with respect to the target classification; areas (pixel collections) with higher gradients are considered more important to the target classification. Attention maps based on high-confidence predictions (>0.9, or the highest available confidence in the absence of a high-confidence prediction) were visually evaluated by a clinician to search for potentially interpretable and spatially relevant patterns.
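A sketch of the fine-tuning setup and the two attribution methods, reusing `model` and `stacked` from the sketches above; treating `layer4` plus `fc` as the trainable tail, and the learning rate, weight decay, occlusion strides and target logit index (18) are all illustrative assumptions:

```python
import torch
from captum.attr import LayerGradCam, Occlusion

# Freeze everything except the tail of the network.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

# Adam with decoupled weight decay (AdamW), as referenced above.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4, weight_decay=1e-2)

# Attribution for the heat maps on one 9-channel input; output index 18
# (one of the two classification logits) is an illustrative target.
model.eval()
x = stacked.unsqueeze(0)                      # 1x9x224x224
attr_occlusion = Occlusion(model).attribute(
    x, target=18, sliding_window_shapes=(9, 30, 30), strides=(9, 15, 15))
attr_gradcam = LayerGradCam(model, model.layer4).attribute(x, target=18)
```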
Statistical Analysis
Descriptive statistics were expressed as frequency (percentage) for categorical variables, mean ± standard deviation (SD) for normally distributed quantitative variables and median (interquartile range, IQR) for variables with non-normal distributions. The normality of continuous variables was assessed by skewness statistics and graphically by histograms. Independent t-tests were used for continuous variables and Pearson chi-squared tests for categorical variables to compare the differences between patients with and without impaired MFR, and between the development and test sets. Statistical analyses were performed using Stata 16 (StataCorp LLC). A two-tailed p < 0.05 was considered statistically significant.
Performance Evaluation of DL Model

Performance of the DL models was assessed by accuracy and the area under the receiver operating curve (AUC) in the hold-out test set of 93 patients; a random prediction corresponds to an accuracy of 50% and an AUC of 0.5. The 95% confidence intervals of both metrics were estimated by bootstrapping 4000 times. To compare performance with conventional statistical methods, logistic regression models for the cardiovascular risk factors were fitted on the training set using the mean MBF (rest and stress polar maps) or MFR (reserve polar map) of the 17 segments. DL models were then contrasted against these regressions in the hold-out test set.

[Figure 2 caption: Modified ResNet-50 architecture with multi-task learning. The ResNet-50 was modified at the output layer for joint learning of the classification task and the regression task; the ratio of classification loss to regression loss was an additional hyperparameter optimized in the cross-validation phase. For the model using the stack of three polar maps, the input layer was adjusted correspondingly (9 × 224 × 224). Avg pool: average pooling; conv: convolution block; iden: identity block; max pool: max pooling; MFR: myocardial flow reserve.]
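A self-contained sketch of the bootstrap confidence interval for the AUC (a standard percentile bootstrap; the paper does not detail its exact variant):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=4000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC over the hold-out test set."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if np.unique(y_true[idx]).size < 2:   # skip single-class resamples
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```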
Study Population Characteristics
A total of 944 patients were included in the analysis. Table 1 presents their characteristics; no statistically significant differences in smoking behavior and dyslipidemia were observed. The cohort was randomly assigned to either the development (i.e., training and validation) set or the test set in a 9:1 proportion (Figure 1). Table 2 presents the prevalence of cardiovascular risk traits (factors) in the development and test sets, which proved comparable, as expected from the random parcellation. Table 3 shows the performance of the DL models in detecting abnormal myocardial perfusion, in each of the three territories and in any territory. The highest performance was achieved by DL models considering either single reserve polar maps or the stack of three polar maps (rest, stress and reserve) as input, while the lowest performance was observed in those using rest polar maps as input. There was no significant difference in performance with regard to the location of abnormal perfusion, whether in a specific territory or overall. The DL model using myocardial perfusion reserve polar maps had the highest accuracy of 92.5% (95% confidence interval, CI 87.…); performance in the classification of cardiovascular risk factors is presented in Table 4. When compared against classical regression models, DL models attained similar performance in the identification of sex and diabetes, with the exception that the DL model was also able to identify sex using reserve polar maps as input (Table 5). Notably, classical regression models were not able to identify positive smoking status taking mean MFR as input.
DL Attention Maps Evaluation
To explore the localizability and spatial profile of the associations captured by DL for the identification of cardiovascular risk traits, attention heatmaps were generated from the top performing statistically significant models, namely those classifying sex, diabetes mellitus and smoking status. The attention maps placed on the polar maps with the highest prediction confidence showed that female sex identification hovered over the apical regions of the left ventricle ( Figure 3). Conversely, we observed no fixed regions highlighted for the identification of diabetes mellitus and smoking for which rather diffuse patterns were noted.
DISCUSSION
The present study documents the feasibility and performance of a multi-task DL approach in the evaluation of quantitative PET myocardial perfusion polar maps for the identification of an impaired MFR and the identification of common cardiovascular risk traits (factors) in subjects with known or suspected CAD at the individual patient-level. Furthermore, our results frame how DL may enhance our capacity to identify complex attributes that associate with known risk factors that affect myocardial perfusion beyond what conventional regression analysis utilizing myocardial blood flow estimations may offer.
The clinical value of cardiac functional imaging is undisputed. PET allows quantitative evaluation of myocardial perfusion in absolute terms for the characterization of ischemia in CAD. Furthermore, perfusion estimates are also influenced by well-known cardiovascular risk factors, namely sex, smoking, dyslipidemia, arterial hypertension and diabetes mellitus. These traits are understood to additively modify risk at the individual patient level as underlined by the concept of clinical likelihood in the latest European Society of Cardiology guidelines on the diagnosis and management of chronic coronary syndromes. 17 The diagnostic and prognostic value of myocardial perfusion quantification beyond that of robust factors such as LVEF, scar extent, and even semi-quantitative perfusion variables, such as the summed stress score, has been illustrated through traditional statistical analyses. In fact, quantitative myocardial perfusion estimates (namely, stress MBF and MFR) have been suggested to represent two of the most significant predictors of cardiac events. 18 In this study, DL showed the best performance to accurately identify abnormal myocardial perfusion through the evaluation of reserve polar maps both regionally and globally. This is relevant because it will allow us to incorporate its utility into decision support for the clinical evaluation of PET myocardial perfusion scans.
On the other hand, previous studies reporting sex differences in global MBF values are scarce, and differences in the resulting MFR values have been inconclusive. 19,20 In the current study, we found that it was possible to classify the sex of a patient from rest, stress or reserve polar maps, with DL capturing intrinsic differences between males and females that lead to divergent perfusion patterns even at rest. Furthermore, we found that the DL model showed discriminatory performance (AUC > 0.5) in identifying a positive smoking status and diabetes mellitus at the individual level. However, this was not the case for the classification of arterial hypertension and dyslipidemia.
Whether this was a result of differences in the average profile of adjacent cardiovascular risk factors remains unclear and should be considered cautiously. The differential performance may also arise from the fact that the effects of hypertension and dyslipidemia on myocardial perfusion depend on their severity and on whether these conditions are being medically treated. Unfortunately, such information was not directly available in this study. We believe that such factors may have moderated the association of the risk traits with MBF and MFR and thus affected the classification capacity of DL. This suggestion aligns with the fact that the strongest differentiation was achieved in the identification of sex, as discussed above.
It must be understood that the conventional approach of operationalizing information provided by myocardial perfusion imaging (e.g. PET) into simplified categorical (e.g. the semi-quantitative 5-point scale) or absolute continuous variables (e.g. MFR in ml/g/min) merely represents a heuristic that facilitates human interpretation and application of linear statistics. Furthermore, images in any domain represent by themselves a very complex collection of patterns emerging from all relationships between their smallest addressable elements, i.e. pixels. It is likely, therefore, that relevant features within comprehensive perfusion images may be overlooked by such operationalization.
Overall, this DL study offers a novel way in which the intrinsic value of advanced cardiac imaging can be more extensively utilized for clinical (identification of ischemia and cardiovascular risk traits) and research (exploration of complex patterns in the classification of such factors) purposes. We recognize, however, that whether this can in fact improve risk stratification and event prediction remains to be elucidated.
DL is an advanced machine learning methodology able to appraise and identify complex image patterns that may go undetected by the human eye. Our DL implementation adds to the evidence suggesting that high-quality myocardial perfusion images contain a substantial amount of information with value beyond that of their numerical summary extracts, and that these relate at least moderately to conventional cardiovascular risk factors that represent in themselves chronic co-morbidities. Although a precise description of such abstract patterns has not yet been achieved, further research to identify the interactions of these patterns and to quantify their importance in the classification task is warranted.
The present study naturally carries all the intrinsic disadvantages of any observational study. It also deals with a complex DL algorithm whose interpretation is more challenging than that of simpler statistical methods. This can be an obstacle when clinical interpretation of intermediate features is needed. In the current study, we investigated whether information on cardiovascular risk traits could be inferred from PET polar maps through DL. To mitigate the issue of a relatively small sample size in the context of DL, we employed multi-task learning to guide the network towards relations connected to the flow patterns by training the models to predict (an impaired) MFR from the polar maps and then the risk factors. This not only injected prior knowledge about the polar maps to aid the learning process, but also forced the models to extract common features relevant to all tasks, thereby potentially enhancing the clinical/biological meaning of the prediction result. As DL modelling substantially exceeds threshold rule-based classification in complexity (sheer number of inputs and parameters/coefficients), a perfect performance (AUC = 1.0) in the identification of an impaired MFR could not be achieved at this sample size. Nevertheless, the achieved performance may still be considered good for the identification of myocardial ischemia while simultaneously contributing to the further classification of cardiovascular risk traits from the polar maps.
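For readers who want to picture the multi-task setup described above, the following is a minimal sketch, not the authors' architecture: a shared convolutional encoder over rest/stress/reserve polar maps feeds one head predicting impaired MFR and a second head predicting the risk traits, trained under a joint loss. All layer sizes, the 48x48 input resolution and the five-trait output are illustrative assumptions.

```python
# Hedged multi-task sketch: shared encoder + MFR head + risk-trait heads.
import torch
import torch.nn as nn

class MultiTaskPolarMapNet(nn.Module):
    def __init__(self, in_channels=3, n_traits=5):  # 3 maps: rest/stress/reserve (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mfr_head = nn.Linear(32, 1)            # impaired-MFR logit
        self.trait_heads = nn.Linear(32, n_traits)  # sex, smoking, diabetes, ...

    def forward(self, x):
        z = self.encoder(x)                         # shared flow-related features
        return self.mfr_head(z), self.trait_heads(z)

model = MultiTaskPolarMapNet()
x = torch.randn(8, 3, 48, 48)                       # batch of polar maps (placeholder)
mfr_logit, trait_logits = model(x)
bce = nn.BCEWithLogitsLoss()
y_mfr = torch.randint(0, 2, (8, 1)).float()
y_traits = torch.randint(0, 2, (8, 5)).float()
# Joint loss: the MFR task injects prior knowledge that steers the shared
# encoder towards perfusion-related features also used by the trait heads.
loss = bce(mfr_logit, y_mfr) + bce(trait_logits, y_traits)
loss.backward()
```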
NEW KNOWLEDGE GAINED
Deep learning can be applied to quantitative PET myocardial perfusion polar maps to identify ischemia and extract information on cardiovascular risk factors, namely sex, smoking and diabetes. A priori knowledge can be injected to assist the training of a deep learning model.
CONCLUSIONS
Multi-task DL for the evaluation of quantitative PET myocardial perfusion polar maps is able to identify an impaired MFR as well as cardiovascular risk traits at the individual-patient level. DL appears able to identify sex, smoking and probably diabetes mellitus from both localized and diffuse perfusion patterns throughout the left ventricle. Although the mechanistic significance and clinical relevance of such patterns and of this identification capacity through DL analysis are still unclear, further research into the exploration of advanced cardiac imaging through DL is warranted.
Disclosures and Funding
The work of M.W. Yeung and J.W. Benjamins was supported by the Research Project CVON-AI (2018B017), financed by the PPP Allowance made available by Top Sector Life Sciences & Health to the Dutch Heart Foundation to stimulate public-private partnerships.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Medicine",
"Engineering",
"Computer Science"
] |
Fueling the Covid-19 pandemic: summer school holidays and incidence rates in German districts
Abstract Background The Robert-Koch-Institute reports that during the summer holiday period a foreign country is stated as the most likely place of infection for an average of 27% and a maximum of 49% of new SARS-CoV-2 infections in Germany. Methods Cross-sectional study on observational data. In Germany, summer school holidays are coordinated between states and spread out over 13 weeks. Employing a dynamic model with district fixed effects, we analyze the association between these holidays and weekly incidence rates across 401 German districts. Results We find effects of the holiday period of around 45% of the average district incidence rates in Germany during the respective final week of holidays and the 2 weeks after holidays end. Western states tend to experience stronger effects than Eastern states. We also find statistically significant interaction effects of school holidays with per capita taxable income and the share of foreign residents in a district's population. Conclusions Our results suggest that changed behavior during the holiday season accelerated the pandemic and made it considerably more difficult for public health authorities to contain the spread of the virus by means of contact tracing. Germany's public health authorities did not prepare adequately for this acceleration.
Introduction
Holiday travels can be expected to accelerate the SARS-CoV-2 pandemic. To a small extent, this is because traveling by bus, train or plane adds to the risk of becoming infected. 1,2 More importantly, infections rise because individuals change their social behavior during holidays. 3 Many holiday-makers have more, and more intense, social interactions, often with people with whom they do not share social capital, which has been found to be conducive to maintaining social distancing norms. 4,5 Mobility also reduces the health agencies' ability to successfully trace close contacts of people infected with SARS-CoV-2.
The Robert-Koch-Institute (RKI) reports that over Germany's entire summer school holiday period, in ∼27% of weekly cases reported to the Institute a foreign country was mentioned as the most likely place of infection. 6 This figure reached its maximum at 49% of weekly cases in week 34, which is in mid-August. It is, however, not possible to interpret these numbers as the effect of holiday-related travel since some of the infections may not actually have occurred abroad despite 'abroad' being mentioned as the likely place of infection, not all international travel is necessarily holiday-related even if it takes place during the holiday season, and not all holiday-makers spend their holidays abroad. The RKI numbers shed light on the relevance of international travels for the epidemic situation in Germany, but they may overstate or understate the true impact of the holiday season on incidence rates.
We complement the RKI's analysis by studying the extent to which summer school holidays have accelerated the pandemic in Germany. In order to estimate the effect of summer school holidays on the weekly incidence rate, we employ an ecological analysis of variation in the weekly SARS-CoV-2 confirmed case incidence rate across German districts (individual-level data do not exist). Germany provides an excellent case study since we can exploit a particular feature of its system of school holidays, namely that they are not uniform across the Federal Republic but vary in their start and therefore also their end date from state to state in a pre-determined way. This idiosyncratic feature allows us to disentangle the effect that holidays have had on incidence rates in German districts located in states that are or have been on holiday from the general upward trend in new infections in Germany.
We test the following four hypotheses. First, school holidays have a positive effect on incidence rates. Second, the later parts of any given holiday season have a larger effect than its earlier parts, given that there is a delay until holiday travelers return home and given that infections are on the rise in practically all holiday travel destinations, both within and outside Germany, thus increasing the risk of catching the virus as the holiday season proceeds. Third, the holiday season does not merely increase individual risks. Travel associated with the holiday season should also have a lasting effect on the epidemic situation in the home districts because any infected returning traveler increases the probability of additional infections. Thus, the effects of holidays and holiday-related travel on incidence rates do not disappear when the holiday season is over. And fourth, school holidays will have a stronger effect on incidence rates in districts that are richer on average and in which a larger share of the resident population are foreigners. Richer people can better afford to go on holiday for longer, and foreign citizens are likely to use the holiday season for returning to their home country for family visits (possibly in addition to taking other holidays), not least because the lockdown in the spring of 2020 prevented most of them from seeing family abroad over the Easter holiday period.
There is surprisingly little existing evidence on the impact of public holidays on the SARS-CoV-2 pandemic. Early research has shown that the extension of the Lunar New Year holidays in China contributed to the country's successful containment of the pandemic. 7 However, it is probably impossible to generalize these findings because the pandemic started around the time of this holiday period and the holiday extension helped authorities to identify infected individuals before they traveled home. Two other studies point in the opposite direction, suggesting that Israel's hitherto successful mitigation policy broke down in the wake of mass social gatherings during the 9-11 March Jewish holiday of Purim, and that holiday-related travels from metropolitan areas to the provinces in Sweden may spread infections. 8,9 To the best of our knowledge, ours is the first academic study of the impact that the summer school holiday season has actually had on the pandemic.
Material
Our dependent variable is the weekly incidence rate (per 100 000 people) in a German district. Data are sourced from the RKI website (www.rki.de). They are based on confirmed positive tested cases. While the number of confirmed cases can be a problematic measure for the pandemic's dynamics, we know of no reason why testing would systematically vary across German districts.
Our sample covers all 401 districts in Germany, with the 12 districts of Berlin aggregated into one single city state district due to a lack of disaggregated data on the conditioning variables employed for testing one of our hypotheses. The temporal dimension is drawn from the period starting with the weekly incidence rate on Wednesday 10 June (week 23) and terminating with the weekly incidence rate on Wednesday 23 September (week 38). We deliberately define the week to end on a Wednesday rather than a Sunday or Monday to avoid noise from occasional corrections made on Mondays or Tuesdays to compensate for under-reporting to the RKI over the weekend. For each district, we analyze the period ranging from 2 weeks prior to the beginning of holidays to 2 weeks after the end of the holidays. Our panel thus has N = 401 districts and T = 10 weeks, i.e. 4010 observations.
In Germany, the dates of the summer school holidays are chosen years in advance by each of the 16 states in close consultation with each other. The intention is to reduce the probability and length of traffic jams on Germany's crowded motorways during the summer months. In each state, schools close for ∼6 weeks. In 2020, the summer school holiday season began on June 22 in Mecklenburg-Vorpommern and ended September 9 in Baden-Württemberg. Hence, Germany spreads the holidays over almost 13 weeks (see the Supplementary Document for full overview).
Methods
The average weekly incidence rate across all German districts over the entire sample period is 6.38 cases per 100 000 people with a standard deviation of 10.14. In Germany as a whole, the number of new infections had been stable at around 500 per day until the end of July. In August, the number of daily confirmed cases began to rise, reaching ∼1500 new cases per day at the end of August and ∼2000 at the end of September.
This upward trajectory in Germany coincides with the summer school holiday season. It is unlikely, however, that the return of rising incidence rates has been determined by school holidays alone. To isolate the predicted effects from other influences, we include a lagged dependent variable to account for the common trend in the data. 10 Results are similar if we use an alternative approach for taking out the common trend, such as an autoregressive model of order one (results not reported). This is a conservative research strategy since part of this trend was most likely caused by returning holidaymakers. However, it is impossible to provide a precise estimate of the influence of holidays on the common trend because holiday travel was allowed in all states and all districts at all times, not just during school holidays.
Most of our estimation models are based on a specification with a dummy variable that is set to 1 if a district is located in a state in which schools are on summer holidays in that week as well as a dummy variable for the 2-week period after the holidays. This specification can be interpreted as a Chow-type model, 11,12 in which the dummy variables estimate whether there is a structural break between the holiday period as well as the period of 2 weeks after the holidays end, both relative to the period of 2 weeks before holidays begin, the presumed counterfactual.
This relatively simple specification with only two dummy variables is handy for extensions where we allow their effect to vary by state and allow their effect to be conditioned by two district-level variables that are likely to affect the number of holiday-related travels undertaken from each district (on which more below). It is not an optimal specification, however, given that it presumes the effect to be constant within the holiday period. Empirical evidence suggests that the average length of holiday stay of German tourists is ∼12-14 days. 13 Infections should therefore start to rise only ∼2-4 weeks after the beginning of the school holidays. Accordingly, we will also present results from a more appropriately specified model that allows the effect of the holiday period to vary week-by-week.
We estimate our models with a linear fixed effects estimator that absorbs any variation across districts that is time-invariant such as demographic, geographic and socio-economic factors that render some districts more generally exposed to the pandemic than others. 14 If we estimate the models with Arellano and Bond's dynamic panel estimator instead, results are very similar (results not reported). 15 Standard errors are clustered on districts. If we additionally apply two-way clustering of standard errors also by states results are hardly affected (results not reported). Since potential control variables come from annual data, they are time-invariant for the specific panel structure we have. These time-invariant variables are perfectly collinear with the district fixed effects and we therefore cannot estimate their effect in a district fixed effects model. They can however condition the effect of the time-varying school holidays variables, as we do in one model employing average taxable income and the share of foreigners amongst a district's resident population as conditioning variables.
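To make the estimation strategy concrete, the following is a hedged sketch of the main specification (district fixed effects, lagged dependent variable, standard errors clustered on districts) using the linearmodels package; the input file and column names are hypothetical placeholders, not the authors' data.

```python
# Sketch of the district fixed-effects model with clustered standard errors.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("districts_weekly.csv")            # hypothetical input file
df = df.set_index(["district", "week"]).sort_index()
# Lagged dependent variable absorbs the common upward trend in incidence
df["incidence_lag"] = df.groupby(level="district")["incidence"].shift(1)

model = PanelOLS.from_formula(
    "incidence ~ incidence_lag + holiday + post_holiday + EntityEffects",
    data=df.dropna(subset=["incidence_lag"]),
)
# District fixed effects are the EntityEffects; cluster SEs on districts
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.params[["holiday", "post_holiday"]])
```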
Results
In Table 1, we first of all report results on a dummy variable that is set to 1 if a district is located in a state in which schools are on summer holidays in that week as well as a dummy variable for the 2-week period after the holidays (model 1). We find that the summer school holiday weeks are on average predicted to increase incidence rates by 1.71 cases per 100 000 people relative to the period before holidays, consistent with our first hypothesis. The 2-week period after holidays end is predicted to increase incidence rates by 4.81 cases per 100 000 people, consistent with our third hypothesis.
Model 1, which pools all holiday weeks together, masks that the effect is likely to vary and to increase over the holiday period. Model 2 is more appropriately specified as it allows the effect of the holiday season to vary week-by-week. We find that the effect increases in later weeks of the school holidays, consistent with our second hypothesis. The effect is close to zero in the first 2 weeks, rises from week three onwards, becomes statistically significant from week 4 onwards and increases to 4.15 cases per 100 000 people in week 6. The coefficients of the first and the second week after school holidays finish show that the increases in incidence rates brought about by the school holidays do not disappear but continue to rise to 5.13 cases per 100 000 people in the second week after school holidays end. In terms of substantive importance, an increase in the incidence rate of 4.15 cases in the final week of the holiday season equates to 44.7% of the average incidence rate across German districts during their sixth week of holidays. That week's average is higher than the average incidence rate during the entire sample period and therefore represents the more appropriate benchmark against which the substantive effect size should be assessed, so as not to overstate it. For the first and second week after holidays, the equivalent computation suggests effects that equate to 46.0 and 45.3% of the average weekly incidence rate in those weeks.
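The translation of coefficients into the percentages quoted above is simple arithmetic, shown below; the week-6 benchmark incidence is implied by the reported ratio rather than stated directly in the text.

```python
# Back-of-the-envelope check of the quoted effect size.
coef_week6 = 4.15                       # cases per 100,000 (model 2, week 6)
avg_incidence_week6 = 4.15 / 0.447      # ~9.28 cases per 100,000 (implied)
print(f"{coef_week6 / avg_incidence_week6:.1%}")   # -> 44.7%
```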
In model 3, reported in Table 2, we allow the structural breaks to vary state-by-state but revert to the simple Chow-type structural break model with only two dummy variables per state, as otherwise we would have to report or visualize well over a hundred coefficients. We exclude the two states of Hamburg and Berlin since both are counted as consisting of only one district in our data, which would result in unreliable estimates in a district fixed effects specification. Table 2, in which we sort states by the point estimate of the holiday period dummy variable, shows large variation in the holiday effect on incidence rates across districts in different German states. Overall, we find that richer states are more likely to show relatively large effects, and that the increase in incidence rates associated with the holiday season tends to be larger in the Western German states than in the Eastern German states (Saxony, Thuringia, Saxony-Anhalt, Mecklenburg-Vorpommern and Brandenburg). Looking state by state, we find a statistically significant positive effect of the holiday period, of the 2-week period after the end of holidays, or of both in 12 of the 14 states included in model 3.
Overall, only two states in our sample do not show a statistically significant positive holiday effect: Brandenburg and North Rhine-Westphalia. Of these two cases, North Rhine-Westphalia appears to be an outlier. The state had high incidence rates before the holidays began due to super-spreader events in a slaughterhouse of the Tönnies company in the districts of Gütersloh and Warendorf. If we drop the two districts of Gütersloh and Warendorf from the estimations, then both coefficients of the holiday and post-holiday periods become statistically significantly positive for this state. Figure 1 shows cumulative infection numbers (indicated by a solid line with their scale on the left-hand axis) and the weekly incidence rates (indicated by bars with their scale on the right-hand axis) for Bavaria, the richest German state bar the two city states of Bremen and Hamburg, and Thuringia, the state with one of the lowest average per capita incomes, for each day between day 167 (10 June) and day 267 (23 September) of 2020. The vertical boundaries indicate the first and the last day of school holidays in these two states. As model 3 has shown, the holiday season was associated with large increases in incidence rates relative to the trend in Bavaria, with much smaller increases relative to the trend in
Thuringia. (Table notes: linear fixed effects estimation; t-statistics based on standard errors clustered on districts in parentheses; ***, ** and * refer to statistical significance at the 1, 5 and 10% levels, respectively.) Figure 1 supports and illustrates these findings from our regression analysis. Thuringia and Bavaria differ in many respects: Bavaria is richer, more industrial, more urbanized, and it also hosts a larger share of foreign residents. In Table 3, we allow the effect of summer school holidays and the 2-week period after holidays to be conditioned by two variables, namely by average taxable income in a district as well as by the share of foreigners amongst a district's residents. These variables are time-invariant for our sample; therefore we cannot estimate coefficients for these variables themselves in a model with district fixed effects. However, we can estimate the conditioning effect of these variables on the time-varying holiday variables.
Model 4, reported in Table 3, shows a positive and statistically significant interaction effect between, respectively, average taxable income and the share of foreigners amongst a district's residents and the dummy variables for school holidays and the post-holiday period, consistent with our fourth hypothesis. In substantive terms, the results from model 4 imply that the effect of the holiday period is almost six times stronger in districts with close to the highest share of foreign residents (an increase in the incidence rate of 6.72 cases per 100 000 people as opposed to an increase by 1.2 cases), while the effect of the 2-week post-holiday period is almost seven times stronger (an increase in the incidence rate of 20.5 cases per 100 000 people as opposed to an increase by 3.2 cases). The effect in the richest districts is eight times stronger than average during the holiday period and almost four times stronger than average in the post-holiday period.
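As a sketch of how such conditional effects are read off an interaction model, the marginal holiday effect at a given value of the conditioning variable is the base coefficient plus the interaction coefficient times that value; the coefficients and shares below are illustrative placeholders, not Table 3 estimates.

```python
# Conditional marginal effect from an interaction model (placeholder values).
def holiday_effect(beta_holiday, beta_interaction, foreign_share):
    """Marginal effect of the holiday dummy at a given foreign-resident share."""
    return beta_holiday + beta_interaction * foreign_share

b_hol, b_int = 0.2, 0.4                                   # hypothetical coefficients
print(holiday_effect(b_hol, b_int, foreign_share=2.5))    # average district -> ~1.2
print(holiday_effect(b_hol, b_int, foreign_share=16.0))   # near-maximum share -> ~6.6
```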
Main finding of this study
We have found that by the end of the holiday period the estimated effect equates to around 45% of the average incidence rate across German districts during their respective final week of holidays and their respective first 2 weeks after holidays end.
What is already known on this topic
The RKI reports that the maximum of new infections for which a country abroad is stated as the most likely place of infection during Germany's holiday season is around 49% in week 34 in mid-August, with close to 45% in the 2 weeks either side of this maximum.
What this study adds
Based on a research design that captures the effect of holidaying both within and outside Germany, our central estimates are slightly lower than the maximum of new infections for which a country abroad is stated as the most likely place of infection in reports to the RKI. Despite very different research designs, the two approaches find similar substantive average effects. Disaggregating the effect week-by-week, we find that the effect increases over the holiday period and does not revert to its pre-holiday level in the 2 weeks after holidays end. We have demonstrated that effects differ across German states, with statistically significant holiday effects in at least 12 of the 14 German states with more than one district in our dataset. The stronger effects take place in the Western German states. Two main hypothesized reasons for this heterogeneity across German states were that the states with a stronger effect consist of districts that tend to be both richer and home to a larger share of foreign residents, both of which spur holiday-related travel. Corroborating this, we have shown that the higher the per capita income and the higher the share of foreigners in a district, the larger the increases in the growth rate of infections.
Limitations of this study
First, there are the well-known limitations of any ecological study like ours. Ideally, one would employ individual- rather than district-level data; however, no such data exist and, due to privacy protection policies, none can be collected. Second, we can only capture the effect of holiday-related travels triggered by public summer school holidays. Families with children of school age in particular are dependent on school holidays for their holiday travel, and the same holds for the employees of firms that close down for company holidays over that period. Thus, the majority of holiday travels will take place during school holidays. Yet, not all holiday-related travel takes place during school holidays, which potentially biases downwards our estimate of the effect of holiday-related travels on SARS-CoV-2 infections.
Conclusion
The impact that summer school holidays have had on incidence rates was entirely predictable, and yet Germany's public health authorities were not prepared for re-starting travel in the era of Covid-19. 16 What they should have done was to significantly expand testing capacity to compensate for the increase in infections and the reduced contact tracing capabilities. Eventually, Germany did introduce testing of returnees from particular high-risk destinations, but this came too late to prevent the significant increase in infections and, ironically, can further spread the virus if individuals with false-negative test results are lured into careless behavior. 17 Governments should also improve digital tracing capabilities, both within their territories and, more importantly, across borders, if they wish to avoid travel restrictions. Germany in principle has a good tracing system built on local infrastructure, but the best tracing system cannot operate if infected individuals cannot recall with whom they had close contact during their holidays. 18 Immunity passports for travel may also have to be reconsidered once vaccination becomes widely available, despite their controversial nature. 19
Supplementary data
Supplementary data are available at the Journal of Public Health online.
"Economics"
] |
Application of Alcohols to Dual-Fuel Feeding of Spark-Ignition and Self-Ignition Engines
Abstract This paper concerns the analysis of the possible use of alcohols for feeding self-ignition and spark-ignition engines operating in a dual-fuel mode, i.e. simultaneously combusting alcohol and diesel oil or alcohol and petrol. Issues associated with the requirements for the application of bio-fuels are presented, taking into account the National Index Targets, bio-ethanol production methods and the dynamics of its production worldwide and in Poland. The considerations are illustrated by results of tests on spark-ignition and self-ignition engines fed with two fuels: petrol and methanol or diesel oil and methanol, respectively. The tests were carried out on a Fiat 1100 MPI four-cylinder engine with multi-point injection and a prototype collector fitted with additional injectors in each cylinder. The other tested engine was an SW 680 six-cylinder direct-injection diesel engine. The influence of a methanol addition on basic operational parameters of the engines and on exhaust gas toxicity was analyzed. The tests showed a favourable influence of methanol on the combustion process of traditional fuels and on some operational parameters of the engines. An addition of methanol resulted in a distinct rise in the total efficiency of both types of engines at maintained output parameters (maximum power and torque). At the same time, a radical drop in the content of hydrocarbons and nitrogen oxides in exhaust gas was observed at high shares of methanol in the feeding dose of the ZI (petrol) engine, and a 2-3-fold lower smokiness in the case of the ZS (diesel) engine. Among the unfavourable phenomena, a rather insignificant rise of CO and NOx content for the ZI engine, and of THC and NOx for the ZS engine, should be mentioned. This calls for further research on optimal control parameters of the engines. Conclusions drawn from this work may be used for the implementation of bio-fuels in feeding combustion engines.
Introduction
The necessity of using bio-fuels results from the requirements of the National Index Targets (NCW), which assume a gradual rise in the share of renewable fuels in the overall amount of engine fuels. According to a regulation of the Polish Council of Ministers concerning the NCW, dated 20.07.2013, the energy share of biofuels should reach 8.5% in 2018, Fig. 1. This requires many changes in agricultural production, increased investment in the development of bio-fuel production plants, and the development of new fuel supply technologies for engines. Alcohols and esters of unsaturated fatty acids belong to the basic bio-fuels; the first group is applicable mainly to spark-ignition engines and the other to diesel engines. Primary alcohols, i.e. methanol and ethanol, can be produced from biomass, which is a renewable source of energy available in very large quantities. In nature, bio-mass undergoes natural decomposition into carbon dioxide (CO2) and methane (CH4), both numbered among greenhouse-effect gases. However, because bio-mass decay processes are natural, this emission of CO2 and CH4 is not considered harmful. Exploitation of bio-mass, its conversion into alcohols and their combustion in engines result in CO2 emission to the atmosphere, which is then consumed in the photosynthesis process for the production of new biomass. In effect, the use of bio-fuels may be considered a zero-emission process with respect to CO2, and one which additionally lowers the natural emission of CH4 from decay processes.
Combustion of methanol and ethanol produces carbon dioxide and water, and the process runs according to the following reactions:

2 CH3OH + 3 O2 → 2 CO2 + 4 H2O (1)
C2H5OH + 3 O2 → 2 CO2 + 3 H2O (2)

The mass share of carbon atoms in the molecules of these alcohols is lower than in traditional fuels and amounts to 0.375 for methyl alcohol and 0.520 for ethyl alcohol, whereas for petrol and diesel oil the ratio reaches approximately 0.845-0.850. However, when differences in calorific values are taken into account, gaining the same amount of energy from alcohols results in only a slightly lower CO2 emission (by about 2%) in comparison to petrol, Fig. 2. Alcohols, due to their favourable properties, first of all a high octane number, high heat of vaporization and high combustion velocity, can easily be used both in spark-ignition (ZI) engines (as the only fuel or as an addition to traditional fuels) and in self-ignition (ZS) engines as an addition burned simultaneously with diesel oil. This paper presents a proposal for feeding ZI and ZS engines in a dual-fuel mode and shows selected results of the tests conducted on the spark-ignition Fiat 1100 MPI engine and the self-ignition SW 680 engine.
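The roughly 2% figure can be checked with a short calculation from the carbon mass fractions given above and typical lower heating values; the LHVs below are standard literature values assumed for illustration, not taken from this paper, so the output should be read as an order-of-magnitude confirmation.

```python
# CO2 emitted per MJ of fuel energy, from carbon mass fraction and LHV.
fuels = {
    # name: (carbon mass fraction from the text, assumed LHV in MJ/kg)
    "methanol": (0.375, 19.9),
    "ethanol":  (0.520, 26.8),
    "petrol":   (0.847, 43.5),
}
for name, (wc, lhv) in fuels.items():
    # 44/12 converts kg of carbon to kg of CO2
    co2_per_mj = wc * (44.0 / 12.0) / lhv
    print(f"{name:8s} {co2_per_mj * 1000:.1f} g CO2/MJ")
# Methanol comes out a few percent below petrol, in line with the ~2% claim.
```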
Production of alcohols
Alcohols are organic chemical compounds which contain one or more OH hydroxyl groups connected to carbon atoms. The simplest alcohols, known to people for thousands of years, contain one OH group and have the general formula CnH2n+1OH. The following compounds belong to this group:
- methanol CH3OH (methyl alcohol),
- ethanol C2H5OH (ethyl alcohol),
- propanol C3H7OH (propyl alcohol),
- butanol C4H9OH (butyl alcohol).
From the point of view of engine fuels, the first two, methanol and ethanol, are the most important, especially the latter, which can be produced from biomass in large quantities using technologies known for centuries. Anhydrous ethanol may be used as the only fuel (E100) or as an addition to petrol (blends such as E15, E20, E80). The remaining alcohols, such as propanol and butanol, are applied very rarely. Ethanol can be produced by two technological methods, shown schematically in Fig. 3:
- the 1st generation method, i.e. the conventional fermentation process of raw plant materials such as grain, potatoes, sugar cane, manioc and maize,
- the 2nd generation method, with the use of cellulose, straw, maize cobs and other plant residues.
Production of bioethanol based on the 1st generation process is the most common today, while research and development projects on 2nd generation biofuels are being conducted very intensively worldwide. Full mastery of this technology is expected by 2030, and 2nd generation ethanol should come into wider use by that date, as its raw materials are commonly available [8].
Ethyl alcohol is obtained in a fermentation process consisting in the oxygen-free decomposition of sugars by yeasts and their enzymes. In general, the alcohol fermentation equation has the following form:

C6H12O6 → 2 C2H5OH + 2 CO2 (3)

Heat produced during reaction (3) accelerates the process. As a result, many by-products such as acetic acid, higher alcohols, esters and glycerine are obtained. Their content and amount determine the taste qualities of the alcohol; however, this is of no importance for fuel applications because their share is very low. The raw materials used for fermentation may be split into three groups:
- containing sugar: molasses, sugar cane, fruits, juices,
- containing starch: potatoes, rye, barley, wheat, maize,
- containing cellulose: wood, straw, maize cobs, plant waste, garbage.
The first two groups of raw materials are used for the production of 1st generation bioethanol and edible ethanol; the third group is used in the production of 2nd generation bioethanol. In the fermentation process for 2nd generation ethanol, glucose obtained from the enzymatic hydrolysis of cellulose contained in plant products is used. These products are composed of about 60% cellulose, which, given the large amount of biomass available worldwide, indicates that a significant supply of cellulose-based ethanol may be expected in the future. For this reason, 2nd generation production technologies are being intensively developed in countries such as Sweden, Norway, Finland, the USA, Brazil and Canada. At present, the production cost of cellulosic ethanol is more than twice that of traditionally produced ethanol; however, the development of 2nd generation bioethanol should improve this relation in the near future.
At present, the USA and Brazil, the leaders in alcohol production, deliver almost 65% of the ethanol produced worldwide, Fig. 4. European countries deliver about 15% of the world's ethanol, but this share has increased significantly in recent years [7]. [Fig. 4 caption fragment: b) use of ethanol as a fuel for engines [9,12,13]] In Brazil and the USA a prevailing part of the produced ethanol is intended for combustion as fuel (95% in Brazil and 60% in the USA). For comparison, in European countries this part amounts to about 5%, and in Poland to only 3%, Fig. 4b.
Production of bioethanol in Poland does not satisfy the demands of the National Index Targets, and its share in the whole quantity of fuels used is rather low, Fig. 5. Despite investments made in the biofuel production industry within the last ten years, the rise in bioethanol production is rather low, and its level varies from year to year, Fig. 5a. It seems that without a radical change of regulations in this area and decisive state intervention, Poland will gradually lag behind other EU countries, where great emphasis is placed on the production of bioethanol. [Fig. 5 caption fragments: a) (acc. [7]); b) energy share of engine fuels used in Poland in 2010 [14]]
Dual-fuel feeding of spark-ignition engines with the use of alcohol
In contemporary ZI engines, multi-point injection of light fuels is used (indirect injection into the inlet collector or direct injection into the combustion chamber). This injection system was taken into account when selecting a test engine and preparing it for the dual-fuel feeding mode. The tests were carried out on a Fiat 1100 MPI four-cylinder spark-ignition engine with a multi-point fuel injection system. In order to adapt the engine to the dual-fuel feeding mode, a prototype inlet collector with an additional injector for each cylinder was applied, Fig. 6a. Methanol was injected close to the inlet valve through the original injectors of the engine. Petrol, during engine operation in dual-fuel mode, was injected by the additional injectors placed at some distance from the inlet valve. This injection arrangement was aimed at improving methanol evaporation, especially during engine operation under low load. The applied feeding system made it possible to operate with petrol only (during starting and warming up the engine), with methanol only during operation under maximum load, and in dual-fuel mode at an arbitrarily selected share of alcohol. The prototype feeding system was subjected to extensive comparative and optimization tests. Under all operating conditions the engine ran correctly and did not show any disturbances resulting from vibrations or in the indicator-monitored combustion process (see Tab. 1 and Fig. 6b). Comparison of the overall efficiency of the engine, shown in Fig. 7, indicates that the efficiency of the engine fed with methanol only was high within the whole range of changes in load and rotational speed. Differences in efficiency grow with rising engine load, and in the range of medium and maximum loads the absolute differences amount to 3-5%, which corresponds to a relative rise in efficiency of 10 to 16%, decisive for the operational consumption of energy. It is worth mentioning that these results were obtained without any optimization of the ignition advance angle.
It seems that one of the causes of the engine efficiency growth may be the greater combustion velocity of methanol, which leads to lower heat loss per cycle. Another cause is the higher evaporation heat of methanol, which lowers the charge temperature during compression and at the beginning of combustion; this reduces mechanical losses during the compression phase and in consequence raises efficiency. An additional reason may be the greater molar expansion coefficient of methanol in comparison to petrol (1.061 for methanol and 1.045 for petrol), meaning that combustion of a stoichiometric methanol mixture produces a greater number of moles of exhaust gas, which additionally raises cylinder pressure and leads to a rise in torque. Further improvement of engine performance could be obtained by raising the compression ratio and optimizing engine control. For a small addition of methanol (20% share) and low engine load, the overall efficiency of the dual-fuel engine was lower than with traditional feeding, Fig. 8. It seems that injecting petrol through the additional injector placed at some distance from the inlet valve affected these results: conditions for producing the petrol-air mixture were worsened (lower temperature, influence of the fuel film formed on the inlet channel walls). This may affect the steadiness of operation of particular cylinders and increase fuel consumption and the emission of noxious exhaust components. In effect, the favourable influence of methanol combustion, proportional to its dose share, did not compensate the loss in efficiency resulting from worsened conditions for petrol evaporation and mixing with air. Efficiency under low load and at a low share of alcohol could probably be improved by applying a primary mixer of alcohol with petrol and injecting the mixture through the original injectors, or by implementing a double injector of the kind used in dual-fuel self-ignition engines.
At greater methanol shares, the efficiency of the engine was improved over almost the whole range of changes in its operational parameters, as shown in Fig. 8b. This suggests that even at an unchanged compression ratio, the operational consumption of energy with dual-fuel feeding will be lower than with petrol-only feeding.
Combustion of methanol, both as the only fuel and in a mixture with petrol, affects the content of toxic components in the exhaust gas of a ZI engine. The influence depends on the alcohol share and the engine load. At low methanol shares and low engine loads, a greater CO content was observed in comparison with feeding with petrol only, Fig. 9a. It was probably caused by the decreased temperature due to the presence of methanol and by worsened fuel evaporation resulting from the distance of the petrol injector from the inlet valve. At higher engine loads, however, the CO content, both with methanol-only feeding and in dual-fuel mode, was distinctly lower in comparison to petrol.
Combustion of methanol favourably affects the emission of total hydrocarbons (THC), Fig. 9b. In dual-fuel mode a distinct tendency towards lowering THC emission may be observed, and the reduction grows with the increasing share of methanol in the combusted mixture. With methanol-only feeding, THC content was about 2-2.5 times lower than with petrol. The influence of a methanol addition on NOx content in the exhaust gas is not unambiguous. With methanol-only feeding, a lowering of NOx content was observed over the entire working range of the engine. With dual-fuel feeding, however, changes in NOx content depended on engine load, Fig. 9c. In the range of low and medium loads a lowering of NOx content was observed, but at higher loads a rise of NOx content (by about 10-15%) in relation to feeding with petrol. Monitoring of the air excess coefficient revealed that with dual-fuel feeding the engine was charged with a somewhat leaner mixture in relation to feeding with petrol or methanol only. This could cause the increase in NOx content at higher engine loads.
Dual-fuel feeding of self-ignition engines with the use of alcohol
Alcohols have a high self-ignition temperature and a low cetane number, which makes it impossible to control their self-ignition under engine operating conditions. For this reason, combustible alcohol-air mixtures always require an external ignition source. In self-ignition engines the only possible feeding mode is a dual-fuel system, in which a dose of diesel oil serves as a very good ignition source. In this case, alcohol may serve as the basic energy source or as a small addition improving diesel oil combustion. Alcohol may be delivered to self-ignition engines in three ways:
- in the form of alcohol vapour mixed with the intake air (by evaporating alcohol in an evaporator using heat from the cooling system, or in the form of an aerosol of alcohol drops dispersed in the air flux),
- by injecting liquid alcohol into the inlet collector,
- by injecting alcohol directly into the combustion chamber during the final combustion phase.
Indirect delivery of alcohols to the inlet collector, due to the high evaporation heat of alcohols (2.5-3.3 times higher than that of diesel oil and petrol), lowers the temperature of the intake charge, which may unfavourably affect the ignition delay. With direct injection, on the other hand, the high evaporation heat of alcohols may, at high alcohol shares, be used to lower the maximum temperature of the medium during combustion. For this reason this feeding mode is the most advisable, though it requires an additional injector or dual-fuel injectors, which makes the engine head construction somewhat more complex. Research projects on the application of alcohols to dual-fuel ZS engines have been carried out by many centres worldwide, including the Technical University in Radom, Poland (see the publications of prof. Luft [1,2] on engine feeding with evaporated alcohol, and those of prof. Kowalewicz [3,4,5] on methanol injection into the inlet collector), as well as an external branch of the Technical University of Łódź, Poland (see this author's publication [6], where a mixed feeding system for engines was discussed).
Selected results of the tests on an SW 680 self-ignition, naturally aspirated, six-cylinder engine fed with methanol and diesel oil are presented below; technical data of the engine are given in the accompanying table. Even a small addition of methanol favourably affects the combustion of diesel oil, which delivers the main part of the energy supplied to the engine. In effect, the overall efficiency of the engine under full load increases by about 3-6% relative to a traditionally fed engine, Fig. 10. The increases in overall efficiency at various shares of methanol are similar within the whole range of rotational speeds, though at a higher methanol share the efficiency increases are lower in the range of low rotational speeds. It should be stressed that no optimization of the injection advance angle was conducted for the tested engine: the angle was constant and equal to 27° of crankshaft rotation before top dead centre, independent of the rotational speed and load of the engine.
It seems that this increase in overall efficiency is associated with the higher combustion rate of methanol compared to that of diesel oil. It results in an increase of the temperature of the medium in the reaction zones and simultaneously in a greater number of diesel oil ignition spots. The latter factor is especially important at maximum engine loads when, with the full largest dose of diesel oil, the initial combustion processes play a significant role; the greatest changes in overall efficiency of the engine were then observed, Fig. 11. In the presented tests, engine load was changed by lowering the diesel oil dose. At constant rotational speed, the selected control system caused the methanol share in the total amount of energy to increase as engine load decreased. In the range of low engine loads this could indeed lower the charge temperature and decrease the overall efficiency of the dual-fuel engine relative to the traditionally fed engine. This is especially visible at the lower rotational speed of 1200 rpm (Fig. 11). At higher rotational speeds the influence of methanol is smaller because of the higher thermal load of the engine, and the overall efficiencies are similar for both feeding modes in the range of low loads. It should be mentioned that at the lowest loads the share of methanol was significant, equal to 44-50% depending on rotational speed, in spite of which the methanol mixture was lean (air excess coefficient λm > 3.8-4.2).
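As an illustration of the air excess coefficient quoted above, λ for the methanol stream is the actual air-fuel ratio divided by the stoichiometric one; the stoichiometric AFR of methanol (~6.47 kg air per kg fuel) is a standard literature value assumed here, and the flow numbers are placeholders.

```python
# Sketch of the air-excess-coefficient computation for the methanol stream.
AFR_STOICH_METHANOL = 6.47   # kg air per kg methanol (standard literature value)

def air_excess(m_air, m_methanol, afr_stoich=AFR_STOICH_METHANOL):
    """lambda = actual air-fuel ratio / stoichiometric air-fuel ratio."""
    return (m_air / m_methanol) / afr_stoich

# Illustrative mass flows in kg/s; lambda ~ 3.9 indicates a very lean mixture
print(air_excess(m_air=0.10, m_methanol=0.004))
```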
It seems that in the combustion of lean methanol mixtures at low engine loads, the interaction of the liquid fuel jet plays a crucial role. This observation is supported by the results of the tests presented in Luft's publication [1], where the engine was fed with methanol vapour delivered from a special evaporator placed outside the inlet system (and thus free from the charge-cooling effect of fuel evaporation).
In more advanced, turbo-charged engines, changes in load are usually accompanied by a lowering of the charging rate, which may also significantly influence the methanol combustion process. It seems that the positive effect of methanol combustion could also be exploited at lower engine loads by applying appropriate control of the charging rate. Additionally, the methanol share could, if necessary, be varied along with engine load by applying methanol injection. [Fig. 12 caption fragment: engine fed respectively with diesel oil and with two fuels, diesel oil and methanol [6]] The addition of methanol to naturally aspirated ZS engines results in a significant lowering of exhaust gas smokiness and CO content at maximum engine loads, Fig. 12. However, it may lead to a higher content of NOx and of not fully burned hydrocarbons.
The lowering of smokiness is associated with the lowering of the diesel oil dose and with the acceleration of its combustion by the burning methanol vapour. The process depends on the methanol share in the whole energy dose delivered per cycle. For a lower share of methanol, the lowering of smokiness is almost twofold over the whole range of rotational speeds. For a higher share of methanol, the smokiness lowering was 2.6-3.0-fold, which may be used to lower the emission of solid particles from engines installed in buses operating in towns with heavy road traffic. It is worth mentioning that such a positive effect may already be reached with a small addition of methanol (or ethanol), while the cost of adapting an engine to dual-fuel feeding is rather low in this case.
The increased content of hydrocarbons in the exhaust gas in mixed feeding mode may be associated with the escape of part of the charge during the valve overlap period, which may be more intensive in charged engines. For this reason, alcohol injection, preferably directly into the cylinders, should be applied in ZS engines. Research on this problem was carried out by prof. Kowalewicz of the Technical University of Radom [3-5]. It is also worth mentioning that methanol vapour is very toxic, which additionally makes injection systems preferable.
The increased content of NOx in the exhaust gas at full engine load (Fig. 12c) is connected with the increased combustion rate of the charge in the dual-fuel, methanol-fed engine. The greatest differences in NOx content occur in the range of lower rotational speeds (an increase by 30%) and decrease as the speed increases; in the range of 2000-2200 rpm the content is even lower than with traditional feeding.
Conclusions
On the basis of the performed tests and analyses, the following general conclusions may be offered:
- Dual-fuel feeding of self-ignition and spark-ignition engines with alcohols is an interesting alternative which makes it possible to increase the share of biofuels in the total consumption of engine fuels.
- In self-ignition engines, alcohol may constitute the basic fuel or serve as an addition improving the combustion of diesel oil. Adapting an engine to an addition of alcohol does not require extensive changes in engine construction and may be introduced both in older engines and in contemporary ones. In both cases, injection of liquid alcohol into the inlet collector or directly into the cylinders is preferable.
- Dual-fuel feeding of spark-ignition engines makes it possible to use an arbitrary share of alcohol (within the range of 0-100%) depending on the load and thermal state of the engine. The compression ratio may be raised in this way by 1.5-2.5 units relative to the basic engine. Moreover, at high shares of alcohol, dual-fuel feeding does not require anhydrous alcohols, as is the case for mixtures with petrol; as a result, the production cost of biofuels may be lower.
- The addition of methanol favourably affects the combustion of both petrol and diesel oil and improves the operational parameters and overall efficiency of ZI and ZS engines. It is also possible to reach an increase in maximum effective power and torque at comparable loads (results of these tests are not presented in this paper). The rise in overall efficiency of the ZI engine, without any correction of the compression ratio, was 3-5% in absolute terms at high shares of methanol, equivalent to a relative increase of 10-16%. In the ZS engine, the overall efficiency rise at maximum loads was in the range of 3-6% in absolute terms, equivalent to a relative increase of 8-17%. Such high relative increases in overall efficiency may contribute to a much lower consumption of energy by engines in operation.
- The addition of methanol leads to distinct changes in the content of exhaust gas components emitted by dual-fuel engines. The following tendencies may be distinguished. For ZI engines: a lowering of CO content at higher loads, a significant (2-2.5-fold) lowering of THC content, and a lowering of NOx content at high shares of methanol as well as its rise in some engine operating ranges. For ZS engines: a significant (2-3-fold) lowering of exhaust gas smokiness, a lowering of CO content, and a rise of THC and NOx contents in some engine operating ranges.
It seems that certain unfavourable consequences of methanol combustion in ZI and ZS engines may be mitigated by optimizing control parameters of the engines.
"Engineering"
] |
The factorization problem in Jackiw-Teitelboim gravity
In this note we study the 1 + 1 dimensional Jackiw-Teitelboim gravity in Lorentzian signature, explicitly constructing the gauge-invariant classical phase space and the quantum Hilbert space and Hamiltonian. We also semiclassically compute the Hartle-Hawking wave function in two different bases of this Hilbert space. We then use these results to illustrate the gravitational version of the factorization problem of AdS/CFT: the Hilbert space of the two-boundary system tensor-factorizes on the CFT side, which appears to be in tension with the existence of gauge constraints in the bulk. In this model the tension is acute: we argue that JT gravity is a sensible quantum theory, based on a well-defined Lorentzian bulk path integral, which has no CFT dual. In bulk language, it has wormholes but it does not have black hole microstates. It does however give some hint as to what could be added to rectify these issues, and we give an example of how this works using the SYK model. Finally we suggest that similar comments should apply to pure Einstein gravity in 2 + 1 dimensions, which we’d then conclude also cannot have a CFT dual, consistent with the results of Maloney and Witten.
Introduction
Most discussions of bulk physics in AdS/CFT focus on perturbative fields about a fixed background [1][2][3][4][5][6][7][8]. This has led to much progress in understanding the correspondence, see [9] for a recent review, but sooner or later we will need to confront the fact that the bulk theory is gravitational; in generic states gravitational backreaction cannot be treated as an afterthought. In particular the strong redshift effects near black hole horizons make physics from the point of view of the outside observer unusually sensitive to gravitational effects there [10][11][12][13][14]. Gravitational backreaction also provides the mechanism by which the holographic encoding of the higher-dimensional bulk into the lower-dimensional boundary theory breaks down if we try to preserve bulk locality beyond what is allowed by holographic entropy bounds [9,[15][16][17].
One especially confusing aspect of gravitational physics is that time translations are gauge transformations: much of the interesting dynamics is tied up in the gauge constraints. For example consider figure 1. It is sometimes said that in the context of the two-sided AdS-Schwarzschild geometry, we can see the interior of the wormhole by evolving both boundary times forward [18][19][20][21][22]. But in fact as we move from the left to the central diagram in figure 1, we see that we can evolve the boundary times as far to the future as we like without ever having the bulk time slice go behind the horizon. It is not until we move the interior part of the slice up that we start to directly see physics behind the horizon, but this is precisely the part of the evolution which is generated by the Hamiltonian constraint of general relativity. How are we to distinguish the slices in the center and right diagrams, when from the CFT point of view they describe precisely the same quantum state? A related point is that the entire formation and evaporation of a small black hole in AdS is spacelike to some boundary time slice, and thus must be describable purely via the Hamiltonian constraints. In other words, there is a spatial slice that intersects the collapsing matter prior to the formation of an event horizon and another spatial slice that intersects the Hawking radiation after the complete evaporation of the resulting black hole, both of which asymptote to the same time slice of the boundary. Such a description of the Hawking process would be complementary to the more standard one in which temporal diffeomorphisms are imagined to be gauge fixed, directly tying together the bulk and boundary time evolutions.
Another interesting question related to the black hole interior and gauge constraints is the following. Say that we believe that a black hole which evolves for a long enough time develops a firewall [23,24]. Where precisely does it form? A naive answer would be at the event horizon, but this is unlikely to actually be correct. The event horizon is a teleological notion, which for example can be modified by putting our evaporating black hole inside of a huge shell of collapsing matter which will not collapse until long after our black hole evaporates. It seems doubtful, to say the least, that we could remove a firewall by so silly a trick as this. One might also suggest that firewalls form at "the" apparent horizon, but actually apparent horizons are highly non-unique since they depend on a choice of Cauchy slice [25]. The recently studied "holographic screens" [26] are also too non-unique to do the job. If indeed there are firewalls, there should be a gauge-invariant prescription for where (and also how) they form. 1 There is clearly at least some approximate sense of "where" the edge of a black hole currently is: for example the Event Horizon Telescope will soon image the disc of Sagittarius A*, and the LIGO team already simulates black hole merger events using code which excises some kind of black hole region. It would be interesting to understand the generality of the underlying assumptions in such calculations, and whether or not a formal definition could be given which applies in sufficiently generic situations to be relevant for the firewall arguments.
of a CFT dual, since such constraints might not be consistent with the tensor product structure of the boundary field theory when studied on a disconnected spacetime. As a simple example, consider 1 + 1 dimensional Maxwell theory on a line interval times time, with action
$$S = -\frac{1}{4g^2}\int_M d^2x\,\sqrt{-g}\,F_{\mu\nu}F^{\mu\nu}.$$
The equations of motion for this theory tell us that the electric field is constant throughout spacetime, but its value cannot be the only dynamical variable, since phase space is always even-dimensional. To find the other dynamical variable we need to be more careful about the boundary conditions: the variation of the Maxwell action on any spacetime M has a boundary term
$$\delta S \supset -\frac{1}{g^2}\int_{\partial M}dx\,\sqrt{|\gamma|}\;r_\mu F^{\mu\nu}\,\delta A_\nu,$$
where r_µ is the (outward-pointing) normal form. To formulate a good variational problem, we need to impose boundary conditions such that this term vanishes for variations within the space of configurations obeying the boundary conditions. There are various options for these boundary conditions; the natural choice for AdS/CFT (the "standard quantization") is to fix the pullback of A_µ to the boundary,
$$\delta A_t\big|_{\partial M} = 0.$$
These boundary conditions are not preserved under general gauge transformations: we must at least require that any gauge transformation Λ(x) approaches a constant on each connected component of ∂M. In fact the most natural choice is to require that these constants are all zero (modulo 2π) for the gauge transformations which we actually quotient by: transformations where they are nonzero are then viewed as asymptotic symmetries which act nontrivially on phase space. For our example on R × I, it is always possible to go to A_0 = 0 gauge by a gauge transformation that vanishes at the endpoints of the interval. The equation of motion then requires that
$$A_x(t,x) = E\,t + a,$$
where a is a constant which could be removed by an "illegal" gauge transformation Λ = −ax. Since we are not allowed to do such gauge transformations, a is physical: in fact it is nothing but the Wilson line from x = 0 to x = L at t = 0 (up to a factor of L). After quantization, this system just becomes the quantum mechanics of a particle on a circle (here we are assuming the gauge group is U(1), not R), and in particular it has no tensor product decomposition into degrees of freedom to the left and right of x = L/2. Of course pure Maxwell theory is not expected to have a gravity dual anyway, so the non-factorization of this system may at first appear uninteresting. But in fact it has far-reaching consequences: the Einstein-Maxwell theory on the two-sided AdS-Schwarzschild geometry in any spacetime dimension has a zero mode sector which is equivalent to this theory, and which tells us, among other things, that in gravitational theories with CFT duals, one-sided states must exist with all gauge charges allowed by charge quantization [27,29]. Nonetheless it would be nice to concretely realize the factorization problem in a gravitational model which at least somewhat plausibly might have been hoped to have a CFT dual. Our main goal in this paper is to do precisely this.
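To make the particle-on-a-circle reduction above fully explicit, here is a minimal sketch (the g and L factors follow from the Maxwell action quoted above, but the normalizations should be treated as illustrative assumptions rather than this paper's conventions). The surviving degrees of freedom are the constant field E and the Wilson line
$$w \equiv \int_0^L dx\,A_x = a\,L, \qquad w \sim w + 2\pi, \qquad \Pi_w = \frac{E}{g^2},$$
where the identification of w comes from the winding gauge transformations we do quotient by. Compactness of w then forces Π_w ∈ ℤ, so E = g² n and
$$H = \int_0^L dx\,\frac{E^2}{2g^2} = \frac{g^2 L\,n^2}{2}, \qquad n\in\mathbb{Z}:$$
a particle on a circle, whose integer charge sectors are global labels of the whole interval and admit no split into independent left and right factors.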
The theory we will study is the 1 + 1 dimensional Jackiw-Teitelboim theory of dilaton gravity [30][31][32], with bulk Lagrangian density
$$\mathcal{L} = \frac{\sqrt{-g}}{2}\,\Big[\Phi_0\,R + \Phi\,(R+2)\Big]. \tag{1.5}$$
The first two sections of our paper will simply repeat the analysis sketched above for Maxwell theory in this model, which we will see similarly does not have a factorized Hilbert space. This lack of factorization implies that the theory cannot have a CFT dual; nevertheless it is a self-consistent quantum mechanical system, albeit one with a continuous spectrum. There is nothing in the gravitational analysis that requires a breakdown of the JT description. However, we will comment on what might be added to the theory so that it could have a CFT dual; then the JT Lagrangian would be a low energy approximation and the canonical gravity analysis would eventually exit its regime of validity. There has been considerable recent interest in this model; see [36] for a nice review and further references. Our approach however is rather different in method and emphasis from this literature:

• We work primarily in Lorentzian signature, focusing on identifying the physical on-shell degrees of freedom.
• The "Schwarzian" Lagrangian will make no appearance in our analysis. Indeed the Schwarzian theory is not sensible by itself in Lorentzian signature; from our point of view it is an artifact of a particular way of evaluating the Euclidean path integral.
• We will make almost no mention of the group SL(2, R), which acts on the JT theory neither as a global symmetry nor as a natural subgroup of the gauged diffeomorphism group.
We nonetheless include a section where we explain how our analysis fits into the Lorentzian version of the SYK model, and where we explain how to understand our results in the Schwarzian language. We hope that our analysis of the JT theory with "more conventional" techniques will be useful even to SYK-oriented readers.
Finally we discuss some of the possible implications of our work for higher-dimensional gravity. In particular, we will argue that there is a quite close analogy between JT gravity in 1 + 1 dimensions and pure Einstein gravity in 2 + 1 dimensions: both seem to have precise path integral descriptions in the bulk, both have wormhole solutions, both have a two-sided Hilbert space which does not factorize, neither has black hole microstates counted by the Bekenstein-Hawking formula, and neither has a CFT dual. In both cases the answers to these questions become more standard once matter is added, something we leave for future work.
2 Classical Jackiw-Teitelboim gravity
The Jackiw-Teitelboim action on a 1 + 1 dimensional asymptotically-AdS spacetime M is given by
$$S = \frac{\Phi_0}{2}\left[\int_M d^2x\,\sqrt{-g}\,R + 2\int_{\partial M}dx\,\sqrt{|\gamma|}\,K\right] + \frac{1}{2}\left[\int_M d^2x\,\sqrt{-g}\,\Phi\,(R+2) + 2\int_{\partial M}dx\,\sqrt{|\gamma|}\,\Phi\,(K-1)\right]. \tag{2.1}$$
Here Φ_0 is a large positive constant, which in a situation where we obtained this theory by dimensional reduction would correspond to the volume in higher-dimensional Planck units of the compact directions [32,49]. From a two-dimensional point of view Φ_0 is just the coefficient of the topological Einstein-Hilbert part of the action. Φ is a dynamical scalar field we will call the dilaton. K is the trace of the extrinsic curvature of the boundary, defined as
$$K = \gamma^{\mu\nu}\nabla_\mu r_\nu,$$
with γ_{µν} the induced metric on the boundary and r_µ the outward-pointing normal form there.[4] The boundary term not involving K is a holographic renormalization, which ensures that the action and Hamiltonian are finite on configurations obeying the boundary conditions we will soon discuss. The variation of this action is[5]
$$\delta S = \frac{1}{2}\int_M d^2x\,\sqrt{-g}\left[(R+2)\,\delta\Phi + \left(\nabla_\mu\nabla_\nu\Phi - g_{\mu\nu}\nabla^2\Phi + g_{\mu\nu}\Phi\right)\delta g^{\mu\nu}\right] + \int_{\partial M}dx\,\sqrt{|\gamma|}\left[(K-1)\,\delta\Phi + \frac{1}{2}\left(r^\mu\partial_\mu\Phi - \Phi\right)\gamma^{\alpha\beta}\,\delta\gamma_{\alpha\beta}\right]. \tag{2.4}$$

[4] This definition holds regardless of the signature of the boundary. The induced metric is related to the ordinary one by γ_{µν} ≡ g_{µν} ∓ r_µ r_ν, where r_µ is spacelike/timelike.

[5] For spacetimes with additional boundaries which are not asymptotically-AdS, such as the time slice Σ we will use in computing the Hartle-Hawking wave function below, this equation remains correct except that the terms −∫_{∂M} dx √|γ| (2δΦ + Φ γ^{αβ}δγ_{αβ}) appear only on the asymptotically-AdS parts of the boundary, since it is only there that we include the holographic renormalization counterterm −2∫ dx √|γ| Φ.
After some simplification, the equations of motion are therefore
$$R + 2 = 0, \qquad \left(\nabla_\mu\nabla_\nu - g_{\mu\nu}\nabla^2 + g_{\mu\nu}\right)\Phi = 0. \tag{2.5}$$
As in the electromagnetic case, we need to choose boundary conditions such that the boundary terms in (2.4) vanish for any variation in the space of configurations obeying these boundary conditions. The obvious choice is to fix the induced metric γ_{µν} and dilaton Φ at the AdS boundary, which we can do by imposing
$$\gamma_{tt}\big|_{\partial M} = -r_c^2, \qquad \Phi\big|_{\partial M} = \phi_b\,r_c, \tag{2.6}$$
and then taking r_c → ∞ with φ_b fixed and positive. φ_b is analogous to the AdS radius in Planck units in higher dimensions; it will be large in the semiclassical limit. These boundary conditions are only preserved by the subset of infinitesimal diffeomorphisms ξ^µ which approach an isometry of the boundary metric, which means that the pullback of ξ^µ to each component of ∂M must be a time translation. As in electromagnetism, we will only actually quotient by diffeomorphisms where these time translations are trivial, with the motivation again being to preserve boundary locality (note also that otherwise we would be left with a boundary theory with no states of nonzero energy). We thus expect boundary time translations to be asymptotic symmetries which act nontrivially on phase space: indeed they will be generated by the ADM Hamiltonians on the respective boundaries.
To understand these Hamiltonians more concretely, we can define a "CFT metric" at each boundary,
$$h_{\mu\nu} \equiv \lim_{r_c\to\infty}\frac{1}{r_c^2}\,\gamma_{\mu\nu},$$
in terms of which we can define a boundary stress tensor [50]
$$T^{\mu\nu} \equiv \frac{2}{\sqrt{-h}}\,\frac{\delta S}{\delta h_{\mu\nu}}. \tag{2.9}$$
From (2.4) we then apparently have
$$T_{tt} = \lim_{r_c\to\infty} r_c\left(\Phi - r^\mu\partial_\mu\Phi\right), \tag{2.10}$$
which is finite as r_c → ∞ on configurations obeying the boundary conditions (2.6).
2.1 Solutions
There are various ways to describe the set of solutions of the equations of motion (2.5). One nice way is to observe that the first equation requires the metric to have constant negative curvature, which means that it is described by a piece of AdS_2. AdS_2 can be obtained via an embedding into 1 + 2 dimensional Minkowski space with metric
$$ds^2 = -dT_1^2 - dT_2^2 + dX^2. \tag{2.11}$$
AdS_2 is the universal cover of the induced geometry on the surface
$$T_1^2 + T_2^2 - X^2 = 1 \tag{2.12}$$
in this Minkowski space. The two AdS boundaries are at X → ±∞.
We may then ask what the set of possible solutions for Φ looks like. The answer is that for any solution of (2.5), the slices of constant Φ are given by the intersections of the embedding surface (2.12) with a family of hyperplanes
$$A\,T_1 + B\,T_2 + C\,X = \text{const}, \tag{2.13}$$
where A, B, C are three fixed real parameters which label the solution: we can think of them as parametrizing the normal vector n^µ = (−A, −B, C) to the hypersurfaces. The solutions where n^µ is spacelike or null will never obey our boundary conditions (2.6), since Φ will be negative almost everywhere on one of the AdS boundaries at X → ±∞. When n^µ is timelike, we can set B = C = 0 by an SO(1, 2) rotation in the embedding space, so we can restrict to solutions of the form
$$\Phi = \Phi_h\,T_1, \tag{2.14}$$
where we have relabelled A to Φ_h for a reason which will be apparent momentarily. We can present these solutions more concretely by choosing coordinates in terms of which we have
$$ds^2 = \frac{-d\tau^2 + d\sigma^2}{\sin^2\sigma}, \qquad \Phi = \Phi_h\,\frac{\cos\tau}{\sin\sigma}, \qquad \sigma\in(0,\pi). \tag{2.16}$$
We illustrate this solution in figure 2. Its maximal extension involves infinitely many boundary regions, some with Φ = +∞ and some with Φ = −∞. As is normal with Reissner-Nordstrom-type solutions, we expect that small matter fluctuations (once matter is included) will cause the "inner horizons", where Φ = −Φ_h, to become singular, collapsing
the geometry down to just the wormhole region shaded green in figure 2. In pure Jackiw-Teitelboim gravity there is no matter which can do this, but the dynamical problem with two asymptotically-AdS boundaries obeying (2.6) is still only well-defined in the green region, since additional boundary data would be needed to extend the solution out of this region. Since we are primarily interested in constructing a theory which is a good model for gravity in higher dimensions, where the inner horizon is indeed always singular, we find it simplest to just truncate the spacetime at the inner horizon.

Figure 2. If we assume that the inner horizon is singular, this solution describes a wormhole connecting two asymptotically-AdS boundaries.

In addition to these "global" coordinates, we can also go to "Schwarzschild" coordinates, in terms of which we have
$$ds^2 = -(r^2 - r_s^2)\,dt^2 + \frac{dr^2}{r^2 - r_s^2}, \qquad \Phi = \phi_b\,r. \tag{2.18}$$
For r > r_s and −∞ < t < ∞, these coordinates cover the "right exterior" piece of the green shaded region in figure 2, which as usual lies between the right asymptotic boundary and the right part of the Φ = Φ_h bifurcate outer horizon. The parameters of these solutions are related via
$$r_s = \frac{\Phi_h}{\phi_b}. \tag{2.19}$$
These coordinates have the nice feature that slices of constant r are also slices of constant Φ, so in particular the cutoff surface in the boundary conditions (2.6) just lies at r = r_c, and moreover t becomes the boundary time. From (2.18) it is easy to evaluate the boundary stress tensor (2.10) on each boundary for these solutions; one finds
$$H_L = H_R = \frac{\Phi_h^2}{2\phi_b}, \tag{2.20}$$
so the full canonical Hamiltonian evaluates to
$$H \equiv H_L + H_R = \frac{\Phi_h^2}{\phi_b}. \tag{2.21}$$
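As a consistency check on these reconstructed equations, a short symbolic computation (a sketch in sympy conventions, not code from this paper) confirms that the solution (2.18) satisfies both equations in (2.5):

```python
import sympy as sp

t, r, r_s, phi_b = sp.symbols('t r r_s phi_b', positive=True)
f = r**2 - r_s**2
g = sp.diag(-f, 1/f)      # metric for ds^2 = -f dt^2 + dr^2/f in (t, r)
ginv = g.inv()
x = [t, r]

# Christoffel symbols Gamma[a][b][c] = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
           + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))/2
           for d in range(2))) for c in range(2)] for b in range(2)]
         for a in range(2)]

def ricci(a, b):
    expr = 0
    for c in range(2):
        expr += sp.diff(Gamma[c][a][b], x[c]) - sp.diff(Gamma[c][a][c], x[b])
        for d in range(2):
            expr += Gamma[c][c][d]*Gamma[d][a][b] - Gamma[c][b][d]*Gamma[d][a][c]
    return sp.simplify(expr)

R = sp.simplify(sum(ginv[a, b]*ricci(a, b) for a in range(2) for b in range(2)))
print(R)   # -> -2, so R + 2 = 0

Phi = phi_b*r
DD = sp.zeros(2, 2)        # DD[a, b] = nabla_a nabla_b Phi
for a in range(2):
    for b in range(2):
        DD[a, b] = sp.diff(Phi, x[a], x[b]) \
                   - sum(Gamma[c][a][b]*sp.diff(Phi, x[c]) for c in range(2))
box = sp.simplify(sum(ginv[a, b]*DD[a, b] for a in range(2) for b in range(2)))
print(sp.simplify(DD - (box - Phi)*g))   # -> zero matrix: dilaton equation holds
```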
2.2 Phase space and symplectic form
In the previous subsection we described a one-parameter family of solutions of the JT theory, labeled by the value of the dilaton on the bifurcate horizon, Φ_h. This parameter is analogous to the electric field in our 1 + 1 Maxwell example: it is locally measurable. As in the Maxwell example however, Φ_h cannot be the only parameter on the space of solutions: phase space must be even-dimensional. The other parameter, analogous to a in the Maxwell example, arises because in going to the coordinates (2.16), we in fact did an illegal gauge transformation. The easiest way to restore any solution parameters we removed this way is to act with another illegal gauge transformation, of the class which approaches an asymptotic symmetry at infinity: the parameters of this gauge transformation (modulo legal gauge transformations) will become the gravitational analogue of a in the electromagnetic example. In the present discussion, the only asymptotic symmetries are time translations on the left and right asymptotic boundaries. So at first it might seem that we have discovered two new parameters: our phase space still seems odd-dimensional! But in fact equation (2.20) tells us that H_L = H_R on all solutions, so the operator H_L − H_R generates no evolution on phase space. Thus we have only one new parameter, which we will call δ, which tells us how long we evolved the solution (2.16) by H_R + H_L. More explicitly, the relationship between the global time τ and the "left" and "right" Schwarzschild times t_L, t_R at the AdS boundaries is
$$\tan\tau = \sinh\left(r_s\,t_{L,R}\right), \tag{2.22}$$
so a slice which is attached to the left and right boundaries at t_L and t_R respectively has
$$\delta = t_L + t_R. \tag{2.23}$$
We illustrate this in figure 3. There is another somewhat more operational way of describing δ, shown in figure 4. The idea is to start at the point on the left boundary where our time slice is attached, fire a geodesic into the bulk which is orthogonal to surfaces of constant Φ, and then see at what time t̃_R this geodesic arrives at the right boundary. We then have
$$\delta = t_R - \tilde{t}_R,$$
where t_R is the time where our time slice intersects the right boundary. Thus we can think of δ as measuring the "relative time shift" between the two boundaries: from now on we will refer to it as the "time shift operator". From this point of view, the time shift operator is quite similar to the one-sided "hydrodynamic modes" discussed in [34,35]. We have thus arrived at the following two-dimensional Hamiltonian system: phase space coordinates (Φ_h, δ), with Hamiltonian H = Φ_h²/φ_b. The ranges of these phase space coordinates are Φ_h > 0, −∞ < δ < ∞. For any Hamiltonian system the symplectic form ω_{ab} is defined by
$$\omega_{ab}\,\dot{x}^b = \partial_a H,$$
which is more elegantly written by changing coordinates from Φ_h to H, giving us
$$\omega = dH\wedge d\delta.$$
Thus δ is simply the canonical conjugate of H. Before moving on to the quantum theory, it is convenient to here introduce another pair of coordinates on this phase space. Roughly speaking these are the geodesic distance between the two endpoints of a time slice and its canonical conjugate, but since that distance is infinite in the r_c → ∞ limit we need to be a bit more careful. We will define a "renormalized geodesic distance", L, via
$$L \equiv \lim_{r_c\to\infty}\Big[\,\ell_{\rm geodesic} - 2\log\left(2\phi_b\,r_c\right)\Big]. \tag{2.29}$$
Using the symmetry generated by H_R − H_L we can always choose t_L = t_R, so then from (2.16) we have
$$e^{L/2} = \frac{\cosh\!\left(\frac{r_s\,\delta}{2}\right)}{\phi_b\,r_s}, \tag{2.31}$$
with τ determined in terms of δ via (2.22) and (2.23). This shows that L is indeed a well-defined function on our two-dimensional phase space.
A calculation then shows that if we define P to be the momentum canonically conjugate to L, the Hamiltonian takes the form
$$H = \frac{P^2}{\phi_b} + \frac{e^{-L}}{\phi_b}. \tag{2.35}$$
In terms of the renormalized geodesic length, JT gravity becomes just the mechanics of a non-relativistic particle (of mass φ_b/2) moving in an exponential potential! This is a scattering problem, with waves that come in from L = ∞ and reflect off of the potential, and indeed that is what happens in the solutions (2.16).
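For illustration, here is the classical bounce integrated numerically (a sketch in the normalization of (2.35) as reconstructed here; any positive coefficients give the same qualitative reflection):

```python
import numpy as np
from scipy.integrate import solve_ivp

phi_b = 1.0

def hamilton(t, y):
    L, P = y
    return [2*P/phi_b, np.exp(-L)/phi_b]   # dL/dt = dH/dP, dP/dt = -dH/dL

sol = solve_ivp(hamilton, [0.0, 40.0], [10.0, -1.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
for t in np.linspace(0.0, 40.0, 9):
    L, P = sol.sol(t)
    print(f"t = {t:5.1f}   L = {L:8.3f}   P = {P:7.3f}")
# L comes in from large values, turns around near L_min = -log(phi_b*H),
# and heads back out to L = +infinity: pure reflection, no transmission.
```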
3 Quantum Jackiw-Teitelboim gravity

We now discuss the quantization of JT gravity, starting with the Hilbert space formalism.
3.1 Hilbert space and energy eigenstates
The most straightforward proposal for the Hilbert space of the quantum JT theory is that it is spanned by a set of delta-function normalized states |E⟩, with E > 0, such that
$$\langle E|E'\rangle = \delta(E - E'). \tag{3.1}$$
We then can define the time shift operator as
$$\delta = i\,\frac{d}{dE} \tag{3.2}$$
in the energy representation. Requiring that δ is hermitian on this Hilbert space then tells us that we must restrict to wave functions ψ(E) which vanish at E = 0. The reader may (rightly) be uncomfortable with this however: there is an old argument due to Pauli that there can be no self-adjoint "time operator" which is canonically conjugate to the Hamiltonian in a quantum mechanical system whose energy is bounded from below. The argument is trivial: if δ were self-adjoint, then we could exponentiate it to obtain the set of operators e^{iaδ}, which we could use to lower the energy as much as we like, contradicting the lower bound on the energy (see [51] for a more rigorous version of this argument). Therefore our δ, though hermitian, must not be self-adjoint. In fact this problem is visible already in the classical system: the vector flow on phase space generated by δ hits the boundary at H = 0 in finite time.
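Spelled out, the exponentiated argument is one line (a standard computation, in the conventions [δ, H] = i used above):
$$e^{-ia\delta}\,H\,e^{ia\delta} \;=\; H - ia\,[\delta, H] \;=\; H + a,$$
since all higher commutators vanish; so if δ were self-adjoint, $e^{ia\delta}|E\rangle$ would be a normalizable state of energy E + a for any real a, and taking a < −E contradicts H ≥ 0. The failure of self-adjointness on the half-line E > 0 is exactly what evades this contradiction.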
These subtleties may be avoided if we instead use the renormalized geodesic distance operator L. Classically this generates a good flow on phase space, so it should correspond to a self-adjoint operator and thus have a basis of (delta-function normalized) eigenstates. As usual in single-particle quantum mechanics, we can construct the Hilbert space out of L²-normalizable functions of L. The energy eigenstates have wave functions which can be determined from the Schrödinger equation
$$-\psi_E''(L) + e^{-L}\,\psi_E(L) = \phi_b\,E\,\psi_E(L). \tag{3.3}$$
The normalizable solutions of this equation with E > 0 are constructed using modified Bessel functions; in the usual scattering normalization we have
$$\psi_E(L) = K_{2ik}\!\left(2\,e^{-L/2}\right), \qquad k \equiv \sqrt{\phi_b\,E}. \tag{3.4}$$
These wave functions decay doubly exponentially at large negative L, while at large positive L we have
$$\psi_E(L) \propto e^{-ikL} + R(E)\,e^{ikL}, \tag{3.5}$$
with reflection coefficient
$$R(E) = \frac{\Gamma(2ik)}{\Gamma(-2ik)}. \tag{3.6}$$
The Gamma function identity Γ(z)* = Γ(z*) tells us that this reflection coefficient is a pure phase, as is necessary since there is no transmission. These expressions can be thought of as providing an exact solution of quantum JT gravity with two asymptotic boundaries.
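To see this behavior concretely, one can evaluate the wave functions numerically (a sketch using the normalization reconstructed in (3.4) above; mpmath supports Bessel K of imaginary order):

```python
import mpmath as mp

phi_b, E = 1.0, 0.5
k = mp.sqrt(phi_b * E)

def psi(L):
    return mp.besselk(2j * k, 2 * mp.exp(-L / 2))

for L in (-8, -4, 0, 4, 8, 12):
    print(f"L = {L:3d}   psi = {mp.nstr(psi(L), 6)}")
# |psi| collapses like exp(-2*exp(-L/2)) for L << 0 (under the potential
# wall), and oscillates with bounded amplitude for L >> 0 (reflected wave).

# The reflection coefficient (3.6) is a pure phase:
R = mp.gamma(2j * k) / mp.gamma(-2j * k)
print(abs(R))   # -> 1.0
```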
3.2 Euclidean path integral
It may seem that given the scattering wave functions (3.4), no more needs to be said about quantum JT gravity.
To compare with what we do in higher dimensions, however, it is useful to consider how standard Euclidean gravity methods are related to our exact solution. We begin this discussion by reviewing the Euclidean path integral for JT gravity with one asymptotic boundary, on which the time coordinate t_E has periodicity β. Namely we sum over geometries with the topology of the disk, with induced metric at the boundary
$$ds^2\big|_{\partial M} = r_c^2\,dt_E^2, \qquad t_E \sim t_E + \beta. \tag{3.7}$$
We then again take the dilaton to obey
$$\Phi\big|_{\partial M} = \phi_b\,r_c \tag{3.8}$$
and take r_c → ∞ to get asymptotically-AdS boundary conditions. The Euclidean action is
$$S_E = -\frac{\Phi_0}{2}\left[\int_M\sqrt{g}\,R + 2\int_{\partial M}\sqrt{\gamma}\,K\right] - \frac{1}{2}\left[\int_M\sqrt{g}\,\Phi\,(R+2) + 2\int_{\partial M}\sqrt{\gamma}\,\Phi\,(K-1)\right]. \tag{3.9}$$
The saddle point for this path integral is the Euclidean Schwarzschild solution
$$ds^2 = (r^2 - r_s^2)\,dt_E^2 + \frac{dr^2}{r^2 - r_s^2}, \qquad \Phi = \phi_b\,r, \tag{3.10}$$
where smoothness at r = r_s requires
$$r_s = \frac{2\pi}{\beta}. \tag{3.11}$$
The extrinsic curvature at the boundary is
$$K = \frac{r_c}{\sqrt{r_c^2 - r_s^2}} = 1 + \frac{r_s^2}{2 r_c^2} + O\!\left(r_c^{-4}\right), \tag{3.12}$$
so evaluating (3.9) on this solution gives
$$Z[\beta] \approx e^{2\pi\Phi_0 + \frac{2\pi^2\phi_b}{\beta}}. \tag{3.13}$$
If we interpret this as a thermal partition function, then we can use standard thermodynamic formulas to find the energy and entropy:
$$E = \frac{2\pi^2\phi_b}{\beta^2}, \qquad S = 2\pi\Phi_0 + \frac{4\pi^2\phi_b}{\beta}. \tag{3.14}$$

Figure 5. The Hartle-Hawking state: we sum over geometries and dilaton configurations with an AdS boundary of length r_c β/2 and a "bulk" boundary Σ, which we interpret as a time-slice of the two-boundary system. The boundary conditions on Σ depend on which basis we wish to compute the wave function in.
We discuss in section 4 below to what extent these can actually be interpreted as thermal energy and entropy, but we note now that one can rewrite the semiclassical result in the suggestive manner
$$S = 2\pi\left(\Phi_0 + \Phi_h\right), \tag{3.15}$$
with Φ_h = φ_b r_s the value of the dilaton at the Euclidean horizon r = r_s, so that the one-sided density of states suggested by this interpretation is
$$\rho_L(E) \equiv e^{S(E)}. \tag{3.16}$$
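For reference, the standard thermodynamic formulas used here, applied to the semiclassical (3.13) (normalizations again as reconstructed above rather than verbatim), are
$$E = -\partial_\beta \log Z = \frac{2\pi^2\phi_b}{\beta^2}, \qquad S = \left(1 - \beta\,\partial_\beta\right)\log Z = 2\pi\Phi_0 + \frac{4\pi^2\phi_b}{\beta},$$
and one can check that dE = T dS holds identically with T = 1/β, while Φ_h = 2πφ_b/β turns the entropy into the "horizon value" form (3.15).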
3.3 Hartle-Hawking state
Whether or not the Euclidean path integral defines a thermal partition function, we can always use it to define a natural family of states in the Hilbert space of the two-boundary system which we constructed in section 3.1 above: these states are labelled by a real parameter β, and are collectively called Hartle-Hawking states [52,53]. They can be interpreted as describing a wormhole connecting the two asymptotic boundaries, where from either side an observer sees a black hole in equilibrium at inverse temperature β.
The basic idea of the Hartle-Hawking state is illustrated in figure 5. We can compute the wave function of the Hartle-Hawking state in various bases; the traditional choice is to fix the induced metric on the bulk slice Σ, together with any matter fields, which computes the wave function in the Wheeler-DeWitt representation. In this section we will semiclassically compute the wave function of the Hartle-Hawking state in the two bases of the two-boundary Hilbert space, labelled by E and L, which we discussed in subsection 3.1.
The L basis calculation is conceptually simpler but technically harder, so we begin with the E basis calculation. From (2.21) we know that the energy is a simple function of the value of the dilaton at the bifurcate horizon, Φ_h. So we need to pick boundary conditions on the bulk slice Σ which ensure (i) that it passes through the bifurcate horizon and (ii) that the dilaton is equal to Φ_h there. To achieve these we will require that
$$n^\mu\partial_\mu\Phi\big|_\Sigma = 0, \qquad \Phi\big|_{\sigma = \pi/2} = \Phi_h, \tag{3.17}$$
where n^µ is the normal vector to Σ. In the second equation we have chosen "global" coordinates on the slice, which is not really necessary, but it is perhaps useful to be concrete. These boundary conditions are consistent with the action variation (2.4), since both boundary terms vanish (remember that the Φγ^{αβ}δγ_{αβ} and −δΦ terms are not present since this is not an AdS boundary). More concisely, we want to integrate over geometries with a piece of AdS boundary of length r_c β/2 and a piece of bulk boundary Σ with vanishing normal derivative of Φ and Φ = Φ_h at its minimum on Σ. In the end we may then substitute Φ_h = √(φ_b E) to get the wave function in terms of E. We emphasize that here Φ_h and β are not related via
$$\Phi_h = \frac{2\pi\phi_b}{\beta}; \tag{3.18}$$
β labels which Hartle-Hawking state we are considering, and Φ_h is the argument in its wave function. We expect however that (3.18) should hold at the peak of the wave function. The saddle point for this calculation is shown in figure 6: it is a "sliver" of the Euclidean Schwarzschild solution (3.10) with t_E ∈ (0, β/2) and r_s = Φ_h/φ_b. The kink in the boundary at r = r_s does not violate the boundary conditions since it happens at the minimum of Φ: the derivative of Φ vanishes in any direction there. To proceed further we need to evaluate the Euclidean action of this solution, but this is complicated by the fact that the solution has corners, which require additional terms in the action not present in (3.9).
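As a warm-up for the corner-corrected Gauss-Bonnet theorem recalled just below, consider the simplest example (a textbook fact, independent of this paper): a flat unit square has R = 0 in the bulk, K = 0 on its straight edges, and four corners with interior angle θ_i = π/2, so
$$\int_M \sqrt{g}\,R \;+\; 2\int_{\partial M} K \;+\; 2\sum_{i=1}^4\left(\pi - \frac{\pi}{2}\right) \;=\; 0 + 0 + 4\pi \;=\; 4\pi\chi, \qquad \chi = 1,$$
so the corner terms alone carry the entire Euler character.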
We begin our discussion of corner terms by recalling the Gauss-Bonnet theorem in the presence of corners:
$$\int_M d^2x\,\sqrt{g}\,R \;+\; 2\int_{\partial M}ds\,K \;+\; 2\sum_i\left(\pi - \theta_i\right) \;=\; 4\pi\chi, \qquad \chi = 2 - 2g - b. \tag{3.19}$$
Here χ is the Euler character, g is the genus, b is the number of boundaries, and θ_i are the interior opening angles of any corners (θ_i = π means no corner). These corner contributions can be derived by smoothing out the corner and then taking a limit where the extrinsic curvature K picks up a δ-function contribution. This suggests that we should upgrade our Euclidean JT action (3.9) with corner terms
$$S_E \;\to\; S_E \;-\; \sum_i\big(\Phi_0 + \Phi(c_i)\big)\,\big(\pi - \theta_i\big), \tag{3.20}$$
where c_i is the location of the i-th corner,
and indeed this is the correct prescription for corners in the action for a path integral which is describing an overlap of two states. In using the path integral to compute a wave function however, there are additional corners (such as those in figure 5) which arise from cutting an "overlap" type path integral: for these the corner prescription involves π/2 − θ_i instead of π − θ_i, since we need the corners to cancel when we glue two states together to compute an overlap (see section 3.2 of [54] for some more discussion of this). If we denote by C_1 the set of corners of the first type and C_2 the set of corners of the second type, and also the AdS piece of the boundary by B, then the full Euclidean action for computing wave functions is
$$S_E^{\rm wf} \;=\; S_E \;-\; \sum_{i\in C_1}\big(\Phi_0 + \Phi(c_i)\big)\,(\pi - \theta_i) \;-\; \sum_{j\in C_2}\big(\Phi_0 + \Phi(c_j)\big)\left(\frac{\pi}{2} - \theta_j\right), \tag{3.21}$$
where S_E now contains only the bulk and AdS-boundary (B) terms of (3.9). Returning now to our saddles for the energy-basis wave function, the saddle in figure 6 has corners of both types, but fortunately the corners of type C_2 both have θ = π/2, so they don't contribute. The kink at the horizon is a corner of type C_1, since it would not contribute if its internal angle θ were π; but in fact
$$\theta = \frac{\beta\,\Phi_h}{2\phi_b},$$
so we do have a contribution. Away from this kink the bulk slice Σ is a geodesic, so K = 0 there. Evaluating (3.21) on our saddle point using (3.12), and remembering that t_E is integrated from 0 to β/2, we thus find
$$\psi_\beta(\Phi_h) \approx \exp\left[\pi\Phi_0 + \pi\Phi_h - \frac{\beta\,\Phi_h^2}{4\phi_b}\right], \tag{3.25}$$
whose peak is indeed at (3.18). We now proceed to the L-basis calculation. We now want Σ to be a geodesic of renormalized length L, so we define Σ by requiring that it be a geodesic anchored to the two boundary endpoints, parametrized by proper length s,
with the range of s being equal to L + 2 log(2φ_b r_c). These boundary conditions are again consistent with the variation (2.4), since now K = 0 and δγ_{µν} = 0 (remember again that the Φγ^{αβ}δγ_{αβ} and −δΦ terms are not present since Σ is not an AdS boundary). The saddle points for this calculation are a bit more involved: we want a piece of the Euclidean Schwarzschild geometry (3.10) whose boundary has a piece which is asymptotically AdS, with length r_c β/2, and a piece which is a geodesic through the bulk, of renormalized length L. We illustrate this in figure 7. There is a two-parameter family of geodesics in this geometry; parametrizing by proper length, one finds a family labeled by J and t_{E,0}, where J tells us how close our geodesic approaches the center of the disk and t_{E,0} tells us at what value of t_E this closest approach happens. We can set t_{E,0} = 0 by convention, so to construct a solution we need to give J and r_s as functions of L and β such that our geodesic indeed has renormalized length L and the AdS component of the boundary indeed has length r_c β/2. After some algebra, we find that r_s is obtained by solving the equation
$$\sin x = a\,x, \tag{3.28}$$
where
$$x \equiv \frac{\beta\,r_s}{4}, \qquad a \equiv \frac{4\phi_b}{\beta}\,e^{L/2}. \tag{3.29}$$
Note that a is positive, and that we must have x ∈ (0, π) since the length r_c β/2 of the AdS component of the boundary must be less than or equal to the full boundary length 2πr_c/r_s. A unique solution exists provided that a ≤ 1, or in other words that
$$\beta \geq 4\phi_b\,e^{L/2}, \tag{3.31}$$
with r_s = 0 when this inequality is saturated, and no solution exists for a > 1. When a = 2/π, or in other words
$$\beta = 2\pi\phi_b\,e^{L/2}, \tag{3.32}$$
we find that r_s = 2π/β, which corresponds to cutting the Euclidean solution (3.10) in two. This should be the peak of the wave function. As L → −∞ we have r_s → 4π/β. If we fix β and decrease L, r_s increases monotonically. Finally, to evaluate the action we again use (3.21). There are now no corners of type C_1, but we will see that the two corners of type C_2 now make a nontrivial contribution. This is not obvious, since we expect that as r_c → ∞ we have θ → π/2 at each corner for any β and L; but since Φ(x_j) = φ_b r_c we are potentially sensitive to a subleading term in θ which is O(1/r_c). Indeed a short calculation shows that θ_j − π/2 is O(1/r_c) with a nonvanishing coefficient, so there will be a nontrivial corner contribution. The rest of the action is easy to evaluate: we again have K = 0 on Σ and R = −2 in the bulk, so we need only compute the corner terms, the Φ_0 terms, and the terms at the AdS boundary. The result is an explicit function of x and β, with x determined as a function of L and β by solving (3.28) and using (3.29). This wave function has a unique maximum at x = π/2, which from (3.29) happens when r_s = 2π/β, as expected. This peak will dominate the integrated square of the wave function, which again is consistent with the saddle point evaluation (3.13) of Z[β]. Near this peak the wave function is approximately Gaussian in L, with width that shrinks as φ_b grows, which is consistent with the idea that large φ_b is the semiclassical limit.
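The saddle-point condition is easy to solve numerically (a sketch; the identifications x = βr_s/4 and a = (4φ_b/β)e^{L/2} are our reading of the garbled text, cross-checked against the limiting statements above):

```python
import numpy as np
from scipy.optimize import brentq

def r_s(L, beta, phi_b=1.0):
    a = 4 * phi_b * np.exp(L / 2) / beta
    if a > 1:
        raise ValueError("no saddle: need beta >= 4*phi_b*exp(L/2)")
    if a == 1:
        return 0.0
    # unique root of sin(x) - a*x in (0, pi)
    x = brentq(lambda x: np.sin(x) - a * x, 1e-12, np.pi - 1e-12)
    return 4 * x / beta

beta = 10.0
for L in (-8.0, -4.0, 0.0, np.log((beta / (2 * np.pi))**2)):
    print(f"L = {L:7.3f}   r_s = {r_s(L, beta):.6f}")
# r_s increases monotonically as L decreases at fixed beta, approaching
# 4*pi/beta as L -> -infinity; the last entry sits at beta = 2*pi*phi_b*
# exp(L/2), where r_s = 2*pi/beta, the smooth "cut-in-two" saddle.
```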
Thus we see that the Hartle-Hawking states fit nicely into the Hilbert space of Jackiw-Teitelboim gravity, with (reasonably) simple semiclassical wave functions in the E and L bases. It would be interesting to extend these calculations to one loop; in fact the normalization of the Hartle-Hawking state is one-loop exact [55], so the wave function itself might be as well.
4 Factorization and the range of the time shift
We now return to the interpretation of the single-boundary Euclidean path integral Z[β], whose semiclassical value in Jackiw-Teitelboim gravity is given by (3.13). So far the only Hilbert space interpretation we have given it is as the norm of the unnormalized Hartle-Hawking state in the two-boundary Hilbert space, as produced by the Euclidean path integral without any rescaling. In AdS/CFT however there is another interpretation for this path integral: following [56], we can interpret the unnormalized Hartle-Hawking state as corresponding to the unnormalized thermofield double state
$$|\mathrm{TFD}_\beta\rangle \equiv \sum_i e^{-\beta E_i/2}\,|i^*\rangle_L\,|i\rangle_R \tag{4.1}$$
in the tensor product Hilbert space of two copies of the boundary CFT.[10] The one-sided path integral is then the norm of this state, but this is nothing but the one-sided thermal partition function. Is this interpretation valid in Jackiw-Teitelboim gravity?
The answer to this last question is no. The reason is that the Hilbert space of two-boundary Jackiw-Teitelboim gravity, which is just a single-particle quantum mechanics, does not tensor-factorize into a product of one-boundary Hilbert spaces. Although the Hartle-Hawking state exists, there is no analogue of equation (4.1). Instead we have equation (3.25), which we can write in a manner more similar to (4.1) by labeling states by the one-sided energy E_L, which by (2.20) is half of the two-sided energy used in (3.25), to get
$$|\mathrm{HH}_\beta\rangle \approx \int_0^\infty dE_L\; e^{S(E_L)/2}\,e^{-\beta E_L/2}\;|E_L, E_L\rangle. \tag{4.2}$$
This is not a state in a tensor-product Hilbert space: indeed there are no states at all where E_L ≠ E_R, since with no matter in the pure JT theory, all energy is sourced by the bifurcate horizon. We therefore conclude that there can be no boundary theory dual to pure quantum Jackiw-Teitelboim gravity. Were one to exist, there would have been such a factorization.[11]

[10] Here |i⟩_R are energy eigenstates of the "right" CFT and |i*⟩_L are their conjugates under a two-sided version of CPT which exchanges the two sides; see [57] for more explanation of this.

[11] Readers who have casually followed the recent SYK developments may be puzzled, since naively one might have gotten the impression that the two-boundary JT theory should be dual to "two copies of the Schwarzian theory". This is wrong, basically because a single Schwarzian theory in Lorentzian signature does not make sense. We give the precise statements in the following section.
There is another interesting illustration of the non-factorization of the JT gravity Hilbert space. In any tensor product Hilbert space for which the Hamiltonian is a sum of the form
$$H = H_L\otimes 1 + 1\otimes H_R, \tag{4.5}$$
we have the partition function identity
$$Z_{tot}[\beta] = Z_L[\beta]\;Z_R[\beta]. \tag{4.6}$$
Both sides of this identity are computable in JT gravity, so we can test if it is true. Assuming factorization, Z_L and Z_R would both be given by the function Z[β] we computed in (3.13). Z_tot[β] we can then attempt to compute as the thermal trace in our two-sided Hilbert space. There is however an immediate problem: since the spectrum of H is continuous, the trace in Z_tot is not well-defined. Let's illustrate this in a simpler example: the quantum mechanics of a free non-relativistic particle of mass m moving on a circle of radius R. Momentum is quantized as
$$p_n = \frac{n}{R}, \qquad n\in\mathbb{Z},$$
so we have a density of states
$$\rho(E) = R\,\sqrt{\frac{2m}{E}}.$$
The thermal partition function is therefore
$$Z = \sum_n e^{-\frac{\beta n^2}{2mR^2}} \approx R\,\sqrt{\frac{2\pi m}{\beta}}.$$
The key point is that the density of states, and therefore the partition function, are divergent in the limit that R → ∞.
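The divergence is easy to exhibit numerically (m = 1, β = 1 for concreteness):

```python
import numpy as np

def Z(R, beta=1.0, m=1.0, nmax=200000):
    n = np.arange(-nmax, nmax + 1)
    return np.exp(-beta * n**2 / (2 * m * R**2)).sum()

for R in (1.0, 10.0, 100.0):
    print(R, Z(R), R * np.sqrt(2 * np.pi))   # vs R*sqrt(2*pi*m/beta)
# Z grows linearly with R, matching the saddle estimate already at modest R;
# the noncompact limit R -> infinity therefore has a divergent thermal trace.
```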
In our Jackiw-Teitelboim quantum mechanics with Hamiltonian (2.35), the dynamical coordinate L is similarly noncompact, leading to a continuous spectrum with a divergent density of states, so the left-hand side of our putative equation (4.6) is divergent while the right-hand side is finite. This is another illustration of the non-factorization of the two-boundary Jackiw-Teitelboim Hilbert space. It may seem that we deserved the nonsense we got in attempting to test equation (4.6) in JT gravity, since after all that equation was derived assuming factorization and we already know that the JT Hilbert space doesn't factorize. But in fact we can use this equation to do something more interesting: we can ask how the theory would need to be modified such that (4.6) would indeed hold. In other words, what would the two-sided density of states ρ_tot(E) need to be such that we indeed had
$$Z_{tot}[\beta] \equiv \int_0^\infty dE\;\rho_{tot}(E)\,e^{-\beta E} = Z[\beta]^2\,? \tag{4.7}$$
In the semiclassical approximation, this can happen only if
$$\rho_{tot}(E) \approx \rho_L(E/2)^2 = e^{2S(E/2)} \tag{4.8}$$
up to subexponential prefactors, where ρ_L was defined in (3.16) and S is the entropy (3.14). In light of (4.8), this equation has a natural interpretation: in a factorized theory obeying (4.6), our renormalized geodesic observable L cannot really be larger than some length of order e^{2S(E_L)}. From (2.31) we then also learn that we cannot evolve the Hartle-Hawking state with our total Hamiltonian H = H_L + H_R for times which are longer than of order e^{2S(E_L)} without the Jackiw-Teitelboim description breaking down. At least then, if not sooner, there must be "new physics" in any theory which factorizes.[12] The idea that exponentially long time evolution is in tension with the semiclassical description of the Hartle-Hawking state was also discussed in [59].
In this last argument it may seem that we have gotten "something for nothing", since we learned what the range of the time-shift operator δ must be in a factorized theory using only the Jackiw-Teitelboim path integral. This is indeed miraculous, but in fact it is the same old miracle by which the Euclidean path integral evaluation of Z[β] is able to correctly count black hole microstates using only the low energy bulk effective action. This is possible only because that Euclidean path integral is not a trace over the Hilbert space of the bulk effective theory with one asymptotic boundary: in fact in JT gravity no such Hilbert space exists. Given only the bulk theory, the only Hilbert space interpretation we can give to Z[β] is as the norm of the unnormalized Hartle-Hawking state: what we have learned here is that factorization is the key assumption which allows us to re-interpret it as a thermal partition function.
5 Embedding in SYK
How might we attain a factorizable version of Jackiw-Teitelboim gravity, in which Z[β] is indeed a partition function? For the 1 + 1 Maxwell theory discussed in the introduction, the answer is simple: we need to introduce new matter fields which possess the fundamental unit of U(1) gauge charge. This modifies Gauss's law such that the electric flux on the left boundary is no longer required to be equal to the electric flux on the right boundary, and the Wilson line which stretches from one boundary to the other can be split by a pair of these dynamical charges [27].[13] For gravity we might therefore expect that achieving factorizability is as simple as introducing matter fields, and in some sense this is true.

[12] The timescale e^{2S(E_L)} is quite natural from the point of view of the proposal that exponentially complex operations should disrupt the structure of spacetime [19][20][21][58]: it is the time it takes for the time evolution operator e^{−iH_L t} to reach maximal circuit complexity, and is also the time it takes the thermofield double state to reach maximal state complexity.

[13] Strictly speaking we also need to introduce a UV cutoff, since no continuum quantum field theory on a connected space has a factorized Hilbert space. In quantum field theory the question of factorizability is best understood in terms of whether or not the theory obeys the "split property"; see [29] for more discussion of this.
In more than two spacetime dimensions, where the global AdS vacuum exists, adding matter would enable us to form one-sided black holes from the collapse of this matter. The bulk Lagrangian will then no longer define a UV-complete theory: the full bulk theory will need to be able to count the microstates of those black holes in Lorentzian signature. In two dimensions the space must end somewhere, but we may still add collapsing matter on top of a smaller two-sided black hole to produce a larger one. Then it will again be the case that the full bulk theory must be able to account for the exponentially large number of additional microstates. What we really need then is to find a holographic boundary description, where we understand the theory as a large-N quantum mechanics living on the disconnected space R ⊔ R.
So far the only known explicit examples of this are based on the Sachdev-Ye-Kitaev model [37][38][39][40][41][42][45]. These examples unfortunately have a large number of light matter fields, which cause bulk locality to break down at the AdS scale, but they do also have a Jackiw-Teitelboim sector which decouples from all of that at low temperatures. In Euclidean signature this was shown in [41][42][43][44][45]; in this section we sketch the (fairly trivial) modifications which are needed to give the analogous argument in Lorentzian signature.
The SYK model is a collection of N Majorana fermions χ^a, interacting with Hamiltonian
$$H = \sum_{a<b<c<d} J_{abcd}\;\chi^a\chi^b\chi^c\chi^d, \tag{5.1}$$
where the antisymmetric tensor J_{abcd} represents disorder drawn at random from the Gaussian ensemble
$$\overline{J_{abcd}^{\,2}} = \frac{3!\,J^2}{N^3}. \tag{5.2}$$
The Lagrangian of the SYK model is
$$L = \frac{i}{2}\sum_a \chi^a\,\dot\chi^a - H. \tag{5.3}$$
We are interested in the Lorentzian path integral for two copies of this model, so our dynamical variables will be 2N Majorana fermions χ^a_i, where a runs as before from 1 to N, while i is equal to either L or R and tells us which copy we are talking about. We will take the disorder J_{abcd} to be the same for each copy, since the "real" model corresponds to a single instantiation of the disorder and we want the same Hamiltonian on both sides.
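Although nothing below requires numerics, it may help some readers to see the model concretely; here is a minimal exact-diagonalization sketch of (5.1)-(5.2) (our conventions, with {χ_a, χ_b} = δ_{ab}; the actual analysis of course works at large N):

```python
import numpy as np
from itertools import combinations
from functools import reduce
from math import factorial

def majoranas(N):
    """N Majorana fermions on N/2 qubits via Jordan-Wigner, chi^2 = 1/2."""
    I2 = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Y = np.array([[0., -1j], [1j, 0.]])
    Z = np.array([[1., 0.], [0., -1.]])
    nq = N // 2
    chis = []
    for q in range(nq):
        for P in (X, Y):
            ops = [Z] * q + [P] + [I2] * (nq - q - 1)
            chis.append(reduce(np.kron, ops) / np.sqrt(2))
    return chis

def syk_hamiltonian(N, J=1.0, seed=0):
    rng = np.random.default_rng(seed)
    chi = majoranas(N)
    sigma = np.sqrt(factorial(3) * J**2 / N**3)   # variance from (5.2)
    dim = 2 ** (N // 2)
    H = np.zeros((dim, dim), dtype=complex)
    for a, b, c, d in combinations(range(N), 4):
        H += rng.normal(0.0, sigma) * (chi[a] @ chi[b] @ chi[c] @ chi[d])
    return H

H = syk_hamiltonian(8)                 # N = 8 -> a 16x16 Hermitian matrix
print(np.linalg.eigvalsh(H)[:4])       # lowest few energies of one draw
```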
The large-N solution of this model begins with an assumption, justified by numerics, that we can view the disorder as "annealed" rather than "quenched". This means that we can integrate over it directly in the path integral rather than waiting until we compute observables to average over it. We are then interested in evaluating the Lorentzian path integral over both copies with the disorder included. At large N this integral can be done by a version of the Hubbard-Stratonovich transformation [61,62]. We first integrate out the disorder, to arrive at a path integral whose action is quartic in the fermion bilinears. We then "integrate in" bilocal auxiliary fields Σ_{ij}(t, t′) and G_{ij}(t, t′) such that on the equations of motion
$$G_{ij}(t,t') = \frac{1}{N}\sum_a \chi^a_i(t)\,\chi^a_j(t'). \tag{5.6}$$
Finally we can then integrate out the fermions, which are now Gaussian, arriving at a path integral over G and Σ alone, with the bilocal effective action S(G, Σ) given by
$$\frac{S(G,\Sigma)}{N} = -\frac{i}{2}\,\log\det\left(\delta_{ij}\,\partial_t - \Sigma_{ij}\right) + \frac{i}{2}\int dt\,dt'\sum_{ij}\left[\Sigma_{ij}(t,t')\,G_{ij}(t,t') - \frac{J^2}{4}\,G_{ij}(t,t')^4\right]. \tag{5.8}$$
Here the determinant is defined for matrices with both ij and tt′ indices. The equations of motion are
$$\Sigma_{ij}(t,t') = J^2\,G_{ij}(t,t')^3, \qquad G = \left(\partial_t - \Sigma\right)^{-1}, \tag{5.9}$$
where we have used matrix notation in the second equation. Now the key observation is that at low energies compared to J, we can ignore the time derivative in equation (5.9), in which case these equations of motion become invariant under the reparametrization symmetry diff(R) × diff(R):
$$G_{ij}(t_1,t_2) \to \left[f_i'(t_1)\,f_j'(t_2)\right]^{1/4}G_{ij}\big(f_i(t_1), f_j(t_2)\big), \qquad \Sigma_{ij}(t_1,t_2) \to \left[f_i'(t_1)\,f_j'(t_2)\right]^{3/4}\Sigma_{ij}\big(f_i(t_1), f_j(t_2)\big). \tag{5.10}$$
In these transformations the primed and unprimed times are related by t′_i = f_i(t_i). Most of this symmetry however will be spontaneously broken by any particular saddle point solution G^c_{ij}, Σ^c_{ij}. In Lorentzian signature we are interested in excitations about the zero-temperature thermofield double state: this state will be nontrivial since at large N the SYK model has a large vacuum degeneracy. The equations of motion (5.9) can be solved without too much difficulty by going to momentum space; the result is that the matrix G_{ij}(t, t′) is nothing but the boundary two-point function of a Majorana fermion in global AdS_2, with metric (2.16). The ij indices tell us which boundary each of the two fermions is on. This two-point function is invariant only under the PSL(2, R) subgroup of diff(R) × diff(R) which is inherited from Lorentz transformations in the embedding space (2.11) (more precisely its connected universal cover, acting on the embedding-space isometry group SO(1, 2), since time evolution goes from −∞ to ∞). Thus at low energies we expect a set of zero modes taking values in
$$\mathrm{diff}(\mathbb{R})\times\mathrm{diff}(\mathbb{R})\,/\,\mathrm{PSL}(2,\mathbb{R}). \tag{5.12}$$
These zero modes, let's call them φ_n, will be lifted by finite-J effects, so they will acquire an effective action; the lowest-order-in-derivatives action with this symmetry is two copies of the Schwarzian action,
$$S = -C\int dt\,\Big[\{f_L(t),t\} + \{f_R(t),t\}\Big], \qquad C \propto \frac{N}{J}, \tag{5.14}$$
where f_i are our two diffeomorphisms of R and we then quotient by PSL(2, R) to get the action for the φ_n. The classical solutions of the equations of motion obtained by varying this action are a pair of diffeomorphisms f_i(t) induced by distinct boundary PSL(2, R) transformations, identified modulo the joint PSL(2, R) induced by isometries of global AdS_2. We then can simply note that in [45] precisely this theory, two copies of the Schwarzian theory with a mixed PSL(2, R) gauged, was derived from the Lorentzian JT theory with two asymptotic boundaries (see also [43,44]). This completes the derivation of the Lorentzian JT theory from the SYK model. There are two important observations about this derivation:

(1) We see that JT gravity is not equal to "two copies of the Schwarzian theory", at least not in the naive sense of having a tensor product of two sensible Lorentzian theories. There is a tensor product in a larger unphysical Hilbert space obtained by quantizing pairs of diffeomorphisms, but we must quotient by the subgroup PSL(2, R)[16] which
mixes the two, so the physical Hilbert space does not factorize. Doing this quotient separately for each diffeomorphism would have led to an empty theory.

[16] The basic idea is that if we solve the metric equation but not yet the Φ equation, then the functions f_i(t) keep track of where the two boundaries where Φ = φ_b r_c are located in AdS_2. We should quotient by the embedding-space isometry group PSL(2, R), which acts nontrivially on both f_L(t) and f_R(t). The Schwarzian actions arise from the boundary terms in the action (2.1).
(2) This theory is embedded into a larger Hilbert space which does tensor-factorize, that of two copies of the SYK model (with a fixed instantiation of the disorder). In describing the low energy sector however, we found ourselves needing to use "left-right" degrees of freedom which, in the original SYK variables, have the form
$$G_{LR}(t,t') = \frac{1}{N}\sum_{a=1}^N \chi^a_L(t)\,\chi^a_R(t'). \tag{5.15}$$
This last equation is quite interesting from the point of view of [27]: it is a gravitational version of the procedure of splitting a Wilson line with a pair of dynamical charges. Note in particular that although the bulk fermions created by χ^a_i are not present in the JT gravity, we still need them to express the JT degrees of freedom within the SYK description. This was one of the main lessons of [27]: in the presence of bulk gauge fields, mapping low-energy bulk operators into the boundary theory can require heavy bulk degrees of freedom which do not otherwise appear in the low-energy effective action.[17]

[17] The form of (5.15) is quite similar to equation 5.14 in [27] for the emergent Wilson link in the CP^{N−1} nonlinear σ model. In both cases we have a large-N average over bilinears of microscopic charges. That model thus seems to be quite a good model of emergent gravity in this particular case.
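For reference, the Schwarzian derivative appearing in (5.14) is the standard one (textbook material, not specific to this paper):
$$\{f, t\} \;\equiv\; \frac{f'''(t)}{f'(t)} - \frac{3}{2}\left(\frac{f''(t)}{f'(t)}\right)^{2},$$
which vanishes exactly on Möbius maps f(t) = (at + b)/(ct + d). This is why the PSL(2, R) quotient in (5.12) removes flat directions of the effective action rather than physical modes, and why gauging only the diagonal PSL(2, R) leaves the relative "left-right" degrees of freedom dynamical.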
6 Conclusion
One important lesson of Jackiw-Teitelboim gravity is that bulk quantum gravity can make sense with a local Lagrangian. Indeed we have nonperturbatively constructed the Hilbert space and dynamics of the two-boundary Jackiw-Teitelboim gravity, and we have shown that many calculations are feasible within this simple setting. There are many more calculations which we did not attempt, two which we expect would be quite interesting are extending our calculation of the Hartle-Hawking wave function to one loop (and perhaps beyond), and repeating our analysis for the supersymmetric version of Jackiw-Teitelboim gravity.
We believe that the basic reason why the JT Lagrangian leads to a well-defined bulk theory of quantum gravity is precisely that the Hilbert space it constructs doesn't factorize: even though it has wormhole solutions, it does not have black hole microstates. We have seen that the usual computations of black hole thermodynamics can all be given "non-thermodynamic interpretations" within this theory, with in particular the Euclidean one-boundary path integral being interpreted as the normalization of the unnormalized Hartle-Hawking state rather than as a thermal partition function.
One important issue which we have not explored in detail is the role of topologically nontrivial configurations in the Euclidean path integral of the Jackiw-Teitelboim theory. Off-shell field configurations certainly exist where the spacetime evolves from a spatial line interval to a line interval plus any number of circles, and if there are more than two asymptotic boundaries then additional "rewiring" configurations are possible, which change which pairs of asymptotic boundaries are connected. We illustrate two such configurations in figure 8. Such topology-changing configurations are not present in the Lorentzian path integral, at least not if we define it to include sums only over globally-hyperbolic (in the AdS sense) geometries, and there are also usually no real Euclidean solutions with these topologies. Moreover the SYK model does not seem to have a discrete infinity of additional states associated to including an arbitrary number of spatial circles. Nonetheless it would be good to understand in what circumstances we can or should give a physical interpretation to these configurations; for example, in AdS/CFT topology-changing Euclidean configurations are sometimes needed to reproduce known CFT results [63]. We leave exploration of this question to future work, but we emphasize that until it is addressed we cannot really claim to completely understand the bulk path integral formulation of Jackiw-Teitelboim gravity.
It is interesting to consider whether such a self-contained theory of gravity is possible in higher dimensions: for 3 + 1 dimensions and higher we expect that the answer is no, since once there are propagating gravitons these are already enough to make black holes whose microstates must be counted. But what about 2 + 1? In fact we suspect that pure Einstein gravity in 2 + 1 dimensions with negative cosmological constant, and also its supersymmetric extension, give two more examples of nontrivial bulk theories of quantum gravity which make sense as local path integrals but do not have CFT duals. Here are some features which resemble those of JT gravity:

• All UV divergences in their path integrals can be absorbed by simple renormalizations of G and Λ, so they are "secretly renormalizable" [64,65].
• They have two-boundary wormholes, namely the BTZ solution [66], and thus have semiclassical Hartle-Hawking states, whose normalization gives the one-boundary Euclidean path integral with boundary S^1 × S^1.
• There are no propagating degrees of freedom in the bulk, but the quantum mechanics of the time-shift operator and the Hamiltonian still exists, and thus gives a nontrivial dynamics to the two-boundary system. This is now in addition to the boundary gravitons, which are present even with one asymptotic boundary.
• The one-boundary theory, while no longer trivial because of boundary gravitons and topologically nontrivial black hole geometries, does not have nearly enough states to account for the Bekenstein-Hawking entropy which the normalization of the Hartle-Hawking state would have predicted [67].
Thus we conjecture that a complete quantization of pure Einstein gravity with negative cosmological constant (and its supersymmetric extension) should be possible using bulk path integral methods in 2 + 1 dimensions. The existence of the BTZ "black hole" is no obstruction to this, since it should be interpreted as a wormhole instead of a one-sided black hole. As we found in JT gravity, we expect that the two-boundary Hilbert space will not factorize due to the nonlocal consequences of the diffeomorphism constraints, which would immediately imply that this Hilbert space cannot arise from that of a boundary CFT on a disconnected space. These conjectures are consistent with the results of Maloney and Witten, who computed the one-boundary partition function exactly and saw that it did not have the form of a thermal trace [68]. There has been a fair bit of worry about how to "fix" this, for example by including complex saddle points or additional Planckian degrees of freedom, but inspired by JT gravity our proposal is instead that this is simply the right answer! Pure gravity in 2 + 1 dimensions with negative cosmological constant does exist, but it doesn't have a dual CFT. This proposal clearly needs more scrutiny before it should be accepted, but with the JT theory to guide us, where many of the same issues arise in simpler guise, it seems to be time for another shot.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Descriptions of the mature larva and pupa of the Scaly strawberry weevil, Sciaphilus asperatus (Bonsdorff, 1785) (Coleoptera, Curculionidae, Entiminae) and observations of its biology
Abstract The mature larva of Sciaphilus asperatus is redescribed and illustrated, and the pupa is described for the first time. Supplements to the identification keys for larvae and pupae of selected Palaearctic Entiminae genera and species are given. Data on the life history, especially oviposition capacity and voltinism, of S. asperatus are provided and discussed, and the presence of six larval instars is confirmed. The economic importance of S. asperatus is briefly highlighted.
Species of Sciaphilus form a rather uniform group characterized in the adult stage by: (1) small body size (< 6 mm); (2) short rostrum with acute carina close to apex; (3) flat eyes; (4) long, slender antennae; (5) rounded elytra densely covered with oblong, erect and spherical adherent scales, the latter forming a more or less contrasting pattern; (6) femora with a conspicuous tooth (Hoffmann 1950; Smreczyński 1966; Freude et al. 1981). Sciaphilus asperatus (Figs 1-3) is a wingless, parthenogenetic, triploid species (Suomalainen 1969; Morris 1997). The adult is a polyphagous feeder on leaves of many herbs, shrubs and trees, mainly in the herb or even in the lower shrub layer, producing more or less characteristic notches on the leaf edge (Fig. 4). In the larval stage it feeds on the roots of plants like strawberry (Fragaria L.), cinquefoil (Potentilla L.), raspberry, blackberry (both Rubus L.), hawthorn (Crataegus L., all Rosaceae), and primrose (Primula L., Primulaceae). In the Berggarten area of Hannover-Herrenhausen it was regularly found in beds of Astilbe Buch.-Ham. ex D. Don, Tiarella L. (both Saxifragaceae), Epimedium L. (Berberidaceae) and small Rhododendron L. species (Ericaceae) (Sprick and Stüben 2012). In Lublin adults of S. asperatus were observed feeding also on Weigela florida (Bunge) DC. (Caprifoliaceae). In the laboratory, adults readily fed on a great number of plant species from more than 15 families (Willis 1964; Dieckmann 1980; Burakowski et al. 1993; Sprick and Stüben 2012), many of which may also be host plants. It is a eurytopic species reported from a large variety of biotopes, preferring rather moist and shady places. It occurs mostly in forests, bushes, fallow grassland and on river banks, but also in cultivations, like tree nurseries, parks and gardens (Koch 1992; Burakowski et al. 1993; Morris 1997; Gosik 2007; Sprick and Stüben 2012).
Biology and life-cycle of Sciaphilus asperatus have been described by Willis (1964), Krause (1978), Dieckmann (1980), and Burakowski et al. (1993). Adults are observed on host plants from mid-April to the beginning of October (Dieckmann 1980). The oviposition period in the field extends from late April to the end of July. Eggs are laid in batches between adjoining surfaces, which Marvaldi (1999a) stated to be a common oviposition type in many Entiminae genera. Willis (1964) reported egg masses of 6 to 157 eggs, while Dieckmann (1980) recorded ca. 80 eggs (several observations). In the laboratory, we observed masses of 21 and 95 eggs (Fig. 23), deposited in two or three rows and glued with a secretion between layers of filter paper, or between the substrate and the paper or a leaf placed in the box as a food supply. In the field, eggs are glued between overlapping leaves, to leaf folds, leaf petioles and stems, usually close to the ground (Willis 1964). Dieckmann (1980) and Willis (1964) reported that a single specimen can produce up to 880 or 1000 eggs per year, respectively.
The larva develops in spring and summer, and this species usually overwinters in the adult stage (Krause 1978). According to Willis (1964), a small number of newly emerged weevils also lay eggs between mid-August and the beginning of September, after a rather long pre-oviposition period of 24-31 days (only 12 days in spring), which brings the life-cycle of S. asperatus close to that of many other soil-dwelling weevils in having both overwintering adults and larvae (see Gosik et al. 2016; Gosik et al. 2019). S. asperatus pupates between June and August; pupation in the field lasts between 14 and 21 days (Willis 1964).
The economic importance of Sciaphilus asperatus is usually low, compared to that of several Otiorhynchus species. Willis (1964) reported only two cases of severe damage in commercial strawberry cultivations in Northern Ireland and concluded: "From the limited data available it appears probable that severe damage to strawberry plants by larvae of S. asperatus occurs infrequently in Northern Ireland and tends to be associated with areas of light, well-drained soil." Alford (1999) also restricted the potential economic importance to strawberries: "Weevils from related genera [other than Otiorhynchus] (e.g. Exomias Bedel, 1883 and Sciaphilus) are also of pest status, e.g. on strawberry." Sprick and Stüben (2012), who studied the soil-dwelling weevil fauna of many tree nurseries, garden centres and parks in Germany, also ranked this species in the category of minor economic importance: "Species that usually rarely cause damage". In a study from North America, S. asperatus comprised nearly 10% of the total larvae in forest soils (ten sites) from the Great Lakes region in Michigan and Wisconsin (Pinski et al. 2005). S. asperatus sometimes forms rather large populations, which is plausible given its parthenogenesis and oviposition capacity, but Morris (1997) stated that this species rarely occurs in large numbers in Great Britain. In a small number of cultivations, mainly of rosaceous herbs and especially strawberries, it has achieved large numbers and therefore pest status, too.
The morphology of the immature stages of Sciaphilus asperatus is still incompletely known. Some information on this topic is given by Emden (1950, 1952): these papers contain descriptions of the spiracles and body shape of the first larval instar, as well as diagnostic characters at genus and species level. Moreover, differences between the first instar larva and the mature larva are provided, but only diagrams of the labrum and epipharynx of the first instar larva are presented. The pupa has remained unknown. On the basis of head measurements, Willis (1964) reported the presence of six larval instars in S. asperatus.
All the larvae and pupae were collected in the field at a site where the life cycle had previously been studied using pitfall traps (see Sprick and Stüben 2012). The field work in the following season concentrated on obtaining material for the morphological study. The mature larva and pupa were described, whereas the first instar larva was used only for measurement purposes, in order to ascertain the number of developmental stages (see for example Gosik 2014 or Gosik et al. 2019). Immature stages were preserved in 75% ethanol and used for measurements and morphological descriptions.
Slide preparation basically followed May (1994). The larvae selected for study under the microscope were cleared in 10% potassium hydroxide (KOH), then rinsed in distilled water and dissected. After clearing, the head, mouthparts and body (thoracic and abdominal segments) were separated and mounted on permanent microscope slides in Faure-Berlese fluid (50 g gum arabic and 45 g chloral hydrate dissolved in 80 g of distilled water and 60 cm³ of glycerol) (Hille Ris Lambers 1950). The specimens and slides are deposited in the collections of the Department of Zoology, Maria Curie-Skłodowska University (Lublin, Poland).
Description of the mature larva
All measurements are in mm (n: number of specimens examined).
Body (Figs 5-9) slender, slightly curved, rounded in cross section. Prothorax slightly bigger than mesothorax; metathorax as wide as mesothorax. Abdominal segments 1-6 of almost equal length; segments 7-9 tapering gradually to the terminal parts of the body; segment 10 reduced to four anal lobes of unequal size: the biggest dorsal, the smallest ventral, both lateral lobes equal in size. Spiracles of the thorax bicameral, those of abdominal segments 1-8 annular. Chaetotaxy well developed; setae capilliform, variable in length, light yellow. Each side of prothorax with nine prns of unequal lengths: one long, three moderately long, five short or minute (seven of them placed on the pronotal sclerite, the next two close to the spiracle); two ps (one long, one medium); and one very short eus. Meso- and metathorax (Fig. 5) on each side with one short prs; four pds, variable in length (pds1, pds3 and pds4 medium, pds2 relatively short); one medium as; one medium and one minute ss; one medium eps; one medium ps; and one short eus. Each pedal area of the thoracic segments with six pda, variable in length (seta "z" invisible). Abd. 1-7 (Figs 7-9) on each side with one short prs; five pds, variable in length (pds1, pds3 and pds5 very long, pds2 and pds4 very short), arranged along the posterior margin of each segment; one minute and one long ss; one minute and one long eps; one minute and one medium ps; one short lsts; and two short eus. Abd. 8 (Figs 7-9) on each side with one short prs; three pds, variable in length (pds1 and pds3 very long, pds2 very short), arranged along the posterior margin of the segment; one minute ss; one minute and one long eps; one minute and one medium ps; one short lsts; and two short eus. Abd. 9 (Figs 7-9) on each side with three ds (ds1 and ds3 very short, ds2 long), all located close to the posterior margin of the segment; one short and one long ps; and two short sts. Each lateral anal lobe (Abd. 10) with a pair of minute ts.
Description of the pupa
All measurements in mm.
Chaetotaxy well developed, setae of various lengths and shapes: on the head (except vs), rostrum and mandibular thecae capilliform and straight; on the dorsal parts of the thoracic (except ls) and abdominal segments thorn-like. Setae yellowish to brownish, usually located on visible protuberances. Head capsule and rostrum with one pair of vs, two pairs of sos, os, pas, three pairs of rs, and two pairs of es. Vs thorn-like, medium-sized; all sos, os, pas and rs medium long, straight, equal in length; es and mts straight, very short (Fig. 20). Pronotum with two pairs of as, ls, ds, pls, and three pairs of sls. Only ls and sls3 thin, capilliform; remaining setae thorn-like, placed on distinct protuberances. Sls2 and sls3 growing together on a single protuberance (Fig. 22).
Meso- and metathorax each with five pairs of rather small setae forming a medial line. Abdominal segments 1-7 each with four pairs of thorn-like ds (placed along the posterior margin) and two minute, capilliform ls. Dorsal setae on abdominal segments 1 and 2 small, equal in length, increasing gradually in size on the following segments; segment 8 with two pairs of minute, capilliform ls, two minute, capilliform vs, and three pairs of ds: the first and second thorn-like, the third capilliform, with ds2 and ds3 growing together on a single protuberance; segment 9 with two pairs of minute, capilliform vs, and a further two minute setae on each urogomphus (Fig. 22). Apex of each femur with two fes: fes1 long and straight, fes2 short and thorn-like, both placed on protuberances (Figs 20-22).
Discussion
Sciaphilus asperatus is a common species whose biology and life cycle are in general well known. However, some aspects of its development, such as the number of larval instars, voltinism and oviposition capacity, are discussed herein. Some differences in chaetotaxy between S. asperatus and selected genera of Entiminae are also discussed. Finally, the larva and pupa are integrated into current determination keys.

[Figure legend (abbreviations): ur - urogomphus; setae: as - apical, d - dorsal, ds - discal, es - epistomal, fes - femoral, l, ls - lateral, mts - mandibular theca, os - orbital, pas - postantennal, pls - posterolateral, rs - rostral, sls - superlateral, sos - superorbital, v - ventral, vs - vertical.]

Larval instar determination

Willis (1964) reported six larval instars, but the diagram on which his measurements are based shows only five; he provides measurement data for 377 larvae. We checked this using the method of Sprick and Gosik (2014) and Gosik et al. (2019) (see Tables 1, 2). The data listed in Table 1 show that the mean head-width values for L1 larvae are very close: 0.277 mm (our data) and 0.233 mm according to Emden (1952). In mature larvae the difference is a little larger but the values are also quite close: 1.117 mm (our data) and 1.215 mm (Emden 1952). The head widths (HW) of the six measured pupae lie within this range. These are the best preconditions for larval instar determination (Table 2). The tested growth factor (GF) values are around 1.40, as in some other species (see, for example, Gosik 2014 or Gosik et al. 2019).
From Table 2 it can be inferred that larval growth is rather slow: the best approximation in both cases is achieved with GF values < 1.4, namely 1.37-1.38 (1.375) from our own data and 1.39-1.40 (1.391) from the data of Emden (1952). These values are much smaller than in Tanymecus (Gosik et al. 2019), where GF values ranged between 1.44 and 1.45. Willis' data for mature larvae range between ca. 0.88 mm and 1.35 mm, with a maximum at 1.12 mm (71 larvae), according to his Figure 45, and between 1.06 mm and 1.30 mm according to the text (page 97); two larvae fall outside this range (Willis 1964) and hence are not considered typical of this species and are excluded from the instar determination. [Table legend: italics - calculated values; bold - measurements (except head line); 1) own data; 2) data of Emden (1952).] Furthermore, it is immediately obvious that Sciaphilus asperatus has six larval instars and that the indeterminate larvae of Emden (1952) (Table 1) belong to the 4th larval instar.
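The instar-determination procedure used above (clustering head widths around a geometric series of instar means and testing candidate growth factors) can be illustrated with a short numeric sketch. The head-width values below are hypothetical stand-ins, since the raw data of Tables 1-2 are not reproduced here; only the method, a grid search over the growth factor under Dyar's rule, follows the text.

```python
import numpy as np

# Hypothetical head-width measurements (mm) pooled over all instars;
# the paper's raw data (Tables 1-2) are not reproduced here.
rng = np.random.default_rng(0)
true_means = 0.28 * 1.38 ** np.arange(6)          # six instars, GF = 1.38
data = np.concatenate([rng.normal(m, 0.02 * m, 60) for m in true_means])

def fit_instars(data, n, hw1):
    """Assign each larva to the nearest mean of a geometric series
    hw1 * gf**k and return the GF minimising the total squared residual."""
    best = (np.inf, None)
    for gf in np.arange(1.25, 1.55, 0.005):
        means = hw1 * gf ** np.arange(n)
        resid = ((data[:, None] - means) ** 2).min(axis=1).sum()
        best = min(best, (resid, gf))
    return best

for n in (5, 6, 7):
    resid, gf = fit_instars(data, n, hw1=0.28)
    print(f"{n} instars: GF = {gf:.3f}, residual = {resid:.4f}")
```

With clean data the residual drops sharply at the true instar number, which is the same signal the head-width histograms provide.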
Oviposition capacity
Some data are available regarding the egg-laying capacity of Sciaphilus asperatus. According to Emden (1952), the volume of a female's abdomen is 277 times that of a single egg. In practice, however, the available space must be smaller because of the space requirements of the digestive system, the ovipositor, viscous fluids, bordering structures and other organs. The highest recorded egg mass was 157 eggs per oviposition event (Willis 1964). Willis (1964) and Dieckmann (1980) reported ca. 880 and 1000 eggs, respectively, laid by a single female during one season, but these data are from (or probably from) weevils maintained in the laboratory. For weevils maintained under outdoor conditions, Willis (1964) reported lower values of 450 to 700 eggs per female. If the pre-oviposition period lasts around 10 days (see Willis 1964), there could be 12 egg-deposition events between mid-April and the end of July; if so, approximately 38 to 58 eggs could be laid per oviposition event under outdoor conditions.
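The per-event estimate is simple division of the seasonal totals quoted above over the assumed 12 events; a two-line check:

```python
# Sanity check of the per-event estimate quoted above.
eggs_per_season = (450, 700)   # outdoor totals per female (Willis 1964)
events = 12                    # deposition events assumed in the text
for total in eggs_per_season:
    print(f"{total} eggs over {events} events -> {total / events:.0f} per event")
```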
Generations and voltinism
According to the data presented by Krause (1978) and Dieckmann (1980), Sciaphilus asperatus should be a univoltine species: overwintered adults produce eggs, larvae hatch in spring and summer, pupation takes place in summer, and adult weevils of the new generation also emerge in summer. However, Burakowski et al. (1993) reported that a small part of the new generation lays eggs in August, producing larvae that overwinter, and Willis (1964) presumed that usually all larvae overwinter. This hypothesis still has to be checked. It appears equally possible that larvae from eggs laid early in the season develop in the same year, as is true for many other soil-dwelling weevils (see for example Gosik et al. 2016, Gosik and …). A species that develops within one season is univoltine, whereas a species that needs longer than one year for its development is semivoltine. Neither definition fits S. asperatus or many other soil-dwelling weevils well: apparently there is a mix of univoltine summer development and univoltine or semivoltine (if development of the overwintering larvae lasts longer than one year) autumn/spring development, with usually no development in winter. Such definitions are therefore hard to apply to soil-dwelling weevils.
Remarks on chaetotaxy
There are only a few small discrepancies between the description of the mature larva given by Emden (1952) and the present one: e.g., Emden reported two mandibular setae (one prominent and the other very small), whereas we observed only a single seta. It is possible that the second (very small) mds, visible on the first-instar larva, is torn off during the intensive feeding of the mature larva. Emden (1952) also reported seven setae on the pedal area of the prothorax, the seventh being a minute seta ("z"), and three further minute setae on each lateral anal lobe. We noticed only six setae on each pedal lobe and only two pairs of minute terminal setae on the lateral lobes of the tenth segment.
It is worth stressing that the presently described mature larva of S. asperatus possesses all the essential characters listed by Marvaldi (1998) for Entiminae larvae of Type "A", namely: a single as; setae mes1 close together, whereas setae mes2 are placed far from one another; mes2 placed close to ams; a sensilla cluster placed between the mes2; labral rods curved outwards; premental sclerite trident-shaped, with the posterior extension truncate and expanded at the apex.
Supplement to the key to selected genera and tribes of Palaearctic Entiminae larvae
Based on Gosik et al. (2016) and Gosik et al. (2019): in Graptus, Peritelus, and Sciaphilus the key is based on one species each (G. triguttatus triguttatus, P. sphaeroides, and S. asperatus).
(Previous step as in Gosik …) Taking into consideration the shape, number and distribution of setae, and the general body shape, the pupae of Sciaphilus asperatus and of Exomias pellucidus (Boheman, 1834) are very similar (see Gosik and Sprick 2013). The similarity is due especially to the hair-like setae on the head and rostrum and the thorn-like setae on the pronotum and abdomen, which are observed in both species, as well as to the presence of paired sls growing on single protuberances and the absence of ventral setae on abdominal segments 1-7. This morphological similarity is consistent with the close systematic position of the two species in the tribe Sciaphilini.
| 4,148.2 | 2019-08-29T00:00:00.000 | [ "Biology" ] |
BMN operators and string field theory
We extract from gauge theoretical calculations the matrix elements of the SYM dilatation operator. By the BMN correspondence this should coincide with the 3-string vertex of light cone string field theory in the pp-wave background. We find a mild but important discrepancy with the SFT results. If the modified $O(g_2)$ matrix elements are used, the $O(g_2^2)$ anomalous dimensions are exactly reproduced without the need for a contact interaction in the single string sector.
Introduction
In [1] Berenstein, Maldacena and Nastase studied a pp-wave limit of string theory in the $AdS_5 \times S^5$ background. Type IIB strings on the pp-wave geometry were found to correspond to operators of an $\mathcal{N}=4$ $SU(N)$ super Yang-Mills theory with large R charge $J$ in the limit where $J^2/N$ is fixed. They obtained definite predictions for the scaling dimensions of the relevant operators in the free string limit, which were subsequently verified on the gauge theory side [1,2,3].
Subsequent work extended the correspondence on both sides to lowest orders in the effective gauge coupling $\lambda' = g_{YM}^2 N/J^2$ and the genus parameter $g_2 = J^2/N$ [4,5,6,7,8]. On the string theory side the tool used to study interactions was light cone IIB string field theory (SFT), constructed for the pp-wave background in [9] (and also inherently discrete string-bit formulations [10,11,12]). There exist explicit expressions for the gauge theory parameters $g_2$, $\lambda'$ in terms of string theoretical quantities (but see also [13]). The link was made through a proposal in [5] of a relation between matrix elements of the SFT hamiltonian and certain gauge theoretical 3-point functions. This was verified in various cases [14,15,16,17,18,19] (see also [20,21,22,23] for further developments). However, the explicit proposal was not derived from 'first principles'. A direct calculation of the $O(g_2^2)$ anomalous dimensions from the $O(g_s)$ SFT matrix elements failed to agree with the gauge theoretical result. This was not a direct contradiction, however, due to the theoretical possibility of $O(g_2^2)$ contact terms in the SFT hamiltonian. In this letter we look for a more direct test of the SFT-gauge theory correspondence.
The main aim of this paper is to extract directly from the gauge theoretical calculations done so far the $O(g_2)$ matrix elements of the gauge theory dilatation operator. These should be identified with the $O(g_s)$ vertex of light cone string field theory, thus allowing for a direct comparison with the formulation of [9]. In addition, this might give some insight into the failure of SFT (modulo contact terms) to describe the $O(g_2^2)$ gauge theoretical anomalous dimensions.
The outline of this paper is as follows. In section 2 we recall some features of the BMN operator-string correspondence; in section 3 we extract the $O(g_2)$ matrix elements and show that they are sufficient to reconstruct the full $O(g_2^2)$ anomalous dimensions without the need for explicit $O(g_2^2)$ 'contact interactions' in the single string sector. We conclude the paper with a discussion.
BMN operator-string correspondence
The dictionary established in [1] between string theory and gauge theoretical operators associates to each physical state of the string an explicit (single-trace) operator of the gauge theory. The operators which we will consider here are the standard BMN operators [definitions not reproduced], where $Z = (\phi^5 + i\phi^6)/\sqrt{2}$ and the $\phi^i$ are the other transverse coordinates. These operators correspond respectively to the states $|0, p^+\rangle$, $a^{i\dagger}_0 |0, p^+\rangle$ and $a^{i\dagger}_n a^{j\dagger}_{-n} |0, p^+\rangle$. Double trace operators correspond to two-string states and at zero genus ($g_2 = 0$) can be identified unambiguously. The operators that we will use here are [definitions not reproduced], where $r \in (0, 1)$ denotes the fraction of light cone momentum carried by the first string. Presumably (bosonic) multistring states have to be symmetrized (this will not be important here). The light cone string hamiltonian is [expression not reproduced]; we should therefore identify it (up to the factor $2/\mu$ and a constant shift) as equivalent to the gauge theory dilatation operator $D$.
At zero genus all single and double string states are eigenstates of $H^{\rm l.c.}_{\rm string}$, just as the respective gauge theory operators are eigenstates of $D$. Once we turn on the interaction, the dilatation operator will start to mix the operators and $H^{\rm l.c.}_{\rm string}$ will start to mix the corresponding single and multistring states. We expect the action of the full interacting operators $D$ and $H^{\rm l.c.}_{\rm string}$ on the gauge theory operators and (multi-)string states, respectively, to coincide¹: i.e. we should have $D_{\alpha\beta} = h_{\alpha\beta}$. In [9] the terms linear in $g_s$ in $H^{\rm l.c.}_{\rm string}$ were constructed,

$H^{\rm l.c.}_{\rm string} = H^{\rm l.c.}_2 + g_s H^{\rm l.c.}_3 + \ldots,$

where $H^{\rm l.c.}_2$ is the free hamiltonian and $H^{\rm l.c.}_3$ represents the 3-string vertex. The following matrix elements, computed in [18], will be relevant later: [equations (11)-(12) not reproduced]. Up till now most comparisons between string field theory and gauge theory were performed either on the level of 3-point correlation functions or by computing scaling dimensions.
The former method was based on a proposal [5] which linked the structure constants $C_{ijk}$ and appropriate matrix elements of the SFT hamiltonian $H^{\rm l.c.}_3$: [equation (13) not reproduced]. Although plausible and supported by various calculations, it has not been strictly proven from first principles, nor has it been shown how it could be systematically extended beyond leading order. The latter method of comparison, based on determining anomalous dimensions, is difficult because the first nontrivial corrections to the scaling dimensions are of order $O(g_2^2)$, while the SFT hamiltonian in the pp-wave background has only been determined to order $O(g_2)$. Indeed, $H^{\rm l.c.}_3$ with the matrix elements (11)-(12) could not reproduce [5,7] the $g_2^2$ correction to the anomalous dimension of the $O^J_{ij,n}$ operator obtained in a SYM calculation [6]: [equation (14) not reproduced]. In fact the disagreement between the scaling dimensions calculated in gauge theory and those obtained from the cubic interaction hamiltonian has been attributed to the possible appearance of nontrivial contact terms of order $O(g_s^2)$. Indeed, additional $O(g_s^2)$ terms appear also in flat space light cone SFT [24,25]. However, there they involve only four string fields, while here it seems that the disagreement can be cured only by terms which involve just two string fields.

¹ Up to possible rescalings of the individual states.
Therefore it is interesting to extract directly the $O(g_2)$ matrix elements of the gauge theory dilatation operator, as these, according to the BMN operator-string correspondence, should be identified with the $O(g_s)$ SFT hamiltonian matrix elements.
Gauge theory results
We will now extract the matrix of the gauge theory dilatation operator up to order $O(g_2)$. Let $O_\alpha$ be the set of all operators (single- and multi-trace) with R charge $J$ which are eigenstates of the free (planar) dilatation operator, $\bar{O}_\alpha$ the corresponding complex conjugates, and let us denote by $O'_A$ the operators with definite scaling dimension, [equation not reproduced], where $D$ is the dilatation operator. These $O'_A$'s may be rewritten as linear combinations of the original operators and vice versa, [expansions (16) not reproduced]. Similar formulas hold for the barred operators (with a different matrix $V^*_{\alpha A}$). Thus the matrix elements of the gauge theory dilatation operator in the original basis $O_\alpha$ are given by $V \Delta V^{-1}$ [precise equation not reproduced]. This should be identified with $\frac{2}{\mu}\langle\alpha|H^{\rm l.c.}_{\rm string}|\beta\rangle + J\delta_{\alpha\beta}$. We will now show how to extract the matrix $V \Delta V^{-1}$ from 2-point correlation functions. Using the expansions (16) we get [equation not reproduced], where the $C_A$'s are some undetermined normalization constants. Expanding to linear order in the logarithm gives [equation not reproduced], where the matrices $M'$ and $M''$ are given by [equations not reproduced] and $V^\dagger$ denotes here the transpose of $V^*$. The dilatation operator matrix is then given by [equation not reproduced]. The matrices $M'$ and $M''$ have been calculated in [6]. For our purposes it is enough to find their elements to order $O(g_2)$. To this order there are nonzero elements only in the $O^J_{12,n}$-$T^{J,r}_{12,m}$ sector and the $O^J_{12,n}$-$T^{J,r}_{12}$ sector, and it is easy to see that to order $O(g_2)$ we may treat them independently.
The $O^J_{12,n}$-$T^{J,r}_{12,m}$ sector

The calculations of [6,7] yield (see e.g. (3.15) in [6]) [equations not reproduced]. The dilatation matrix to order $O(g_2)$ is thus [equation (24) not reproduced]. Several comments are in order here. Firstly, the result does not agree with the matrix elements of [18]. There is some relation, however: we note that the difference of the off-diagonal elements [expression not reproduced] exactly coincides with (11) up to a normalization factor of $Jr(1-r)$.
In fact we see that the rhs of the proposal (13) is antisymmetric with respect to the exchange of initial and final states. A minor generalization which would still hold even for the modified matrix elements (24) would be [equation not reproduced]. Secondly, the matrix (24) does not have a definite symmetry. From the SFT point of view this would signify that the amplitude for splitting strings differs from that for joining. This does not necessarily mean that the gauge theory dilatation operator is non-hermitian, since the natural scalar product is nonzero only between the barred and non-barred sectors. We will return to this point in the discussion.
The $O^J_{12,n}$-$T^{J,r}_{12}$ sector

In this case the relevant formulas (see e.g. (3.15) in [6]) are [equations not reproduced]. The dilatation matrix to order $O(g_2)$ is thus [equation (29) not reproduced]. Again we see that it is nonsymmetric and that only the difference of the off-diagonal elements gives the SV matrix element (12).
Let us now assume that the cubic $O(g_s)$ SFT vertex is given by the above formulas (24) and (29). We will show that this is enough to reproduce the exact gauge-theoretic scaling dimension to order $O(g_2^2)$.
Scaling dimensions to order $O(g_2^2)$
The formulas for the scaling dimensions follow easily (as in [5,7]) from perturbation theory in the off-diagonal elements of the hamiltonian (the dilatation matrix), keeping in mind the fact that the hamiltonian is nonsymmetric. Indeed, assuming that $D_{\alpha\beta} = \Delta_\alpha \delta_{\alpha\beta} + g_2 H^{(1)}_{\alpha\beta} + g_2^2 H^{(2)}_{\alpha\beta} + \ldots$, standard perturbation theory for a nonsymmetric matrix gives

$\Delta_\alpha \to \Delta_\alpha + g_2^2 \sum_{\beta \neq \alpha} \frac{H^{(1)}_{\alpha\beta} H^{(1)}_{\beta\alpha}}{\Delta_\alpha - \Delta_\beta} + g_2^2 H^{(2)}_{\alpha\alpha} + O(g_2^3).$

We assume that $H^{(2)}_{\alpha\alpha} = 0$ (no contact interactions in the single string sector). We will now show that the full $O(g_2^2)$ result is obtained; it is interesting to compare with section 5.2 in [5]. Now $T^{J,r}_{12}$ does not contribute, as the product of the off-diagonal elements in (29) vanishes. Only the operators $T^{J,r}_{12,m}$ give a contribution. Since $\Delta_n - \Delta^r_m = \lambda' (n^2 - m^2/r^2)$, we have to calculate the sum over $m$,

$\ldots = \frac{\pi}{4nr^2}\Big[-n\pi r \csc^2(n\pi r) + \cot(n\pi r)\big(2 n^2 \pi^2 r^2 \csc^2(n\pi r) - 1\big)\Big], \qquad (32)$

(the left-hand side is not reproduced in this copy), and replace $\frac{1}{J}\sum_r$ by an integral; the resulting expression (33) [not reproduced] is in agreement with (14). We see that the full $O(g_2^2)$ result was obtained just from the cubic $O(g_2)$ interaction. The positive sign of the correction for $n = 1$ could only appear because the matrix (24) is nonsymmetric. In comparison with the work of [6], the above result (33) was derived here from only a small subset of the data. This is a strong argument in favour of a SFT interpretation: $O(g_2^2)$ elementary interactions (contact terms) in the single string sector, which seem unlikely by comparison with flat space SFT, indeed do not appear here (by the above calculation we demonstrated that $H^{(2)}_{nn} = 0$). On the gauge theory side, were it not for the SFT interpretation, we would have no reason to expect a vanishing $O(g_2^2)$ term in the single trace (single string) sector.
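The nonsymmetric perturbation formula above can be checked numerically: for $D = \Delta + g_2 H^{(1)}$ with nonsymmetric $H^{(1)}$, the $O(g_2^2)$ eigenvalue shift involves the product $H^{(1)}_{\alpha\beta} H^{(1)}_{\beta\alpha}$ rather than $|H^{(1)}_{\alpha\beta}|^2$. A minimal sketch with toy numbers (not the paper's actual matrices):

```python
import numpy as np

# Second-order eigenvalue shift for a nonsymmetric matrix, checked against
# direct diagonalization. Toy numbers only.
g2 = 1e-3
Delta = np.diag([1.0, 2.3, 3.7])                  # unperturbed dimensions
H1 = np.array([[0.0, 0.4, -0.2],                  # nonsymmetric off-diagonal
               [1.1, 0.0, 0.5],
               [0.3, -0.7, 0.0]])
D = Delta + g2 * H1

# Perturbative prediction: shift_a = g2^2 sum_b H1[a,b]*H1[b,a]/(d_a - d_b)
d = np.diag(Delta)
pred = d + g2**2 * np.array([
    sum(H1[a, b] * H1[b, a] / (d[a] - d[b]) for b in range(3) if b != a)
    for a in range(3)
])

exact = np.sort(np.linalg.eigvals(D).real)
print(np.allclose(np.sort(pred), exact, atol=1e-8))   # True to O(g2^3)
```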
However, the main problem that remains is how to reconcile the asymmetric SFT vertex reconstructed here from the gauge theory calculations of [6] with the construction of light cone SFT in the pp-wave background.
Discussion
In this paper we have reconstructed the order $O(g_2)$ matrix elements of the dilatation operator directly from gauge theory calculations. By the BMN operator-string correspondence this should give the 3-string $O(g_s)$ vertex of light cone SFT in the pp-wave background. We find a disagreement with the continuum SFT matrix elements of [18] even at order $O(g_s)$.
From this point of view we may return to the problem of the failure of SFT to reproduce the correct gauge theory scaling dimensions. Previously this was attributed to the possible existence of $O(g_s^2)$ contact terms. However, from the flat space perspective such contact terms in the single string sector are unlikely.
Here we show that there is a disagreement even at order $O(g_s)$, although a mild one. With the 'new' $O(g_s)$ matrix elements, the full $O(g_2^2)$ anomalous dimensions can be reconstructed without any additional $O(g_2^2)$ contact terms. As mentioned earlier, we believe that this is an argument in favour of a SFT interpretation.
The deviation from the matrix elements of the SFT vertex constructed in [9] is not very large: the antisymmetric component coincides with the SFT matrix elements of [18]. So perhaps there is room for reconciling these results with SFT.
A curious feature of the gauge theoretical dilatation matrix which we obtained is that it does not have any simple symmetry properties: matrix elements which would correspond on the string theory side to 'splitting' and 'joining' of strings are different. From the point of view of string theory this asymmetry may not be unacceptable since, in contrast to flat space, the pp-wave background is not symmetric with respect to light cone time reversal ($x^+ \to -x^+$), under which the RR field strength changes sign. On the gauge theory side there is no obvious contradiction with hermiticity, because the natural scalar product is off-diagonal and is nonvanishing only for operators with opposite R charge.
It would be interesting to understand explicitly the origin of that lack of symmetry within the SFT framework.
A remaining open problem is to reproduce the dilatation matrix elements derived here from 'continuum' SFT. As this paper was being written, [12] appeared, which gave a refined discrete string-bit approach to the BMN-string correspondence. It would also be interesting to examine the interrelation with the framework of [23].
| 3,423 | 2002-09-30T00:00:00.000 | [ "Physics" ] |
Analysis of the danger of thermal radiation of a spherical flame
A new mathematical model of spherical flame propagation during the emergency release and ignition of mixtures of hydrocarbon gases in an open space has been developed. A comparative analysis of the results of the study made it possible to establish the main mechanisms of heating of a combustible gas mixture by radiation from the combustion products. The mathematical modeling shows that the resulting radiation model corresponds to the real combustion conditions of gas-air mixtures.
Introduction
In emergency situations, thermal radiation is the main mechanism by which high-temperature sources cause damage to people, destruction of nearby objects, and harm to the environment. Sources of intense thermal radiation can occur during the extraction, processing, storage and transportation of hydrocarbon raw materials and energy-saturated materials, during nuclear explosions, space and man-made disasters, terrorist acts, and other emergency situations. Most of these sources are spherical in shape.
According to the available data, the dangerous parameter of the impact of a spherical flame formed during an emergency release of liquefied hydrocarbon gases into an open space is the amount of radiation that causes ignition of wood or burns to a person. The calculation of this parameter is usually carried out at a constant diameter of the spherical flame. According to the classical theory of combustion, stationary flame propagation is observed in an unlimited space at a constant rate of combustion. However, with a fixed initial volume of gas, the fuel burns with a variable diameter of the spherical flame. In the theory of combustion of gas mixtures, the heating of the combustible mixture by flame radiation is neglected because of the diathermicity of gases and the relatively small width of the heated layer. Analysis of the literature data on radiative heat exchange in translucent media shows that, unlike in solids, energy in a gas is radiated volumetrically, both into the surrounding space and into the sphere [1][2][3][4][5]. To date, there are many works devoted to flame modeling [6][7][8][9].
In connection with the above, the development of a mathematical model for the stationary propagation of a spherical flame, and the assessment of the danger of irradiation of an object with increased intensity, are of theoretical and practical interest.
Materials and methods
Numerical methods are used in the work.
Results
In case of emergency depressurization of liquefied petroleum gas storage tanks, a vapor-air cloud forms under normal conditions. After the lower or upper concentration limit is reached, the combustible mixture can ignite and form a spherical flame with an initial diameter of up to a hundred meters. In the process of burning, the combustible mixture heats up and expands. In the quasistationary stage of combustion, the diameter of the fireball is 1.5 times larger than the initial one calculated for normal conditions. Consequently, the adiabatic heating conditions of the initial mixture are not preserved, and in addition to the conductive mechanism of heating of the gas mixture by the combustion products, there is an additional source. Such a volumetric source can be the radiation energy of the combustion products, which is absorbed by the initial mixture. This means that the dynamics of the volume of the combustible mixture will depend on three processes: heating from a spherical surface at the combustion temperature, expansion as a result of additional heating by radiation, and volume reduction during the burning of the combustible mixture. A spherical flame can be considered as a space bounded by the combustion front. A quantitative description of the complex phenomena and processes of heat exchange inside a sphere with a diameter of hundreds of meters and a temperature difference of up to a thousand degrees is not possible. Therefore, in the theory of heat transfer in a limited space, it is proposed to treat the complex heat transfer as an elementary heat-conduction phenomenon by introducing a single effective thermal conductivity λ.
In the approximation of an infinitesimally thin combustion front, the unlimited region of integration of the equation is divided into two zones: the initial combustible mixture and the reaction products. A uniform heat source of specific power $q_v$ operates inside the sphere of radius $R$ bounded by the combustion front. With continuous fuel supply, the process may be considered stationary. In this case, the steady-state heat conduction equation in a spherical coordinate system reads [10]:

$\frac{1}{r^2}\frac{d}{dr}\left(r^2 \frac{dT}{dr}\right) = -\frac{q_v}{\lambda}. \qquad (1)$

Separating the variables, we get

$d\left(r^2 \frac{dT}{dr}\right) = -\frac{q_v}{\lambda}\, r^2\, dr. \qquad (2)$

Next, we integrate both parts of the equation:

$r^2 \frac{dT}{dr} = -\frac{q_v r^3}{3\lambda} + C_1. \qquad (3)$

Separating the variables again gives

$dT = \left(-\frac{q_v r}{3\lambda} + \frac{C_1}{r^2}\right) dr. \qquad (4)$

Due to the finite value of the temperature at the center, the constant $C_1 = 0$; then

$\frac{dT}{dr} = -\frac{q_v r}{3\lambda}, \qquad (5)$

and a further integration yields

$T(r) = -\frac{q_v r^2}{6\lambda} + C_2. \qquad (6)$

We find the constant $C_2$ from the boundary condition: on the surface of the flame, thermal interaction with the medium occurs through radiation. According to the Stefan-Boltzmann law,

$-\lambda \left.\frac{dT}{dr}\right|_{r=R} = \varepsilon \sigma T_c^4, \qquad (7)$

where $\varepsilon$ is the emissivity coefficient, $\sigma$ is the Stefan-Boltzmann constant, and $T_c$ is the flame surface temperature, equal to

$T_c = C_2 - \frac{q_v R^2}{6\lambda}. \qquad (8)$

Substituting (6) and (8) into (7), we get

$\frac{q_v R}{3} = \varepsilon \sigma \left(C_2 - \frac{q_v R^2}{6\lambda}\right)^4, \qquad (9)$

from which the integration constant $C_2$ is equal to

$C_2 = \left(\frac{q_v R}{3 \varepsilon \sigma}\right)^{1/4} + \frac{q_v R^2}{6\lambda}. \qquad (10)$

Then the law of distribution of the temperature field takes the form

$T(r) = \left(\frac{q_v R}{3 \varepsilon \sigma}\right)^{1/4} + \frac{q_v}{6\lambda}\left(R^2 - r^2\right). \qquad (11)$
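A short numerical sketch of the reconstructed profile (11) is given below. All parameter values ($q_v$, λ, ε, $R$) are illustrative assumptions, not values from the paper; only the functional form follows the derivation above.

```python
import numpy as np

# Numerical sketch of the temperature field (11).
# All parameter values below are illustrative assumptions.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
q_v = 5.0e3              # volumetric heat release, W/m^3 (assumed)
lam = 2.0e3              # effective (radiative) thermal conductivity, W/(m K) (assumed)
eps = 0.9                # flame emissivity (assumed)
R = 50.0                 # flame radius, m (assumed)

def temperature(r):
    """T(r) = (q_v R / 3 eps sigma)^(1/4) + q_v (R^2 - r^2) / (6 lambda)."""
    t_surface = (q_v * R / (3.0 * eps * SIGMA)) ** 0.25
    return t_surface + q_v * (R**2 - r**2) / (6.0 * lam)

r = np.linspace(0.0, R, 6)
for ri, Ti in zip(r, temperature(r)):
    print(f"r = {ri:5.1f} m  ->  T = {Ti:8.1f} K")
# The profile is parabolic in r and peaks at the center, as stated in the text.
```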
Discussion
As follows from the obtained formula (11), the temperature field of a spherical flame depends on the size of the flame, on its thermal power, and on its emissive ability, while the field itself inside the flame varies according to a parabolic law.
Conclusion
Thus, the analysis of the danger of thermal radiation of a spherical flame made it possible to identify and solve a number of important aspects of the problem of the damaging effect of a spherical flame produced by the combustion of mixtures of hydrocarbon gases in an open space. The analysis of mathematical methods for modeling spherical flame radiation contributes to the improvement of existing methods, and to the creation of new methods, for assessing the danger of thermal radiation.
| 1,411.2 | 2022-07-01T00:00:00.000 | [ "Environmental Science", "Physics" ] |
Tailoring the lineshapes of coupled plasmonic systems based on a theory derived from first principles
Coupled photonic systems exhibit intriguing optical responses attracting intensive attention, but available theoretical tools either cannot reveal the underlying physics or are empirical in nature. Here, we derive a rigorous theoretical framework from first principles (i.e., Maxwell’s equations), with all parameters directly computable via wave function integrations, to study coupled photonic systems containing multiple resonators. Benchmark calculations against Mie theory reveal the physical meanings of the parameters defined in our theory and their mutual relations. After testing our theory numerically and experimentally on a realistic plasmonic system, we show how to utilize it to freely tailor the lineshape of a coupled system, involving two plasmonic resonators exhibiting arbitrary radiative losses, particularly how to create a completely “dark” mode with vanishing radiative loss (e.g., a bound state in continuum). All theoretical predictions are quantitatively verified by our experiments at near-infrared frequencies. Our results not only help understand the profound physics in such coupled photonic systems, but also offer a powerful tool for fast designing functional devices to meet diversified application requests.
Introduction
Recently, photonic systems consisting of multiple plasmonic/dielectric resonators coupled in different ways have attracted much attention [1-4]. Compared to simple systems containing only one type of resonator, coupled systems exhibit more fascinating near-field (NF) properties (e.g., local field enhancement) and far-field (FF) responses manifested by unusual lineshapes, such as Fano resonances [5-7] and Rabi oscillations [8,9], dictated ultimately by how the involved resonators are coupled together. Couplings thus offer more opportunities for controlling the NF and FF light environments of such complex photonic systems as desired, making them particularly useful in applications such as nanolasing [10,11], fluorescence enhancement [12-14], and information transport [15-17].
Despite great advances on the experimental side, the theoretical understanding of such systems is far from satisfactory, which also hinders the rapid design of systems with desired NF and FF responses. For example, full-wave simulations require huge computational resources yet reveal very little physics. Meanwhile, although many models (e.g., coupled-mode theory (CMT) [18-21], Fano's formula [22,23], or effective circuit models [24,25]) have been proposed to analyze the underlying physics, they typically require model parameters fitted from simulation results, and thus cannot predict unknown phenomena before the systems have been studied numerically. As an early attempt, a photonic tight-binding method (TBM) [26], with all involved parameters computable without fitting procedures, was proposed and successfully predicted the resonance peak positions of a coupled system. Unfortunately, the TBM provides no information on the entire optical response (e.g., the lineshape), which is usually more desired for practical applications. The intrinsic difficulty is that these systems are open in nature: different resonators can couple not only with each other via NFs but also, more importantly, with the external free space via FF interactions (Fig. 1a). To establish a complete theory predicting the entire optical properties of arbitrarily coupled photonic systems, one needs to treat both NF and FF interactions rigorously on the same footing. While several semi-analytical approaches have recently appeared, they have their own limitations and are not generic enough to study arbitrarily coupled systems in a formal way [27-29].
In this paper, we derive a formal theoretical framework from first principles (i.e., Maxwell's equations), with all involved parameters directly computable without fitting procedures, to predict the optical lineshapes of arbitrarily coupled photonic systems. The obtained equations resemble the empirical CMT but are derived from first principles, and thus have unambiguous physical meanings, as clearly revealed by benchmark calculations against rigorous Mie theory on a model system. After validating our theory through comparison with experimental/numerical results on a realistic plasmonic metasurface, we present how to employ it to tailor the lineshape of a coupled plasmonic system as desired by varying the interresonator coupling. In particular, we show that it is possible to generate a completely "dark" optical mode with vanishing radiative loss (i.e., a bound state in continuum (BIC) 30,31 ) in such systems, although the constituent resonators exhibit moderate radiative losses. All theoretical predictions are quantitatively verified by experimental results on a series of metasurfaces containing plasmonic resonators coupled in different ways.
Establishment of the formal theory
We start by establishing a formal theory applicable to generic coupled open systems. As shown in Fig. 1a, we consider the scattering by a system consisting of M arbitrary resonators located at different positions in a host medium under certain external illumination. Such an open system can be schematically described by the model depicted in Fig. 1b, where the region containing the resonators is connected to the external continuum via N ports with well-defined properties. Formally, we need to solve the following Schrödinger-like equation:

$\hat{H}\,\Psi(\mathbf{r},\omega) = \omega\,\Psi(\mathbf{r},\omega), \qquad (1)$

where $\Psi(\mathbf{r},\omega)$ is the total wave function and $\hat{H} = \hat{H}_h + \sum_m \hat{V}_m$ is the Hamiltonian of the whole system, with $\hat{H}_h$ describing the host medium and $\hat{V}_m$ the potential contributed by the mth resonator.
To expand the unknown function $\Psi(\mathbf{r},\omega)$ appropriately, we need a complete set of basis wave functions that are orthogonal to each other and normalizable in certain ways. In the same spirit as the TBM 26,32, we define a set of wave functions $\{\psi^{NF}_m$-generating leaky eigenmodes $\psi^{LEM}_m(\mathbf{r},\omega),\ m = 1,\ldots,M\}$, which are the (approximate) solutions of the Hamiltonian $\hat{H}_m$ describing the subsystem containing only the mth resonator. For simplicity, we assume here that each resonator supports only one mode; the extensions to more general cases (e.g., resonators exhibiting multiple or degenerate modes) are straightforward. Different from the systems treated by the TBM, which are closed 26 and thus have well-defined localized eigenfunctions, the open systems under study support only leaky eigenmodes (LEMs), as explained subsequently.
Suppose that the resonators exhibit high quality (Q) factors; we can then use the following approach to obtain $\psi^{LEM}_m(\mathbf{r},\omega)$. Shining the subsystem with external illumination, we solve $\hat{H}_m \Psi_m = \omega \Psi_m$ to obtain $\Psi_m$ analytically or numerically, and then obtain the response spectrum of the system. We identify the resonance frequency $\omega_m$ of the mth resonator from the maximum of the response spectrum. Choosing a "background" representing the system at a frequency far from all resonances, we calculate the background wave function $\Psi_B$ by shining the "background" medium with the same external illumination. We finally obtain the desired LEM wave function for the mth resonator as $\psi^{LEM}_m = \Psi_m - \Psi_B$. We note that the $\{\psi^{LEM}_m\}$ are quite different from the quasinormal-mode (QNM) functions defined in refs. 27,28,33. While the $\{\psi^{LEM}_m\}$ are wave functions of the systems under external illumination at real frequencies $\omega_m$, QNM functions are eigenfunctions of the systems without external illumination, corresponding to complex eigenfrequencies. Moreover, LEM functions do not diverge at infinity, whereas QNM functions inevitably diverge 34,35. Therefore, LEM functions are particularly suitable for the lineshape problems studied here, which require external illumination. Examples of how to obtain $\psi^{LEM}_m$ are given in the Supplementary Information. Each LEM can be decomposed as

$\psi^{LEM}_m = \psi^{NF}_m + \psi^{FF}_m, \qquad (2)$

where $\psi^{NF}_m$ and $\psi^{FF}_m$ represent the NF and FF parts of the wave function, respectively. Technically, for any given system with well-defined external ports, we can always project $\psi^{LEM}_m$ onto the port modes on the reference planes of all external ports and then construct $\psi^{FF}_m$ from these port modes, which are assumed to fill the entire space. With $\psi^{FF}_m$ known, we then obtain $\psi^{NF}_m$ numerically based on Eq. (2). The NF functions $\psi^{NF}_m$ have good properties that help us perform further analyses. In the vicinity of the scatterer, under the high-Q approximation where the FF part of the wave function is significantly weaker than the NF part, $\psi^{NF}_m$ can be approximately viewed as an eigenfunction of $\hat{H}_m$,

$\hat{H}_m\, \psi^{NF}_m \approx \omega_m\, \psi^{NF}_m. \qquad (3)$

Meanwhile, $\psi^{NF}_m$ can be normalized, since it is well localized around the mth resonator. Moreover, considering that these wave functions are spatially well separated, we find that they approximately satisfy the following orthonormal condition:

$\langle \psi^{NF}_m | \psi^{NF}_n \rangle \approx \delta_{mn}, \qquad (4)$

where the integrals are performed over the entire space. We note that one needs to multiply $\psi^{LEM}_m$ by the same normalization constant that is used to normalize $\psi^{NF}_m$, since these two functions are connected by Eq. (2). Equation (4) indicates that $\{\psi^{NF}_m,\ m = 1,\ldots,M\}$ form a set of orthogonal bases to expand the total wave functions in the NF region. Note that the approximation Eq. (4) is widely used in the TBM for treating electrons in solids 26.

[Fig. 1 caption: Schematics of the system under study and our theory. a Photonic system containing multiple arbitrary resonators coupled together under external illumination; the inset shows a typical optical lineshape of such a system. b Schematics of our theory: under certain external illumination, the total scattered field of the coupled system is a linear combination of leaky eigenmodes (LEM, $\psi^{LEM}_m$) of different resonators, each containing a near-field part $\psi^{NF}_m$ and a far-field tail $\psi^{FF}_m$.]
We now identify the FF eigenbases of the system. In the FF region, the eigenmodes are just the set of propagating modes $\{|k^{\pm}_q\rangle\}$ allowed by the system, where +(-) denotes the incoming (outgoing) propagation direction, q labels the mode channel, and k is the wavevector satisfying a certain dispersion relation $k_q(\omega)$. These wave functions satisfy the following orthogonality condition:

$\langle k^{\pm}_q | k^{\pm}_{q'} \rangle = \delta_{qq'}, \qquad (5)$

where the integrals are performed on the reference plane of a particular external port. In principle, extending our theory to cases with continuum scattering ports 36,37 is also possible, although one then needs to compute all parameters related to these scattering channels.
We are now ready to represent Ψ as a linear combination of these basis functions. We have $\Psi = \Psi_B + \Psi_{sca}$, where $\Psi_{sca}$ is contributed by the scattering from all resonators. In the same spirit as the TBM, $\Psi_{sca}$ can be approximately written as a sum of scattered fields $\Psi^{sca}_m$ associated with each individual scatterer. At first glance, one may expect that $\Psi^{sca}_m(\mathbf{r},\omega)$ must be the $\psi^{LEM}_m(\mathbf{r},\omega_m)$ defined previously. However, $\psi^{LEM}_m(\mathbf{r},\omega_m)$ is the scattered wave at the resonance frequency $\omega_m$, not at the arbitrary frequencies required in Eq. (1). We can amend $\psi^{LEM}_m(\mathbf{r},\omega_m)$ slightly to obtain the form of $\psi^{LEM}_m(\mathbf{r},\omega)$ for a frequency ω not far from $\omega_m$. The NF part $\psi^{NF}_m(\mathbf{r},\omega_m)$ is solely determined by $\omega_m$, as it is (approximately) an eigensolution of Eq. (3) for eigenfrequency $\omega_m$. Since we will later need the orthonormal properties of $\psi^{NF}_m(\mathbf{r})$ offered by Eq. (3), we take the original form of $\psi^{NF}_m(\mathbf{r})$ in constructing the trial wave functions at general frequencies $\omega \neq \omega_m$. Meanwhile, the FF part $\psi^{FF}_m(\mathbf{r},\omega_m)$ contains propagating terms depending on the wavevector $k_q$, which must be modified from $k_q(\omega_m)$ to $k_q(\omega)$ according to the dispersion relations. We note, however, that the $\psi^{LEM}_m(\mathbf{r},\omega)$ thus obtained neglects the frequency corrections to the FF radiation amplitudes. In principle, such corrections can be taken into account by considering the NF-FF relation of a given source 38,39. To obtain a concise analytical form for our theory, we neglected such corrections, as justified by the high-Q approximation. Later, we show that this approximation works quite well even when the original modes supported by the individual resonators do not exhibit extremely high Q factors.
We can finally construct the total wave function as

$\Psi(\mathbf{r},\omega) = \sum_q s^+_q \Psi^q_B(\mathbf{r},\omega) + \sum_n a_n \psi^{LEM}_n(\mathbf{r},\omega), \qquad (6)$

where $\{a_n\}$ are a set of unknown coefficients representing the strengths of the fields scattered by the different resonators under the external illumination, represented by $\{s^+_q\}$ denoting the excitation amplitudes at the different incoming ports, and $\Psi^q_B$ denotes the background wave function obtained when only the qth port is excited with unit amplitude. Substituting Eq. (6) into Eq. (1), projecting both sides onto $\psi^{NF}_m$ and utilizing the orthogonality condition Eq. (4), we obtain the set of equations determining $\{a_n\}$ [Eq. (7), not reproduced]. We next multiply both sides of $\Psi(\mathbf{r},\omega)$ defined in Eq. (6) by each FF outgoing basis $\langle k^-_q|$, and then perform the field integrations at the reference planes of all ports. Using the orthonormal conditions Eq. (5), we finally obtain the set of equations for the outgoing amplitudes [Eq. (8), not reproduced], which describe the strengths of the scattered fields measured at the different external ports. All parameters in Eqs. (7) and (8) are unambiguously defined and can be calculated via field integrals [Eq. (9), not reproduced], where "V" and "S" denote whether the integrals are performed over the entire volume or on the reference plane of a port. The physical meanings of all involved parameters can be clearly seen from their expressions; for example, $t_{mn}$ and $X_{mn}$ represent the coupling strengths between two resonators due to their NF and FF interactions, respectively. Derivations of Eqs. (7)-(9) can be found in Sec. II of the Supplementary Information. It is helpful to discuss explicitly the conditions imposed on our systems to make the derived theory (e.g., Eqs. (7)-(9)) valid. By re-examining Eq. (7) for the single-scatterer case, we find that Im(Γ_m), if it exists, can shift the resonance frequency $\omega_m$; a large Im(Γ_m) thus implies that $\psi^{NF}_m$ is not reasonably chosen. Therefore, the first criterion is Im(Γ_m) → 0, which determines the accuracy of our theory at resonance. Meanwhile, we also require Re(Γ_m) << $\omega_m$, which is responsible for the correctness of our theory in describing the entire lineshape. The second criterion can be easily satisfied at a moderate Q value (e.g., Q > 5), as long as the frequency dispersion of the material is not significant and all higher-order modes are far from the mode under study. The first criterion, however, requires the resonators to be of deep-subwavelength size, so that $\psi^{FF}_m$ and $\psi^{NF}_m$ can exhibit a π/2 phase difference inside the whole region occupied by the resonator 38, leading to a negligible Im(Γ_m). For plasmonic resonances, such a deep-subwavelength condition is easily satisfied. However, for dielectric resonances, this condition can only be satisfied in systems with a very high refractive index (n), which pushes the Q factors to even higher values (see Sec. IX in the Supplementary Information for more details).
We note that Eq. (9) is derived for lossless systems, and thus Γ_m contains only radiation damping. In realistic systems, we also need to consider another parameter, $\Gamma^a_m$, representing the damping due to absorption (i.e., Γ_m is replaced by $\Gamma_m + \Gamma^a_m$ in Eq. (8)). This parameter can be computed from the difference between the two Hamiltonians [equation not reproduced], where $\hat{H}_m$ represents the Hamiltonian of the realistic lossy system, while $\hat{H}^0_m$ describes the same system with material losses omitted 40. Equations (7)-(9) are the core results of this paper and have clear and profound physical meanings: while Eq. (7) describes the dynamics of each mode under certain excitations, Eq. (8) describes the measurable scattering spectra. We note that Eqs. (7) and (8) resemble the two equations of CMT 18,19, but our theory is different and possesses the following merits. In the empirical CMT, the key parameters are usually obtained by fitting to numerical simulations, while the remaining parameters are derived by energy-conservation and time-reversal arguments 41. In contrast, in our theory all parameters can be unambiguously evaluated via Eq. (9), and therefore one can use it to predict the lineshapes of coupled systems before performing numerical simulations on them. Moreover, the empirical CMT cannot explicitly take into account the NF couplings between resonators 18, while in our approach the NF couplings $t_{mn}$ can be unambiguously determined (see Eq. (9)) and explicitly included in determining the lineshape (Eq. (8)).
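Because Eqs. (7)-(9) themselves are not reproduced in this copy, the sketch below mirrors only the CMT-like structure the text says they take: a linear system for the mode amplitudes {a_n} driven through the ports, followed by composition of the outgoing amplitudes. All conventions here (signs, the form of the port coupling κ, the background matrix C) are assumptions for illustration, not the authors' Eq. (9).

```python
import numpy as np

def lineshape(omega, w, gamma_r, gamma_a, t, kappa, C):
    """Outgoing amplitudes s_minus for unit excitation of port 0.

    w, gamma_r, gamma_a : (M,) resonance frequencies and damping rates
    t     : (M, M) near-field coupling matrix (zero diagonal)
    kappa : (M, N) resonator-port coupling coefficients (assumed reciprocal)
    C     : (N, N) direct (background) scattering matrix
    """
    M, N = kappa.shape
    s_plus = np.zeros(N, dtype=complex)
    s_plus[0] = 1.0
    # Mode dynamics: [i(w_m - omega) + G_m] a_m + i sum_n t_mn a_n = (kappa s+)_m
    A = np.diag(1j * (w - omega) + gamma_r + gamma_a) + 1j * t
    a = np.linalg.solve(A, kappa @ s_plus)
    # Outgoing waves: background scattering plus resonator radiation
    return C @ s_plus + kappa.T @ a

# Two resonators, two ports; sum_q |kappa_mq|^2 = 2*Gamma_m (CMT normalization)
w = np.array([1.00, 1.05])
gr, ga = np.array([0.010, 0.020]), np.array([0.002, 0.002])
t = np.array([[0.0, 0.03], [0.03, 0.0]])
kappa = np.sqrt(np.outer(2 * gr, [0.5, 0.5])).astype(complex)
C = np.array([[0, 1], [1, 0]], dtype=complex)   # transparent background
freqs = np.linspace(0.9, 1.2, 301)
refl = [abs(lineshape(f, w, gr, ga, t, kappa, C)[0]) ** 2 for f in freqs]
```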
Although the single-resonator parameters ($\omega_{res}$ and $\Gamma_{res}$) can be obtained analytically for certain high-symmetry structures for which analytical formulas of the scattering coefficients are available 42, such an approach is not general enough to deal with arbitrarily coupled systems lacking analytical expressions for the scattering coefficients, and it cannot be used to study the couplings between different resonators.
Applications to photonic systems and benchmark tests
We now apply the developed formal theory to photonic systems, described generally by an inhomogeneous permittivity function $\varepsilon(\mathbf{r},\omega)$, in which at each local point $\mathbf{r}$ the permittivity is $\varepsilon(\omega) = \varepsilon_\infty [1 + \omega_p^2/(\omega_0^2 - \omega^2 + i\omega\Gamma_e)]$, where $\varepsilon_\infty$, $\omega_0$, $\omega_p$, and $\Gamma_e$ are all position- and frequency-independent parameters describing the local properties of the constituent materials. The governing equations (i.e., Maxwell's equations in the frequency domain) can be formally rewritten as Eq. (1) 40, where the Hamiltonian is given by [Eq. (10), not reproduced] and the wave function is defined as $\Psi(\mathbf{r}) = (\tilde{E}, \tilde{H}, \tilde{P}, \tilde{V})^T$, with $\tilde{E}$, $\tilde{H}$, and $\tilde{P}$ denoting the electric, magnetic, and polarization fields, respectively, and $\tilde{V} = d\tilde{P}/dt$ describing the polarization current. Consider the lossless case first (i.e., $\Gamma_e = 0$). The inner product between two wave functions is defined as in refs. 26,40 [Eq. (11), not reproduced]. Meanwhile, in the FF region occupied by air, the inner product between two port modes can be defined as [Eq. (12), not reproduced], where $\tilde{c}$ is the light speed in the host medium. This ensures that different port modes are orthogonal and that each mode carries a unit of energy flux 43. With Eqs. (10)-(12) and supposing that $\{\psi^{NF}_m, \psi^{FF}_m\}$ are obtained, one can substitute them into Eq. (9) to compute all parameters (see Sec. III in the Supplementary Information) and then substitute these into Eqs. (7) and (8) to determine the lineshape.
For photonic resonators with regular shapes, $\{\psi^{NF}_m, \psi^{FF}_m\}$ can be obtained analytically. For arbitrary resonators, we need to obtain the required wave functions numerically. We emphasize, however, that such numerical calculations are needed only once: once $\{\psi^{NF}_m, \psi^{FF}_m\}$ are obtained, we can predict the lineshapes of the coupled systems without having to perform simulations on them.
We first choose an analytically solvable system, a single gold sphere illuminated by an x-polarized plane wave, to test our theory against Mie theory. As shown in Fig. 2a, consider a sphere located at the origin with radius $r_m = 0.036\lambda_p$ and Drude permittivity $\varepsilon(\omega) = \varepsilon_0 [1 - \omega_p^2/\omega^2]$, with $\omega_p$ and $\lambda_p$ denoting the plasma frequency and the corresponding wavelength. Such a problem can be solved analytically by Mie theory 44,45, yielding an analytical form of $\Psi_{sca}(\mathbf{r},\omega)$. When the scatterer is much smaller than the wavelength of the incident light, the electric dipole channel dominates in the frequency range plotted 46, and thus we obtain $\omega_{res} = [1 - 8\pi^2 (r_m/\lambda_p)^2/15 + \ldots]\,\omega_p/\sqrt{3}$ and the analytical forms of $\psi^{LEM}$, $\psi^{FF}$, and $\psi^{NF}$, as well as $|k^{\pm}\rangle$ (see Sec. IV in the Supplementary Information). Figure 2a depicts the field distributions of $\psi^{LEM}$, $\psi^{FF}$, and $\psi^{NF}$: $\psi^{NF}$ exhibits a clear electric dipole resonance feature, and $\psi^{FF}$ represents the FF radiation of an electric dipole located at the origin.
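For concreteness, the size-corrected resonance quoted above can be evaluated directly; the short computation below simply plugs in $r_m = 0.036\lambda_p$ (units of $\omega_p = 1$).

```python
import math

# Evaluating omega_res = [1 - 8 pi^2 (r_m/lambda_p)^2 / 15 + ...] * omega_p / sqrt(3)
ratio = 0.036                                        # r_m / lambda_p from the text
correction = 1.0 - 8.0 * math.pi**2 * ratio**2 / 15.0
omega_res = correction / math.sqrt(3.0)
print(f"omega_res ~= {omega_res:.4f} omega_p")       # slightly below 1/sqrt(3)
```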
Substituting all wave functions into Eq. (9), we find $\kappa = d = 2.92 \times 10^{-2} \sqrt{\omega_p}\, i$ and $\Gamma = 4.28 \times 10^{-4}\, \omega_p$. Since there is only one scatterer and one port in the system, we drop all subscripts without causing confusion. Substituting these parameters into Eqs. (7) and (8), we obtain the scattering spectrum of the nanosphere, defined as $\sigma(\omega) = 3\pi (1 - |R|^2)/(2\eta_0 k_0^2)$, with $\eta_0 = \sqrt{\mu_0/\varepsilon_0}$ being the vacuum impedance and $R = s^-/s^+$ the scattering coefficient. The spectrum thus calculated is depicted in Fig. 2b as a solid line, matching well the Mie theory (squares) and FEM (circles) results.
Under the electric dipole approximation, we further simplify the analytical expressions of all involved parameters (see Sec. IV in the Supplementary Information), as given in Eq. (13) [not reproduced], with $p = \int_{sphere} \tilde{P}(\mathbf{r})\, d\mathbf{r}$ ($\tilde{P}$ is the polarization field inside the sphere; see the inset in Fig. 2a) representing the effective dipole moment of the nanosphere. Equation (13) reveals a few important pieces of physics that are difficult to obtain from numerical calculations. First, κ and d, defined as two distinct field integrations (Eq. (9)), surprisingly generate identical results (see Eq. (13)), which is consistent with the time-reversal symmetry argument 19. Second, Γ takes an expression identical to that derived for a dipole emitter based on Poynting's theorem (see Eq. (8.74) in ref. 38), revealing the clear physical meaning of the radiation damping. Finally, Eq. (13) uncovers the relation $2\Gamma = |p|^2 \omega_{res}^4/(6\pi\varepsilon_0 c^3) = |d|^2$, verified by numerical calculations (see Fig. 2c), which ensures energy conservation consistent with Poynting's theorem 38. We note that these relations were derived from energy-conservation and time-reversal arguments in the empirical CMT; here they are demonstrated directly and rigorously, simply because our theory is established on Maxwell's equations, which already satisfy energy conservation and time-reversal symmetry.
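The single-sphere parameters quoted above already satisfy the energy-conservation relation $2\Gamma = |d|^2$ to within a fraction of a percent; a quick check in units of $\omega_p$:

```python
# Consistency check of 2*Gamma = |d|^2 with the quoted sphere parameters:
# kappa = d = 2.92e-2 * sqrt(omega_p) * i, Gamma = 4.28e-4 * omega_p.
d_mag2 = (2.92e-2) ** 2          # |d|^2, in units of omega_p
two_gamma = 2 * 4.28e-4          # 2*Gamma, in units of omega_p
print(f"|d|^2   = {d_mag2:.3e} omega_p")
print(f"2*Gamma = {two_gamma:.3e} omega_p")
print(f"relative mismatch = {abs(d_mag2 - two_gamma) / two_gamma:.2%}")
```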
After studying coupled electric dipole resonators to justify our theory against analytical formulas derived in prior literature 47,48 (see Sec. V in the Supplementary Information for details), we implement our theory to study arbitrary coupled photonic systems. As shown in Fig. 3a, the system we consider is a periodic metasurface with unit cells arranged in a hexagonal lattice (with periodicity 550 nm), each containing two different types of nanoparticles (a bar and a C-shaped resonator) coupled together. All nanoparticles are made of silver and are placed on a semi-infinite dielectric substrate (n = 1.55). Following the general strategy established above, we first perform lossless FEM simulations to study the scattering properties of two model systems, each containing resonators of one particular type arranged in the same hexagonal lattice (see Fig. 3a). Owing to the periodic arrangement with deep-subwavelength spacing, only the zero-order transmission/reflection channels survive in the FF. From the calculated reflection spectra (circles) shown in Fig. 3b, c, we identify the resonance frequencies {ω_m, m = 1, 2} of the two resonators (see dashed lines in Fig. 3b, c). We then follow the general strategy described in the last section to determine the needed NF and FF wave functions $\{\psi^{FF}_m, \psi^{NF}_m, m = 1, 2\}$. Substituting these single-resonator properties into Eq. (9), we obtain all needed parameters (see Sec. VI in the Supplementary Information for details) and, in turn, the desired transmission/reflection spectra. The reflectance spectra calculated by our theory are plotted in Fig. 3b, c as black lines, in perfect agreement with FEM simulations (circles) of the realistic structures. This is remarkable, since we did not perform any fitting procedures in obtaining these spectra. The lineshape of the coupled system predicted by our theory is further confirmed by our experiments. We fabricated three samples according to the designs using the standard electron-beam lithography (EBL) method (see the left panels in Fig. 3b-d for their scanning electron microscopy (SEM) images) and experimentally characterized their reflection spectra (see the Supplementary Information). The excellent agreement among the FEM, experimental, and theoretical results unambiguously justifies our theory.
Implementations of the theory in lineshape tailoring
We now apply our theory to "design" the lineshape of a photonic system. Figure 3d shows that the interresonator coupling can dramatically change the lineshape of a coupled system, which is essentially determined by the two "dressed" modes with frequencies and bandwidths $\{\tilde{\omega}_\pm, \tilde{\Gamma}_\pm\}$. Therefore, we must first understand the properties of the dressed modes $\{\tilde{\omega}_\pm, \tilde{\Gamma}_\pm\}$.
Consider a two-mode, two-port system with two resonators placed in the same plane and illuminated by a normally incident wave. Assuming $\Gamma^a_1 = \Gamma^a_2 = \Gamma^a$ for simplicity, we can explicitly rewrite Eq. (7) as [Eq. (14), not reproduced]. Diagonalizing the matrix containing t by an orthogonal transformation M, we obtain the equation describing the amplitudes of the two collective modes $\tilde{a}_\pm$ [Eq. (15), not reproduced], with $\Delta\omega = \omega_1 - \omega_2$, $\Delta\Gamma = \Gamma_1 - \Gamma_2$, and $(\tilde{a}_+, \tilde{a}_-)^T = M (a_1, a_2)^T$. Since an orthogonal transformation does not change the trace of a matrix, it is sufficient to study $\Delta\tilde{\omega} = \tilde{\omega}_+ - \tilde{\omega}_-$ and $\Delta\tilde{\Gamma} = \tilde{\Gamma}_+ - \tilde{\Gamma}_-$, which are determined by t, Δω, and ΔΓ via Eq. (16) [not reproduced]. Here, and in what follows, we have scaled all involved physical quantities (i.e., Δω, ΔΓ, $\Delta\tilde{\omega}$, $\Delta\tilde{\Gamma}$, and t) by $\sqrt{\Gamma_1 \Gamma_2}$ to make them dimensionless. Equation (16) shows that even for two resonators with fixed properties, one can still use the interresonator coupling t to change the properties of the "dressed" modes and, in turn, "design" the final lineshape of the coupled system.
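Since Eqs. (14)-(16) are not reproduced in this copy, the sketch below uses the standard two-mode effective Hamiltonian with a near-field coupling t and a radiative coupling $\sqrt{\Gamma_1\Gamma_2}$ through the shared radiation channel (the Friedrich-Wintgen form); these conventions are our assumptions, chosen to be consistent with the phase-diagram behavior described next.

```python
import numpy as np

# Dressed-mode widths of two coupled leaky resonators (assumed effective
# Hamiltonian; the paper's Eqs. (14)-(16) are not reproduced above).
def dressed_widths(w1, w2, G1, G2, t):
    H = np.array([[w1, t], [t, w2]], dtype=complex) \
        - 1j * np.array([[G1, np.sqrt(G1 * G2)],
                         [np.sqrt(G1 * G2), G2]])
    return -np.linalg.eigvals(H).imag          # the two Gamma_tilde values

# Two detuned resonators with unequal radiative losses (arbitrary units)
w1, w2, G1, G2 = 1.00, 1.02, 0.030, 0.010
ts = np.linspace(-0.1, 0.1, 2001)
min_widths = [dressed_widths(w1, w2, G1, G2, t).min() for t in ts]
t_dark = ts[int(np.argmin(min_widths))]
print(f"darkest mode at t ~= {t_dark:+.4f}")
# Friedrich-Wintgen condition: t*(G1 - G2) = sqrt(G1*G2)*(w1 - w2)
print(f"FW prediction:     t  = {np.sqrt(G1 * G2) * (w1 - w2) / (G1 - G2):+.4f}")
```

The scan reproduces the key claim of this section: for fixed resonator properties there is a specific coupling t at which one dressed mode becomes completely dark, while the trace $\tilde{\Gamma}_+ + \tilde{\Gamma}_- = \Gamma_1 + \Gamma_2$ is conserved.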
The left and right panels in Fig. 4a depict, respectively, how $\Delta\tilde{\omega}$ and $\Delta\tilde{\Gamma}$ vary with Δω and t, with ΔΓ set at two different values. We find that while $\Delta\tilde{\omega}$ exhibits circular equal-value lines on the Δω-t plane independent of ΔΓ, $\Delta\tilde{\Gamma}$ exhibits fascinating behavior on the Δω-t plane, depending sensitively on ΔΓ. In particular, on each Δω-t phase plane with a fixed ΔΓ, we always find two special lines, defined by $\Delta\tilde{\Gamma} = 0$ (red lines) and $\Delta\tilde{\Gamma} = \pm(\Gamma_1 + \Gamma_2)$ (green lines), that separate the whole space into four subregions with distinct properties. Physically, while the condition $\Delta\tilde{\Gamma} = 0$ implies that the two dressed modes have identical bandwidths (i.e., $\tilde{\Gamma}_+ = \tilde{\Gamma}_-$), the other condition, $\Delta\tilde{\Gamma} = \pm(\Gamma_1 + \Gamma_2)$, means that one dressed mode exhibits vanishing radiative damping. Interestingly, these two phase-boundary lines rotate as ΔΓ changes, as shown in Fig. 4b.

[Fig. 3 caption: Benchmark test of our theory on a realistic system. a Schematic of the coupled plasmonic system under study; the geometrical parameters are p = 530, d = 30, w = 240, l = 420, R = 110, and a = 85, all in nm. b-d Reflectance spectra of periodic metasurfaces containing b bar resonators only, c C resonators only, and d the two resonators coupled together, obtained by our theory (solid lines), FEM simulations (circles), and measurements (triangles). White dashed lines and gray areas denote the frequencies and widths of the resonant modes. The right panels of c and d are SEM images of the fabricated samples; scale bars (white lines), 500 nm.]
To illustrate the key features of the four subregions, we purposely choose eight points on a circle in the Δω-t plane with ΔΓ = 2 (see Fig. 4a) and illustrate in Fig. 4c how the reflection spectra of the corresponding systems evolve. Consistent with our expectations, the spectra of systems 1 and 5 exhibit only one peak, as the other mode is completely dark, while the spectra of systems 3 and 7 exhibit two peaks with equal bandwidths. In between these special points, the spectra evolve gradually. Notably, the radiation damping (bandwidths) of the two "dressed" modes can vary continuously from 0 to $\Gamma_1 + \Gamma_2$ while moving along the circle (see Fig. 4d).
The physics is very clear: since the dressed modes are appropriate linear combinations of the two original modes, their radiation damping must also be a linear combination of that of the two original modes. Therefore, varying Δω and t can dramatically modify the relative portions of the two original modes in constructing the dressed modes and, in turn, efficiently control the radiation damping of the dressed modes. In principle, one can realize any desired lineshape based on our phase diagram by choosing certain original modes and "tuning" the coupling t. Of particular interest is the appearance of a purely dark mode with an infinitely long lifetime, which shares the same physical origin as the BIC and has many interesting applications 49-51.
We now experimentally verify our predictions on lineshape tailoring using coupled systems constructed from the two resonators studied in Fig. 3a. Since $t$ is solely determined by the overlap between the $\psi^{\rm NF}_m$ of the two resonators (see Eq. (9)), changing the resonators' relative configuration can dramatically modify $t$. Indeed, as we rotate the C-shaped resonator with respect to the bar resonator, we find that $t$ changes drastically (see solid line in Fig. 5a). In particular, increasing the relative angle $\theta$ between the two resonators drives $t$ from a positive to a negative value, passing through 0 at a particular angle. Such an intriguing $t$–$\theta$ relation can be simply explained by an effective model for plasmonic coupling established previously 47,48. Choosing six points on the $t$–$\theta$ curve, as shown in Fig. 5a, we employ our theory to study the optical lineshapes of the corresponding realistic systems. Since the two original modes have fixed properties, these six systems with different $t$ lie on a straight line in the phase diagram that passes through two phase boundaries (see Fig. 5b). Their reflection spectra, computed by our theory, are depicted as solid lines in Fig. 5c and exhibit the expected behaviors. In particular, the spectrum of the third system exhibits only one peak, while that of the fifth system contains two equal-bandwidth peaks, consistent with the phase diagram shown in Fig. 5b. Once again, we emphasize that all spectra are calculated directly with our theory, without any fitting procedures.
We then perform both experiments and simulations to verify the above theoretical predictions. We fabricate samples according to the designs using the standard EBL method, with the right panel in Fig. 5c showing SEM images of the fabricated samples. Illuminating these samples with normally incident light with $\tilde{E} \parallel \hat{y}$, we measure their transmission/reflection spectra and depict the reflection spectra as solid triangles in Fig. 5c. We also perform FEM simulations to calculate their reflection spectra (open circles in Fig. 5c). Both the experimental and simulation results are in excellent agreement with the spectra obtained by our theory (solid lines in Fig. 5c). In particular, the measured/simulated spectra of sample 3 exhibit clear BIC features, while those of sample 5 contain two peaks with equal bandwidths. We also employ our theory to predict the transmission spectra of these systems, which are in excellent agreement with the measured and simulated results (see Sec. VIII in the Supplementary Information).
The solid line in Fig. 5d depicts how varying $t$ significantly modulates the radiative Q factor of the low-frequency dressed mode, as predicted by our theory. The divergence of the Q factor at a specific point signifies the appearance of a BIC. The symbols are the Q factors of the six realistic samples obtained by analyzing their measured reflection spectra; excellent agreement is noted between the experimental and analytical results. At the frequency where the BIC appears, the radiation from the two individual resonators exactly cancels, leading to a vanishing total radiative loss (see Sec. VI in the Supplementary Information).
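As a companion to the toy model sketched earlier, the short scan below evaluates the radiative Q factor of the low-frequency dressed mode, $Q = \omega_0/(2\Gamma_{\rm rad})$, as a function of $t$; the divergence marks the BIC point. The matrix and all parameter values are the same illustrative assumptions as before, not the fitted sample parameters.

```python
import numpy as np

def radiative_Q(omega0, d_omega, t, G1=1.0, G2=0.8):
    """Radiative Q of the low-frequency dressed mode (toy model from above)."""
    c = t - 1j * np.sqrt(G1 * G2)
    H = np.array([[+d_omega / 2 - 1j * G1, c],
                  [c, -d_omega / 2 - 1j * G2]])
    lam = np.linalg.eigvals(H)
    low = lam[np.argmin(lam.real)]         # low-frequency dressed mode
    g_rad = -low.imag                      # its radiative damping rate
    return np.inf if g_rad < 1e-9 else (omega0 + low.real) / (2 * g_rad)

# Q spikes (diverges) near the coupling value where the dressed mode goes dark.
for t in np.linspace(-2.0, 2.0, 9):
    print(f"t = {t:+.2f}   Q = {radiative_Q(200.0, 0.3, t):.1f}")
```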
Discussion
In summary, we have derived a formal theoretical framework directly from Maxwell's equations to study the optical responses of arbitrarily coupled photonic systems, in which all involved parameters are unambiguously computable without any fitting procedures. After testing it against both Mie theory and numerical simulations on different systems, we illustrate how to employ it to design the lineshape of a coupled system by modulating the couplings between resonators. In particular, we show that one can always choose a specific coupling between two arbitrary resonators to make one of the "dressed" modes in the coupled system completely dark, creating a BIC. All predictions are quantitatively verified by our experiments and simulations at near-infrared wavelengths. In addition to revealing the profound physics underlying coupling-induced phenomena, our theory also offers a powerful tool to design optical devices with well-controlled NF and FF properties, and can be extended to study coupled systems for other types of waves.
Simulations
We employed FEM simulations using the commercial software COMSOL Multiphysics. The permittivity of Ag was described by the Drude model $\varepsilon(\omega) = \varepsilon_\infty - \omega_p^2/[\omega(\omega + i\Gamma_e)]$, with $\varepsilon_\infty = 5\varepsilon_0$, $\omega_0 = 0$ THz, and $\omega_p = 2\pi \times 2176.2$ THz. The effective damping rate was set as $\Gamma_e = 2\pi \times 38.3$ THz for the bar structure and $\Gamma_e = 2\pi \times 27.3$ THz for the C-shaped resonator, obtained by fitting to our experimental results. The SiO2 spacer was considered a lossless dielectric with permittivity ε = 2.42. Additional losses caused by surface roughness and grain-boundary effects in the thin films, as well as dielectric losses, were effectively absorbed into the fitting parameter $\Gamma_e$.
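A small sketch of this permittivity model using the parameter values quoted above; the unit convention (angular frequencies in units of 2π × THz) is our choice for illustration.

```python
import numpy as np

EPS_INF = 5.0
OMEGA_P = 2 * np.pi * 2176.2          # plasma frequency of Ag
GAMMA_E = {"bar": 2 * np.pi * 38.3,   # effective damping, bar resonator
           "C":   2 * np.pi * 27.3}   # effective damping, C-shaped resonator

def eps_drude(omega, resonator="bar"):
    """eps(omega) = eps_inf - omega_p**2 / (omega * (omega + i*Gamma_e))."""
    ge = GAMMA_E[resonator]
    return EPS_INF - OMEGA_P**2 / (omega * (omega + 1j * ge))

# e.g. at a near-infrared frequency of 200 THz the real part is strongly
# negative, as expected for a good metal below the plasma frequency:
print(eps_drude(2 * np.pi * 200.0, "bar"))
```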
Fabrication
All our meta-devices were fabricated following standard EBL and lift-off processes. First, the positive resist was spin-coated on a silica substrate and exposed with EBL (JEOL 6300) at an acceleration voltage of 100 kV. After exposure, the samples were developed in a solution of isopropyl alcohol and methyl isobutyl ketone. Then, 3 nm Cr and 30 nm Au/Ag were deposited using electron-beam evaporation. Finally, the top patterns were formed after a lift-off process. All samples had dimensions of 80 µm × 80 µm.
Optical characterizations
We used a homemade macroscopic spectrometer equipped with a broadband supercontinuum white-light source and a fiber-coupled grating spectrometer (Ideaoptics NIR2500) to characterize the optical properties of the fabricated samples (see more details in Sec. VII of the Supplementary Information). | 7,945 | 2020-09-08T00:00:00.000 | [
"Physics"
] |
Search for a standard model Higgs boson produced in association with a top-quark pair and decaying to bottom quarks using a matrix element method
A search for a standard model Higgs boson produced in association with a top-quark pair and decaying to bottom quarks is presented. Events with hadronic jets and one or two oppositely charged leptons are selected from a data sample corresponding to an integrated luminosity of 19.5 inverse femtobarns collected by the CMS experiment at the LHC in pp collisions at a centre-of-mass energy of 8 TeV. In order to separate the signal from the larger t t-bar + jets background, this analysis uses a matrix element method that assigns a probability density value to each reconstructed event under signal or background hypotheses. The ratio between the two values is used in a maximum likelihood fit to extract the signal yield. The results are presented in terms of the measured signal strength modifier, mu, relative to the standard model prediction for a Higgs boson mass of 125 GeV. The observed (expected) exclusion limit at a 95% confidence level is mu<4.2 (3.3), corresponding to a best fit value mu-hat = 1.2 +1.6 -1.5.
Introduction
Following the discovery of a new boson with mass around 125 GeV by the ATLAS and CMS Collaborations [1][2][3] at the CERN LHC, the measurement of its properties has become an important task in particle physics. The precise determination of its quantum numbers and couplings to gauge bosons and fermions will answer the question whether the newly discovered particle is the Higgs boson (H) predicted by the standard model (SM) of particle physics, i.e. the quantum of the field responsible for the spontaneous breaking of the electroweak symmetry [4][5][6][7][8][9]. Conversely, any deviation from SM predictions will represent evidence of physics beyond our present knowledge, thus opening new horizons in high-energy physics. While the measurements performed with the data collected so far indicate overall consistency with the SM expectations [3,[10][11][12][13], it is necessary to continue improving on the measurement of all possible observables.
In the SM, the Higgs boson couples to fermions via Yukawa interactions with strength proportional to the fermion mass. Direct measurements of decays into bottom quarks and τ leptons have provided the first evidence that the 125 GeV Higgs boson couples to down-type fermions with SM-like strength [14]. Evidence of a direct coupling to up-type fermions, in particular to top quarks, is still lacking. Indirect constraints on the top-quark Yukawa coupling can be inferred from measuring either the production or the decay of Higgs bosons through effective couplings generated by top-quark loops. Current measurements of the Higgs boson cross section via gluon fusion and of its branching fraction to photons are consistent with the SM expectation for the top-quark Yukawa coupling [3,[10][11][12]. Since these effective couplings occur at the loop level, they can be affected by beyond-standard-model (BSM) particles. In order to disentangle the top-quark Yukawa coupling from a possible BSM contribution, a direct measurement of the former is required. This can be achieved by measuring observables that probe the top-quark Yukawa interaction with the Higgs boson already at tree level. The production cross section of the Higgs boson in association with a top-quark pair (ttH) provides an example of such an observable. A sample of tree-level Feynman diagrams contributing to the partonic processes qq, gg → ttH is shown in Fig. 1 (left and centre). The inclusive next-to-leading-order (NLO) ttH cross section is about 130 fb in pp collisions at a centre-of-mass energy √s = 8 TeV for a Higgs boson mass (m_H) of 125 GeV [15][16][17][18][19][20][21][22][23][24], which is approximately two orders of magnitude smaller than the cross section for Higgs boson production via gluon fusion [23,24]. The first search for ttH events used pp collision data at √s = 1.96 TeV collected by the CDF experiment at the Tevatron collider [25]. Searches for ttH production at the LHC have previously been published for individual decay modes of the Higgs boson [26,27]. The first combination of ttH searches in different final states has been published by the CMS Collaboration based on the full data set collected at √s = 7 and 8 TeV [28]. Assuming SM branching fractions, the results of
CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the field volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. Extensive forward calorimetry complements the coverage provided by the barrel and endcap detectors. The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a time interval of less than 4 µs. The high-level trigger processor farm further decreases the event rate from around 100 kHz to around 1 kHz, before data storage. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [29].
Data and simulated samples
The data sample used in this search was collected with the CMS detector in 2012 from pp collisions at a centre-of-mass energy of 8 TeV, using single-electron, single-muon, or dielectron triggers. The single-electron trigger requires the presence of an isolated electron with transverse momentum (p_T) in excess of 27 GeV. The single-muon trigger requires an isolated muon candidate with p_T above 24 GeV. The dielectron trigger requires two isolated electrons with p_T thresholds of 17 and 8 GeV.
Signal and background processes are modelled with Monte Carlo (MC) simulation programs.
The CMS detector response is simulated by using the GEANT4 software package [37]. Simulated events are required to pass the same trigger selection and offline reconstruction algorithms used on collision data. Correction factors are applied to the simulated samples to account for residual differences in the selection and reconstruction efficiencies with respect to those measured.
The ttH, H → bb signal is modelled by using the PYTHIA 6.426 [38] leading-order (LO) event generator normalised to the NLO theoretical cross section [15][16][17][18][19][20][21][22][23][24], and assuming the SM Higgs boson with a mass of 125 GeV. The main background in the analysis stems from tt+jets production. This process has been simulated with the MADGRAPH 5.1.3 [39] tree-level matrix element generator matched to PYTHIA for the parton shower description, and normalised to the inclusive next-to-next-to-leading-order (NNLO) cross section with soft-gluon resummation at next-to-next-to-leading-logarithmic accuracy [40]. The tt+jets sample has been generated in a five-flavour scheme with tree-level diagrams for two top quarks plus up to three extra partons, including both charm and bottom quarks. An additional correction factor is applied to the tt+jets samples to account for the differences observed in the top-quark p_T spectrum when comparing the MADGRAPH simulation with data. The interference between the ttH, H → bb diagrams and the tt+bb background diagrams is negligible and is not considered in the MC simulation. Minor backgrounds come from the Drell-Yan production of an electroweak boson with additional jets (W+jets, Z+jets), and from the production of a top-quark pair in association with a W±, Z boson (ttW, ttZ). These processes have been generated by MADGRAPH matched to the PYTHIA parton shower description. The Drell-Yan processes have been normalised to the NNLO inclusive cross section from FEWZ 3.1 [41], while the NLO calculations from Refs. [42] and [43] are used to normalise the ttW and ttZ samples, respectively. Single top quark production is modelled with the NLO generator POWHEG 1.0 [44][45][46][47][48][49] combined with PYTHIA. Electroweak diboson processes (WW, WZ, and ZZ) are simulated by using the PYTHIA generator normalised to the NLO cross section calculated with MCFM 6.6 [50]. Processes that involve top quarks have been generated with a top-quark mass of 172.5 GeV. Samples generated at LO use the CTEQ6L1 parton distribution function (PDF) set [51], while samples generated with NLO programs use the CTEQ6.6M PDF set [52].
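As a side note on the normalisation step, the sketch below shows the standard per-event weight used to scale a simulated sample to a theoretical cross section times the integrated luminosity. The cross section (~130 fb) and luminosity (19.5 fb−1) are the values quoted in the text, while the generated-event count is a placeholder.

```python
def mc_weight(sigma_fb, lumi_fb_inv, n_generated):
    """Per-event weight so that the weighted MC sum equals sigma * L."""
    return sigma_fb * lumi_fb_inv / n_generated

# ttH at 8 TeV: NLO cross section ~130 fb, L = 19.5 fb^-1 (from the text);
# n_generated is a hypothetical sample size.
w = mc_weight(sigma_fb=130.0, lumi_fb_inv=19.5, n_generated=1_000_000)
print(f"weight = {w:.3e}, expected pre-selection yield = {130.0 * 19.5:.0f}")
```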
Effects from additional pp interactions in the same bunch crossing (pileup) are modelled by adding simulated minimum-bias events (generated with PYTHIA) to the generated hard interactions. The pileup multiplicity in the MC simulation is reweighted to reflect the luminosity profile observed in pp collision data.
Event reconstruction
The global event reconstruction provided by the particle-flow (PF) algorithm [53,54] seeds the reconstruction of the physics objects deployed in the analysis. To minimise the impact of pileup, charged particles are required to originate from the primary vertex, which is identified as the reconstructed vertex with the largest value of $\sum_i p_{T,i}^2$, where $p_{T,i}$ is the transverse momentum of the $i$th charged particle associated with the vertex. The missing transverse momentum vector $\vec{p}_T^{\,\rm miss}$ is defined as the negative vector sum of the transverse momenta of all neutral particles and of the charged particles coming from the primary vertex. Its magnitude is referred to as $E_T^{\rm miss}$.
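As an illustration of these two definitions, here is a minimal sketch over hypothetical per-particle arrays; it is not CMS reconstruction code.

```python
import numpy as np

def primary_vertex(vertices):
    """Pick the vertex with the largest sum of squared track pT."""
    return max(vertices, key=lambda v: sum(pt**2 for pt in v["track_pts"]))

def et_miss(px, py, keep):
    """E_T^miss: magnitude of minus the vector sum of transverse momenta.

    `keep` flags the particles entering the sum: all neutral particles and
    the charged particles associated with the primary vertex."""
    px, py, keep = np.asarray(px), np.asarray(py), np.asarray(keep, bool)
    return np.hypot(-px[keep].sum(), -py[keep].sum())

pvs = [{"track_pts": [2.1, 30.5, 11.0]}, {"track_pts": [1.2, 3.4]}]
print(primary_vertex(pvs))  # first vertex wins: largest sum(pT^2)
```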
Muons are reconstructed from a combination of measurements in the silicon tracker and in the muon system [55]. Electron reconstruction requires the matching of an energy cluster in the ECAL with a track in the silicon tracker [56]. Additional identification criteria are applied to muon and electron candidates to reduce instrumental backgrounds. An isolation variable is defined starting from the scalar p_T sum of all particles contained inside a cone around the track direction, excluding the contribution from the lepton itself. The amount of neutral pileup energy is estimated as the average p_T density calculated from all neutral particles in the event, multiplied by an effective area of the isolation cone, and is subtracted from the total sum. Jets are reconstructed by using the anti-k_T clustering algorithm [57], as implemented in the FASTJET package [58,59], with a distance parameter of 0.5. Each jet is required to have pseudorapidity (η) in the range [−2.5, 2.5], to have at least two tracks associated with it, and to have electromagnetic and hadronic energy fractions of at least 1% of the total jet energy. Jet momentum is determined as the vector sum of the momenta of all particles in the jet. An offset correction is applied to take into account the extra energy clustered in jets because of pileup. Jet energy corrections are derived from the simulation, and are confirmed with in situ measurements of the energy balance of dijet and Z/γ+jet events [60]. Additional selection criteria are applied to each event to remove spurious jet-like features originating from isolated noise patterns in a few HCAL regions. The combined secondary vertex (CSV) b-tagging algorithm is used to identify jets originating from the hadronisation of bottom quarks [61]. This algorithm combines the information about track impact parameters and secondary vertices within jets into a likelihood discriminant to provide separation of b-quark jets from jets that originate from lighter quarks or gluons. The CSV algorithm assigns to each jet a continuous value that can be used as a jet-flavour discriminator. Large values of the discriminator correspond preferentially to b-quark jets, so that working points of increasing purity can be defined by requiring higher values of the CSV discriminator. For example, the CSV medium working point (CSVM) is defined in such a way as to provide an efficiency of about 70% (20%) to tag jets originating from a bottom (charm) quark, and of approximately 2% for jets originating from light quarks or gluons. Scale factors are applied to the simulation to match the distribution of the CSV discriminator measured with a tag-and-probe technique [62] in data control regions. The scale factors have been derived as a function of the jet flavour, p_T, and |η|, as described in Ref. [28].
Event selection
The experimental signature of ttH events with H → bb is affected by a large multijet background, which can be reduced to a negligible level by considering only the semileptonic decays of the top quark. The selection criteria are therefore optimised to accept events compatible with a ttH signal where H → bb and at least one of the top quarks decays to a bottom quark, a charged lepton, and a neutrino. Events are divided into two exclusive channels depending on the number of charged leptons (electrons or muons), which can be either one or two. Top-quark decays in final states with tau leptons are not directly searched for, although they can still satisfy the event selection criteria when the tau lepton decays to an electron or muon, plus neutrinos. Channels of different lepton multiplicities are analysed separately. The single-lepton (SL) channel requires one isolated muon with p_T > 30 GeV and |η| < 2.1, or one isolated electron with p_T > 30 GeV and |η| < 2.5, excluding the 1.44 < |η| < 1.57 transition region between the ECAL barrel and endcap. Events are vetoed if additional electrons or muons with p_T in excess of 20 GeV, the same |η| requirement, and passing looser identification and isolation criteria are found. The dilepton (DL) channel collects events with a pair of oppositely charged leptons satisfying the selection criteria used to veto additional leptons in the SL channel. To reduce the contribution from Drell-Yan events in the same-flavour DL channel, the invariant mass of the lepton pair is required to be larger than 15 GeV and at least 8 GeV away from the Z boson mass. The optimisation of the selection criteria in terms of signal-to-background ratio requires a stringent demand on the number of jets. At least five (four) jets with p_T > 30 GeV and |η| < 2.5 are requested in the SL (DL) channel. A further event selection is applied to reduce the tt+jets background, which at this stage exceeds the signal rate by more than three orders of magnitude. For this purpose, the CSV discriminator values are calculated for all jets in the event and collectively denoted by ξ. For SL (DL) events with seven or more (five or more) jets, only the six (four) jets with the largest CSV discriminator value are considered. The likelihood to observe ξ is then evaluated under the alternative hypotheses of tt plus two heavy-flavour jets (tt+hf) or tt plus two light-flavour jets (tt+lf). For example, for SL events with six jets, and neglecting correlations among different jets in the same event, the likelihood under the tt+hf hypothesis is estimated as in Eq. (1), where ξ_i is the CSV discriminator for the ith jet, and f_hf(lf) is the probability density function (pdf) of ξ_i when the ith jet originates from heavy- (light-) flavour partons. The latter include u, d, s quarks and gluons, but not c quarks. For the sake of simplicity, the likelihood in Eq. (1) is rigorous for W → ud(s) decays, whereas it is only approximate for W → cs(d) decays, since the CSV discriminator pdf for charm quarks differs from f_lf [61]. Equation (1) can be extended to the case of SL events with five jets, or DL events with at least four jets, by considering that in both cases four of the jets are associated with heavy-flavour partons, and the remaining jets with light-flavour partons. The likelihood under the alternative hypothesis, f(ξ|tt+lf), is given by Eq. (1) after swapping f_hf for f_lf. The variable used to select events, F, is then defined as the likelihood ratio of Eq. (2). The distribution of F for SL events with six jets is shown in Fig. 2 (bottom right).
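To make the construction of F concrete, the sketch below implements one plausible reading of Eqs. (1)-(2): the per-hypothesis likelihood averages the product of per-jet pdf values over all ways of assigning the required number of jets to heavy flavour, and F normalises the two hypotheses. The pdfs f_hf and f_lf are assumed to be supplied (e.g. as normalised histograms from simulation); this is an illustration, not the exact CMS formula.

```python
import itertools

def likelihood(csv, f_hf, f_lf, n_heavy=4):
    """Mean over all ways of assigning n_heavy of the jets to heavy flavour."""
    jets = range(len(csv))
    total, n = 0.0, 0
    for heavy in itertools.combinations(jets, n_heavy):
        p = 1.0
        for i in jets:
            p *= f_hf(csv[i]) if i in heavy else f_lf(csv[i])
        total += p
        n += 1
    return total / n

def F_discriminant(csv, f_hf, f_lf, n_heavy=4):
    num = likelihood(csv, f_hf, f_lf, n_heavy)   # tt+hf hypothesis
    den = likelihood(csv, f_lf, f_hf, n_heavy)   # swap f_hf <-> f_lf: tt+lf
    return num / (num + den)
```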
In the following, events are retained if F is larger than a threshold value F_L, ranging between 0.85 and 0.97 depending on the channel and jet multiplicity. The selected events are further classified as high-purity (low-purity) if F is larger (smaller) than a value F_H, with F_L < F_H < 1.0. The low-purity categories serve as control regions for tt+lf jets, providing constraints on several sources of systematic uncertainty. The high-purity categories are enriched in tt+hf events, and drive the sensitivity of the analysis. The thresholds F_L and F_H are optimised separately for each of the analysis categories defined in Section 6. The exact values are reported in Table 1.
After requiring a lower threshold on the selection variable F, the background is dominated by tt+jets, with minor contributions from the production of a single top quark plus jets, tt plus vector bosons, and W/Z+jets; the expected purity for a SM Higgs boson signal is only at the percent level. By construction, the selection criteria based on Eq. (2) enhance the tt+bb subprocess compared to the otherwise dominant tt+lf production. The tt+bb background has the same final state as the signal whenever the two b quarks are resolved as individual jets. Therefore, this background cannot be effectively reduced by means of the F discriminant. The cross section for tt+bb production with two resolved b-quark jets is larger than that of the signal by about one order of magnitude and is affected by sizable theoretical uncertainties [63], which hampers the possibility of extracting the signal via a counting experiment. A more refined approach, which thoroughly uses the kinematic properties of the reconstructed event, is therefore required to improve the separation between the signal and the background.
Signal extraction
As in other resonance searches, the invariant mass reconstructed from the H → bb decay provides a natural discriminating variable to separate the narrow Higgs boson dijet resonance from the continuum mass spectrum expected from the tt+jets background. However, in the presence of additional b quarks from the decay of the top quarks, an ambiguity in the Higgs boson reconstruction is introduced, leading to a combinatorial background. The distribution of the experimental mass estimator built from a randomly selected jet pair is much broader than the detector resolution, since wrongly chosen jet pairs are only mildly or not at all correlated with m_H. Unless a selection rule is introduced to filter out the wrong combinations, the existence of such a combinatorial background results in a suppression of the statistical power of the mass estimator, which grows as the factorial of the jet multiplicity. Multivariate techniques that exploit the correlation between several observables in the same event are naturally suited to deal with signal extraction in such complex final states.
In this paper, a likelihood technique based on the theoretical matrix elements for the ttH process and the tt+bb background is applied for signal extraction. This method utilises the kinematics and dynamics of the event, providing a powerful discriminant between the signal and background. The tt+bb matrix elements are considered as the prototype to model all background processes. This choice guarantees optimal separation between the signal and the tt+bb background, which is a desirable property given the large rate and theoretical uncertainty of the latter. The performance on the other tt+jets subprocesses might not necessarily be optimal, even though some separation power is still preserved; indeed, the tt+bb matrix elements describe these processes better than the signal matrix elements do, as has been verified a posteriori with the simulation. Also, it is found that most of the statistical power attained by this method in separating ttH, H → bb from tt+bb events relies on the different correlations and kinematic distributions of the two b-quark jets not associated with the top-quark decays.
Construction of the MEM probability density functions
The MEM probability density functions under the signal and background hypotheses are constructed at LO, assuming for simplicity that in both cases the reactions proceed via gluon fusion. At √s = 8 TeV, the fraction of the gluon-gluon-initiated subprocesses is about 55% (65%) of the inclusive LO (NLO) cross section, and it grows with the centre-of-mass energy [21]. Examples of diagrams entering the calculation are shown in the middle and right panels of Fig. 1. All possible jet-quark associations in the reconstruction of the final state are considered. For each event, the MEM probability density function w(y|H) under the hypothesis H = ttH or tt+bb is calculated as in Eq. (3), where y denotes the set of observables for which the matrix element pdf is constructed, i.e. the momenta of jets and leptons. The sum extends over the N_a possibilities of associating the jets with the final-state quarks. The integration on the right-hand side of Eq. (3) is performed over the phase space of the final-state particles and over the gluon energy fractions x_{a,b} by using the VEGAS algorithm [64]. The four-momenta of the initial-state gluons p_{a,b} are related to the four-momenta of the colliding protons P_{a,b} by the relation p_{a,b} = x_{a,b} P_{a,b}. The delta function enforces the conservation of longitudinal momentum and energy between the incoming gluons and the k = 1, …, 8 outgoing particles with four-momenta p_k. To account for the possibility of initial/final-state radiation, the total transverse momentum of the final-state particles, which should be identically zero at LO, is instead loosely constrained by the resolution function R(x,y) to the measured transverse recoil ρ_T, defined as the negative of the total transverse momentum of jets and leptons, plus the missing transverse momentum.
The remaining part of the integrand in Eq. (3) contains the product of the gluon PDFs in the protons (g), the square of the scattering amplitude (M), and the transfer function (W). For H = ttH, the factorisation scale µ_F entering the PDF is taken as half of the sum of twice the top-quark mass and the Higgs boson mass [20], while for H = tt+bb a dynamic scale is used, equal to the quadratic sum of the transverse masses of all coloured partons [65]. The scattering amplitude for the hard process is evaluated numerically at LO accuracy by the program OPENLOOPS [66]; all resonances are treated in the narrow-width approximation [67], and spin correlations are neglected. The transfer function W(y, p) provides a mapping between the measured set of observables y and the final-state particle momenta p = (p_1, …, p_8). Given the good angular resolution of jets, the direction of the quarks is assumed to be perfectly measured by the direction of the associated jets. Also, since the energies of leptons are measured more precisely than those of jets, their momenta are considered perfectly measured. Under these assumptions, the total transfer function reduces to the product of the quark energy transfer functions times the probability for the quarks that are not reconstructed as jets to fail the acceptance criteria. The quark energy transfer function is modelled by a single Gaussian function for jets associated with light-flavour partons, and by a double Gaussian function for jets associated with bottom quarks; the latter provides a better description of the low-energy tail in the transfer function arising from semileptonic B hadron decays. The parametrisation of the transfer functions has been derived from MC simulated samples.
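The following sketch shows the shape of such transfer functions; all widths, the tail shift, and the mixing fraction are illustrative placeholders rather than the fitted CMS parametrisation.

```python
import numpy as np

def W_light(E_jet, E_quark, sigma):
    """Single-Gaussian transfer function for light-flavour jets."""
    z = (E_jet - E_quark) / sigma
    return np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * sigma)

def W_bottom(E_jet, E_quark, s1=8.0, s2=20.0, shift=15.0, frac=0.7):
    """Double Gaussian for b jets: a core component plus a wider component
    shifted to lower energies, modelling the semileptonic-B-decay tail."""
    core = W_light(E_jet, E_quark, s1)
    tail = W_light(E_jet, E_quark - shift, s2)   # tail peaks below E_quark
    return frac * core + (1 - frac) * tail

# e.g. a 100 GeV b quark measured as an 80 GeV jet:
print(W_bottom(E_jet=80.0, E_quark=100.0))
```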
Event categorisation
To aid the evaluation of the MEM probability density functions at LO, events are classified into mutually exclusive categories based on different parton-level interpretations. Firstly, the set of jets yielding the largest contribution to the sum defined by Eq. (1) determines the four (tagged) jets associated with bottom quarks; the remaining N_untag (untagged) jets are assumed to originate either from W → qq decays (SL channel) or from initial- or final-state gluon radiation (SL and DL channels). There still remains a twelve-fold ambiguity in the determination of the parton matched to each jet, which is reflected by the sum in Eq. (3). Indeed, without distinguishing between b and b quarks, there exist 4!/(2!2!) = 6 combinations for assigning two jets out of four to the Higgs boson decay (H = ttH), or to the bottom quark-pair radiation (H = tt+bb); for each of these possibilities, there are two more ways of assigning the remaining tagged jets to either the t or t quark, thus giving a total of twelve associations. In the SL channel, an event can be classified in one of three possible categories. The first category (Cat-1) is defined by requiring at least six jets; if there are exactly six jets, the mass of the two untagged jets is required to be in the range [60, 100] GeV, i.e. compatible with the mass of the W boson.
If the number of jets is larger than six, the mass range is tightened to compensate for the increased ambiguity in selecting the correct W boson decay products. In the event interpretation, the W → qq decay is assumed to be fully reconstructed, with the two quarks identified with the jet pair satisfying the mass constraint. The definition of the second category (Cat-2) differs from that of Cat-1 by the inversion of the dijet mass constraint. This time, the event interpretation assumes that one of the quarks from the W boson decay has failed the reconstruction. The integration on the right-hand side of Eq. (3) is extended to include the phase space of the non-reconstructed quark. The other untagged jet(s) is (are) interpreted as gluon radiation, and do not enter the calculation of w(y|H). The total number of associations considered is twelve times the multiplicity of untagged jets eligible to originate from the W boson decay: N_a = 12N_untag. In the third category (Cat-3), exactly five jets are required, and an incomplete W boson reconstruction is again assumed. In the DL channel, only one event interpretation is considered, namely that each of the four bottom quarks in the decay is associated with one of the four tagged jets.
Finally, two event discriminants, denoted by P_s/b and P_h/l, are defined. The former encodes only information on the event kinematics and dynamics via Eq. (3), and is therefore suited to separate the signal from the background; the latter contains only information related to b tagging, thus providing a handle to distinguish between the heavy- and light-flavour components of the tt+jets background. They take the ratio forms

$$P_{s/b} = \frac{w(y|\mathrm{ttH})}{w(y|\mathrm{ttH}) + k_{s/b}\, w(y|\mathrm{tt{+}bb})}, \qquad P_{h/l} = \frac{f(\xi|\mathrm{tt{+}hf})}{f(\xi|\mathrm{tt{+}hf}) + k_{h/l}\, f(\xi|\mathrm{tt{+}lf})},$$

where the functions f(ξ|tt+hf) and f(ξ|tt+lf) are defined as in Eq. (1) but restricting the sum only to the jet-quark associations considered in the calculation of w(y); the coefficients k_s/b and k_h/l in the denominators are positive constants that can differ among the categories.
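In code, the two discriminants reduce to two lines, following the ratio form given above; the inputs are the MEM densities w and the b-tag likelihoods f computed earlier, and the k coefficients here are placeholders, not the optimised analysis values.

```python
def P_sb(w_ttH, w_ttbb, k_sb=0.1):
    """Kinematic discriminant: w(y|ttH) against k_sb-weighted w(y|tt+bb)."""
    return w_ttH / (w_ttH + k_sb * w_ttbb)

def P_hl(f_hf, f_lf, k_hl=1.0):
    """b-tagging discriminant: tt+hf against k_hl-weighted tt+lf likelihood."""
    return f_hf / (f_hf + k_hl * f_lf)
```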
The joint distribution of the (P_s/b, P_h/l) discriminants is used in a two-dimensional maximum likelihood fit to search for events resulting from Higgs boson production. By construction, the two discriminants satisfy the constraint 0 ≤ P_s/b, P_h/l ≤ 1. Because of the limited size of the simulated samples, the distributions of P_s/b and P_h/l are binned. A finer binning is used for the former, which carries the largest sensitivity to the signal, while the latter is divided into two equal-sized bins. The coefficient k_s/b appearing in the definition of P_s/b is introduced to adjust the relative normalisation between w(y|ttH) and w(y|tt+bb); likewise for k_h/l. A redefinition of either of the two coefficients would change the corresponding discriminant monotonically, thus with no impact on its separation power. However, since both variables are analysed in bins of fixed size, an optimisation procedure, based on minimising the expected exclusion limit on the signal strength as described in Section 8, is carried out to choose the values that maximise the sensitivity of the analysis.
Background modelling
The background normalisation and the distributions of the event discriminants are derived by using the MC simulated samples described in Section 3. In light of the large theoretical uncertainty that affects the prediction of tt plus heavy flavour [63,68], the MADGRAPH sample is further divided into subsamples based on the quark flavour associated with the jets generated in the acceptance region p_T > 20 GeV, |η| < 2.5. Events are labelled as tt+bb if at least two jets are matched within $\sqrt{(\Delta\eta)^2 + (\Delta\phi)^2} < 0.5$ to bottom quarks not originating from the decay of a top quark. If only one jet is matched to a bottom quark, the event is labelled as tt+b. These cases typically arise when the second extra b quark in the event is either too far forward or too soft to be reconstructed as a jet, or because the two extra b quarks are emitted almost collinearly and end up in a single jet. Similarly, if at least one reconstructed jet is matched to a c quark, the event is labelled as tt+cc. In the latter case, single- and double-matched events are treated as one background. If none of the above conditions is satisfied, the event is classified as tt plus light flavour. Table 1 reports the number of events observed in the various categories, together with the expected signal and background yields. The latter are obtained from the signal-plus-background fit described in Section 8.
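The labelling scheme above maps naturally onto a short matching routine; the inputs are hypothetical (η, φ) pairs for the reconstructed jets and for the extra generator-level b and c quarks (those not from top decays), with the jets already restricted to the acceptance region.

```python
import numpy as np

def delta_r(j, q):
    """Angular distance sqrt(d_eta^2 + d_phi^2) between a jet and a quark."""
    deta = j[0] - q[0]
    dphi = np.remainder(j[1] - q[1] + np.pi, 2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return np.hypot(deta, dphi)

def tt_flavour_label(jets, extra_b, extra_c):
    """Classify a simulated tt+jets event by matched extra heavy flavour."""
    nb = sum(any(delta_r(j, q) < 0.5 for q in extra_b) for j in jets)
    if nb >= 2:
        return "tt+bb"
    if nb == 1:
        return "tt+b"
    if any(any(delta_r(j, q) < 0.5 for q in extra_c) for j in jets):
        return "tt+cc"
    return "tt+lf"
```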
Systematic uncertainties
There are a number of systematic uncertainties of experimental and theoretical origin that affect the signal and background expectations. Each source of systematic uncertainty is associated with a nuisance parameter that modifies the likelihood function used to extract the signal yield, as described in Section 8. The prior knowledge of a nuisance parameter is incorporated into the likelihood in a frequentist manner by interpreting it as a posterior arising from a pseudo-measurement [69]. Nuisance parameters can affect either the yield of a process (normalisation uncertainty), or the shape of the P_s/b and P_h/l discriminants (shape uncertainty), or both. Multiple processes across several categories can be affected by the same source of uncertainty. In that case, the related nuisance parameters are treated as fully correlated.
The uncertainty in the integrated luminosity is estimated to be 2.6% [70]. The lepton trigger, reconstruction, and identification efficiencies are determined from control regions by using a tag-and-probe procedure. The total uncertainty is evaluated from the statistical uncertainty of the tag-and-probe measurement, plus a systematic uncertainty in the method, and is estimated to be 1.6% per muon and 1.5% per electron. It is conservatively approximated as a constant 2% per charged lepton. The uncertainty in the jet energy scale (JES) ranges from 1% up to about 8% of the expected energy scale, depending on the jet p_T and |η| [60]. For each simulated sample, two alternative distributions of the P_s/b and P_h/l discriminants are obtained by varying the energy scale of all simulated jets up or down by its uncertainty, and the fit is allowed to interpolate between the nominal and the alternative distributions with a Gaussian prior [69]. A similar procedure is applied to account for the uncertainty related to the jet energy resolution (JER), which ranges between about 5% and 10% of the expected energy resolution, depending on the jet direction. Since the analysis categories are defined in terms of the multiplicity and kinematic properties of the jets, a variation of either the scale or the resolution of the simulated jets can induce a migration of events in or out of the analysis categories, as well as migrations among different categories. The fractional change in the event yield induced by a shift of the JES (JER) ranges between 4-13% (0.5-2%), depending on the process type and on the category. When the JES and JER are varied from their nominal values, the p_T^miss vector is recomputed accordingly. The scale factors applied to correct the CSV discriminator, as described in Section 4, are affected by several sources of systematic uncertainty. In the statistical interpretation, the fit can interpolate between the nominal and the two alternative distributions constructed by varying each scale factor up or down by its uncertainty.
Theoretical uncertainties are treated as process-specific if they impact the prediction of one simulated sample at a time. They are instead treated as correlated across several samples if they are related to common aspects of the simulation (e.g. PDF, scale variations). The modelling of the tt+jets background is affected by a variety of systematic uncertainties. The uncertainty due to the top-quark p_T modelling is evaluated by varying the reweighting function r_t(p_T^t), where p_T^t is the transverse momentum of the generated top quark, between one (no correction at all) and 2r_t − 1 (the relative correction is doubled). This results in both a shape and a normalisation uncertainty. The latter can be as large as 20% for a top-quark p_T around 300 GeV, and corresponds to an overall normalisation uncertainty of about 3-8% depending on the category. To account for uncertainties in the tt+jets acceptance, the factorisation and renormalisation scales used in the simulation are varied in a correlated way by factors of 1/2 and 2 around their central value. The scale variation is assumed to be uncorrelated among tt+bb, tt+b, and tt+cc. In a similar way, independent scale variations are introduced for events with exactly one, two, or three extra partons in the matrix element. To account for possibly large K-factors due to the usage of a LO MC generator, the tt+bb, tt+b, and tt+cc normalisations predicted by the MADGRAPH simulation are assigned a 50% uncertainty each. This value can be seen as a conservative upper limit on the theoretical uncertainty in the tt+hf cross section achieved to date [63]. Essentially, the approach followed here is to assign large a priori normalisation uncertainties to the different tt+jets subprocesses, thus allowing the fit to simultaneously adjust their rates. Scale uncertainties in the inclusive theoretical cross sections used to normalise the simulated samples range from a few percent up to 20%, depending on the process. The PDF uncertainty is treated as fully correlated for all processes that share the same dominant initial state (i.e. gg, gg, or qq); it ranges between 3% and 9%, depending on the process. Finally, the effect of the limited size of the simulated samples is accounted for by introducing one nuisance parameter for each bin of the discriminant histograms and for each sample, as described in Ref. [71]. Table 2 summarises the various sources of systematic uncertainty and their impact on the analysis.

Table 2: Summary of the systematic uncertainties affecting the signal and background expectations. The second column reports the range of rate variation for the processes affected by a given source of systematic uncertainty (as specified in the last three columns) when the nuisance parameter associated with it is varied up or down by its uncertainty. The third column indicates whether a source of systematic uncertainty is assumed to affect the process normalisation only, or both the normalisation and the shape of the event discriminants.
Results
The statistical interpretation of the results is performed by using the same methodology employed for other CMS Higgs boson analyses, extensively documented in Ref. [2]. The measured signal rate is characterised by a strength modifier µ = σ/σ_SM that scales the Higgs boson production cross section times branching fraction with respect to its SM expectation for m_H = 125 GeV. The nuisance parameters, θ, are incorporated into the likelihood as described in Section 7. The total likelihood function L(µ, θ) is the product of a Poissonian likelihood spanning all bins of the (P_s/b, P_h/l) distributions for all eight categories, times a likelihood function for the nuisance parameters. Based on the asymptotic properties of the profile likelihood ratio test statistic $q(\mu) = -2\ln\!\left[L(\mu, \hat{\theta}_\mu)/L(\hat{\mu}, \hat{\theta})\right]$, confidence intervals on µ are set, where $\hat{\theta}$ and $\hat{\theta}_\mu$ indicate the best-fit values for θ obtained when µ is floating in the fit or fixed at a hypothesised value, respectively.
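For intuition, the toy below evaluates q(µ) for a single counting bin with one Gaussian-constrained background nuisance; the real analysis profiles many nuisances over all (P_s/b, P_h/l) bins of eight categories, and all numbers here are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, poisson

S, B, N_OBS, SIG_B = 5.0, 100.0, 103, 0.1   # signal, bkg, observed, bkg unc.

def nll(mu, theta):
    """Negative log likelihood: Poisson count times Gaussian constraint."""
    lam = mu * S + B * (1 + SIG_B * theta)
    return -poisson.logpmf(N_OBS, lam) - norm.logpdf(theta)

def profiled_nll(mu):
    """Profile the nuisance: minimise the NLL over theta at fixed mu."""
    return minimize_scalar(lambda th: nll(mu, th), bounds=(-5, 5),
                           method="bounded").fun

mu_hat = minimize_scalar(profiled_nll, bounds=(0, 10), method="bounded").x
q = lambda mu: 2.0 * (profiled_nll(mu) - profiled_nll(mu_hat))
print(f"mu_hat = {mu_hat:.2f},  q(1) = {q(1.0):.2f}")
```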
Figures 3 and 4 show the binned distributions of (P_s/b, P_h/l) in the various categories and for the two channels. For visualisation purposes, the two-dimensional histograms are projected onto one dimension by showing first the distribution of P_s/b for events with P_h/l < 0.5 and then for P_h/l ≥ 0.5. The observed distributions are compared to the signal-plus-background expectation obtained from a combined fit to all categories with the constraint µ = 1. No evidence of a ttH signal over the background is observed. The statistical interpretation is performed both in terms of exclusion upper limits (UL) at a 95% CL, where the modified CL_s prescription [72,73] is adopted to quote confidence intervals, and in terms of the maximum likelihood estimator of the strength modifier (μ̂). Table 3 summarises the results.
Overall, a consistent distribution of the nuisance parameter pulls is obtained from the combined fit. In the signal-plus-background (background-only) fit, the nuisance parameters that account for the 50% normalisation uncertainty in the tt+bb, tt+b, and tt+cc backgrounds are pulled by +0.2 (+0.5), −0.4 (−0.3), and +0.8 (+0.8), respectively, where the pull is defined as the shift of the best-fit estimator from its nominal value in units of its a priori uncertainty. The correlation between the tt+bb normalisation nuisance and the μ̂ estimator is found to be ρ ≈ −0.4, and is the largest entry in the correlation matrix. From an a priori study (i.e. before fitting the nuisance parameters with the likelihood function of the data), the nuisance parameter corresponding to the 50% normalisation uncertainty in the tt+bb background has the largest impact on the median expected limit, which would be around 4% smaller if that uncertainty were not taken into account. Such a reduced impact on the expected limit implies that the sensitivity of the analysis is only mildly affected by the lack of a stringent a priori constraint on the tt+bb background normalisation; this is also consistent with the observation that the fit effectively constrains the tt+bb rate, narrowing its normalisation uncertainty down to about 25%.
Table 3: The best-fit values of the signal strength modifier obtained from the SL and DL channels alone, and from their combination. The observed 95% CL UL on µ are given in the third column, and are compared to the median expected limits for both the signal-plus-background and the background-only hypotheses. For the latter, the ±1σ and ±2σ CL intervals are also given.

For illustration, Fig. 5 (bottom) shows the distribution of the decimal logarithm log(S/B), where S/B is the ratio between the signal and background yields in each bin of the two-dimensional histograms, as obtained from a combined fit with the constraint µ = 1. Agreement between the data and the SM expectation is observed over the whole range of this variable.
Summary
A search for Higgs boson production in association with a top-quark pair, with H → bb, has been presented. A total of 19.5 fb−1 of pp collision data collected by the CMS experiment at √s = 8 TeV has been analysed. Events with one lepton and at least five jets, or two opposite-sign leptons and at least four jets, have been considered. Jet b-tagging information is exploited to suppress the tt plus light-flavour background. A probability density value under either the ttH signal or the tt+bb background hypothesis is calculated for each event using an analytical matrix element method. The ratio of the probability densities under these two competing hypotheses allows a one-dimensional discriminant to be defined, which is then used together with b-tagging information in a likelihood analysis to set constraints on the signal strength modifier µ = σ/σ_SM.
No evidence of a signal is found. The expected upper limit at a 95% CL is µ < 3.3 under the background-only hypothesis. The observed limit is µ < 4.2, corresponding to a best-fit value μ̂ = 1.2 +1.6 −1.5. Within the present statistics, the analysis documented in this paper yields results competitive with those obtained on the same data set and for the same final state by using non-analytical multivariate techniques [28]. However, the matrix element method, applied for a maximal separation between the signal and the dominant tt+bb background, allows for better control of the systematic uncertainty due to this challenging background. This method represents a promising strategy for the future, when the statistical uncertainty will be greatly reduced and the role of the systematic uncertainties will become crucial for a precise determination of the top-quark Yukawa coupling.
Figure 2 (top) shows the jet multiplicity in the SL (left) and DL (right) channels, while the bottom-left panel of the same figure shows the multiplicity of jets passing the CSVM working point in the SL channel.
Figure 2: Top row: distribution of the jet multiplicity in (left) single-lepton and (right) dilepton events, after requiring that at least two jets pass the CSVM working point. Bottom left: distribution of the multiplicity of jets passing the CSVM working point in single-lepton events with at least four jets. Bottom right: distribution of the selection variable F defined in Eq. (2) for single-lepton events with at least six jets after requiring a loose preselection of at least one jet passing the CSVM working point. The plots at the bottom of each panel show the ratio between the observed data and the background expectation predicted by the simulation. The shaded and solid green bands correspond to the total statistical plus systematic uncertainty in the background expectation described in Section 7. More details on the background modelling are provided in Section 6.3.
Figure 5 (top left) shows the observed 95% CL UL on µ, compared to the signal-plus-background and background-only expectations. Results are shown for the SL and DL channels alone, and for their combination. The observed (background-only expected) exclusion limit is µ < 4.2 (3.3). The best-fit value of µ obtained from the individual channels and from their combination is shown in Fig. 5 (top right). A best-fit value μ̂ = 1.2 +1.6 −1.5 is measured from the combined fit. Table 3 summarises the results.
Figure 3: Distribution of the P_s/b discriminant in the two P_h/l bins for the high-purity (H) categories. The signal and background yields have been obtained from a combined fit of all nuisance parameters with the constraint µ = 1. The bottom panel of each plot shows the ratio between the observed and the overall background yields. The solid blue line indicates the ratio between the signal-plus-background and the background-only distributions. The shaded and solid green bands correspond to the ±1σ uncertainty in the background prediction after the fit.
Figure 4: Distribution of the P_s/b discriminant in the two P_h/l bins for the low-purity (L) categories. The signal and background yields have been obtained from a combined fit of all nuisance parameters with the constraint µ = 1. The bottom panel of each plot shows the ratio between the observed and the overall background yields. The solid blue line indicates the ratio between the signal-plus-background and the background-only distributions. The shaded and solid green bands correspond to the ±1σ uncertainty in the background prediction after the fit.
Figure 5: (top left) Observed 95% CL UL on µ compared to the median expected limits under the background-only and signal-plus-background hypotheses. The former are shown together with their ±1σ and ±2σ CL intervals. Results are shown separately for the individual channels and for their combination. (top right) Best-fit value of the signal strength modifier µ with its ±1σ CL interval, obtained from the individual channels and from their combination. (bottom) Distribution of the decimal logarithm log(S/B), where S (B) indicates the total signal (background) yield expected in the bins of the two-dimensional histograms, as obtained from a combined fit with the constraint µ = 1.
Table 1: Expected and observed event yields in the (top) high-purity (H) and (bottom) low-purity (L) categories of the SL and DL channels. The expected event yields with their uncertainties are obtained from a signal-plus-background fit as described in Section 8. In the last row of each table, the symbol S (B) denotes the signal (total background) yield.
| 10,082.8 | 2015-02-09T00:00:00.000 | [
"Physics"
] |
Impact of Nitroxyl Radicals on Photovoltaic Conversion Properties of Dye-Sensitized Solar Cells
Nitroxyl radicals, characterized by unique redox properties, have been investigated for their potential influence on the photovoltaic conversion properties of dye-sensitized solar cells (DSSCs). In this study, we investigated the influence of nitroxyl radicals as donor sites in DSSCs. We observed that the redox activity of nitroxyl radicals significantly enhanced the photovoltaic conversion efficiency of DSSCs; this finding can offer new insights into the application of these radicals in solar energy conversion. Furthermore, we found that increasing the proportion of nitroxyl radicals improved the DSSC performance. Through a combination of experimental and analytical approaches, we elucidated the mechanism underlying this enhancement and highlighted the potential for more efficient DSSCs using nitroxyl radicals as key components. These findings provide new avenues for developing advanced DSSCs with improved performances and sustainability.
Introduction
Since the report on dye-sensitized solar cells (DSSCs) by Grätzel et al. [1], DSSCs have been actively pursued to promote the use of renewable energy and to achieve a sustainable society [2]. In DSSCs, dye molecules adsorbed on titanium dioxide (TiO2) absorb solar light, causing electrons to be extracted from the dye to the TiO2 electrode, allowing for the external generation of electrical power. However, the efficiency of electron injection from the dye to the TiO2 electrode can be significantly affected by the aggregation of the dye molecules on the TiO2 electrode. This results in interactions among the dye molecules, such as electron transfer, leading to a decreased electron injection efficiency and, consequently, a lower photovoltaic conversion efficiency of the DSSC. Common approaches for overcoming this issue include the co-adsorption of dye molecules with bulky aliphatic carboxylic acids, such as chenodeoxycholic acid (CDCA) [3], and introducing bulky substituents into the dye molecules themselves [4]. These methods increase the distance between the dye molecules adsorbed on TiO2, alleviating aggregation. Consequently, interactions between the dye molecules are reduced, leading to an improvement in the electron injection efficiency, ultimately enhancing the photovoltaic conversion efficiency of DSSCs. Furthermore, a groundbreaking DSSC was recently developed by Cao, Hagfeldt, and Grätzel, wherein simply pre-adsorbing a hydroxamic acid derivative to control the assembly of the dye improved the molecular packing, leading to an extremely high conversion efficiency reaching 30% [5]. Thus, although research on perovskite solar cells continues to flourish, research on DSSCs is also being actively pursued.
We previously developed a method for the molecular-level analysis of the aggregation state of dyes by utilizing the nitroxyl radicals present in 2,2,6,6-tetramethylpiperidin-1-oxyl (TEMPO) [6]. The presence of stable nitroxyl radicals in TEMPO allows for electron spin resonance (ESR) spectroscopy. Typically, nitroxyl radicals, owing to their large anisotropies in the g value and in the hyperfine coupling constants (hfc) with the nitrogen atom, make ESR spectra highly sensitive to molecular orientation and motion. In this study, we devised a spin-probe method by applying the characteristics of ESR-active species, that is, the nitroxyl radicals in TEMPO, to analyze the aggregation state of dye molecules in DSSCs. In the course of our previous studies on spin-probe methods using dyes containing TEMPO moieties, we observed an increase in the open-circuit voltage and, consequently, an improvement in the photovoltaic conversion efficiency when dye molecules containing TEMPO were utilized in DSSCs. Nishide et al. successfully applied the redox activity of nitroxyl radicals to electrode materials for chargeable and dischargeable batteries by taking advantage of their stable redox properties [7]. Their work prompted us to hypothesize that the nitroxyl radicals in spin-probe molecules would exhibit a similar redox activity and that this property would also affect the DSSC properties. Subsequently, we confirmed the mechanism underlying this phenomenon by designing and synthesizing novel dyes incorporating the TEMPO moiety and investigating their effects on the photovoltaic conversion properties of DSSCs.
Fluorine-doped tin oxide (FTO) transparent conductive glass was purchased from Nippon Sheet Glass and cut into pieces with a width of 15 mm and a length of 25 mm. Subsequently, the pieces were subjected to ultrasonic cleaning by sequentially employing detergent, distilled water, acetone, and 2-propanol. After ultrasonic cleaning, they were air-dried for over 24 h before use. TiO2 paste (PST-18NR) was obtained from JGC Catalysts and Chemicals, Ltd. (Kawasaki, Japan), and used as received.
Characterization
1H nuclear magnetic resonance (NMR) and ESR spectra were recorded using a 500-MHz spectrometer (NMR System 500, Varian Inc., Palo Alto, CA, USA) and an ESR spectrometer (JES-RE1X, JEOL, Tokyo, Japan), respectively. Mass spectra were measured using gas chromatography-time-of-flight mass spectrometry (GC-TOFMS; JMS-T100GCV (AccuTOF GCv 4G), JEOL) and electrospray ionization or atmospheric pressure chemical ionization Fourier-transform mass spectrometry (ESI- or APCI-FTMS; LTQ Orbitrap XL, Thermo Fisher Scientific, Waltham, MA, USA). UV-VIS absorption and emission spectra were recorded on a Shimadzu UV-3150 spectrophotometer and a Hitachi F-4500 fluorescence spectrophotometer, respectively. Cyclic voltammetry (CV) was performed with a potentiostat/galvanostat (HZ-3000, Hokuto Denko, Tokyo, Japan) using a three-electrode system with a spherical Pt working electrode, a Pt wire counter electrode, and an Ag/Ag+ reference electrode; the supporting electrolyte was 0.1 M TBAP in CH2Cl2. The reference electrode potential was calibrated using ferrocene after each set of measurements. The potentials referenced to ferrocene were converted to the normal hydrogen electrode (NHE) scale by adding 0.63 V [8].
Fabrication of DSSCs and Photovoltaic Measurements
The TiO2 paste was deposited on an FTO substrate via doctor-blading and sintered for 50 min at 450 °C. The 9 µm-thick TiO2 electrode (photoactive area: 0.5 × 0.5 cm2) was immersed in 5 mL of a 0.1 mM dye solution in THF for adsorption of the photosensitizer. DSSCs were fabricated with the dye-adsorbed TiO2 as the working electrode; Pt-coated glass as the counter electrode; and a solution of 0.05 M I2, 0.1 M LiI, and 0.6 M DMPrII in acetonitrile as the electrolyte. The photocurrent density (J)-voltage (V) characteristics were measured using a potentiostat under simulated solar light (AM 1.5, 100 mW cm−2) supplied by a solar simulator (HAL-302, Asahi Spectra, Tokyo, Japan). Incident photon-to-current conversion efficiency (IPCE) spectra were measured under monochromatic irradiation using a tungsten-halogen lamp (AT-100HG, Shimadzu, Kyoto, Japan) and a monochromator (Shimadzu SPG-120S). We calculated the standard deviation of the conversion efficiencies of multiple DSSCs fabricated with each dye and confirmed that the standard deviations were all approximately 0.1.
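The conversion efficiency and its spread over nominally identical cells follow directly from the J-V parameters; the sketch below shows the standard calculation, eta = Jsc x Voc x ff / Pin, with entirely hypothetical cell parameters.

```python
import statistics

P_IN = 100.0  # incident power density under AM 1.5 (mW cm^-2)

def efficiency(jsc: float, voc: float, ff: float) -> float:
    """Conversion efficiency (%) from Jsc (mA cm^-2), Voc (V), and fill factor."""
    return jsc * voc * ff / P_IN * 100.0

# Hypothetical parameters for nominally identical devices made with one dye:
cells = [(6.1, 0.62, 0.65), (5.9, 0.61, 0.66), (6.0, 0.62, 0.64)]
etas = [efficiency(*c) for c in cells]
print(f"eta = {statistics.mean(etas):.2f} +/- {statistics.stdev(etas):.2f} %")
```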
Compound 4: 5-Formyl-5′-(4-bromophenyl)-2,2′-bithiophene
A round-bottom flask was purged with N2, and 1,4-dibromobenzene (1.95 g, 8.27 mmol), compound 3 (411 mg, 2.12 mmol), Pd(OAc)2 (23 mg, 0.103 mmol), Bu4NBr (669 mg, 2.08 mmol), KOAc (493 mg, 5.02 mmol), and distilled DMF (20 mL) were added. The solution was stirred at 90 °C under an N2 atmosphere for 12 h. After completion of the reaction, the solution was cooled to room temperature and washed with dichloromethane and a saturated saline solution, and the organic layer was extracted. The obtained solution was dried over anhydrous Na2SO4 and filtered, and the solvent was removed using a rotary evaporator, yielding a brown solid. The crude product was purified using silica gel column chromatography (eluent: dichloromethane/hexane = 2:1), yielding compound 4 as a yellow solid. Yield: 230 mg (0.659 mmol), 31.1%. A two-neck round-bottom flask was rigorously purged with N2. Compound 4 (155 mg, 0.356 mmol), ethylene glycol (0.98 mL, 17.8 mmol), p-TSA (34 mg, 0.178 mmol), and distilled toluene (10 mL) were added to the flask. The solution was refluxed at 110 °C for 24 h. After the reaction, the solution was cooled to room temperature and washed with dichloromethane and a saturated sodium bicarbonate solution, and the organic layer was extracted. The obtained solution was dried over anhydrous Na2SO4 and filtered, and the solvent was evaporated using a rotary evaporator to yield compound 5 as a pale-yellow solid, which was used without further purification in the synthesis of compound 6. A two-neck round-bottom flask was purged with N2, and 2 (290 mg, 0.834 mmol), 5 (298 mg, 0.758 mmol), Pd(OAc)2 (11 mg, 0.049 mmol), t-BuONa (111 mg, 1.16 mmol), distilled toluene (15 mL), and P(t-Bu)3 (0.50 mL, 2.06 mmol) were added. The mixture was refluxed at 110 °C for 12 h under stirring. After cooling to room temperature, the solution was washed with dichloromethane and a saturated saline solution. The organic layer was dried over anhydrous Na2SO4 and filtered, and the solvent was removed using a rotary evaporator. The resulting red oily substance was vacuum-dried. The crude product was purified using silica gel column chromatography (eluent: dichloromethane) to yield compound 6 as a yellow-brown solid. Yield: 212 mg (0.322 mmol), 39.1%. HRMS (DI) m/z calculated for C38H47N2O4S2: 659.29772 (M+); found: 659.29630 (M+). Compound 6 (212 mg, 0.322 mmol), p-TSA (31 mg, 0.161 mmol), distilled water (6.5 mL), and distilled THF (20 mL) were added to a two-neck round-bottom flask, and the mixture was stirred at room temperature for 24 h. After removing THF using a rotary evaporator, dichloromethane and a saturated sodium bicarbonate solution were added, and the organic layer was extracted. The organic layer was dried over anhydrous Na2SO4 and filtered, following which the solvent was removed using a rotary evaporator, resulting in an orange solid (compound 7). As no byproducts were observed by thin-layer chromatography, the product was used without purification in the synthesis of the next compound. Yield (crude): 130 mg.
HRMS (APCI) m/z calculated for C36H44O3N2S2: 616.27879 (M+). Compound 7 (130 mg, 0.211 mmol), cyanoacetic acid (54 mg, 0.633 mmol), piperidine (180 mg, 2.11 mmol), and CHCl3 (10 mL) were added to a two-neck round-bottom flask, and the mixture was stirred at 70 °C under an N2 atmosphere for 12 h. After the reaction was completed, the solution was cooled to room temperature, hydrochloric acid (pH = 3) was added, and the organic layers were extracted. The organic layer was dried over anhydrous Na2SO4 and filtered, and the solvent was removed using a rotary evaporator, resulting in a purple solid. The crude product was purified by silica gel column chromatography (eluent: dichloromethane/methanol = 10:1), yielding TEMPO-dye as a dark red solid. Yield: 53.3 mg (0.078 mmol), 37.0%. HRMS (APCI) m/z calculated for C39H44O4N3S2: 682.27677 (M+); found: 682.27754 (M+).
Design and Synthesis of Dye Compound
We adopted a donor-π-acceptor (D-π-A) structure for the molecular framework of the dye, incorporating an aromatic amine as the donor moiety, bithiophene as the π-linker, and cyanoacrylic acid as the acceptor moiety. Generally, in this molecular framework, the highest occupied molecular orbital (HOMO) is localized on the donor moiety and the lowest unoccupied molecular orbital (LUMO) is localized on the acceptor moiety. Under light irradiation, electrons from the donor moiety undergo intramolecular charge transfer (ICT) to the acceptor moiety via the π-linker. Furthermore, the carboxylic acid group on the acceptor moiety adsorbs onto the surface of the TiO2 electrode, facilitating the smooth injection of the transferred electrons into the TiO2 electrode. In our previous study, we also employed the D-π-A framework for the dye incorporating TEMPO as a spin probe. However, highly reactive N-H groups were retained in the donor moiety of those spin-probe dyes. Therefore, to prevent unexpected side reactions that might occur at these N-H groups, a phenyl group was introduced into the donor moiety of the dye molecule in the present study.
The dye was synthesized as shown in Scheme 1. The synthesis of the donor moiety involved a Buchwald-Hartwig reaction [11] between 1 and 4-amino-TEMPO. The π-linker moiety, compound 4, was synthesized through a direct C-H arylation reaction [12] between 5-formyl-2,2′-bithiophene (compound 3) and 1,4-dibromobenzene. The coupling of the donor and π-linker moieties was achieved by protecting the formyl group of compound 4 as an acetal [13], followed by a Buchwald-Hartwig reaction. After deprotection, the final target, a D-π-A-type dye referred to as TEMPO-dye, was obtained through the Knoevenagel condensation [14] of compound 7 with cyanoacetic acid. Owing to the presence of paramagnetic radical species in the compounds containing the TEMPO moiety (compounds 2, 6, 7, and TEMPO-dye), NMR identification was challenging, because the strong interaction between electron spins and hydrogen nuclear spins caused broadening of the signals. However, accurate molecular weights were confirmed by HRMS (see also Figure S1 in the Supplementary Materials), indicating the successful synthesis of the target compounds. For TEMPO-dye, in addition to a signal corresponding to the molecular weight of the target compound (calculated for C39H44O4N3S2: 682.27677 (M+); found: 682.27754 (M+)), a signal corresponding to a hydrogen adduct formed by deactivation of the radical species was also observed (calculated for C39H46O4N3S2: 684.29242 ([M + H]+); found: 684.29304 ([M + H]+)). In the positive mode of HRMS, the molecular ion peak of TEMPO-dye was observed, and the found value agreed well with the calculated value up to two decimal places, confirming its molecular formula; however, the signal intensity was weak and partially obscured by noise. In contrast, in the negative mode of HRMS, the ion peak of the target compound could be clearly identified.
Scheme 1. Synthesis of TEMPO-dye.

Calculating Proportion of Residual Radicals and Controlling It via Reaction Time
As mentioned earlier, the nitroxyl radicals of freshly synthesized TEMPO-dye (hereinafter referred to as pristine-TEMPO-dye) were partially deactivated, resulting in the generation of hydrogen-addition products. To calculate the proportion of residual nitroxyl radicals in the obtained product, ESR spectra were measured. Figure 1a illustrates the ESR spectrum of pristine-TEMPO-dye, which exhibits the characteristic equidistant triplet lines associated with nitroxyl radicals. The proportion of residual radicals in the TEMPO-dye molecules (C_radical) was estimated using the following equations:

C_radical = 0.99 (x/y), (1)
x = S(TEMPO-dye)/S(Mn), (2)
y = S(TEMPO)/S(Mn), (3)

where S(TEMPO-dye), S(TEMPO), and S(Mn) correspond to the double-integrated ESR signal intensities of TEMPO-dye, of TEMPO with a known spin concentration (99%), and of the manganese marker, respectively, for sample and reference solutions of equal concentration. The proportion of residual radicals in pristine-TEMPO-dye was calculated to be 43%.
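A minimal sketch of the bookkeeping behind Equations (1)-(3) follows; the double-integral values are hypothetical, chosen only so that the output reproduces the reported 43%, and equal sample and reference concentrations are assumed.

```python
def residual_radical_fraction(s_dye: float, s_tempo: float,
                              s_mn_dye: float, s_mn_tempo: float,
                              reference_purity: float = 0.99) -> float:
    """C_radical per Equations (1)-(3): each double integral is normalized
    to the Mn marker of its own measurement, and the sample/reference
    ratio is scaled by the spin purity of the TEMPO reference."""
    x = s_dye / s_mn_dye      # Equation (2)
    y = s_tempo / s_mn_tempo  # Equation (3)
    return reference_purity * x / y  # Equation (1)

# Hypothetical double integrals (arbitrary units) reproducing the paper's 43%:
print(f"{residual_radical_fraction(4.34, 9.9, 1.0, 1.0):.0%}")
```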
To modify the proportion of residual radicals in TEMPO-dye, we treated samples with phenylhydrazine as a reducing agent and copper acetate as an oxidizing agent; the resulting dyes are hereinafter referred to as red-TEMPO-dye and ox-TEMPO-dye, respectively. The ESR signal of red-TEMPO-dye exhibited a reduced intensity, whereas that of ox-TEMPO-dye exhibited an increased intensity. These results indicate that in red-TEMPO-dye the proportion of residual radicals decreased as the nitroxyl radicals were reduced to N-OH groups by phenylhydrazine, while in ox-TEMPO-dye the proportion increased as the N-OH groups were oxidized back to nitroxyl radicals by copper acetate. The proportion of residual radicals could be controlled by varying the reaction time with each reagent and was thereby varied between 19% and 86% (Figure 2). In this experiment, it was not possible to synthesize dye molecules with 0% or 100% residual radicals. However, given that this study aimed to investigate the influence of this proportion on DSSC characteristics, it would be ideal to synthesize dye molecules with residual radical proportions of 0% and 100% to observe these effects more clearly. Therefore, in future investigations, we would like to explore DSSC characteristics using dyes with a broader range of proportions, including 0% and 100%. The chemical stability of the dye developed in this study and the durability of DSSCs using it are crucial factors for practical application. However, as this paper focuses on preliminary results regarding the unique characteristics of the TEMPO-containing dye, these factors were not discussed in depth; we aim to address them in future work.
Optical and Electrochemical Properties of TEMPO-Dye
The absorption and emission spectra of pristine-TEMPO-dye are presented in Figure 3, and their data are summarized in Table 1. The absorption originating from ICT was observed at approximately 460 nm, while emission was observed at approximately 620 nm. The bandgap of the dye, calculated from the intersection of these spectra and known as the zero-zero transition (E0-0), was determined to be 2.30 eV. The absorption and emission spectra of dyes with different proportions of residual radicals showed no significant differences in the absorption or emission wavelength regions, indicating that the proportion of residual radicals did not significantly affect the optical properties.
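The zero-zero transition can be estimated from the crossing point of the normalized absorption and emission spectra via E = hc/λ; the sketch below assumes a single crossing in the overlap region and interpolates linearly between the bracketing points. The example wavelength is back-calculated from the reported 2.30 eV, not taken from the measured spectra.

```python
import numpy as np

def zero_zero_transition(wl_nm, absorbance, emission):
    """Estimate E0-0 (eV) from the crossing of normalized absorption and
    emission spectra, assuming one crossing in the overlap region."""
    a = np.asarray(absorbance, float) / np.max(absorbance)
    e = np.asarray(emission, float) / np.max(emission)
    d = a - e
    i = int(np.flatnonzero(np.diff(np.sign(d)))[0])  # bracket the crossing
    lam = wl_nm[i] - d[i] * (wl_nm[i + 1] - wl_nm[i]) / (d[i + 1] - d[i])
    return 1239.84 / lam  # E (eV) = hc / lambda, for lambda in nm

# The reported E0-0 of 2.30 eV corresponds to a crossing near:
print(f"{1239.84 / 2.30:.0f} nm")  # ~539 nm
```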
Figure 4 shows the CV curves of the TEMPO-dyes; their data are summarized in Table 1. The HOMO level of the dye was more positive than the redox level of the electrolyte (I3−/I−, 0.4 V vs. NHE), and the LUMO level of the dye was more negative than the conduction band of TiO2 (−0.5 V vs. NHE) [15]. Therefore, the dye is expected to function as a sensitizer for DSSCs. Interestingly, compared with the CV curve of pristine-TEMPO-dye, the CV curve of red-TEMPO-dye was positively shifted. Additionally, while two reduction peaks were observed in the CV curve of pristine-TEMPO-dye, only one reduction peak was observed for red-TEMPO-dye. In pristine-TEMPO-dye, the large contribution of the overlapping oxidation-reduction waves of the TEMPO moiety shifted the reduction wave to the negative side, and each peak could be observed in the reduction wave. In red-TEMPO-dye, however, the lower proportion of residual radicals, which act as the redox centers, resulted in a smaller contribution of the oxidation-reduction waves of the TEMPO moiety, leading to the observed differences.
Table 1. Optical and electrochemical properties of TEMPO-dye.

| λabs/nm | λem/nm | E0-0/eV | HOMO/V vs. NHE | LUMO/V vs. NHE |
|---|---|---|---|---|
| 461 | 624 | 2.30 | 0.93 | −1.37 |
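The LUMO value in Table 1 is consistent with the common practice of estimating it from the electrochemical HOMO level and the optical bandgap (LUMO = HOMO − E0-0 on the potential scale); assuming that convention, the numbers and the driving-force conditions discussed above can be checked directly:

```python
homo = 0.93   # V vs. NHE, from CV (Table 1)
e00 = 2.30    # optical bandgap in eV (Table 1)
lumo = homo - e00
print(lumo)         # -1.37 V vs. NHE, matching Table 1
print(homo > 0.4)   # HOMO more positive than I3-/I- (0.4 V): dye regeneration
print(lumo < -0.5)  # LUMO more negative than TiO2 CB (-0.5 V): electron injection
```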
Device Characteristics

We fabricated DSSCs using TiO2 coated with TEMPO-dye and evaluated their conversion efficiency. By varying the concentration of the dye solution used to adsorb the dye onto TiO2, we evaluated the surface coverage of the adsorbed dyes. The surface coverage became constant at concentrations of 0.1 mM and above, indicating that the saturated surface coverage was 1.4 × 10^14 cm−2. This value was comparable to the saturated coverage (1.1-1.7 × 10^14 cm−2) of the spin-probe dye previously investigated by our group, confirming that the molecular structure of the dye did not significantly influence the surface coverage of the adsorbed dyes. Therefore, when fabricating DSSCs in this study, the concentration of the TEMPO-dye solution for adsorption onto the TiO2 electrode was set to 0.1 mM. Figure 5 depicts the IPCE spectra and characteristic J-V curves. The IPCE spectra reflected the optical absorption characteristics of TEMPO-dye, which exhibited a broad response in the visible region. Comparing the IPCE characteristics of the devices using pristine-TEMPO-dye and red-TEMPO-dye, the maximum value at approximately 500 nm decreased from approximately 65% for pristine-TEMPO-dye to approximately 50% for red-TEMPO-dye. This reduction suggests that the difference in the proportion of residual radicals significantly affected the charge injection efficiency.
We compared the open-circuit voltages (Voc) and short-circuit current densities (Jsc) obtained as average values over five devices (Table 2). The short-circuit current density, which reflected the difference in IPCE, was 1.2 times higher for the devices using pristine-TEMPO-dye than for those using red-TEMPO-dye. Furthermore, the open-circuit voltages of the devices using pristine-TEMPO-dye and red-TEMPO-dye were 618 and 485 mV, respectively, a difference of 133 mV. Although Jsc and Voc reflected noticeable differences in the proportion of residual radicals, the fill factor (ff) of the devices was the same. Consequently, the DSSC using pristine-TEMPO-dye exhibited an approximately 1.5 times higher photoelectric conversion efficiency (η) than the DSSC using red-TEMPO-dye, indicating that the redox activity of the TEMPO moiety was beneficial to DSSC performance.

In general, DSSCs using D-π-A-type dyes undergo ICT as the dye molecules are excited by light, and electrons move from the donor moiety via the π-linker to the acceptor moiety. Many such dyes contain functional groups that, in addition to facilitating dye adsorption, allow the acceptor moiety to approach the TiO2 electrode. Thus, electrons moving toward the acceptor moiety and approaching the TiO2 electrode can be injected through the adsorption site. When electrons move from the dye molecules to the TiO2 electrode, a positive charge remains on the dye molecules. However, if reverse electron transfer from the TiO2 electrode to the oxidized dye molecules occurs, the open-circuit voltage decreases.

TEMPO-dye incorporates the redox-active species TEMPO at the donor site, which has a lower oxidation-reduction potential than aromatic amines. Therefore, when the TEMPO site was active, the positive charge generated after electron injection from the dye to the TiO2 electrode moved from the aromatic amine site to the TEMPO site. This stabilization of the positive charge may have suppressed reverse electron transfer from the TiO2 electrode, potentially explaining the improvement in the open-circuit voltage of the devices using TEMPO-dye with a higher proportion of residual radicals. Therefore, we concluded that using ox-TEMPO-dye, which has a higher proportion of residual radicals than pristine-TEMPO-dye, can further enhance the photoelectric conversion efficiency of DSSCs. However, owing to the limited sample quantity at present, it is challenging to fabricate devices using ox-TEMPO-dye. We plan to increase the sample quantity in future studies and to investigate the photoelectric conversion properties of devices using ox-TEMPO-dye.
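The reported efficiency gain follows arithmetically from the J-V parameters: with equal fill factors, η scales as Jsc × Voc, so the 1.2-fold Jsc ratio and the 618/485 mV voltage ratio reproduce the roughly 1.5-fold improvement. A minimal check:

```python
voc_pristine, voc_red = 0.618, 0.485  # V, from Table 2
jsc_ratio = 1.2                        # Jsc(pristine) / Jsc(red), from the text
# With identical fill factors, eta is proportional to Jsc * Voc:
eta_ratio = jsc_ratio * voc_pristine / voc_red
print(f"eta(pristine)/eta(red) ~ {eta_ratio:.2f}")  # ~1.53, i.e., ~1.5x
```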
Conclusions
We developed a novel D-π-A-type dye by incorporating the redox-active species TEMPO at the donor site. The proportion of residual nitroxyl radicals in the TEMPO unit of this dye could be widely controlled using phenylhydrazine and copper(II) acetate. Although the proportion of residual radicals did not affect the light absorption range or bandgap, it significantly influenced the redox properties, indicating a substantial impact on the redox activity of the TEMPO unit. The DSSC fabricated using the TEMPO-dye with a higher proportion of residual radicals exhibited a higher IPCE, resulting in a 1.2-fold improvement in the short-circuit current. Moreover, the presence of a redox-active TEMPO unit at the donor site, which possibly suppressed reverse electron transfer, led to a higher open-circuit voltage and, consequently, an approximately 1.5-fold improvement in the photoelectric conversion efficiency.
Figure 2. Modification of the proportion of residual nitroxyl radicals in TEMPO-dye.
Figure 3. Absorption and emission spectra of TEMPO-dye.
Figure 4. CV curves of pristine- and red-TEMPO-dyes and TEMPO.
(a) Average values of the data obtained from the five devices.
"Materials Science",
"Chemistry",
"Physics",
"Environmental Science"
] |
Cournot-Bayesian General Equilibrium: A Radon Measure Approach
In this paper, we consider a Cournot duopoly, in which neither firm knows the marginal costs of production of the other player, as a Bayesian game. In our game, the marginal costs depend on two infinite continuous sets of states of the world. Before the general case, we study an intermediate case in which only one player, the second one, shows infinitely many types. We then generalize to the case in which both players show infinitely many types depending on the marginal costs, where the marginal costs are given by nature and each actual marginal cost is known only by the respective player. We find, in both cases, the general Nash equilibrium.
Bayesian-Cournot Games
Preliminaries: Bayesian Games
We adopt a highly general definition of Bayesian game.

Definition 1. In a two-player Bayesian game, we need to specify:
1. two type spaces S, T;
2. two families of individual strategy spaces, E = (E_s)_{s∈S} for the first player and F = (F_t)_{t∈T} for the second player.

Some remarks on the ingredients of this definition:
1. A type space for a player is just the set of all possible "types" of that player.
2. The individual strategy space E_s belongs to the type-s first player, for every s ∈ S; symmetrically for the second family F.
3. According to the definition of the general Cartesian product of a set family, a pure strategy of Player 1 is, by definition, a function x : S → E satisfying the property x(s) ∈ E_s, for all s ∈ S.
4. A possible action for the first player is conceivable as a pair (s, x_s) ∈ S × E with x_s ∈ E_s and such that there exists a pure strategy x ∈ E with x(s) = x_s. Therefore, the set of all possible actions of the first player is the union of all graphs gr(x) with x ∈ E.
5. The payoff functions e, f are two-place functions of type profiles (s, t) and strategy profiles (x, y). For example, the first player's payoff e : (S × T) × (E × F) → R : ((s, t), (x, y)) → e((x, y), (s, t)) will be seen, under natural integrability conditions, as a family of functions e_s : E × F → L^1_q(T), with s ∈ S.
6. The beliefs of a player describe the uncertainty of that player about the types of the other players; often, we can consider p as a probability measure on S and q as a probability measure on T.
Definition 2. Consider the above game (e, f, p, q). The expected payoff of Player 1 upon the strategy profile (x, y), when the first player knows himself/herself to be of type s, is the integral

u_s(x, y) = E_q(e_s(x, y)) = ∫_T e_s(x, y) dq,

whenever the function e_s(x, y) : T → R : t → e((x, y), (s, t)) is integrable with respect to the measure q. Analogously, we define the expected payoff function v_t concerning the second player, for every t in T. The families of expected payoff functions upon the bi-strategies determine a bi-labeled family game (u, v). The correspondence sending each type pair (s, t) to the corresponding Nash equilibrium set of the game (u_s, v_t), between the s-type first player and the t-type second player, can be seen as a dynamic multi-path (multi-surface) designing the (non-cooperative) Nash solution of the bi-labeled family game (u, v). A Nash-Bayesian equilibrium for the Bayesian game (e, f, p, q) is defined as the Nash equilibrium multi-surface of the associated game (u, v).
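As a concrete illustration of Definition 2, the following sketch approximates u_s(x, y) = ∫_T e_s(x, y) dq numerically for a uniform belief q, anticipating the Cournot-type payoff e((x, y), (s, t)) = x(Q − x − y(t) − s) reconstructed in the model below; all parameter values are illustrative.

```python
import numpy as np

Q, c2, s = 10.0, 2.0, 1.0          # illustrative model constants
ts = np.linspace(0.0, c2, 1001)    # discretized type space T = [0, c2]
dens = np.full_like(ts, 1.0 / c2)  # density of a uniform belief q on T

def u_s(x: float, y) -> float:
    """Expected payoff of the type-s first player against profile y."""
    payoff = x * (Q - x - y(ts) - s)    # e_s(x, y)(t) on the grid
    return np.trapz(payoff * dens, ts)  # integral over T against q

y = lambda t: 0.25 * (Q - t)            # some strategy profile of player 2
print(u_s(3.0, y))
```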
Remark 2. We note that the certainty of a player of his/her own type is represented by a Dirac delta measure centered at that type: for instance, p = δ_s represents the first player's certainty of being of type s, for every s in S.
Brief Description of the Standard Cournot's Game
The classic model is a non-linear two-player (gain) game G = (f, >). The two players produce and offer the same commodity. In more specific terms, the payoff function f of the game is defined on a subset of the positive cone of the Cartesian plane, interpreted as a space of bi-quantities. We assume that the set of all strategies (of each player) is the interval E = [0, +∞). The bi-gain function is defined on every bi-strategy (x, y) of the game in the positive cone of the Euclidean plane, in terms of convenient positive constants a, b (measured in euro/quantity^2), a maximum reasonable quantity Q, marginal costs s, t (measured in euro/quantity), and fixed costs c, d (measured in euros).
Bayesian Games in Industrial Organization
The novelty and interest of the present paper lie in the use of Bayesian game analysis for the Cournot duopoly in a scenario of infinitely many possible pairs of marginal costs, providing a direct, natural generalization of the classic result of microeconomic theory. It seems, indeed, hard to find in the literature such a comprehensive and clear general statement providing a straightforward extension of the Cournot-Nash equilibrium to the case of complete uncertainty about the marginal costs of the other players, which is, by far, the most common and serious issue in economic duopoly analysis.
Mostly, in the literature, we see developments of the Nash-Cournot equilibrium towards the resolution of other research questions, especially those connected with the asymptotic stability of variously associated dynamical systems (see [1]).
Several books and papers have introduced, alongside the classic models of oligopoly (the Cournot model), the use of Bayesian games for the case of finitely many possible marginal costs. We start from those studies and from the analysis of non-cooperative games and incomplete-information games in industrial organization (see [2][3][4]).
We have also considered studies about static and dynamic information games and Bayesian Nash equilibrium, the normal form of Bayesian games, the extensive form of Bayesian games with observable actions, and games with incomplete information played by Bayesian players [5][6][7]. Some discrete cases of our model can be found in [8]. As concerns the complete analysis of the classic duopoly model, see [9,10]. Other applications of game theory to economic duopolies can be found in [11][12][13][14][15][16][17][18]. For a Radon and Schwartz distribution approach to probability, see [19][20][21].
Methods: Derivatives in Infinite Dimensions
In this paper, in order to compute the Nash-Bayesian equilibria, we shall use a convenient type of partial derivative in infinite dimensions for functions of the type φ : F → R, where F is the convex strategy sub-space of all real (bounded) q-integrable functions on a (cost) compact interval T = [0, c_2] with values belonging to [0, Q].
Our partial derivative ∂_t φ(y), with t ∈ T, is not standard in infinite-dimensional vector spaces and is defined as follows:
- we consider the real function h → φ(y + h χ_t), where h ranges over a convenient pierced (right) neighborhood U of zero;
- the partial derivative ∂_t φ(y) is the following limit (if it exists):

∂_t φ(y) = lim_{h→0} [φ(y + h χ_t) − φ(y)]/h,

where χ_t is the characteristic function of the singleton {t} in the interval [0, c_2], that is, the function equal to 1 at t and to 0 elsewhere.

Definition 4. We can define that derivative also in the following convenient way. Let g(z) = φ(y + (z − y(t)) χ_t), for every z ∈ [0, Q], where x, y, t are fixed in their respective spaces. If g is differentiable at the point y(t), we put ∂_t φ(y) = g′(y(t)).

Example 1. In our game, the payoff function of the second player's t-th type depends on the strategy profile y only through the value y(t); hence, the above derivative reduces to the ordinary derivative of the real function g obtained by replacing y(t) with a variable z ∈ [0, Q], evaluated at z = y(t).
The Model: Uncertainty on the Costs of One Player
Let Q ∈ R_{>0} be the maximum reasonable quantity in our symmetric Cournot duopoly. Strategy set of the first player: the (essential) strategy set E of our first player is the production compact interval E = [0, Q]. Type set of the second player: the type set of the second player is the marginal cost interval T = [0, c_2]. State-of-the-world space of the second player: the state-of-the-world space of the second player is the same marginal cost interval T = [0, c_2].
Strategy Set of the Second Player
The second player's t-th type adopts the strategy set F_t = [0, Q], so that the second player's total strategy profile space is the Cartesian product F = [0, Q]^T, the set of all functions from T into [0, Q]. In other terms, any (pure) strategy of the second player is an infinite strategy profile y : T → [0, Q]; this function specifies, for any type t ∈ T, a certain production y(t) in [0, Q].
Radon Measure Probability of the Second Player
Assume that q : C^0(T) → R is a probability measure (private information of the second player, determined by nature) on the type space T = [0, c_2] of the second player.
Reduced Strategy Set of the Second Player
Assumption 1. As a pretty reasonable strategy set of the second player, we restrict our attention to the convex strategy sub-space of all real bounded q-integrable functions on T with values in [0, Q]. Remark 3. Note that the space L^1_q(T) is a space of functions and not a space of equivalence classes of functions, so that two functions can be different even if they are equal almost everywhere with respect to the measure q.
Remark 4.
Moreover, we wish to point out that the Lebesgue measure does not play any special role here.
Payoff Functions of the First Player
In order to write down the payoff function of the first player, let us indicate the first player's stochastic payoff function by e. For every bi-strategy (x, y) and every type t of the second player, we set e(x, y)(t) = x(Q − x − y(t) − s), where s represents the (unique) marginal cost of the first player, known with certainty also by the second player (it is, in this case, public information); we could consider s as belonging to a certain possible cost compact interval S = [0, c_1], but, in this case, s is a fixed constant, the only interesting point of S. Note, now, that the payoff e(x, y) is a function (a random variable) of the type t ∈ [0, c_2] of the second player.
Expected Payoffs of the First Player
For every bi-strategy (x, y) ∈ E × F, the function e(x, y) : T → R is therefore a random variable on the probability space (T, q). The expected value of this random variable, with respect to the probability measure q, is u(x, y) = E_q(e(x, y)) = ∫_T e(x, y) dq.
Payoff Function(s) of the Second Player
The payoff function f of the second player can be identified with a family ( f_t )_{t∈T}. The payoff function of the second player's t-th type is defined by f_t(x, y) = y(t)(Q − x − y(t) − t), for every bi-strategy (x, y).
Results: Nash-Bayesian Analysis
In this section, we show the results of our first analysis.
Best Reply Correspondence of the First Player
Consider the first player. Fixing y in F, let us study the partial derivative of the expected utility function u with respect to its first argument: ∂_1 u(x, y) = Q − 2x − q(y) − s. It is positive when x < 2^{-1}(Q − q(y) − s). Therefore, the best reply of the first player is the function B_1 : y → 2^{-1}(Q − q(y) − s); note that the best reply of the first player to the action y of the second player depends on y only through its mean q(y), and it belongs indeed to a compact sub-interval of E.
Best Reply Correspondence of the Second Player
We study the best reply correspondence of the second player's t-th type. Therefore, fixing x ∈ E, let us study the partial derivative of the section f_t(x, ·) with respect to its t-th argument: ∂_t f_t(x, y) = Q − x − 2y(t) − t. It is positive when y(t) < 2^{-1}(Q − x − t). Therefore, the best reply of the second player's t-th type is the function B_{2,t} : x → 2^{-1}(Q − x − t). Remark 5. Note that the best reply of the t-th type second player, to the action x of the first player, depends on x, and it belongs to the strategy set F_t = [0, Q]; more specifically, it belongs to the sub-interval [0, 2^{-1}(Q − t)].
Nash-Cournot-Bayesian Equilibrium
Let (x*, y*) ∈ E × F be a Nash equilibrium, if any, of our game G = (e, f), where f is the family ( f_t )_{t∈T}. By definition, this means that x* is a best reply to y* and that y*(t) is a best reply of the t-th type to x*, for every t. We shall study the case in which every component of the second player's equilibrium strategy y* is strictly positive, as well as the equilibrium strategy x*. That is, we want x* > 0 and y*(t) > 0, for every t ∈ [0, c_2]. This happens, for sure, when Q > max{2c_1, 2c_2}.
Proof. Indeed, we immediately obtain x* = 2^{-1}(Q − q(y*) − s), for every s in [0, c_1], as concerns the equilibrium strategy x*. Equivalently, we obtain y*(t) = 2^{-1}(Q − x* − t), for every t in [0, c_2], if any such equilibrium (x*, y*) exists there.

Theorem 1. Let G = (e, f) be the Cournot-Bayesian game defined above. Assume Q ≥ max{2c_1, 2c_2}; in other words, we assume that the maximal reasonable individual (and collective) production is greater than the maximal possible doubled marginal costs. Then, the Nash equilibrium of the game G is the pair (x*, y*) defined by x* = 3^{-1}(Q − 2s + q(τ)) and y*(t) = 2^{-1}(Q − x* − t), for every t in T and for any probability measure q on T, where τ is the identity function on T and q(τ) = ∫_T τ dq is the mean value of the probability measure q.
Proof of Theorem 1. Assume, therefore, Q ≥ max{2c_1, 2c_2}. We know that x* = 2^{-1}(Q − q(y*) − s) and that q(y*) = 2^{-1}(Q − x* − q(τ)). Then, we deduce x* = 2^{-1}(Q − 2^{-1}(Q − x* − q(τ)) − s), from which x* = 3^{-1}(Q − 2s + q(τ)). As concerns the equilibrium function y*, we have only to plug the equilibrium strategy x* = 3^{-1}(Q − 2s + q(τ)) into the expression y*(t) = 2^{-1}(Q − x* − t), obtaining y*(t) = 6^{-1}(2Q + 2s − q(τ)) − t/2, for every t in T. Finally, the Nash equilibrium is the pair (x*, y*) above, as we desired.
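As a numerical sanity check of Theorem 1 (under the closed form x* = (Q − 2s + q(τ))/3 reconstructed above), the following sketch iterates the two best replies for a uniform belief q and compares the fixed point with the formula; all parameter values are illustrative.

```python
import numpy as np

Q, s, c2 = 10.0, 1.0, 2.0
ts = np.linspace(0.0, c2, 2001)        # grid on T = [0, c2]
dens = np.full_like(ts, 1.0 / c2)      # uniform belief q
mean_t = np.trapz(ts * dens, ts)       # q(tau) = c2 / 2

x, y = 0.0, np.zeros_like(ts)
for _ in range(100):                   # iterate the best replies to a fixed point
    x = 0.5 * (Q - np.trapz(y * dens, ts) - s)  # first player's best reply
    y = 0.5 * (Q - x - ts)                      # best reply of each type t

print(x, (Q - 2 * s + mean_t) / 3)     # both ~3.0 for these parameters
```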
Remark 6.
By the way, we notice that q(y*) = 3^{-1}(Q + s − 2q(τ)).
Examples
We now consider some explicative examples.
Example 2.
Let q be the uniform distribution on T, that is, the measure q = (1/c_2)λ, where λ is the Lebesgue measure on T.

We have q(τ) = c_2/2, so that the equilibrium of Theorem 1 becomes x* = 3^{-1}(Q − 2s + c_2/2), with y*(t) = 2^{-1}(Q − x* − t). We can see the above equilibrium as a function of the type t.
Example 4.
Note that the same result is true for every probability distribution q such that E(q) = c_2/2, for instance the discretely-supported Radon measure 2^{-1}(δ_0 + δ_{c_2}) (where δ_t is the Dirac delta measure centered at t, for every t in T).
Analysis: Uncertainty on the Costs of Both Players
Type set of the first player: the type set of the first player is the marginal cost interval S = [0, c_1]. Type set of the second player: the type set of the second player is the marginal cost interval T = [0, c_2].
Strategy Set of the First Player
The first player's s-th type adopts the strategy set E_s = [0, Q], so that the first player's total strategy profile space is the Cartesian product E = [0, Q]^S, the set of all functions from S into [0, Q]. In other terms, any strategy of the first player is an infinite strategy profile x : S → [0, Q], specifying, for any type s ∈ S, a certain production x(s) in [0, Q].
Radon Measure Probability of the First Player
Assume that p : C^0(S) → R is a probability measure (private information of the first player, determined by nature) on the compact type space S of the first player.
Reduced Strategy Set of the First Player
Assumption 2. As a pretty reasonable strategy set of the first player, we restrict our attention to the convex strategy sub-space of all real bounded p-integrable functions on S with values in [0, Q].
Strategy Set of the Second Player
The second player's t-th type has the strategy set F_t = [0, Q], so that the second player's total strategy profile space is the Cartesian product F = [0, Q]^T, the set of all functions from T into [0, Q]. In other terms, any strategy of the second player is an infinite strategy profile y : T → [0, Q], specifying, for any type t ∈ T, a certain production y(t) in [0, Q].
Radon Measure Probability of the Second Player
Assume that q : C^0(T) → R is a probability measure (private information of the second player, determined by nature) on the compact type space T of the second player.
Reduced Strategy Set of the Second Player
Assumption 3. As a pretty reasonable strategy set of the second player, we restrict our attention to the convex strategy sub-space of all real bounded q-integrable functions on T with values in [0, Q]. Note that the space L^1_q(T) is a space of functions and not a space of equivalence classes of functions, so that two functions can be different even if they are equal almost everywhere with respect to the measure q. Now, we can determine the payoff functions of the players.
Payoff Functions of the First Player
In order to write down the payoff function of the first player, let us indicate the first player's type by the index s and its stochastic payoff function by e_s. The payoff function e_s sends every bi-strategy (x, y) ∈ E × F to the function (random variable) e_s(x, y) : T → R : t → x(s)(Q − x(s) − y(t) − s), defined for every type t ∈ T of the second player.
Expected Payoffs of the First Player
For every bi-strategy (x, y) ∈ E × F, the function e_s(x, y) is therefore a random variable on the probability space (T, q). The expected value of this random variable, with respect to the probability measure q, is u_s(x, y) = E_q(e_s(x, y)) = q(e_s(x, y)), for every (x, y) in E × F, where by q(φ) we denote the integral ∫_T φ dq of any function φ ∈ L^1_q(T).
Payoff Functions of the Second Player
The payoff function of the second player's t-th type is defined by f_t(x, y) : S → R : s → y(t)(Q − x(s) − y(t) − t), for every (x, y) ∈ E × F.
Expected Payoffs of the Second Player
For every bi-strategy (x, y) ∈ E × F, the function f_t(x, y) is therefore a random variable on the probability space (S, p). The expected value of this random variable, with respect to the probability measure p, is v_t(x, y) = E_p( f_t(x, y)) = p( f_t(x, y)), for every (x, y) in E × F, where by p(φ) we denote the integral ∫_S φ dp of any function φ ∈ L^1_p(S).
Results: Nash-Cournot Bayesian Analysis
In this section, we state and prove the main general result of the paper.
Best Reply Correspondence of the First Player
Consider the first player's s-type. Fixing y in the function strategy space F, let us study the partial derivative of the section function u_s(·, y) with respect to its s-th argument: ∂_s u_s(x, y) = Q − 2x(s) − q(y) − s. It is positive when x(s) < 2^{-1}(Q − q(y) − s).

Therefore, consider the part F^{(s,<)} of F consisting of the profiles y with mean q(y) below Q − s. The best reply correspondence of the first player's s-type is the function B_{1,s} : y → 2^{-1}(Q − q(y) − s), for every integrable profile strategy y ∈ F^{(s,<)}, and zero otherwise.
Remark 7.
Note that the best reply of the first player to the action y of the second player depends only on the average q(y) = ∫_T y dq of the random variable y with respect to the probability measure q, and it belongs indeed to a compact sub-interval of [0, Q].
Best Reply Correspondence of the Second Player
Consider the second player (T, q, F, f), and fix a type t of him/her. Fixing an integrable strategy profile x in the first player's convex function space E, let us study the partial derivative of the section v_t(x, ·) with respect to its t-th argument.

We immediately obtain ∂_t v_t(x, y) = Q − p(x) − 2y(t) − t, for every y in F. This derivative is positive when y(t) < 2^{-1}(Q − p(x) − t). Therefore, consider the part E^{(t,<)} of E consisting of the profiles x with mean p(x) below Q − t. The best reply correspondence of the second player's t-type is the function B_{2,t} : x → 2^{-1}(Q − p(x) − t), for every integrable profile strategy x ∈ E^{(t,<)}, and zero otherwise.
Remark 8.
Note that the best reply of the second player to the action x of the first player depends only on the average p(x), and it belongs, indeed, to a compact sub-interval of F_t.
Nash-Cournot-Bayesian Equilibrium
Let (x*, y*) be a Nash equilibrium, if any, of our game G = (u, v). This is equivalent to asserting that (x*, y*) ∈ E × F is a Bayesian-Nash equilibrium, if any, of our game G = (e, f), where e, f are the random-variable families e = (e_s)_{s∈S} and f = ( f_t )_{t∈T}. By definition, that means that x*(s) maximizes u_s(·, y*) and y*(t) maximizes v_t(x*, ·), for every pair of types s, t in S, T, respectively. We consider here the case in which, for every (s, t) ∈ S × T, the two conditions x*(s) > 0 and y*(t) > 0 hold. The above situation happens, for sure, when Q ≥ max{2c_1, 2c_2}, since y*(t) = 2^{-1}(Q − p(x*) − t), for every t in [0, c_2], and x*(s) = 2^{-1}(Q − q(y*) − s), for every s in [0, c_1], if any equilibrium strategies exist there. Now, we can state and prove our main general result.
Theorem 2.
Let G be the Cournot-Bayesian game G = (u, v) defined above. Assume Q ≥ max{2c_1, 2c_2}; in other words, we assume that the limit production is greater than both doubled maximum marginal costs. Then, the Nash equilibrium of G is the function pair (x*, y*), belonging to the bi-strategy space E × F and defined, for any probability measures p on S and q on T, by x*(s) = 6^{-1}(2Q + 2q(τ) − p(σ)) − s/2 and y*(t) = 6^{-1}(2Q + 2p(σ) − q(τ)) − t/2, where σ and τ are the identity functions on S and T, respectively, and p(σ), q(τ) are the corresponding mean values.

Proof of Theorem 2. Assume, therefore, Q ≥ max{2c_1, 2c_2}. We know that x*(s) = 2^{-1}(Q − q(y*) − s) (1) and, symmetrically, for the second equilibrium strategy function y*, y*(t) = 2^{-1}(Q − p(x*) − t) (2). We immediately deduce, by applying p to (1) and q to (2), p(x*) = 2^{-1}(Q − q(y*) − p(σ)) and q(y*) = 2^{-1}(Q − p(x*) − q(τ)).
From the latter and from (1), we deduce x*(s) = 2^{-1}(Q − 2^{-1}(Q − p(x*) − q(τ)) − s). By applying p to the preceding equality, we have p(x*) = 3^{-1}(Q + q(τ) − 2p(σ)). By substituting this last expression of p(x*) in (2), we obtain y*(t) = 6^{-1}(2Q + 2p(σ) − q(τ)) − t/2. Symmetrically, we have x*(s) = 6^{-1}(2Q + 2q(τ) − p(σ)) − s/2. Finally, the Nash equilibrium is the pair (x*, y*) stated above, as we desired.
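A quick numerical check of Theorem 2 (with the closed forms reconstructed above): iterating the two mean best-reply equations converges to the stated fixed point. The parameter values below are illustrative, with uniform beliefs so that p(σ) = c_1/2 and q(τ) = c_2/2.

```python
Q, c1, c2 = 10.0, 2.0, 2.0
mean_s, mean_t = c1 / 2, c2 / 2          # p(sigma), q(tau) for uniform beliefs

a, b = 0.0, 0.0                           # a = p(x*), b = q(y*)
for _ in range(200):                      # a contraction: converges geometrically
    a, b = 0.5 * (Q - b - mean_s), 0.5 * (Q - a - mean_t)

print(a, (Q + mean_t - 2 * mean_s) / 3)   # both ~3.0
s = 1.0                                   # an arbitrary type of player 1
print(0.5 * (Q - b - s),                  # best-reply form of x*(s)
      (2 * Q + 2 * mean_t - mean_s) / 6 - s / 2)  # closed form: both ~3.0
```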
Examples
We now consider some explicative examples.
Example 6 (Uniform probability distributions). Let p and q be the uniform distributions on S and T, respectively, that is, the measures p = (1/c_1)λ_S and q = (1/c_2)λ_T, where λ_S and λ_T are the Lebesgue measures on S and T, respectively. We obtain p(σ) = c_1/2 and q(τ) = c_2/2, so that the equilibrium of Theorem 2 specializes accordingly; the same equilibrium arises for any pair of measures with these mean values, for instance for discretely-supported Radon measures built from Dirac delta measures δ_t centered at points t in T.
Discussion and Conclusions
We have considered a Cournot duopoly, in which neither firm knows the marginal costs of production of the other player, as a Bayesian game. In our game, the marginal costs depend on two infinite continuous sets of states of the world. Before the general case, we studied an intermediate case in which only one player, the second one, shows infinitely many types. We then generalized to the case in which both players show infinitely many types depending on the marginal costs, where the marginal costs are given by nature and each actual marginal cost is known only by the respective player. We found, in both cases, the general Nash equilibrium.
From the perspective of previous studies, we observe that our model constitutes a real step forward in the economic applicability of the duopoly model (see [9,10] for the classical models). It is indeed clear that the marginal costs and the other characteristic coefficients and constants of a production process are completely known only by the owner of the process itself and not by the competitors. Future research directions may also be devised for the use of other stochastic variables in place of the other characteristic constants of the classic duopoly games.
Possible Application to the Auto Insurance Market
One possible field of application is the auto insurance market. There, uncertain costs arise for a variety of reasons: uncertain claims, fluctuating input prices, legal risk, exchange rate uncertainty, etc. The marginal costs are particularly important for insurance companies, because a firm's solvency directly depends on its ability to accurately predict losses. On the other hand, insurance companies exist because the pooling of many risks reduces aggregate uncertainty, so that aggregate losses could become predictable and loss sharing feasible. In practice, however, predictability appears incomplete and unsatisfactory, even when the number of insured individuals is large, leading to marginal cost uncertainty for all the insurance players in the market. Provided that the marginal costs of a certain insurer ultimately derive (albeit approximately) from deep internal statistical, managerial, and business analysis, any insurer needs to consider the other insurers' costs as random variables with a forecasted probability distribution: here, they could use our model to calculate the expected competitive equilibrium strategies and the relative payoffs.
"Economics",
"Mathematics"
] |
Exploring Subpixel Learning Algorithms for Estimating Global Land Cover Fractions from Satellite Data Using High Performance Computing
Land cover (LC) refers to the physical and biological cover present over the Earth’s surface in terms of the natural environment such as vegetation, water, bare soil, etc. Most LC features occur at finer spatial scales than the resolution of primary remote sensing satellites. Therefore, observed data are a mixture of the spectral signatures of two or more LC features, resulting in mixed pixels. One solution to the mixed pixel problem is the use of subpixel learning algorithms to disintegrate the pixel spectrum into its constituent spectra. Despite the popularity and existing research conducted on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of several subpixel learning algorithms based on least squares, sparse regression, signal–subspace and geometrical methods. Analysis of the results obtained through computer-simulated and Landsat data indicated that fully constrained least squares (FCLS) outperformed the other techniques. Further, FCLS was used to unmix global Web-Enabled Landsat Data to obtain abundances of substrate (S), vegetation (V) and dark object (D) classes. Due to the sheer volume of data and the computational needs, we leveraged the NASA Earth Exchange (NEX) high-performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water and urban areas (in conjunction with nighttime lights data) over California, USA using a random forest classifier. Validation of these LC maps with the National Land Cover Database 2011 products and North American Forest Dynamics static forest map shows a 6% improvement in unmixing-based classification relative to per-pixel classification. As such, abundance maps continue to offer a useful alternative to high-spatial-resolution classified maps for forest inventory analysis, multi-class mapping, multi-temporal trend analysis, etc.
Introduction
During the past two and a half decades, various subpixel learning algorithms have been developed to disintegrate a pixel spectrum into its constituent spectra in medium- to coarse-spatial-resolution multispectral data [1]. The aim of these learning algorithms is to characterize mixed pixels through a mixture model, assuming that the observed data constitute a mixture of two or more objects [2]. Defining a direct observation model that links these objects' quantities to the observed data is a non-trivial issue and requires an understanding of complex physical phenomena. A radiative transfer model (RTM) can accurately describe the light scattered by the objects in the observed scene [3]. Theoretically, radiative transfer is the physical phenomenon of energy transfer; it describes the propagation of radiation at the Earth's surface as affected by the interaction processes between radiation, atmospheric constituents and the Earth's physical surface [4]. An RTM solves the radiative transfer equation, which describes these interaction processes mathematically; however, building the unmixing problem on a full numerical RTM would make it very complex. Fortunately, imposing a few assumptions leads to exploitable mixing models.
So far, two types of models have been proposed in the literature to unmix mixed pixels. The first is a macroscopic mixture model, in which the incident light interacts with just one object (e.g., checkerboard-type scenes) [5] because the spatial resolution of the instrument is not fine enough. The second is a microscopic mixture model (also called an intimate spectral mixture), which is a nonlinear mixing of objects. This usually happens due to physical interaction between the light scattered by multiple objects in the scene, where the objects are homogeneously mixed. Although researchers are debating the use of linear versus nonlinear unmixing (see [5] for an overview), the nonlinear mixture model is still immature compared to its linear counterpart. Consequently, in this paper we focus on the linear mixture model (LMM). Apart from its inherent simplicity, the LMM is an acceptable approximation of the light scattering mechanism in many real scenarios [5][6][7][8][9]. The LMM infers a set of pure object spectral signatures (called endmembers) and the fractions of these endmembers (abundances) in each pixel of the image [10]; i.e., the objects are present with relative concentrations weighted by their corresponding abundances. The endmembers can be either derived from the image pixels using endmember extraction algorithms or obtained from an endmember spectral library available a priori.
Various theories and methods have evolved to solve the mixed pixel problem, such as linear spectral unmixing [11], Gaussian mixture discriminant analysis [12], linear regression and regression trees [13], regression models based on random forest [14], spatial-correlation-based unmixing [15], unmixing based on distance geometry [16], and the normal composite model [17]. Approaches to the mixed pixel problem range from modeling the component mixtures to solving the linear combinations to obtain abundances through geometrical, statistical and sparse-regression-based techniques; detailed reviews of these techniques are available in [5,18]. The LMM renders an optimal solution in unconstrained, partially constrained or fully constrained form. A partially constrained model imposes either the abundance non-negativity constraint (ANC) or the abundance sum-to-one constraint (ASC), while a fully constrained model imposes both. The ANC restricts the abundance values from being negative, and the ASC confines the sum of the abundances of all the classes to unity. Unconstrained least squares [18], orthogonal subspace projection [19], singular value decomposition [20], the mixed tuned matched filter [21], and constrained energy minimization [18] are examples of unconstrained models. Sum-to-one constrained least squares (SCLS) and non-negative constrained least squares (NCLS) are cases of partially constrained solutions, which are generally normalized to obtain normalized SCLS and normalized NCLS solutions [22]. More often, unconstrained and partially constrained algorithms are appropriate for applications seeking target detection, identification and discrimination, while constrained models are more suitable for target quantification and abundance estimation, which can be supervised, unsupervised or automatic (used for anomaly detection without the need for target information) [18].
Despite previous research on subpixel learning algorithm development, the most flexible, appropriate and robust approach for large-scale classification in different types of landscape scenarios is still not recognized and remains debatable. Most published algorithms are restricted to the standard AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) Cuprite dataset obtained over Nevada, USA [23], the hyperspectral image data collection experiment (HYDICE) distributed by the Army Geospatial Center, US Army Corps of Engineers [24], or datasets acquired by researchers using specialized sensors [25], with 1300 mineral signatures from the United States Geological Survey (USGS) digital spectral library [26] or the NASA (National Aeronautics and Space Administration) Jet Propulsion Laboratory ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) spectral library [27]. There are some specific case studies limited to a few spatio-temporal snapshots of multispectral and hyperspectral data [28][29][30]. Although the use of these ready-to-use data and spectral libraries for testing algorithms is not in question, the performance of these techniques on real data and their application to large datasets are definitely of great importance. As an attempt to address this issue, the following two objectives were formulated:
1. Perform a comparative analysis of the least squares, sparse regression, signal-subspace and geometrical methods for the subpixel classification of different datasets.
2. Develop a method to utilize the abundance maps obtained from subpixel learning algorithms to retrieve fractional land cover (LC) classes representing forest, farmland (including agricultural and herbaceous lands), water, and urban areas.
We first present a qualitative and quantitative analysis of the subpixel learning algorithms to identify the best-performing technique, i.e., the one that renders fractional abundances of different classes with the highest accuracy, using both computer-simulated and real-world data of an agricultural landscape and an urban scenario. These data were analyzed by deriving vegetation, substrate and dark object (such as shadows and deep water) endmember fractions, followed by comparison with ground truth data for accuracy assessment using various measures. Subsequently, in the second part of this study, we show the scope of the best unmixing technique in an operational context to obtain global abundance maps from WELD (Web-Enabled Landsat Data) of the year 2011 using the global endmembers of substrate, vegetation and dark objects. The role of abundance maps in obtaining the fractional spatial extent of four LC classes, namely forest, farmland, water bodies and urban areas, is shown for the state of California, USA.
The paper is organized as follows: Section 2 briefly reviews the subpixel learning algorithms, Section 3 details the data and endmember generation, Section 4 discusses the LC classification methods and validation approaches, Section 5 presents the results and discussion, followed by concluding remarks in Section 6.
Review of Subpixel Learning Algorithms
The notion behind the LMM is introduced below, with a brief review of seven state-of-the-art subpixel learning algorithms. For each pixel, the observation vector y is related to the endmember signature matrix E by the linear model

y = Eα + η, (1)

where each pixel is an M-dimensional vector y whose components are the digital numbers corresponding to the M spectral bands. E = [e_1, ..., e_N] is an M × N matrix, where N is the number of classes and e_n is a column vector representing the spectral signature of the nth target material, and η accounts for the measurement noise. For a given pixel, the abundance of the nth target material present in the pixel is denoted by α_n, and these values are the components of the N-dimensional abundance vector α. We further assume that the components of the noise vector η are zero-mean random variables that are independent and identically distributed (i.i.d.). Therefore, the covariance matrix of the noise vector is σ²I, where σ² is the variance and I is the M × M identity matrix.
Unconstrained Least Squares (UCLS)
The conventional approach to extracting the abundance values is to minimize ||y − Eα||², which yields the UCLS estimate of the abundance

α̂_UCLS = (E^T E)^(−1) E^T y. (2)

UCLS with full additivity is a non-statistical, non-parametric algorithm that optimizes a squared-error criterion but does not enforce the non-negativity and unity conditions. Our previous study [31] compared the performance of UCLS, orthogonal subspace projection and singular value decomposition, and indicated similar unmixing results on different datasets. UCLS was the fastest in terms of execution time and is therefore considered here for comparison with the constrained algorithms [32].
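As a minimal illustration, the UCLS estimate can be computed with an ordinary least squares solver; the endmember matrix and abundances below are synthetic placeholders, not values from this study.

```python
import numpy as np

rng = np.random.default_rng(42)
M, N = 6, 3                                  # 6 spectral bands, 3 endmembers (placeholders)
E = rng.uniform(0.0, 1.0, (M, N))            # stand-in endmember signature matrix
alpha_true = np.array([0.6, 0.3, 0.1])       # true abundances: non-negative, sum to one
y = E @ alpha_true + rng.normal(0, 0.01, M)  # linear mixture plus i.i.d. noise

# UCLS: alpha = (E^T E)^(-1) E^T y, i.e., an ordinary least squares fit.
alpha_ucls, *_ = np.linalg.lstsq(E, y, rcond=None)
print(alpha_ucls)                            # close to alpha_true, but unconstrained
```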
Fully Constrained Least Squares (FCLS)
To avoid deviation of the estimated abundance fractions, the ANC given in (3) and the ASC given in (4) are imposed on the model:

α_n ≥ 0, n = 1, ..., N, (3)

∑_{n=1}^{N} α_n = 1. (4)

Together, the ANC and ASC constrain the abundance values in any given pixel to lie between 0 and 1. When only the ASC is imposed on the solution, the SCLS estimate of the abundance is

α̂_SCLS = α̂_UCLS + (E^T E)^(−1) 1 λ, (5)

λ = [1^T (E^T E)^(−1) 1]^(−1) (1 − 1^T α̂_UCLS), (6)

where 1 denotes the N-dimensional vector of ones. The SCLS solution may have negative abundance values, but they add to unity. FCLS [18,33] extends the NNLS (non-negative least squares) algorithm [34] to minimize ||Eα − y|| subject to α ≥ 0 while including the ASC in the signature matrix E through a new signature matrix (SME) and an augmented pixel vector s:

SME = [θE; 1^T], (7)

s = [θy; 1], (8)

where θ in (7) and (8) regulates the influence of the ASC. Using these two equations, the FCLS algorithm is directly obtained from NNLS by replacing the signature matrix E with SME and the pixel vector y with s. Further, the original NNLS algorithm was found to be computationally very slow in our experiments. Therefore, we modified the original FCLS by incorporating the faster NNLS proposed by [35].
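Assuming the augmented-matrix construction above, FCLS can be sketched with an off-the-shelf NNLS solver by appending a weighted sum-to-one row to E and y; the weight below is illustrative, and this is a simplified stand-in for the modified FCLS actually used in the study.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, weight=20.0):
    """FCLS sketch: NNLS on an augmented system. The appended row enforces the
    ASC (abundances sum to one) softly with the given weight, while NNLS itself
    enforces the ANC (non-negativity)."""
    N = E.shape[1]
    E_aug = np.vstack([E, weight * np.ones((1, N))])
    y_aug = np.append(y, weight)
    alpha, _ = nnls(E_aug, y_aug)
    return alpha
```

A larger weight tightens the sum-to-one behavior; the returned abundances are non-negative and sum approximately to one.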
Modified Fully Constrained Least Squares (MFCLS)
The ANC is a major difficulty in solving constrained linear unmixing problems, as it forbids the use of Lagrange multipliers. Chang (2003) [18] proposed replacing the ANC with an absolute abundance sum-to-one constraint (AASC), ∑_{n=1}^{N} |α_n| = 1. The AASC allows the usage of Lagrange multipliers along with the exclusion of negative abundance values. This leads to an optimal constrained least squares solution satisfying both the ASC and AASC, which is called MFCLS and is expressed as

min_α ||y − Eα||² subject to ∑_{n=1}^{N} α_n = 1 and ∑_{n=1}^{N} |α_n| = 1. (9)

It turns out that the solution to (9) is

α̂_MFCLS = α̂_UCLS − (E^T E)^(−1) (λ_1 1 + λ_2 sgn(α)), (10)

where α̂_UCLS = (E^T E)^(−1) E^T y as in (2). The ASC and AASC constraints are used to compute λ_1 and λ_2 by replacing α with α̂_UCLS in the constraints

1^T α = 1, ∑_{n=1}^{N} |α_n| = 1. (11)

MFCLS utilizes the SCLS solution, and the algorithm terminates when all components are non-negative.
Simplex Projection (SP)
SP finds the projection of a point onto a generic simplex and minimizes the least squares error while imposing the ANC and ASC. It reduces the computational complexity without any optimization, recursively reducing the dimensionality of the problem to obtain a suitable abundance vector [2]. At each run, the algorithm identifies an endmember that has zero abundance and orthogonally projects onto a hyperplane of one dimension less than the previous one. Considering P points y_p ∈ R^M, p = 1, ..., P, with N endmembers {e_1, ..., e_N}, all points y_p are projected onto the simplex S_I spanned by the N endmembers in the set I = {e_1, ..., e_N}, producing the projected points ŷ_p and the corresponding abundance vectors α̂_p = (α̂_p1, ..., α̂_pN). The projected points ŷ_p are determined through ŷ_p = E α̂_p [36].
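The recursive idea can be sketched as follows: project onto the affine hull of the current endmember set, and if some coordinate is negative, zero out an endmember and recurse on the lower-dimensional face. This is a simplified illustration (the most negative coordinate is dropped heuristically), not the exact algorithm of [2], and it assumes affinely independent endmembers.

```python
import numpy as np

def simplex_project(y, E):
    """Project y onto the simplex spanned by the columns of E; returns the
    abundance vector (non-negative, summing to one)."""
    N = E.shape[1]
    if N == 1:
        return np.array([1.0])
    # Minimize ||y - E a||^2 subject to 1'a = 1 via the KKT system.
    G = E.T @ E
    A = np.block([[2.0 * G, np.ones((N, 1))],
                  [np.ones((1, N)), np.zeros((1, 1))]])
    b = np.append(2.0 * E.T @ y, 1.0)
    alpha = np.linalg.solve(A, b)[:N]
    if np.all(alpha >= -1e-12):
        return np.clip(alpha, 0.0, None)
    # Drop an endmember that must have zero abundance and recurse on the face.
    j = int(np.argmin(alpha))
    keep = [k for k in range(N) if k != j]
    out = np.zeros(N)
    out[keep] = simplex_project(y, E[:, keep])
    return out
```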
Sparse Unmixing via Variable Splitting and Augmented Lagrangian (SUnSAL)
Sparse regression [37] is related to both the statistical and geometrical frameworks. The endmember search is conducted in a large library E ∈ R^{M×N}, where M < N and α ∈ R^N. Only a few of the signatures contained in E are involved in the mixed pixel spectrum; therefore, α contains many zero values and is a sparse vector. The sparse regression problem is expressed as

min_α ||α||_0 subject to ||y − Eα||_2 ≤ δ, α ≥ 0, 1^T α = 1, (12)

where ||α||_0 denotes the number of nonzero components of α and δ ≥ 0 is the noise and modeling error tolerance. α ≥ 0 and 1^T α = 1 refer to the ANC and ASC, respectively. A set of sparsest signals belonging to the (N−1) probability simplex and satisfying the error tolerance inequality defines the solution of (12). When the fractional abundances follow the ANC and ASC, the problem is referred to as constrained sparse regression, given by (13) [38]:

min_α (1/2)||Eα − y||_2² + λ||α||_1 subject to α ≥ 0, 1^T α = 1, (13)

where ||α||_2 and ||α||_1 are the l_2 and l_1 norms and λ ≥ 0 is a weighting factor. SUnSAL is based on the alternating direction method of multipliers (ADMM) and is derived as a variable splitting procedure followed by the adoption of an augmented Lagrangian method [39].
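SUnSAL itself applies ADMM to (13); as a rough, non-authoritative stand-in, the same data-fit plus l_1 objective with the ANC can be minimized by projected proximal-gradient (ISTA) iterations, sketched below without the ASC.

```python
import numpy as np

def sparse_unmix_ista(E, y, lam=1e-3, n_iter=1000):
    """Minimize 0.5*||E a - y||_2^2 + lam*||a||_1 subject to a >= 0
    (cf. (13), ASC omitted) by proximal-gradient iterations."""
    alpha = np.zeros(E.shape[1])
    step = 1.0 / np.linalg.norm(E, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = E.T @ (E @ alpha - y)
        # The prox of lam*||.||_1 plus non-negativity is a one-sided soft threshold.
        alpha = np.maximum(alpha - step * grad - step * lam, 0.0)
    return alpha
```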
SUnSAL and Total Variation (SUnSAL TV)
Sparse unmixing techniques do not deal with neighboring pixels and tend to ignore the spatial context. SUnSAL TV takes spatial information (the relationship between each pixel vector and its neighbors) into account by means of the TV regularizer, under the assumption that two neighboring pixels will very likely have similar fractional abundances for the same endmember [40]. The TV regularizer acts as a priori information, and unmixing is achieved by solving a large non-smooth convex optimization problem.
Collaborative SUnSAL (CL SUnSAL)
Generally, the performance of a sparse unmixing solution suffers from a high degree of coherence between the endmember signatures, which affects the uniqueness of the solution. Collaborative sparse unmixing removes this limitation: coherence has a weaker impact on unmixing, since the pixels are constrained to share a small set of endmembers [10].
Here, the unmixing result is refined by solving a joint sparse regression problem in which sparsity is imposed on all the pixels simultaneously [41]. Consider (1) while assuming P data points in a matrix Y and η = [η_1, ..., η_P] as the noise matrix. Let ||α||_F ≡ (trace{αα^T})^(1/2) be the Frobenius norm of α, and let λ > 0 be the regularization parameter; the optimization problem is then

min_α (1/2)||Eα − Y||_F² + λ ∑_{k=1}^{N} ||α^k||_2 subject to α ≥ 0, (14)

where α^k denotes the kth line of α. ∑_{k=1}^{N} ||α^k||_2 is the l_{2,1} mixed norm, which supports a small number of nonzero lines and sparsity in α among all the pixels, leading to the solution of (14) through the extended SUnSAL algorithm. CL SUnSAL solves the l_2 + l_{2,1} optimization problem in addition to the ANC. If ||α||_{2,1} = ∑_{k=1}^{N} ||α^k||_2 denotes the l_{2,1} norm, then (14) can be rewritten as

min_α (1/2)||Eα − Y||_F² + λ||α||_{2,1} + ι_{R+}(α), (15)

where ι_{R+}(α) = ∑_{i=1}^{P} ι_{R+}(α_i) is the indicator function. Equation (15) can be expressed in a constrained form and solved using the methods given in [42] to obtain the abundances. The abundance maps allow a proportion of each pixel to be partitioned between classes. The abundance value in any given pixel ranges from 0 to 1 (in an abundance map), with the number of abundance maps equal to the number of classes. The value 0 indicates the absence of a particular class, and 1 indicates the presence of only that class in a particular pixel. Intermediate values between 0 and 1 represent a proportion of that class.
Data
In this section, details about different experimental datasets, global Web-Enabled Landsat Data, and endmember generation are discussed.
Computer Simulated Data
One of the major problems in analyzing the quality of fractional estimation methods is that exact ground truth information about the real abundances at the subpixel level for all classes is difficult to obtain. Therefore, simulated imagery is an intuitive way to evaluate the techniques; an example is the simulated but realistic, fully-calibrated citrus orchard virtual scene [43,44]. Since all the details of the simulated images are known, an algorithm's performance can be examined in a controlled manner. For generating the synthetic multispectral data y of six bands with different levels of noise η, the linear mixture model y = Eα + η was used with a simulated abundance map α. The noise was drawn from an uncorrelated normal distribution with zero mean and variance σ² (the noise level). The endmembers E were taken from a set of global spectra of the three-endmember libraries [45]. At each pixel's location in the simulated abundance map α, one specific endmember among the three was made dominant (abundance ≥ 0.77), and the remaining two abundance values were chosen at random such that all the values were positive and summed to one. To bring the synthetic data closer to reality, the dominant endmembers had spatial correlation among neighboring pixels for the same endmember, except at region boundaries. In a separate set of experiments, σ² was increased in multiples of two (i.e., σ² was set to 2, 4, 8, ..., 256) in order to analyze the performance of the various learning algorithms on subpixel classification with variable noise.
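Under the stated recipe, such a scene can be generated in a few lines; the sketch below uses placeholder endmembers and omits the spatial correlation of the dominant class that the study added for realism.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, rows, cols = 6, 3, 100, 100
E = rng.uniform(0.0, 1.0, (M, N))        # placeholder endmember library (cf. [45])

# One dominant endmember (abundance >= 0.77) per pixel; the remainder is
# split at random between the other two classes so the fractions sum to one.
alpha = np.zeros((rows, cols, N))
for i in range(rows):
    for j in range(cols):
        d = rng.integers(0, N)
        a_d = rng.uniform(0.77, 1.0)
        split = rng.uniform(0.0, 1.0)
        rest = [k for k in range(N) if k != d]
        alpha[i, j, d] = a_d
        alpha[i, j, rest[0]] = (1.0 - a_d) * split
        alpha[i, j, rest[1]] = (1.0 - a_d) * (1.0 - split)

sigma2 = 8.0                             # noise variance; varied as 2, 4, ..., 256
y = alpha @ E.T + rng.normal(0.0, np.sqrt(sigma2), (rows, cols, M))
```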
Landsat Data-An Agricultural Landscape and an Urban Scenario
A spectrally diverse collection of 11 time-series scenes of Level 1 (terrain-corrected), cloud-free Landsat-5 16-bit data (obtained from the Web-Enabled Landsat Data (WELD) version 3.0 product) for Fresno, California, USA (WRS path 43, row 35) was used. These data were captured on 4 and 20 April, 22 May, 7 and 23 June, 9 and 25 July, 26 August, 11 and 27 September, and 13 October of the year 2008 and were calibrated to atmospheric reflectance [46]. The atmospheric reflectance was converted to surface reflectance by means of the 6S code implementation in LEDAPS (Landsat Ecosystem Disturbance Adaptive Processing System). As ground truth data corresponding to the above scenes, a coincidental set of ground canopy covers was collected for a number of surveyed fields located within an area of about 25 × 35 km² southwest of the city of Fresno (Figure 1). A total of 74 polygons of fractional vegetation cover were generated from digital photographs taken with a multispectral camera mounted on a frame at nadir view 2.3 m above the ground. These photographs were acquired at the commercial agricultural fields of the San Joaquin Valley (in central California) on the 11 dates mentioned above, except for one date when the Landsat acquisition preceded the ground observation by one day. For each date, 2-4 evenly spaced pictures were taken for an area of 100 × 100 m², with the center location marked by GPS [47]. These fractional measurements belonged to a diverse set of seasonal and perennial crops in various developmental stages, from emergence to full canopy, that represented an agricultural environment in a real-world scenario.
A second set, consisting of a pair of coincident clear-sky Landsat TM-5 data and WV-2 (WorldView-2, 2 m spatial resolution) data for an area of San Francisco, California, USA, was used to assess the algorithms. San Francisco was chosen as the test site because of its urbanized landscape, with a mix of building architectures, vegetation and substrate. The WV-2 data were acquired a few minutes after the Landsat-5 TM data acquisition on 1 May 2010 for an area near the Golden Gate Bridge, San Francisco (Figure 1). The spectral ranges of the first four bands of the Landsat data correspond to WV-2 bands 2, 3, 5 and 7, so they have similar spectral response functions. The WV-2 data were converted to top-of-atmosphere reflectance values. The Landsat unmixed images were compared to the corresponding WV-2 fraction images for accuracy assessment.
Global Web-Enabled Landsat Data (WELD)
The NASA-funded WELD project, through the Making Earth System Data Records for Use in Research Environments (MEaSUREs) program, has been systematically generating 30 m monthly and annual global composite products from Landsat 7 ETM+ and Landsat 5 TM data for all non-Antarctic land surface mosaics [48]. The entire globe is covered with around 8000 scenes/month, and the version 3.0 global WELD (shown in Figure 2) is available in the public domain for a three-year period from 2008 to 2011 (~0.3 million scenes), spectrally calibrated and converted to surface reflectance and brightness temperature.
Endmember Generation
Global mixing spaces sampling a spectrally diverse range of LC and biomes, using 100 Landsat ETM+ subscenes, were used to define a standardized set of spectral endmembers of substrate ("S", endmember 1), vegetation ("V", endmember 2), and dark objects ("D", endmember 3) that spans all terrestrial biomes determined by mean annual temperature and precipitation in proportion to land area [49]. The geographical locations of the 100 Landsat scenes showing spectral diversity resulting from LC variety across biomes were established in an earlier study [50]. The subscenes included within-scene spectral variability, LC transitions and the global land area distribution. With a linear stretch of 2% applied to the bands within a subscene, substrate, vegetation and water were apparent as brown, green and black, respectively. Substrate includes urban structures, soils, sediments, rocks, and non-photosynthetic vegetation. Vegetation refers to green photosynthetic plants, and dark objects encompass absorptive substrate materials, clear water, deep shadows, etc. The S-V-D endmember coefficients with dates and locations for each subscene are available in [45], and a plot of the endmembers is provided in [50]. The estimates obtained from the global endmembers have been compared to the fractional vegetation cover derived vicariously by linearly unmixing near-coincidental WV-2 acquisitions over a set of diverse coastal environments, using both global endmembers and image-specific endmembers to unmix the WV-2 images. The strong 1:1 linear correlation between the fractions obtained from the two types of imagery indicates that the mixture model fractions scale linearly from 2 m to 30 m over a wide range of terrains. When endmembers are derived from a large enough sample of radiometric responses to encompass the Landsat spectral mixing space, they can be used to build a standardized spectral mixture model for global mapping applications.
Classification Methods and Validation Approaches
In this section, the classification methods, validation datasets and validation approaches, along with the algorithms' parameter settings and computational requirements, are discussed.
Land Cover Classification from Abundance Maps
Subsequent levels of classification can be carried out from the S-V-D abundance maps to obtain fractional maps of forest, farmland/grassland, water bodies, urban areas, etc. for numerous other applications. In the S-V-D abundance maps, dense forest pixels occur on and near the binary mixing trend between the vegetation and dark (i.e., shadow) endmembers. The vegetation endmember generally corresponds to illuminated foliage; the higher the fraction of the vegetation endmember, the less canopy shadow and illuminated substrate contribute to the pixel's reflectance. Herbaceous vegetation (grass) typically has less canopy shadow than closed-canopy forest, so such pixels occur closer to the vegetation endmember (higher vegetation fractions). A continuum may exist between dense forest and herbaceous vegetation, reflecting a trade-off between the vegetation and shadow fractions. Dense forest can be distinguished from farmland/grassland by assigning a threshold to both the vegetation and dark endmember fractions. Additionally, NDVI (Normalized Difference Vegetation Index) values can be used as a supplementary layer for discriminating dense forest by applying a suitable threshold. Sampled forest patches used as training data revealed that vegetation abundance >0.2, dark object abundance >0.6 and NDVI >0.7 represented forest areas.
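Expressed as code, these training-sample thresholds reduce to a boolean mask; NDVI here is computed as the usual (NIR − red)/(NIR + red) ratio, and the input arrays are assumed to be co-registered rasters.

```python
import numpy as np

def forest_mask(veg_abund, dark_abund, nir, red):
    """Thresholds quoted in the text for forest training patches:
    vegetation abundance > 0.2, dark-object abundance > 0.6, NDVI > 0.7."""
    ndvi = (nir - red) / (nir + red + 1e-12)   # small epsilon avoids division by zero
    return (veg_abund > 0.2) & (dark_abund > 0.6) & (ndvi > 0.7)
```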
In this work, the above approach was tested for the state of California in the United States, for which training, test and relevant ancillary data were available. For the classification of the three LC classes, 200,000 pixels belonging to homogeneous dense forest, homogeneous herbaceous vegetation/grassland, and water bodies (clear water, turbid water and green water caused by eutrophication) were selected as training samples using stratified random sampling on the basis of administrative boundaries. The forest and grassland pixels were also confirmed by following the method of Huang et al. (2008) [51] and by visual interpretation of high-resolution Google Earth TM images. A supervised classification of the S-V-D abundances into dense forest, farmland and water bodies was performed using the random forest (RF) classifier [52]. Figure 3 shows the flowchart of the overall methodology. The same set of training polygons was also used to generate training samples to classify the original WELD data (bands 1-5 and 7) into per-pixel classified maps, in order to assess the advantages of fractional maps over per-pixel classification. Here, the number of trees was set to 250, the node size was set to five and the maximum number of terminal nodes was set to 500. The final parameter values were obtained empirically, based on minimum execution time and memory requirements, once further parameter variations did not increase the accuracy.
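For reference, a scikit-learn configuration mirroring the reported settings might look as follows; the arrays are random stand-ins, and min_samples_leaf/max_leaf_nodes are only analogues of the node-size and terminal-node parameters of the implementation actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (1000, 3))   # per-pixel S-V-D abundances (stand-in)
y_train = rng.integers(0, 3, 1000)       # 0 = forest, 1 = farmland, 2 = water (stand-in)

rf = RandomForestClassifier(n_estimators=250,    # 250 trees, as in the text
                            min_samples_leaf=5,  # analogue of a node size of five
                            max_leaf_nodes=500)  # maximum number of terminal nodes
rf.fit(X_train, y_train)
pred = rf.predict(rng.uniform(0, 1, (10, 3)))    # classify new S-V-D pixels
```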
For the classification of built and constructed impervious surfaces (such as buildings, driveways, sidewalks, roads, parking lots and other man-made surfaces), the common sources of error include underestimation in areas with extensive tree cover and overestimation in areas with extensive bare ground. In this regard, Defense Meteorological Satellite Program Operational Line Scanner (DMSP OLS) nighttime light data and the Visible Infrared Imaging Radiometer Suite (VIIRS) day-night band (DNB) carried by the Suomi National Polar-Orbiting Partnership (NPP) satellite are very useful for monitoring and analyzing human activities and natural phenomena. DMSP OLS and NPP-VIIRS pixels in the nighttime light images aid in classifying urban built-up areas, since they capture illuminated surfaces that stand out against the surrounding dark areas at night [53].
Pixels with radiance values equal to or larger than a threshold that produces the minimum difference between the image-derived value and reference data are considered part of an urban built-up area [54]. In this study, the NPP-VIIRS data at a resolution of 15 arcsec, obtained from the NOAA (National Oceanic and Atmospheric Administration) National Geophysical Data Center, were resampled to 30 m. A threshold at the minimum radiance value was first used to segment the images into urban and non-urban areas for the different cities. The absolute difference between the extracted area and the reference data was recorded, and the process was repeated with increasing thresholds up to the maximum pixel value of the image. The threshold value that produced the minimum difference was selected for urban built-up area extraction. For each city, an FCC (false color composite) of the urban areas was also used to analyze the spatial coherence of the extracted results.
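The threshold search is a one-dimensional sweep; a sketch, assuming a radiance raster and a reference urban area in hectares, is shown below (at 30 m resolution each pixel covers 0.09 ha).

```python
import numpy as np

def best_urban_threshold(radiance, reference_area_ha, pixel_area_ha=0.09):
    """Sweep candidate radiance thresholds and keep the one whose extracted
    urban area is closest to the reference area, as described in the text."""
    best_t, best_diff = None, np.inf
    for t in np.unique(radiance):
        area = np.count_nonzero(radiance >= t) * pixel_area_ha
        diff = abs(area - reference_area_ha)
        if diff < best_diff:
            best_t, best_diff = t, diff
    return best_t
```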
Validation Methods
For the computer-simulated data, the estimated class abundance maps were first compared with the simulated true abundance maps using visual checks. Performance discriminators such as the range of fractional estimates (minimum and maximum abundance values), correlation coefficient (r), RMSE (root mean square error), signal-to-reconstruction error (SRE), probability of success (p_s), and the bivariate distribution function (BDF) were used for validation. A smaller RMSE indicates a better unmixing result, i.e., higher accuracy. SRE and p_s were computed for each algorithm at various noise levels [37]. The quality of the reconstruction of a spectral mixture is measured using SRE ≡ E[||α||₂²]/E[||α − α̂||₂²], expressed in decibels as SRE(dB) ≡ 10 log₁₀(SRE). It gives information by relating the power of the error to the power of the signal; the higher the SRE(dB), the better the unmixing performance. p_s ≡ P(||α̂ − α||₂/||α||₂ ≤ threshold) is an estimate of the probability that the relative error power is smaller than a certain threshold and is also commonly used in the sparse regression community. It indicates the stability of the estimation and is a complementary measure to SRE (which is an average). The BDF is used to visualize the accuracy of prediction by mixture models; points along a 1:1 line on the BDF graph indicate predictions that match the real abundances completely. Additionally, for each of the 74 surveyed field locations in the 11 Landsat scenes of the agricultural landscape, the mean absolute error (MAE) was computed for all the subpixel learning algorithms.
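Both statistics can be computed directly from the true and estimated abundance stacks, e.g. (with per-pixel abundance vectors stacked row-wise):

```python
import numpy as np

def sre_db(alpha_true, alpha_est):
    """Signal-to-reconstruction error in decibels (higher is better)."""
    num = np.sum(alpha_true ** 2)
    den = np.sum((alpha_true - alpha_est) ** 2)
    return 10.0 * np.log10(num / den)

def prob_success(alpha_true, alpha_est, threshold):
    """p_s: fraction of pixels whose relative error norm is below threshold.
    Inputs are (n_pixels x N) abundance arrays."""
    rel = (np.linalg.norm(alpha_est - alpha_true, axis=1)
           / np.linalg.norm(alpha_true, axis=1))
    return float(np.mean(rel <= threshold))
```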
Comprehensive validation of the entire fractional LC maps of forest, farmland, water and urban areas was found to be extremely challenging, as there are still limited datasets that can be used as a reference. Therefore, we focused on the National Land Cover Database 2011 (NLCD 2011) [55], the NLCD 2011 percent tree canopy cover, the NLCD 2011 percent developed imperviousness [56], the North American Forest Dynamics (NAFD) 2010 product [56][57][58], and Google Earth TM imagery for both qualitative and quantitative methods. Qualitative validation included a comprehensive visual assessment with local reference to high spatial resolution Google Earth TM imagery, whereas quantitative methods included a design-based accuracy assessment with the NLCD 2011 products. For the quantitative assessment, the validation points were stratified into four groups based on the NLCD 2011 LC classes: (1) Forest (NLCD classes: deciduous forest, evergreen forest, mixed forest, and woody wetland); (2) Agriculture/farmland (NLCD classes: grassland/herbaceous, pasture/hay, and cultivated crops); (3) Water (NLCD class: open water); and (4) Urban (NLCD classes: developed open space, low intensity, medium intensity, and high intensity). Additionally, a separate per-pixel classified map from WELD was used for comparison with the fractional class maps. A total of 200,000 sampling pixels were chosen at random from 1000 validation sites for each LC type. Pixels with more than 50% presence of any class in the fractional maps were discretized to the respective class, although this threshold can be reduced if a smaller fraction of an LC class is to be accounted for. For urban areas, pixels with more than 30% fractions were considered urbanized, since urban areas comprise a mix of houses, parking spaces, walkways, lawns and roads. The per-class producer's accuracy (PA), user's accuracy (UA), overall accuracy (OA), and kappa coefficient were calculated by comparing the fractional maps with the NLCD 2011 LC map and averaged over 10 iterations.
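The discretization rule (50% in general, 30% for urban) can be written, for instance, as follows; the class stacking order is an assumption for illustration.

```python
import numpy as np

def discretize_fractions(frac, urban_idx, t_general=0.5, t_urban=0.3):
    """frac: (H, W, C) stack of fractional class maps. A pixel is assigned
    the class with the largest fraction among those exceeding their threshold
    (0.5 in general, 0.3 for the urban band); -1 marks unassigned pixels."""
    thresholds = np.full(frac.shape[-1], t_general)
    thresholds[urban_idx] = t_urban
    masked = np.where(frac > thresholds, frac, -np.inf)
    labels = masked.argmax(axis=-1)
    return np.where(np.isinf(masked.max(axis=-1)), -1, labels)
```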
Parameter Settings and Computational Requirements
All the above subpixel learning algorithms (discussed in Section 2) were implemented in MATLAB®. In FCLS, the tolerance was set to 1 × 10⁻⁵ (the algorithms converge at this value without entering an infinite loop) and the value of λ (the weight of the sum-to-one constraint) was set to 20, determined empirically. The tolerance in MFCLS was set to 1 × 10⁻⁷. The parameter λ in SUnSAL was set to 10⁻⁵ with δ = 10⁻⁴; λ in SUnSAL TV was set to 5 × 10⁻⁴, and λ_TV in SUnSAL TV was set to 5 × 10⁻³ for the computer-simulated data and 10⁻³ for the Landsat data. λ in CL SUnSAL was set to 10⁻² both without and with the ASC and ANC activated. Here, the optimal parameter values were obtained after several experiments with various values of λ and λ_TV (see Tables IV-V in [37]; Tables I-II in [10,40]) and were finalized based on the higher SRE. Our observations of the best parameter values coincided with the values in [37,39]. The maximum number of iterations was set to 100, and all other parameters were set to their defaults. UCLS and SPU do not require any parameter settings.
In order to provide an open-source solution and attain high execution speed for the global WELD classification, the FCLS MATLAB® implementation was converted to C++ along with other open-source packages such as the OpenCV (Open Source Computer Vision) package, the Boost C++ libraries, GDAL, and GRASS (Geographic Resources Analysis Support System) GIS on the NASA Earth Exchange (NEX) platform. The NEX framework combines state-of-the-art supercomputing (the Pleiades supercomputer), Earth system modeling, remote sensing (RS) data from NASA and other agencies, workflow provenance, and a scientific social networking platform to deliver a complete end-to-end work environment. Since NEX is built upon the terrestrial observation and prediction system (TOPS) [59], a data assimilation and modeling framework developed at the NASA Ames Research Center, it provides the software libraries and tools required to automate the processing of satellite and climate data and manages workflows for data analysis and modeling studies. Each node in the Pleiades Harpertown compute cluster has 8 GB of memory and eight cores with 3 GHz processors. The deployment of the algorithm was done through the QSub routine and the message passing interface. Each WELD tile was fed to a separate core in parallel on the NEX high-performance computing platform.
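The deployment pattern itself (one tile per core) is scheduler-independent; in Python it could be mimicked with a process pool, as in the structural sketch below with a placeholder worker (the study used QSub and MPI on Pleiades, not multiprocessing).

```python
from multiprocessing import Pool

def unmix_tile(tile_id):
    # Placeholder for: read one WELD tile, run FCLS per pixel, write S-V-D maps.
    return f"tile {tile_id} done"

if __name__ == "__main__":
    tile_ids = range(64)              # hypothetical list of WELD tile indices
    with Pool() as pool:              # one worker process per available core
        results = pool.map(unmix_tile, tile_ids)
```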
Computer Simulations
Figure 4a-c shows the noise-free synthetic abundance maps for endmembers 1, 2 and 3, and Figure 4d-f shows the estimated abundance maps obtained for each signature class from FCLS, with the range of abundance fraction values beneath each panel. Note that the range of abundance values obtained from FCLS is exactly the same as in the synthetic abundance maps, indicating that FCLS was able to perform the subpixel classification accurately. Results from the other algorithms are not presented due to space considerations. Visual examination of the abundance maps obtained from the seven algorithms revealed that they were similar in terms of the relative fractions and from a detection point of view.
Figure 5a-c shows r (statistically significant at the 0.99 confidence level, p-value < 2.2 × 10⁻¹⁶) and the RMSE between the real and estimated abundances obtained from the seven algorithms for all three endmembers at different levels of noise. For the SUnSAL, SUnSAL TV and CL SUnSAL algorithms, three different cases were evaluated: without any constraints, with the ANC, and with both the ANC and ASC imposed on the optimal solution. For the noise-free data, most of the models had a high r (close to 1) and a low RMSE. All the models were robust until noise variance 32, beyond which r gradually decreased and reached a minimum of 0.12, producing a higher RMSE following a hyperbolic curve. FCLS was robust until noise variance 128 for all three endmembers, and it showed the highest r for endmember one at noise variance 256. For endmembers two and three, FCLS, MFCLS, SPU, SUnSAL (ANC + ASC) and CL SUnSAL (ANC + ASC) produced the highest r with a lower RMSE. Figure 5d shows the plot of p_s and SRE(dB) against the various noise levels. Abundance values obtained from the unmixing algorithms were accepted when ||α̂ − α||₂/||α||₂ ≤ threshold, where the threshold (0.0005) is the 99th percentile of ||α̂ − α||₂/||α||₂ of the FCLS abundances at noise variance 32, where most of the methods rendered good performance. Figure 5d reveals that until noise variance 32, FCLS, MFCLS, SPU, fully-constrained SUnSAL and CL SUnSAL had p_s close to 1, beyond which it decreased gradually. FCLS was robust even at noise variance 128, with p_s = 0.6. With respect to SRE(dB), FCLS was the best among all the algorithms at the different noise levels. On the other hand, UCLS, SUnSAL (no constraints) and CL SUnSAL (no constraints), along with SUnSAL TV (without and with constraints), performed poorly, with low r, high RMSE, low p_s and low SRE. The plot of the execution time taken by each algorithm for unmixing the computer-simulated data at different noise levels on a 2.53 GHz Intel Core i5 processor with 8 GB RAM revealed that UCLS, FCLS, MFCLS, SPU and SUnSAL took much less execution time (<3 s), whereas SUnSAL TV and CL SUnSAL took longer (15 s and 7 s, respectively). The points on the BDF plots (not shown here) fell almost on a 1:1 line for all three endmembers. As the noise variance increased, the estimated abundances deviated from the real abundance values.
Considering the various measures of the performance discriminators and execution time in the above analysis, it was found that, overall, FCLS performed best, followed by MFCLS, SPU and SUnSAL with marginally lower accuracies. Even though SUnSAL TV introduces regularization to enforce continuity of abundances among neighboring pixels, it performed the worst of all, with poor accuracy measures and higher execution times. In the next section, the implementation of the algorithms on real-world data for different landscapes is discussed.
Landsat Data-An Agricultural Landscape
Each of the 11 Landsat scenes was unmixed with the S-V-D endmembers using the different models to obtain the abundance estimates. For each scene, the vegetation abundances were compared with the ground-based measurements. FCLS produced the lowest MAE of 0.03 and the highest r of 0.99. All the other methods except fully-constrained SUnSAL TV produced a low MAE of 0.08 and an r of 0.98. SUnSAL TV (ANC + ASC) rendered a high MAE (0.3) and a low r (0.21). FCLS was thus slightly better than the other methods. On average, FCLS took the minimum execution time (20 min) to process each Landsat scene (7321 rows × 8367 columns). SUnSAL and CL SUnSAL rendered good classification accuracies but with a high execution time (7200 s (2 h)/scene). Although time-exhaustive (the algorithm took 118,800 s (33 h)/scene), SUnSAL TV did not produce satisfactory results.
Landsat Data-An Urban Scenario
The unmixed Landsat and WV-2 data of San Francisco with the S-V-D endmembers were compared for accuracy; each 2 m WV-2 pixel covers less than 0.5% of the area within the 30 m full-width, half-maximum of the Landsat point spread function. The WV-2 fractions were convolved with a Gaussian low-pass filter matching the point spread function of the Landsat sensor and resampled to 30 m. Coordinate comparison of the WV-2 and Landsat datasets at many random pixels did not reveal any systematic image registration error. Validation revealed that the unconstrained and partially constrained algorithms, viz. UCLS, SUnSAL, SUnSAL TV and CL SUnSAL, produced MAEs of 0.11, 0.07 and 1.99 for the S, V and D classes, respectively, and r of 0.86, 0.88 and −0.03, respectively. FCLS for the S, V and D classes showed MAEs of 0.03, 0.02 and 0.02, respectively, and r of 0.94, 0.97 and 0.97, respectively, producing the best classification results with the smallest execution time (19 min) to process each Landsat scene (7151 rows × 8241 columns). The SUnSAL family of algorithms was the most time-exhaustive (SUnSAL and CL SUnSAL took 7200 s (2 h)/scene), while fully-constrained SUnSAL TV (with an execution time of 118,800 s (33 h)/scene) did not produce reasonable results.

Note the correspondence between the vegetation abundance and NDVI, which clearly indicates that the vegetation endmember was able to extract the vegetation component from the mixed pixels when investigated visually and spatially. To the best of our knowledge, this is the first attempt to produce time-series global abundance maps at the native 30 m spatial resolution. With the extended WELD project, monthly and annual global products for six three-year periods spaced every five years (1985, 1990, 1995, 2000, 2005 and 2010) are planned, in reverse chronological order (i.e., 2010, ..., 1985).
Subpixel Land Cover Classification
An elementary implementation of our framework produced reasonable results. Figure 7a,b shows the fractional forest cover and farmland/grassland maps of California. Figure 7c shows an FCC (bands 3-5) for a small region of south-central California, where dense forest pixels appear dark green, farmland light green and water bodies black. Figure 7d-f shows the fractional LC maps of the three classes, Figure 7g the classified output from the RF classifier and Figure 7h the forest cover map from NAFD. Comparing Figure 7d,g,h by visual inspection, it is clear that although RF seems to classify the pixels correctly, it has overestimated the classes because of per-pixel classification, while the NAFD product has overestimated forest areas over farmland. Figure 8a shows the spatial distribution of fractional water bodies, and Figure 8b is an FCC of the San Luis Reservoir, with canals and highways along the water body. Figure 8c is the fractional water map, Figure 8d shows the classified map from RF and Figure 8e is the water body as depicted in the NAFD map. Linear canals were detected along with the reservoir in the unmixing-based classification, unlike the outputs from RF and the NAFD product. It is clear that RF has overestimated the urban built-up areas, since it also classifies open areas as urban within and outside the urban extent, causing high commission errors. If large open areas are separated from the urban areas during classification, the total urban extent will decrease, especially when a definite urban boundary is not defined, as in the case of the SF Bay Area.
The minimum fractional forest cover estimated was 0.0016 ha (16 m²), and the total fractional area of forest cover was found to be 8,324,716 ha (19.7%). RF produced 8,978,601 ha (21.18%), NAFD showed 11,672,365 ha (27.53%), the NLCD 2011 LC map indicated 9,550,523 ha (22.54%) and the NLCD percent forest canopy indicated 7,456,588 ha (17.56%). For the farmland/grassland fraction map, the minimum fractional cover was 0.002 ha (20 m²) and the total fractional area was found to be 2,439,087 ha (5.8%); RF gave 2,937,550 ha (6.9%), and NLCD showed 3,337,367 ha (7.8%). For farmland, the reason for the difference among NLCD, unmixing and RF could be the seasonal growth cycles in croplands; NLCD considers these classes as cultivated crops, which appear similar to farmland/herbaceous vegetation. The minimum fraction of water detected was 0.0036 ha (36 m²), which may correspond to small pools in residential areas. Unmixing-based classification estimated 1,888,727 ha (4.46%), RF yielded 1,948,956 ha (4.60%), NAFD indicated 1,769,822 ha (4.17%), and the NLCD 2011 LC map had 1,938,462 ha (4.57%) as the water spread area. NAFD underestimated water bodies, as also seen in Figure 8e, while RF and NLCD had closer values.
For the SF Bay Area, unmixing-based classification using VIIRS data showed 139,152 ha of urban area (most pixels having impervious surface fractions between 15 and 55%), RF gave a much higher estimate of 235,471 ha, NLCD indicated 231,179 ha, and the NLCD percent developed imperviousness layer had 141,522.73 ha (most pixels having impervious surface fractions between 40 and 70%). One reason for the difference in the range of impervious surfaces between unmixing and the NLCD impervious product could be the underlying methods: FCLS (used in unmixing) versus a regression tree (used in NLCD). Also, FCLS imposes a sum-to-one constraint on the endmembers, whereas the NLCD method does not. For the urban impervious surface in the LA area, unmixing-based classification yielded 189,179 ha, RF gave 315,496 ha, the NLCD LC map resulted in 313,622 ha and the NLCD percent developed imperviousness layer indicated 191,134 ha. It can be seen that the unmixing and NLCD impervious percent values were very close. However, hard classification almost always overestimated the LC area compared to the fractional maps. These results also show that unmixing provides a physical basis for quantifying the spectral characteristics of LC and distinguishing spectrally heterogeneous areas from more spectrally homogeneous LC. One benefit of defining urban extent on the basis of spectral heterogeneity is the ability to generate a range of verifiable extent estimates encompassing different definitions of urban areas, given that a unique spectral signature for urban LC is difficult to obtain. The above approach attempts to eliminate the ambiguity resulting from varying administrative and political definitions of urban areas, as in many cases the exact definition of urban/non-urban remains ambiguous. For instance, whether a park near the Golden Gate Bridge in San Francisco is urban or non-urban remains a question; the park was classified as non-built-up in the original WELD data using RF, and beyond this, the lights in the park also contribute to the light emissions of the city.
Validation: For each of the 1000 validation sites shown in Figure 10 (each point represents ~10 validation sites), reference datasets were derived for each LC type to create a confusion matrix. The results are tabulated in Table 1, which indicates that unmixing-based classification produced a higher overall TPR (true positive rate) of 91% than the per-pixel classification using RF (85%). Table 2 lists the PA and UA. The OA and kappa for unmixing-based classification (91.30% and 0.89) were higher than for the RF classifier (85.31% and 0.83). While the RF class areas are close to the NLCD maps, RF wrongly classified many pixels belonging to barren/open land as built-up. A few water pixels were wrongly classified by the unmixing-based classification, so its PA decreased marginally; some confusion between water and shadow can arise, since both classes have pixels that are very dark in all the spectral channels. However, given the same geo-coordinates of the training pixels for classification, the UA increased from 87.43% to 95%. In the unmixing-based classification output, the PA increased for forest (3.2%), farmland (9.76%), urban built-up (SF) (1.09%) and urban built-up (LA) (12.15%), and the UA increased for forest (6.6%), water bodies (7.7%), urban built-up (SF) (11%) and urban built-up (LA) (9.6%). On the other hand, the UA decreased (~1%) for farmland.
Unmixing was intended to improve classification accuracy by correctly classifying pixels that were likely to be misclassified by the RF classifier. Therefore, a cross-comparison of the two classified images located the pixels that were assigned different class labels at the same location. When validated, these differently labeled pixels revealed a 6% (29,830,877 pixels) improvement in unmixing-based classification. Had the WELD data been classified using a hard classification technique such as RF, these errors would have accumulated across all the pixels.
The pixel-to-pixel correlation between the unmixed forest fraction and the NLCD forest cover fraction was found to be 0.89. The SF and LA built-up fractions obtained through NPP-VIIRS showed positive correlations of 0.87 and 0.91, respectively, with the NLCD percent developed imperviousness surface layer. The above validation approach was able to detect more than three-fourths of the class fractions with relatively low levels (i.e., 15-41% for the validation sites) of false alarms.
Along the sampling points shown in Figure 10, 1000 patches of size 10 × 10 pixels (300 m × 300 m on the ground) were selected for each LC type. The percentage of a class in each patch was computed as the ratio of the area of the class within the patch to the total area of that patch, for the unmixing-based classified map, the RF-based framework, and the NLCD and NAFD maps; a code sketch of this computation follows below. Figure 11 shows the class fractions for 100 of the 1000 patches (for visual clarity), with the class means indicated by straight lines to visualize the error rates. Unmixing-based classification showed error rates of 15%, 0% and 20%, r of 0.84, 0.99 and 0.76, and RMSE of 0.23, 0.03 and 0.28 with the NLCD, NLCD fractional forest cover and NAFD products, respectively. The unmixing-based output was clearly closest to the NLCD fractional forest cover, with a maximum overestimation of 5% among the 1000 patches. Understanding the degree of underestimation/overestimation can lead to better estimates of tree cover and a better understanding of the potential limitations associated with current classification estimates. RF was more positively correlated with the NLCD map (0.84) than with NAFD (0.76), with RMSEs of 0.21 and 0.26 and error rates of 10% and 15%. It is to be noted that the agreement between the NLCD and NAFD products was found to be only 74% for the 1000 patches. For the farmland class, unmixing-based classification was highly correlated (r = 0.94) with NLCD, with an RMSE of 3, and a maximum underestimation of 6% was observed between the fractions of both maps, with a mean error rate of less than 1% (as evident from Figure 11b). This difference was attributed to fine-scale variations in farmland (small patches) that were not detected by the unmixing method [60]. For RF, an r of 0.76 was found with the NLCD map, the RMSE was 35.17, and a maximum difference of 85% was found between the fractions of both sets of patches, with an error rate of ~30% (RF greatly overestimated farmland).
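The per-patch class percentage is a block mean over non-overlapping 10 × 10 windows; a sketch for a binary class map follows.

```python
import numpy as np

def patch_fractions(class_map, patch=10):
    """Fraction of pixels of a class within non-overlapping patch x patch
    windows (10 x 10 pixels = 300 m x 300 m at 30 m resolution)."""
    binary = class_map.astype(float)
    r = class_map.shape[0] - class_map.shape[0] % patch
    c = class_map.shape[1] - class_map.shape[1] % patch
    blocks = binary[:r, :c].reshape(r // patch, patch, c // patch, patch)
    return blocks.mean(axis=(1, 3))
```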
For the water class, a very high r of 0.99, an RMSE of 0.98 and a maximum underestimation of 3% were observed between patches of unmixing-based classification and NLCD. With the NAFD product, r was 0.92, the RMSE was 11.23, and a maximum difference of 85% between patches was observed. The overall performance of RF was similar to the unmixing-based classification, with very small mean error rates among the different products. Figure 11d,e shows the fractions of urban region for the corresponding 100 patches of the SF Bay Area and the LA area. NLCD and RF seem to overestimate both urban areas. The r between unmixing-based classification and NLCD percent developed imperviousness was higher (0.89 for the SF Bay Area and 0.91 for LA), with lower RMSE (3.16 for the SF Bay Area and 3.07 for LA), than between the RF map and NLCD (r of 0.79 for the SF Bay Area and 0.85 for LA, and RMSE of 23 and 16, respectively), with a maximum underestimation of ~8% observed between the fractional maps and the NLCD fractional maps for the 1000 patches. For urban areas, the potential for greater underestimation may be exacerbated as the urban surfaces become more fragmented or unevenly distributed. On the other hand, RF produced significantly less accurate results, with a higher average error rate of around 10% (see Figure 11d,e), lower r and higher RMSE with both NLCD products for the SF Bay and LA areas. The poor performance of the RF maps can be attributed to hard classification, where a pixel is classified in its entirety as either urban or non-urban depending on whether most of the pixel area is dominated by impervious surfaces.
LC studies have important climatic, hydrologic, biophysical, ecological and socio-economic implications for the environment. To date, most studies involve a simple per-pixel classification of RS data for LC maps, since the automated characterization of large-scale subpixel LC remains a challenge due to the inherent complexity and variability of vegetation dynamics and the urban environment. In per-pixel classification, objects are classified based on their spectral characteristics; however, in most practical applications each image pixel is composed of different (multiple) objects, which cannot be resolved by per-pixel classification methods. The purpose of this study was to examine (i) how well unmixing models perform in processing Landsat data with global endmembers, and (ii) how different LC class fractions can be obtained from the S-V-D abundance maps.
The relative comparison of one algorithm with the others was challenging due to the lack of standardized data and the absence of defined rules [25]. In the Fresno area of central California, representing an agricultural scenario, the acquisitions were taken during clear-sky conditions (except for three days) that also coincided with the approximate time of the satellite overpass, which took into account the illumination effect. To avoid geolocation errors caused by misregistration, atmospheric effects, the presence of background mixed with substrate, etc., a matrix of 3 × 3 pixels centered over the GPS location was used. Thus, estimates of ground fractional cover from digital photographs obtained using image segmentation represented the field conditions well within the Landsat IFOV (instantaneous field of view). Since the field data were gathered in the absence of topography, soils from two different field conditions may differ, causing minor errors in the abundance estimates of substrate and dark objects. This difference is anticipated to be greater at low vegetation fraction cover than at dense vegetation sites. The image-derived fraction estimates closely matched the ground observations of sparse vegetation conditions, reflecting the fact that the vegetation fraction from the image is modeled only for the portion that is illuminated by sunlight, and the shaded portions of the canopy are likely to be assigned to the dark fractions. Shadows are not a LC type but the result of the shaded portions of the vegetation canopy above soil, or the consequence of tall buildings in urban areas and hillocks in mountainous terrain. Dark shadows often appear similar to tarred roads, deep water bodies, etc. in RS imagery and are therefore difficult to separate. Although shadows are implicitly modeled in RTM, they cannot be easily separated from other dark objects in the scene using LMM. Therefore, in this work shadows have been placed in the dark objects category, which also encompasses absorptive substrate materials and clear deep water. The algorithms with the available global endmembers accounted for the variance in the soil by the substrate and dark object fractions, given that the overall crop conditions were very uniform.
For the SF area, which depicted an urban scenario, most of the urban pixels were mixed with vegetation (urban trees), roads and shadows, or appear as dark objects due to the varied materials used in the construction of the terraces. Nevertheless, this study showed that urban reflectance could be adequately modeled with a three-endmember mixture model using Landsat and WV-2 data. The S-V-D endmember model characterized the fractions of illuminated vegetation, substrate or impervious materials, and shadowed or non-reflective surfaces such as water, roofing tar, etc. High substrate fractions are reasonable estimates of the impervious surface in developed land in temperate and tropical regions, as pervious surfaces are mostly covered by some kind of vegetation. Exposed substrates are also most likely to be impervious. In such cases, the vegetation fraction can be used as a proxy for fractional pervious surfaces because vegetation cannot thrive on impervious surfaces. Therefore, the presence of vegetation implies the presence of some amount of pervious surface. Hence, using detectable vegetation as an indicator of the permeable surface can account for the range of different natural and manmade surfaces [61].
The abundance estimation approach is very different from the classical per-pixel classification approach, where each pure pixel is assigned to one and only one class. Owing to the consistency in the accuracy estimation procedures, standards and results, this study gives a clear picture of the accuracies of the different algorithms. Overall, FCLS performed better and was more robust and computationally faster than the other techniques. Nevertheless, MFCLS, SPU, fully constrained SUnSAL and CL SUnSAL also fitted the data very well with equally high performance, whereas SUnSAL TV performed poorly. Despite our effort to conduct comprehensive and rigorous comparative analyses of various constrained unmixing algorithms, this study has certain limitations. The first limitation is that the number of endmembers used in the library was three. It is therefore acknowledged that fractional errors can occur when few endmembers are used, resulting in spectral information that cannot be accounted for by the existing endmembers. Fractional errors can also occur when too many endmembers are present, in which case minor departures between the measured and modeled spectra are often assigned to an endmember that is used in the model but not actually present. However, since the set of endmembers was derived for global Landsat data, they have wider usage than image- or region-specific endmembers. In a real scenario, any landscape is likely to contain a mixture of the S-V-D endmembers. They can easily be used to assess the state, distribution and quantification of first-level LC classes for obtaining spatio-temporal information from the large repository of Landsat data. Global endmembers have diverse applications from continental to global LC mapping, whereas local endmembers are restricted to a particular geographical location and may not be readily available for inaccessible terrain. Therefore, the global endmembers can be used with monthly WELD to study changes in vegetation and urban areas. The second limitation is that the fractional maps do not account for endmember variability [62]; the discussion of this topic is beyond the scope of this work.
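As an illustration of the unmixing step itself, a common way to implement FCLS is to enforce the sum-to-one constraint through a heavily weighted row of ones appended to the endmember matrix, followed by a non-negative least squares solve. The sketch below follows that route (the random 6-band, 3-endmember library is purely illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, delta=1e3):
    """Fully constrained least squares: min ||E a - y|| s.t. a >= 0, sum(a) = 1.
    The sum-to-one constraint is approximated by a heavily weighted ones row."""
    bands, p = E.shape
    E_aug = np.vstack([E, delta * np.ones((1, p))])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

rng = np.random.default_rng(0)
E = rng.random((6, 3))                # columns: substrate, vegetation, dark objects
truth = np.array([0.5, 0.3, 0.2])
print(fcls(E, E @ truth))             # recovers ~[0.5, 0.3, 0.2]
```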
Although global LC data exist at spatial resolutions of 300 m and 1000 m, and at 30 m resolution at limited frequency, the subpixel mapping approach is now a feasible option for the next generation of global LC products. As a case study, a consistent, robust and highly scalable methodology was presented for the characterization of different LC classes. Applying this method to the WELD database now provides a robust automation module for large-scale mapping of changes in LC. While there are commercial GIS software packages (such as ENVI®, ERDAS IMAGINE®, etc.) for unmixing/classification, they are not scalable across millions of scenes in an automated manner. The limited requirements of the presented approach make it suitable for estimating vegetation, water and urban fractions globally. Future extensions of this study will include (i) analysis of the models with MODIS data, which may reveal additional differences between the algorithms' performance; and (ii) use of the S-V-D abundance maps for the classification of other spatial features such as buildings, highways, crop types, etc. The fractional LC maps can also be used as input to ensemble classifiers such as deep learning frameworks [63,64] to further improve classification accuracy.
Conclusions
In this paper, an evaluation of six state-of-the-art constrained subpixel learning algorithms was performed on computer-simulated data and on Landsat data of both an agricultural landscape and an urban scenario. The analysis revealed that fully constrained least squares (FCLS) outperformed all the other techniques (such as sparse regression, signal-subspace and geometrical methods) with the highest classification accuracy and the smallest execution time. FCLS was used to produce global abundance maps of the substrate, vegetation and dark object (S-V-D) endmembers with global Web-Enabled Landsat Data. Further, the S-V-D abundance maps were classified into dense forest, farmland/grassland, water bodies and urban areas over California with the random forest classifier. Validation of these fractional land cover maps against the National Land Cover Database (NLCD) and North American Forest Dynamics (NAFD) products revealed 91% accuracy and showed a 6% improvement over a per-pixel classified map, making this approach feasible for land cover mapping on a global scale. The current study may serve as a pathfinder for mapping land cover classes from abundance estimates. LC mapping from regional to global levels will continue to improve over time and will serve as a key data layer for various scientific studies.
Figure 1. Field data collection site in the San Joaquin Valley, central California, with surveyed boundaries (black polygons) from which ground fractional cover was derived for validation (left). Part of San Francisco city with a mix of substrate, vegetation, roads, shadow and dark objects (right).
In this regard, Defense Meteorological Satellite Program Operational Line Scanner (DMSP OLS) nighttime light data and the Visible Infrared Imaging Radiometer Suite (VIIRS) day-night band (DNB) carried by the Suomi National Polar-orbiting Partnership (NPP) satellite are very useful for monitoring and analyzing human activities and natural phenomena. DMSP OLS and NPP-VIIRS pixels in the nighttime light images aid in classifying urban built-up areas, since they capture illuminated surfaces against the surrounding dark areas at night [53].
Figure 3. Overall methodology of classification.
Figure 4. (a-c) Synthetic abundance maps for endmembers 1, 2 and 3; (d-f) abundance maps obtained from FCLS (fully constrained least squares). Black indicates the absence of a particular class (the minimum abundance value) and white indicates the full presence of that class in a pixel (the maximum abundance value). Intermediate shades of gray represent a mixture of more than one class in a pixel.
Figure 5. Correlation coefficient (r) and RMSE (root mean square error) between abundance values obtained from the seven algorithms and the true abundance for (a) endmember 1, (b) endmember 2, (c) endmember 3, and (d) plot of probability of success (p_s) and SRE (signal-to-reconstruction error in dB) at different levels of noise.
Figure 6 is a mosaic of 8003 scenes showing the global (a) substrate, (b) dark objects, (c) vegetation abundance maps and (d) NDVI generated from the 2011 annual WELD. Each WELD scene is composed of 5295 rows and 5295 columns; therefore, a single snapshot of global data consists of 224.3 billion pixels with six spectral dimensions, processed in 29 min utilizing 200 cores.
Figure 6. Global abundance maps of (a) substrate, (b) dark objects, (c) vegetation and (d) NDVI (Normalized Difference Vegetation Index) obtained from the 2011 annual WELD.
Figure 7. (a) Forest fractional map and (b) farmland/grassland fractional map for the state of California. (c) FCC (false color composite) from Landsat (bands 5-4-3 as Red-Green-Blue) showing a forest patch, grassland and water bodies; (d) forest fractional map for the corresponding area in (c); (e) farmland/grassland fractional map; (f) water fractional map; (g) classification of original WELD data by RF (random forest); and (h) forest cover map from the NAFD (North American Forest Dynamics) product.
Figure 9a,b shows the RGB composites (bands 3-5) for the San Francisco (SF) Bay Area, popularly known as Silicon Valley, and the Los Angeles (LA) area; Figure 9c,d shows the corresponding NPP-VIIRS nighttime data; Figure 9e,f shows the fractional urban built-up areas obtained using the combination of substrate and nighttime light data; and Figure 9g,h shows the corresponding built-up areas obtained from the classification of the original WELD bands using RF.
Figure 8. (a) Spatial distribution of the fractional water bodies in California; (b) RGB composite of the Landsat bands (3-5) showing the San Luis Reservoir in central California; (c) fractional map of the San Luis Reservoir; (d) classification of original WELD data by the RF classifier; and (e) the San Luis Reservoir from the NAFD product.
Figure 9. The urban built-up areas extracted from the substrate fractional map and NPP-VIIRS data of the SF Bay Area and LA area. Note: (a,b) are Landsat RGB composites (bands 3-5); (c,d) are NPP-VIIRS data; (e,f) are the extracted urban areas; (g,h) are the urban settlements obtained from classification of original WELD data by the RF classifier.
Figure 10. Map showing 100 validation points chosen over California for forest (dark green circles), farmland (light green circles), water bodies (blue boxes) and urban built-up classes (red boxes), each representing multiple (~10) points.
Figure 11. (a) Fraction of forest cover, (b) farmland, (c) water bodies, (d) urban built-up in the SF Bay Area and (e) the LA area from unmixing, RF, NLCD and NAFD. The straight lines indicate the means of the corresponding classes.
Mechanical Properties of Functionally Graded Concrete Lining for Deep Underground Structures
With the mining depth of coal resources increasing, the thickness of traditional lining for deep mines becomes large. The bearing capacity of the outer lining cannot be fully utilized, so a new type of functionally graded lining structure (FGL) with radial Young’s modulus varying in gradient is proposed. In this study, through theoretical analysis and numerical simulation, the mechanical properties of the functionally graded lining were studied, including the characteristics of the elastic working state and the ultimate bearing state. The influences of the structural parameters of the functionally graded lining, including the inner radius, the thickness, Young’s modulus, the compressive strength of the concrete, Poisson’s ratio, and the number of layers on the mechanical properties were analyzed. The calculation formula of the ultimate bearing capacity of the lining and the calculation formula of the maximum tangential strain at the time of lining failure were put forward. The accuracy of the formula was verified by comparing with the numerical simulation results. The research results provide the basis for the construction of the design theory of a functionally graded lining structure and have great engineering significance for the construction of the kilometer-deep mines in the future.
Introduction
With the depletion of shallow coal resources in China, coal mining is developing in the direction of greater depth and larger scale. The annual production capacity of the new and expanded large vertical mines has reached 10 million tons, and the maximum mining depth has reached 1500 m. At present, the total proved coal resources are about 5.57 trillion tons, of which 2.95 trillion tons lie below 1 km depth, accounting for 53% of the total. So far, 47 mines with mining depths greater than 1000 m have been built in China. In the next 5-10 years, more than 30 coal mines with depths of more than 1 km will be built. The in situ stresses increase with depth; for example, the vertical stress at 1910 m depth is about 43.5 MPa [1]. Therefore, the continuous deepening of mining means that lining structures with higher bearing capacity must be adopted. In order to ensure that the lining works in a safe state, there are usually two ways to improve the bearing capacity of the lining: increasing the material strength and increasing the thickness of the lining. For the first way, the bearing capacity of the lining can be significantly improved by increasing the concrete strength; for example, if the concrete strength is increased by 10 MPa, the bearing capacity of the lining will be increased by about 13.8% [2]. However, using high-grade cement to improve the strength of concrete increases the cost and leads to brittle failure of the lining. For the second way, the improvement obtained by increasing the lining thickness is limited. Statistics show that rock excavation accounts for 40%-60% of the total cost in shaft construction. For shafts extending to 1000 m depth, an increase of 10 mm in the lining thickness will increase the total cost by 1% for reinforced concrete and 0.25% for plain concrete [3]. Both theoretical research and practical experience show that using a traditional lining structure in the complex strata of over-kilometer-deep mines faces the problem of increasing material and construction costs. The thickness of the traditional lining leaves the bearing capacity of the outer half of the material ineffective, which leads to a serious waste of economy and resources.
To solve this problem, the functionally graded material (FGM) concept is used to design the lining structure in this study. In 1987, Japanese scientists first proposed the concept of ceramic functionally graded materials. The physical properties of these materials change gradually in space, so there is no sudden change in material properties, which can effectively avoid and reduce stress concentration, meet the different needs of different parts of the structure, and finally achieve optimal overall performance of the structure [4]. The first developed FGM uses ceramic materials at the hot end to improve high-temperature resistance, and metal materials at the cold end, in the presence of liquid nitrogen and liquid oxygen, to provide good thermal conductivity and mechanical strength. From the micro point of view, the continuous change in the interlayer material from the hot end to the cold end exhibits nonuniform material properties [5]. Later on, its applications were expanded to components of chemical plants, solar energy generators, heat exchangers, nuclear reactors and high-efficiency combustion systems [6]. At present, the application fields of FGMs include commerce, electronics, automobiles, defense, aerospace, medical treatment, thermal barrier coatings, optoelectronics, industry and other fields [7]. In the field of construction engineering, the concept of functionally graded materials has been widely applied in composites technology to develop more efficient materials [8][9][10][11]. Under specific application requirements, materials with different properties are selected, and advanced composite technology is adopted so that the composition and structure of the material change continuously in gradient, and the properties and functions of the material also change continuously along the thickness direction. According to the distribution of component phases, functionally graded materials can be divided into two types, that is, continuous or discontinuous (stepwise or layered) gradation of materials. In addition, based on manufacturing techniques, these can be further classified as thin and bulk functionally graded materials [12].
This study proposes the functionally graded lining (FGL) as a new type of rock support for kilometer-deep underground engineering construction, in order to make full use of the bearing capacity of lining materials (see Figure 1). It is assumed that the lining is homogeneous and isotropic in the tangential direction, and that the material parameters change in gradient in the radial direction. Scholars at home and abroad have studied the theory of stress and displacement solutions of functionally graded thick-walled hollow cylinders under different stress conditions. Tutuncu [13] gave closed solutions for the stress and displacement of functionally graded cylindrical and spherical vessels under internal pressure by using the microelement theory of elasticity.
Abdelhakim [14] obtained the analytical solution for the radial displacement and stress distribution of a hollow cylinder under uniform internal and external pressure, assuming that the elastic modulus changes nonlinearly along the radial direction of the material. Theotokoglou and Ioannis [15,16] gave the exact analytical solution for a radially nonuniform spherical shell with equal thickness and Young's modulus distributed as power and exponential functions in plane and spherical coordinates, and derived the displacement and stress fields. Under the action of a uniform magnetic field and internal pressure, Li [17] derived analytical solutions for the radial displacement, strain and stress components of a thick-walled hollow cylinder, and for the disturbed magnetic field vector, using the Voigt method. Ahmad [18] used the meshless local Petrov-Galerkin method to study the dynamic response of a functionally graded viscoelastic hollow cylinder subjected to thermo-mechanical loads. Based on three-dimensional elasticity theory, Ye [19] analyzed the three-dimensional hygrothermal vibration of a multilayer cylindrical shell under general boundary conditions. Yavar [20] obtained the field equations and general solutions of axisymmetric thick shells made of functionally graded incompressible hyperelastic materials. Wang [21] deduced the two-dimensional elastic solution under the generalized plane strain assumption and gave the solution process for the separable problem of a multilayer cylinder. Nie [22] used the Airy stress function to derive the exact plane-strain solution of a functionally graded hollow cylinder with isotropic incompressible linear elastic materials on the inner and outer surfaces under different boundary conditions. Chen [23] proposed the transmission matrix method to obtain theoretical stress and displacement solutions for thick-walled cylinders made of multilayer functionally graded materials with arbitrary Young's modulus. Shi and Xiang [24,25] obtained the hypergeometric equation of the contact pressure between layers by the displacement method and obtained the exact solution of an n-layer elastic hollow cylinder under compression of the inner and outer surfaces. Zhang [26] proposed a method to solve for the plane-strain stress and displacement on the inner and outer surfaces of a radially inhomogeneous cylinder.
In this study, based on previous theoretical research results, the mechanical properties of the functionally graded lining were studied systematically and comprehensively through theoretical analysis and numerical calculation. By changing a single material parameter at a time, the effects of the lining parameters on mechanical properties such as stress, strain and displacement were analyzed. Finally, the calculation formula for the ultimate bearing capacity of the lining and the calculation formula for the maximum tangential strain of the lining inner edge are put forward. The accuracy of the formulas was verified by comparing the calculation results with the numerical simulation results.
Theoretical Basis and Experimental Design of FGL
Exact Solution of the N-Layer FGL.
The stress problem of the shaft lining is equivalent to that of a thick-walled hollow cylinder. The calculation model of the N-layer functionally graded lining is shown in Figure 2. It is specified that E_i represents the elastic modulus of layer i and R_i represents the outer radius of layer i. The thickness of each layer is determined by the number of layers, with all layers of equal thickness. It is assumed that each layer has the same value of Poisson's ratio ν. The lining is only subjected to the uniform load Q on the outer surface. For convenience, the symbols N_{i+1} = E_{i+1}/E_i and c_{i+1} = R_{i+1}/R_i are introduced. Compressive stress is defined as negative in this study.
Based on Lamé's solution, the radial stress (σ_r)_i, tangential stress (σ_θ)_i and displacement (u_r)_i of layer i can be written in terms of the constants A_i, C_i, I_i and K_i to be determined. For reasons of symmetry of the lining, I_i = K_i = 0. The radial stress and displacement at the interfaces between layers must be continuous, which yields the relationship of the extrusion stress between adjacent layers. Here, q_{i-1} (i = 2, ..., N) is the extrusion stress between layer i - 1 and layer i, and q_0 and q_N are the inner and outer pressures on the lining, respectively. Considering q_0 = 0 and q_N = Q, the system of equations for the interface stresses follows.
Combining Equations (1)-(3), the relationship between (A_i, C_i) and (q_{i-1}, q_{i+1}) is obtained. In any layer the radial stress must match the interface pressures; substituting (5) into (7), Equation (5) can be rewritten so that the interlayer pressures are expressed recursively. To express q_i, q_0 and q_1 are defined first; according to (8), the recursion coefficients satisfy Δ_0 = -1 and Δ_1 = δ_1/((c_1^2 - 1)c_2^2). Considering q_N = Q, the extrusion stress of each layer is obtained, and Equation (13) gives the exact solution for the extrusion stress between two adjacent layers of the N-layer thick-walled cylinder.
Therefore, the stress and displacement solutions at any position of the thick-walled cylinder can be obtained from Lamé's solution [24].
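The closed-form recursion above can also be cross-checked numerically. A minimal sketch (plane strain assumed; the moduli and radii below are illustrative, not the paper's Table 1 values) assembles the per-layer Lamé constants, the interface continuity conditions and the pressure boundary conditions into one linear system:

```python
import numpy as np

def n_layer_cylinder(E, nu, R, Q):
    """N-layer thick-walled cylinder under external pressure Q (plane strain).
    Layer i spans R[i] <= r <= R[i+1]; per layer, Lame's solution gives
    sigma_r = A - B/r**2, sigma_theta = A + B/r**2 and
    u_r = (1 + nu)/E * ((1 - 2*nu)*A*r + B/r).
    Compressive stress is taken as negative, as in this study."""
    n = len(E)
    M, b = np.zeros((2 * n, 2 * n)), np.zeros(2 * n)
    M[0, 0], M[0, 1] = 1.0, -1.0 / R[0] ** 2            # sigma_r(R_inner) = 0
    M[1, -2], M[1, -1] = 1.0, -1.0 / R[-1] ** 2         # sigma_r(R_outer) = -Q
    b[1] = -Q
    row = 2
    for i in range(n - 1):                              # interface at r = R[i+1]
        r = R[i + 1]
        # radial stress continuity
        M[row, 2*i:2*i+2], M[row, 2*i+2:2*i+4] = [1, -1/r**2], [-1, 1/r**2]
        # radial displacement continuity
        ci, cj = (1 + nu) / E[i], (1 + nu) / E[i + 1]
        M[row+1, 2*i:2*i+2] = [ci*(1 - 2*nu)*r, ci/r]
        M[row+1, 2*i+2:2*i+4] = [-cj*(1 - 2*nu)*r, -cj/r]
        row += 2
    return np.linalg.solve(M, b).reshape(n, 2)          # rows: (A_i, B_i)

# illustrative 3-layer example: inner radius 4 m, total thickness 1 m
AB = n_layer_cylinder(E=[20e9, 28e9, 36e9], nu=0.2,
                      R=[4.0, 4.333, 4.667, 5.0], Q=15e6)
```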
Failure Criterion of Concrete.
Many concrete strength criteria have been proposed, and scholars select appropriate concrete failure criteria according to their own requirements, including the von Mises strength criterion [27,28], Mohr-Coulomb strength criterion [29,30], Hsieh-Ting-Chen strength criterion [31,32], Bresler-Pister criterion [33], Willam-Warnke failure criterion [34], Drucker-Prager criterion [35,36], Kupfer strength criterion [37,38], Guo-Wang multiaxial strength criterion [39,40], multiparameter unified strength criterion [41] and multidirection stress state failure criterion [42,43]. The lining is mainly subjected to external confining pressure, so the lining concrete is under multiaxial stress, and the multiaxial strength of concrete should be fully exploited [44]. The power-function failure criterion [45], which is in accordance with the experimental results and is expressed dimensionlessly in terms of octahedral stresses, is adopted. According to test results at home and abroad, the constant values a = 6.9638, b = 0.09, c_t = 12.2445, c_c = 7.3319 and d = 0.9297, which can be applied to all kinds of test conditions and all multiaxial stress ranges, were obtained. The calculation accuracy of this failure criterion is relatively high [46].
Based on the abovementioned multiaxial failure criterion, the envelope function f of lining failure is established. When f = 0, the lining concrete reaches the critical state of failure; when f > 0, the lining works normally; when f < 0, the lining is damaged.
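The criterion is expressed in terms of the octahedral normal and shear stresses. A minimal sketch of those invariants, which are the inputs to the envelope function f (the exact closed form of f and its fitted constants follow the original criterion [45]), is given below:

```python
import numpy as np

def octahedral_stresses(s1, s2, s3):
    """Octahedral normal and shear stresses from the principal stresses."""
    sig_oct = (s1 + s2 + s3) / 3.0
    tau_oct = np.sqrt((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 3.0
    return sig_oct, tau_oct

# compressive stresses negative, as in this study (values illustrative, in MPa)
sig_oct, tau_oct = octahedral_stresses(-5.0, -12.0, -30.0)
# f(sig_oct, tau_oct; a, b, c_t, c_c, d) is then evaluated against the fitted
# constants to decide f > 0 (elastic), f = 0 (critical) or f < 0 (damaged)
```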
Function of Young's Modulus of FGL.
The material parameters of the functionally graded thick-walled hollow cylinder are assumed to be graded in the radial direction: Young's modulus of the material changes radially, while Poisson's ratio is assumed constant [13][14][15][16]. The compressive strength of the lining concrete remains the same. Common function forms include the linear function [24,25], exponential function [15,22], power function [20,22] and other forms [23,26]. For the purpose of plastic yielding of the whole lining at the same time, Young's modulus function is obtained from the required stress distribution through the method of back analysis [47]. Based on the unified strength theory, the resulting function form contains an integral constant C [3], which is determined by specifying Young's modulus E* at a reference position.
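In a layered realization, the continuous modulus profile is discretized into n equal-thickness rings. The sketch below assumes, purely for illustration, a power-law profile E(r) = E*(r/r*)^m (one of the common forms cited above); the paper's own profile follows from the back analysis:

```python
import numpy as np

def layered_moduli(E_star, r_star, m, r_in, r_out, n):
    """Discretize a radially graded Young's modulus into n equal-thickness
    layers, evaluating an assumed power-law profile E(r) = E_star*(r/r_star)**m
    at the layer midpoints."""
    edges = np.linspace(r_in, r_out, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    return edges, E_star * (mid / r_star) ** m

edges, E_layers = layered_moduli(E_star=30e9, r_star=4.5, m=2.0,
                                 r_in=4.0, r_out=5.0, n=3)
```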
Test Design.
The ultimate bearing capacity of FGL is governed by a set of structural parameters, where t and n are the lining thickness and the number of layers, respectively. In order to study the influence of the various structural parameters on the mechanical properties of FGL, a fixed set of parameters is first taken as the typical parameters and serves as the control group. The values of the FGL calculation parameters are listed in Table 1. When studying the mechanical properties of FGL in the elastic state, the lining is assumed to be a linear elastic material and its failure is temporarily not considered. The load is set to Q = 15 MPa.
Analysis of Mechanical Properties of FGL in the Elastic Working State

The radial stress distribution of FGL is similar in trend to that of a single-layer homogeneous lining [24], and the tangential stress is more important in lining design, so only the tangential stress is analyzed. In the following analysis, the reference path is defined as the path from the inner side to the outer side of the lining on its radial section. By normalizing the reference path length and the tangential stress, we obtain Δt/t and σ_θ/Q, respectively. Figure 3 illustrates the variations of tangential stress, radial displacement and tangential strain of FGL with the inner radius. Over the lining section, the distributions of tangential stress, radial displacement and tangential strain of the 3-layer functionally graded lining obtained for different inner radii are similar. Viewed over the whole functionally graded lining, the tangential stress increases gradually from the inside to the outside of the lining; viewed within each layer, the tangential stress decreases gradually from the inside to the outside, as in a homogeneous lining (Figure 3(a)). The radial displacement of FGL decreases gradually from the inner side to the outer side of the lining, and its variation is very small (Figure 3(b)). The tangential strain of FGL decreases markedly from the inner side to the outer side of the lining, and the tangential strains at the inner and outer edges differ considerably (Figure 3(c)).
Influence of Inner Radius
As seen in Figure 3, at the inner and outer edges of the functionally graded lining, as the inner radius gradually increases, both the tangential stress and the tangential strain of the lining increase linearly, while the radial displacement increases nonlinearly. When the inner radius of the lining is 4 and 8 m, the tangential stress at the inner edge is 53.84 and 93.22 MPa, the tangential strain at the inner edge is 1580 and 2620 με, and the radial displacement at the inner edge is 6.31 and 20.98 mm, respectively. When the inner radius of the lining is doubled, the tangential stress, tangential strain and radial displacement at the inner edge increase by 73.14%, 65.8% and 232.5%, respectively.

Influence of Thickness t of FGL.
Figure 4 illustrates the variations in tangential stress, radial displacement and tangential strain of FGL with thickness. Over the lining section, the distributions of tangential stress, radial displacement and tangential strain of the 3-layer functionally graded lining obtained for different thicknesses are similar. With increasing lining thickness, the tangential stress, radial displacement and tangential strain all decrease. The variation of the lining tangential stress tends to a constant value as the lining thickness increases; therefore, the reduction of stress concentration obtained by increasing the thickness of the 3-layer functionally graded lining is limited, as for a single-layer lining (Figure 4(a)). When the lining thickness is small, the radial displacement decreases gradually from the inside to the outside along the reference path of the lining. With increasing lining thickness, the radial displacement first increases and then decreases along the reference path, and the maximum radial displacement occurs in the middle of the lining (Figure 4(b)). Figure 5 illustrates the variations in tangential stress, radial displacement and tangential strain of FGL with Young's modulus E*. A change in Young's modulus E* of the functionally graded lining has no effect on the distribution or value of the tangential stress; however, the tangential stress redistribution of FGL changes greatly compared with that of a homogeneous lining (Figure 5(a)). Young's modulus E* affects the deformation behavior of the functionally graded lining: the radial displacement and tangential strain at the inner and outer edges of FGL decrease linearly with increasing E* (Figures 5(b) and 5(c)). In the design of a functionally graded lining, an appropriate Young's modulus E* can effectively control the displacement within a safe and reasonable range.
Influence of Concrete Compressive Strength f*_c of FGL.
Figure 6 shows that the concrete compressive strength f*_c has no direct influence on the stress distribution, radial displacement, or tangential strain of the functionally graded lining in the elastic state.

Figure 7 illustrates the variations in tangential stress, radial displacement and tangential strain of FGL with Poisson's ratio ν. A change in Poisson's ratio ν has a slight influence on the distribution of tangential stress. With increasing Poisson's ratio ν, the tangential stress increases at the inner edge of the lining and decreases at the outer edge, but the overall variation is slight (Figure 7(a)). When Poisson's ratio increases, the tangential strain and radial displacement decrease (Figures 7(b) and 7(c)). When Poisson's ratio ν changes, the radial displacement distribution over the lining section differs: for ν = 0.2 and 0.35, the radial displacement difference between the inner and outer edges of the lining is 0.23 and 0.74 mm, respectively. A change in Poisson's ratio thus has a considerable influence on the radial displacement but little influence on the stress distribution, which is consistent with [48].

Influence of FGL Stratification Number n.
As shown in Figure 8(a), the distribution form of tangential stress on the lining section is similar when the number of layers is changed, and the number of layers of FGL determines the number of segments of the tangential stress. The tangential stresses on the inside and outside of the functionally graded lining show opposite trends: with an increasing number of layers, the tangential stress at the inner edge of the lining decreases and that at the outer edge increases. As shown in Figures 8(b) and 8(c), the radial displacement and the tangential strain of the lining section show the same trend as the number of layers changes, and both decrease from the inside to the outside of the lining.
Failure Mode of 3-Layer FGL.
Figure 9 shows the failure mode of the 3-layer FGL under horizontal confining pressure load. The failure function f defined above is used to judge whether the lining is damaged. In Figure 9, the gray area indicates that the lining has reached stress failure, the white area indicates that the lining is still in the elastic working state, and the blue dotted line is the critical line of stress failure of the lining. The failure of the 3-layer FGL under horizontal confining pressure load can be divided into the following stages: (1) Normal working stage: the whole lining is in the elastic working state, that is, when Q = 10 MPa. (2)-(3) Damage initiates at the inner edge and develops within the first layer, while the rest of the lining is in the elastic working state, that is, when Q = 24 MPa. (4) The first layer of the lining is completely damaged: the inner layer is completely damaged, the second layer continues to be damaged, and the rest of the lining is in the elastic working state, that is, when Q = 30 MPa. (5) The outer layer of the lining starts to be damaged: the inner layer is completely damaged, the damage degree of the second layer continues to increase, and the third layer starts to be damaged from its inner edge, that is, when Q = 34 MPa. (6) The first two layers of the lining are completely destroyed: the second layer is destroyed completely, and the third layer continues to be damaged. When the first layer of the lining reaches failure, the lining can no longer be used in an actual project. Therefore, the failure of FGL depends on whether the lining inner edge is damaged.
Analysis of Mechanical Properties of FGL in Ultimate Bearing State
When the inner edge of the lining reaches stress failure, it is considered that the lining can no longer be used safely. Therefore, the ultimate bearing capacity, the maximum radial displacement and the maximum tangential strain of the inner side of FGL are obtained. In engineering, the working state of the lining can be judged by monitoring the radial displacement or tangential strain of the lining inner edge [49]. Figure 10 shows the relationship between the inner radius of FGL and its ultimate bearing capacity, maximum radial displacement and maximum tangential strain. As seen in Figure 10(a), a change in the inner radius of the lining has a great influence on the ultimate bearing capacity of FGL: the ultimate bearing capacity decreases with increasing inner radius. When the inner radius is 8 m, the ultimate bearing capacity of the homogeneous lining and the 3-layer FGL is 5.31 and 5.87 MPa, respectively, the latter being 10.5% higher. When the inner radius is 4 m, the ultimate bearing capacity of the homogeneous lining and the 3-layer FGL is 8.59 and 10.16 MPa, respectively, the latter being 18.3% higher. When the inner radius of the lining is small, the gain in bearing capacity is large. The maximum radial displacement is approximately linear in the inner radius of the lining and increases with it (Figure 10(b)). The maximum tangential strain of the lining inner edge is less affected by the change in the lining inner radius; compared with the homogeneous lining, the maximum tangential strain of FGL changes more markedly with the inner radius (Figure 10(c)). Figure 11 shows the relationship between the thickness of FGL and its ultimate bearing capacity, maximum radial displacement and maximum tangential strain. As seen in Figure 11(a), the ultimate bearing capacities of both the homogeneous lining and the 3-layer FGL increase with the lining thickness. When the lining thickness is 1 m, the ultimate bearing capacity of the 3-layer FGL and the homogeneous lining is 7.45 and 6.57 MPa, respectively. When the lining thickness is 3 m, the ultimate bearing capacity of the 3-layer FGL and the homogeneous lining is 15.7 and 12.28 MPa, respectively, increases of 110.7% and 86.9%. As seen in Figures 11(b) and 11(c), the maximum radial displacement and the maximum tangential strain of the lining show the same trend with thickness, both increasing with the lining thickness. The influence of the lining thickness on the maximum radial displacement is smaller for the homogeneous lining and greater for the 3-layer FGL. Figure 12 shows the relationship between Young's modulus E* of FGL and its ultimate bearing capacity, maximum radial displacement and maximum tangential strain. The ultimate bearing capacity of the lining is independent of Young's modulus E* (Figure 12(a)). The maximum radial displacement and the maximum tangential strain of the lining increase linearly with E* (Figures 12(b) and 12(c)). Figure 13 shows the relationship between the concrete compressive strength f*_c of FGL and its ultimate bearing capacity, maximum radial displacement and maximum tangential strain. As seen in Figure 13(a), the ultimate bearing capacity of the lining depends on the compressive strength of the concrete.
The ultimate bearing capacity of the lining increases linearly with the compressive strength of the lining concrete. Compared with the homogeneous lining, the ultimate bearing capacity of the 3-layer FGL is slightly higher at the same compressive strength. When the compressive strength of the lining concrete is 27.5 and 35.5 MPa, the ultimate bearing capacity of the homogeneous lining is 8.59 and 11.09 MPa, and that of the 3-layer FGL is 10.16 and 13.12 MPa, respectively. When the compressive strength of the lining concrete increases from 27.5 MPa to 35.5 MPa, the ultimate bearing capacity of the homogeneous lining and of the 3-layer FGL increases by 2.5 MPa (29.10%) and 2.96 MPa (29.13%), respectively. As seen in Figures 13(b) and 13(c), when only the compressive strength of the lining concrete is changed, the maximum radial displacement of the lining behaves in the same way as the maximum tangential strain: both increase linearly with the concrete compressive strength. Figure 14 shows the relationship between Poisson's ratio ν of FGL and its ultimate bearing capacity, maximum radial displacement and maximum tangential strain. As seen in Figure 14(a), the ultimate bearing capacity of the lining increases slightly with Poisson's ratio. When Poisson's ratio increases from 0.2 to 0.35, the ultimate bearing capacity of the homogeneous lining and of the 3-layer FGL increases by 0.4 MPa (4.6%) and 0.45 MPa (4.4%), respectively. In practice, on the one hand, it is difficult to control Poisson's ratio accurately when preparing concrete; on the other hand, Poisson's ratio has a relatively small impact on the mechanical properties of the lining, so Poisson's ratio of concrete is generally taken as 0.2 in design calculations, which is relatively conservative and reasonable.
Influence of FGL Stratification Number n.
Let λ = t/(t + R_0), which is determined by the thickness and inner radius of the lining. Figure 15 shows the relationship between the number of layers n of FGL and its ultimate bearing capacity and maximum tangential strain. Both the ultimate bearing capacity and the maximum tangential strain of the lining increase with the number of layers n (Figures 15(a) and 15(b)). Figure 16 shows the relationship between λ and the ultimate bearing capacity, maximum radial displacement and maximum tangential strain. Both the ultimate bearing capacity and the maximum tangential strain of the lining increase approximately linearly with λ (Figures 16(a) and 16(c)). There is no obvious relationship between the maximum radial displacement of the lining and λ (Figure 16(b)).
Calculation Formula of Ultimate Bearing Capacity of FGL.
According to the test results, the ultimate bearing capacity P_u of FGL is related to the concrete compressive strength f*_c, λ, Poisson's ratio ν and the stratification number n, among which f*_c and λ are the most important factors. By fitting the test results, the formula for the ultimate bearing capacity of FGL can be deduced, in which k_P is a coefficient of the stratification number n (n = 2, 3, 4, ...) and m_P is a coefficient of Poisson's ratio.
In order to verify the accuracy of the fitted ultimate bearing capacity formula, six parameter groups of FGL were randomly selected, with parameter values not coinciding with the typical parameters. For comparison, the results of the fitting formula and of ABAQUS are listed in Table 2. The difference between the fitting formula and ABAQUS is within 0.05 MPa, so the fitting formula can be considered accurate.
Calculation Formula of Maximum Tangential Strain of FGL.
According to the test results, the maximum tangential strain ε_θu of the lining inner edge is related to Young's modulus E* of the lining, the compressive strength f*_c of the concrete, λ, Poisson's ratio ν and the stratification number n, among which E*, f*_c and λ are the most important factors. By fitting the test results, the formula for the maximum tangential strain of the lining inner edge can be deduced, in which k_ε is a coefficient of the stratification number n (n = 2, 3, 4, ...) and m_ε is a coefficient of Poisson's ratio. In order to verify the accuracy of the fitted maximum tangential strain formula, six parameter groups of FGL were randomly selected, with parameter values not coinciding with the typical parameters. For comparison, the results of the fitting formula and of ABAQUS are listed in Table 3.
The difference between the calculation result of the fitting formula and that of ABAQUS is within 6 με, so the fitting formula can be considered accurate.
Discussion and Conclusions
In order to improve the bearing capacity of the lining and give full play to the performance of the lining materials, this study proposes a new circular concrete lining structure for underground structures and shafts. Following the concept of functionally graded concrete lining, a multilayered lining is designed as rock support in mines. The mechanical properties of the multilayered FGL are studied comprehensively in this article. Based on the exact solution of the N-layer thick-walled cylinder and the multiaxial failure criterion of concrete, the stress, deformation and ultimate bearing capacity of FGL can be determined. Through the one-factor test, the influence of each structural parameter on the tangential stress, radial displacement and tangential strain of FGL is analyzed. The mechanical and deformation characteristics of the lining in the elastic working state, as well as the ultimate bearing capacity, deformation characteristics and failure characteristics of the lining at failure, are studied. According to the experimental results, calculation formulas for the ultimate bearing capacity of FGL and the maximum tangential strain of the FGL inner edge are obtained. The accuracy of the formulas is verified by comparison with the ABAQUS calculation results. The conclusions are as follows: (1) The stress distribution of FGL is mainly related to the inner radius, thickness and number of layers. The tangential stress increases with increasing inner radius and decreasing thickness of the lining, and the stress concentration decreases with an increasing number of layers. (2) The deformation behavior of FGL is mainly related to the elastic modulus, radius, thickness and Poisson's ratio. The radial displacement and circumferential strain increase with the lining radius, thickness, Young's modulus, Poisson's ratio and number of layers.
(3) The ultimate bearing capacity of FGL is related to the concrete compressive strength, λ, Poisson's ratio and the number of layers. The ultimate bearing capacity of the lining increases with decreasing inner radius and with increasing thickness, compressive strength, Poisson's ratio, λ and number of layers. (4) The maximum tangential strain of the lining inner edge is related to Young's modulus, the compressive strength of the concrete, λ, Poisson's ratio and the number of layers. The maximum tangential strain increases with Young's modulus, compressive strength, Poisson's ratio, number of layers and λ. (5) As with the homogeneous single-layer lining, the damage of FGL starts from the inner edge of the lining. In engineering, the tangential strain of the lining inner edge can be controlled below the maximum value to ensure that the lining works in the elastic state.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Materials Science",
"Engineering"
] |
Low-frequency ambient noise generator with application to automatic speaker classification
A novel low-frequency 1/f ambient noise generator using fractional statistics is proposed in this article. The noise samples are obtained by transformation functions performed on pseudo-random uniform sequences. The 1/f spectrum obtained for the generated noise samples shows that this proposition is very promising for investigating the effect of low-frequency noise on signal processing techniques, devices and systems. It is also demonstrated that the generator can serve as background ambient noise in speaker classification applications.
Introduction
In the last decades, the presence of low-frequency or 1/f noise has been observed in a wide variety of systems [1,2]. In particular, acoustic noise with a 1/f spectrum has been measured in ocean noise [3], music [4] and speech [5]. Noisy environments can severely degrade the performance of speech and speaker classification applications [6][7][8]. These background noise sources can have different temporal and spectral statistics. Therefore, 1/f acoustic noise must be considered in order to achieve robust signal processing techniques.
Noises are random processes described by the shape of their power spectral density (PSD). The PSD of such noises [1,9] is defined by S(f) ∝ 1/f^β with 0 ≤ β ≤ 2. Generally, the PSD shape family is achieved by filtering Gaussian white noise (fgwn) sequences using digital finite impulse response (DFIR) filters and signal processing techniques [10][11][12]. However, wide-sense stationarity can only be measured for very long sample sequences. Mandelbrot and Van Ness [9] showed that the 1/f noise statistics can be accurately represented by fractional Brownian motion (fBm). fBm is defined as a non-stationary stochastic process. Nevertheless, the shape of the PSD and the β exponent can be quasi-stationary if the observation time is short compared to the process lifetime [1,13], and this enables the application of estimation theory to 1/f processes [14,15].
1/f fractional noise has S(f) ∝ f^(1-2H), where 1/2 < H < 1 is the Hurst parameter [16]. The H parameter is characterized by the slow decay rate of the autocorrelation function (ACF) of the noise samples. It represents the low-frequency or scale-invariance degree of fractional noises and is frequently close to 1.
This article proposes the generation of 1/f ambient noise samples based on fBm statistics. In the present approach, the 1/f spectral behavior is obtained from the ACFs of the noise samples generated by the fBm process. The 1/f ambient noise sample generation is based on transformation functions applied to uniform random sequences. These functions are defined by the successive random addition algorithm using the midpoint displacement (SRMD) technique [17]. In a previous study, these transformation functions were successfully evaluated for low-frequency optical noise sample generation [18].
The SRMD-based solution for generating the 1/f acoustic noise samples is also implemented on a high-speed field-programmable gate array (FPGA) development kit. Each noise output value is then pulse-code modulation (PCM) encoded/quantized and sampled at 8 kHz to produce the ambient noise levels.
The experiments consider the real (natural) 1/f Airport [19] and Airplane [20] ambient noises as well as an artificial Pink noise [20]. The validation results include the estimation of the main parameters or statistics (β exponent, H, mean (μ), variance (σ²), and kurtosis (K)), the PSD and heavy-tail distribution (HTD) curves, and the Bhattacharyya distance (B_d). These results are obtained from the real and the generated noise samples. For comparison, 1/f sample sequences are also generated by filtering a Gaussian white noise using the Al-Alaoui transfer function [21]. Furthermore, the performance of the proposed 1/f acoustic ambient noise generation is evaluated for a speaker identification task considering different signal-to-noise ratio (SNR) values.
The rest of the article is organized as follows. Section "1/f fractional Brownian noise: an overview" gives an overview of the 1/f fractional Brownian noise and describes the SRMD technique. Section "Implementation setup" introduces the implementation setup of the proposed 1/f ambient noise generator. The main validation results are reported and discussed in Section "Validation results and discussion". The speaker classification task and the related results are shown in Section "Speaker classification experiments". Finally, Section "Conclusion" presents the main conclusions of this work.
1/f fractional Brownian noise: an overview
For any instant t > 0, X_H(t) is a fractional random function with Gaussian independent increments [9]. The fBm is known as the unique Gaussian H-self-similar random process with stationary increments (sssi). Its defining properties are: 1. X_H(0) = 0; 2. the variance of the increments is proportional to the time interval, E[(X_H(t2) − X_H(t1))^2] ∝ |t2 − t1|^(2H), for all instants t1 and t2; and 3. X_H(t) presents continuous sample paths.
In other words, its statistical characteristics hold at any time scale. Thus, for any τ and r > 0, X_H(t + rτ) − X_H(t) ≈_d r^H [X_H(t + τ) − X_H(t)], where ≈_d means similar in distribution and r is the random process scaling factor. Note that X_H(t) is a Gaussian process completely specified by its mean, variance, and H parameter. The ACF of the increments of a 1/f X_H, i.e., 1/2 < H < 1, is ρ_X(k) = (1/2)[(k + 1)^(2H) − 2k^(2H) + |k − 1|^(2H)] (3) for k ≥ 0, and ρ_X(k) = ρ_X(−k) for k < 0. In the present proposition, the spectral density is derived from the ACF of the 1/f fBm noise samples defined in (3). This is ensured by the PSD and ACF exponents, which are both related to the H parameter.
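As a quick numerical illustration of (3), a sketch of the increment ACF and its slow decay might look as follows; the lag range is arbitrary and purely illustrative.

```python
import numpy as np

def fgn_acf(k, H):
    """ACF of the fBm increments (fractional Gaussian noise) at integer
    lag k for Hurst parameter 1/2 < H < 1, as in equation (3)."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H)
                  - 2 * k ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

lags = np.arange(0, 11)
print(fgn_acf(lags, H=0.9))  # slowly decaying and positive for H > 1/2
```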
SRMD
Considering a time index t defined on the interval [0, 1], the SRMD algorithm establishes that X(0) = 0 and X(1) is a Gaussian random variable (RV) with zero mean and variance σ². The sequence is then refined recursively: each midpoint is obtained by interpolating its two neighbors, and, to preserve the fBm increment statistics, a random offset displacement (D_i) with zero mean and level-dependent variance is added. For example, the X(1/2) value is obtained by the interpolation of X(0) and X(1) with displacement variance δ^2/2^(2H+1). Several iterations are then performed to compose a 1/f fBm noise sample sequence. In order to obtain stationary increments, after the midpoint interpolation a D_i of a certain variance, ∝ (r^n)^(2H) (r is the scaling factor), is applied to all points (time increments) and not just the midpoints. The maximum number of iterations is defined by N = 2^maxlevel, where maxlevel is generally taken in the interval [0, 16] [9]. The other SRMD inputs are the standard deviation and the H parameter.
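A minimal sketch of the midpoint-displacement recursion described above is given below. It implements the basic interpolation-plus-displacement step only; the full SRMD variant used in the article additionally perturbs all points at each iteration, and the displacement-variance constants here are assumptions chosen simply to respect the r^(2H) scaling rule.

```python
import numpy as np

def srmd_fbm(maxlevel, H, sigma=1.0, rng=None):
    """Midpoint-displacement sketch: returns 2**maxlevel + 1 fBm samples
    on [0, 1]. Exact variance constants vary between formulations."""
    rng = np.random.default_rng() if rng is None else rng
    n = 2 ** maxlevel
    x = np.zeros(n + 1)
    x[n] = rng.normal(0.0, sigma)   # X(0) = 0, X(1) ~ N(0, sigma**2)
    delta = sigma
    step = n
    for _ in range(maxlevel):
        half = step // 2
        delta *= 0.5 ** H           # displacement std shrinks as r**H
        mids = np.arange(half, n, step)
        # interpolate midpoints, then add the random offset D_i
        x[mids] = (0.5 * (x[mids - half] + x[mids + half])
                   + rng.normal(0.0, delta, size=mids.size))
        step = half
    return x

fbm = srmd_fbm(maxlevel=16, H=0.9)
fgn = np.diff(fbm)   # increments approximate the 1/f fractional noise
```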
Implementation setup
Besides the real ambient noises and the artificial Pink noise, samples obtained by filtering a Gaussian white noise are considered for the validation of the proposed method. This reference method uses the Al-Alaoui digital integrator transfer function [21], with β/2 as the fractional order exponent, to compose the transfer function H(z) = [(7T/8)(1 + z^(-1)/7)/(1 − z^(-1))]^(β/2) (6), where T is the sampling period. The filter coefficients are obtained by the convolution h(k) = a(k) * b(k), where a(k) and b(k) are the first N/2 terms obtained by expanding, respectively, the numerator and denominator of (6) in power series [12].
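The coefficient construction just described can be sketched as follows. The binomial-series recurrences for the fractional powers of the numerator and denominator of (6) are standard, but the truncation length and gain handling here are assumptions, as is the reconstructed form of (6) itself (the Al-Alaoui integrator raised to β/2).

```python
import numpy as np

def al_alaoui_fractional(beta, T, N):
    """First N coefficients of the fractional-order Al-Alaoui integrator
    H(z) = [ (7T/8) * (1 + z**-1 / 7) / (1 - z**-1) ] ** (beta/2),
    built by convolving power-series expansions of the fractional powers
    of its numerator and denominator."""
    alpha = beta / 2.0
    gain = (7.0 * T / 8.0) ** alpha
    a = np.zeros(N)
    b = np.zeros(N)
    a[0] = 1.0
    b[0] = 1.0
    for k in range(1, N):
        # (1 + z**-1 / 7)**alpha : binomial series recurrence
        a[k] = a[k - 1] * (alpha - k + 1) / k / 7.0
        # (1 - z**-1)**(-alpha)  : generalized binomial recurrence
        b[k] = b[k - 1] * (alpha + k - 1) / k
    return gain * np.convolve(a, b)[:N]

h = al_alaoui_fractional(beta=1.0, T=1.0 / 8000, N=256)
# convolving white noise with h yields samples with PSD ~ 1/f**beta
```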
Validation results and discussion
β and H estimation results
The β exponent is estimated from the linear regression applied to the PSD function curves. Table 1 shows the β exponent, the mean square error (MSE) of the β estimation, and the H results obtained from the real and artificial noises, and from the noise samples generated by the proposed and the fgwn methods. The results are presented for 320,000 samples, since this is the size of the real ambient noise sequences. For the H estimation, the wavelet-based method [23] is used.
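A minimal sketch of the β estimation by linear regression on the log-log PSD might look as follows; the Welch estimator and its parameters are assumptions, since the article does not specify how the PSD curves were computed.

```python
import numpy as np
from scipy.signal import welch

def estimate_beta(x, fs=8000.0):
    """Estimate the spectral exponent beta by fitting a line to
    log10(PSD) versus log10(f)."""
    f, psd = welch(x, fs=fs, nperseg=4096)
    mask = f > 0                      # drop the DC bin
    slope, _ = np.polyfit(np.log10(f[mask]), np.log10(psd[mask]), 1)
    return -slope                     # PSD ~ f**-beta, so beta = -slope
```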
Kurtosis, mean, variance statistics
Kurtosis quantifies how strongly the shape of a sample distribution departs from a Gaussian distribution. The K, mean, and variance estimation results of the noise sequences are presented in Table 2. As expected, the K values are close to 3, confirming that the noise samples are Gaussian distributed.
Bhattacharyya distance
The Bhattacharyya distance measures the separability between two sample sequences with Gaussian distributions and is defined by B_d = (1/8)(μ_1 − μ_2)^T [(C_1 + C_2)/2]^(-1) (μ_1 − μ_2) + (1/2) ln(|(C_1 + C_2)/2| / sqrt(|C_1||C_2|)), where μ_i is the mean vector and C_i is the covariance matrix of class i = 1, 2. The B_d values are measured between the generated sequences and the corresponding Airplane, Airport, and Pink noises. It can be seen from Table 3 that the distribution of the Pink noise samples produced by both methods is very similar to the distribution of the artificial Pink noise. However, the sample distributions obtained from the proposed method are much more similar to the distributions of the real ambient noises.
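A direct implementation of the Gaussian Bhattacharyya distance above might look as follows; it assumes vector-valued observations (one per row), so scalar noise sequences would first be framed into vectors.

```python
import numpy as np

def bhattacharyya(x1, x2):
    """Bhattacharyya distance between two sample sets under the Gaussian
    assumption used in the text. Rows are observations."""
    mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
    c1 = np.atleast_2d(np.cov(x1, rowvar=False))
    c2 = np.atleast_2d(np.cov(x2, rowvar=False))
    c = 0.5 * (c1 + c2)
    d = np.atleast_1d(mu1 - mu2)
    term1 = 0.125 * d @ np.linalg.solve(c, d)
    logdets = [np.linalg.slogdet(m)[1] for m in (c, c1, c2)]
    term2 = 0.5 * (logdets[0] - 0.5 * (logdets[1] + logdets[2]))
    return term1 + term2
```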
PSD results
The power spectral densities obtained from the real and the generated 1/f acoustic noise samples are presented in Figure 2. The PSDs were measured using a high-performance 300 MHz bandwidth spectrum analyzer. These results demonstrate the slow-decaying (3 dB/octave) behavior of the PSD shape of the 1/f noises. It can also be seen that the proposed method better represents the PSD behavior of the real acoustic noises. The heavy-tail distribution curves of the real and generated noise samples exhibit very close tails, which also confirms the H results (see Table 1) obtained with the proposed solution.
Speaker classification experiments
In a speaker identification process, a speech utterance has to be identified as belonging to one of the registered speakers. For the experiments, the speech utterances were corrupted with the real and generated noise samples. For speaker identification, the mel-frequency cepstral coefficients (MFCC) and the Gaussian mixture model (GMM) [24] were considered; these are, respectively, the most commonly used speech features and classifier in speaker recognition tasks. A mixture of Gaussian probability densities is a weighted sum of M densities, p(x|λ) = Σ_{i=1}^{M} w_i b_i(x), where each D-dimensional component density b_i(x) = (2π)^(-D/2) |K_i|^(-1/2) exp(-(1/2)(x − μ_i)^T K_i^(-1) (x − μ_i)) has mean vector μ_i and covariance matrix K_i; T denotes the transpose operation and |.| is the determinant. The GMM (λ) is parametrized by the mean vectors, covariance matrices, and mixture weights. The model parameters are estimated from a set of training data as the ones that maximize the likelihood of the GMM. The expectation-maximization (EM) algorithm [24] is used for the model parameter estimates. Considering a sequence of T independent training vectors X = {x_1, . . . , x_T}, the normalized log-likelihood of the GMM is L(λ) = (1/T) Σ_{t=1}^{T} log p(x_t|λ). The decision rule of the speaker identification system chooses the speaker model for which this value is maximum.
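The training and decision rule described above can be sketched with scikit-learn's GaussianMixture as follows; the diagonal covariance type is an assumption, since the article does not state which covariance structure was used.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_per_speaker, M=32):
    """Fit one M-component GMM per registered speaker via EM.
    features_per_speaker: list of (T_i, D) MFCC arrays, one per speaker."""
    return [GaussianMixture(n_components=M, covariance_type="diag").fit(f)
            for f in features_per_speaker]

def identify(models, test_features):
    """Decision rule: choose the model with maximum normalized
    log-likelihood (GaussianMixture.score returns the mean
    log-likelihood per frame)."""
    scores = [m.score(test_features) for m in models]
    return int(np.argmax(scores))
```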
Speaker identification accuracy results
The speaker identification task is evaluated on the KING speech corpus, which is composed of conversational speech sessions recorded by 49 male speakers.
For the experiments, five sessions are used, resulting in an average of 100 s of speech per speaker after silence removal. Three of these sessions (60 s) are used for speaker model training. The remaining two sessions (40 s) are used to evaluate the identification accuracies.
The speaker classification results are presented for test durations of 5 s and 1 s. The real and generated 1/f noise samples are added to the speech utterances to serve as background ambient noise. SNRs of 0 dB, 5 dB, 10 dB, 15 dB, and 20 dB are also considered to evaluate the system under different noisy conditions. For the identification task, speech feature vectors with 25 MFCCs extracted from 20 ms speech frames and M = 32 GMM components were used. The speaker identification accuracies are shown in Figure 4. The results show that the generated 1/f noise produced an effect similar to that of the real ambient noise, which means that it could be applied as artificial background noise.
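Corrupting the utterances at a prescribed SNR, as described above, can be sketched as follows; the power-based scaling shown is the usual definition, though the article does not spell out its mixing procedure.

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Scale the noise sequence so that mixing it with the speech yields
    the requested global SNR in dB, then return the corrupted utterance.
    Assumes the noise sequence is at least as long as the speech."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + gain * noise

# e.g. corrupt an utterance with generated 1/f noise at 10 dB SNR:
# noisy = add_noise_at_snr(utterance, fgn, snr_db=10)
```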
Conclusion
A new low-frequency 1/f ambient noise generator using fractional statistics is described in this article. The PSD shape of the generated 1/f noise samples is achieved from the ACFs of the noise samples generated with the fBm process. The implementation of the 1/f ambient noise generator enables the validation of the generated noise patterns and their PSD representation. It is shown that this proposition is very promising for the investigation of this noise effect in signal processing techniques. Furthermore, the speaker identification experiments demonstrate that the generated ambient noise samples can usefully serve as background or additive noise. | 2,812.6 | 2012-08-17T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Prions of Yeast and Fungi
The genetic properties of [URE3] and [PSI], two non-chromosomal genetic elements of Saccharomyces cerevisiae, indicated that they were infectious proteins (prions) (1). Subsequent studies have supported this proposal, and the genetic criteria we proposed have been used in the discovery of another new prion, [Het-s], in the filamentous fungus Podospora anserina (2). The prion hypothesis has long been an intriguing explanation of the transmissible spongiform encephalopathies, such as scrapie, Creutzfeldt-Jakob disease, and “mad cow disease” (3–5) (reviewed in Refs. 6 and 7). Studies using Saccharomyces and Podospora have provided evidence of a type not available from studies of scrapie that there can be such a thing as an infectious protein. This work also revealed that prions can be the basis for inherited traits and initiated the use of the powerful yeast system to study this phenomenon. Here we review the basis for the proposal that [URE3], [PSI], and [Het-s] are prions of the chromosomally encoded Ure2p, Sup35p, and Het-s proteins, respectively. We also review the properties of [URE3] and [Het-s]. Further studies of [PSI] are reviewed by Liebman and Derkatch in the following minireview (8), and other reviews of these subjects have appeared (9–12).
[URE3], a Non-Mendelian Genetic Element Affecting Nitrogen Catabolism
Because ureidosuccinate (USA), an intermediate in uracil biosynthesis, happens to resemble allantoate, a poor but usable nitrogen source, USA uptake is repressed by good nitrogen sources (Fig. 1) (reviewed in Refs. 13 and 14).
Lacroute and co-workers (15, 16) discovered ure2 by selecting mutants able to take up USA on media with a good nitrogen source (ammonia). The USA uptake of one isolate was dominant, showed irregular segregation in meiosis (17), and was transmissible by cytoplasmic mixing (18), confirming its determination by a non-chromosomal gene that Lacroute named [URE3].
Three Genetic Criteria for a Prion Illustrated by [URE3]
Yeast viruses are found as nonchromosomal genetic elements (19) and do not venture out of the cell but are able to become widespread as a result of transmission through mating. We likewise expect infectious proteins of yeast to be found as non-chromosomal genetic elements. We proposed three genetic criteria that should distinguish a prion from a nucleic acid replicon (1) (Fig. 2). A prion is an altered form of a normal chromosomally encoded protein. This prion form has lost its normal function but has acquired the ability to convert the normal form of the protein into this same abnormal (prion) form.
Reversible Curability-If a prion can be cured, it should be possible for it to arise again in the cured strain at some low frequency, because the same event that initially gave rise to the prion could occur again.
[URE3] can be cured by the growth of cells on media containing 1-5 mM guanidine HCl, but from cured clones [URE3]-carrying colonies can again be isolated (1). These [URE3] clones arise at frequencies similar to the 10⁻⁶ frequency with which [URE3] arises in most wild-type strains.
Overproduction of the Normal Protein Increases the Frequency of Prion Formation-Because the initial prion formation event is a spontaneous change of the protein, it is expected that overproducing the normal protein will increase the number of molecules that are candidates to undergo this change. Indeed, overproduction of Ure2p increases the frequency of de novo generation of [URE3] by 20-200-fold (1).
Phenotype Relationship of Prion and Mutation of the Gene for the Protein-Because the prion is propagated by conversion of the normal form to the prion form, mutations of the chromosomal gene for the protein that prevent it being synthesized will prevent propagation of the prion. So the chromosomal gene for the protein will be seen (perhaps discovered) as a gene necessary for propagation of the non-Mendelian genetic element. But unlike such genes necessary for propagation of mitochondrial DNA (pet) or of the L-A dsRNA virus (mak), the phenotype of the mutant should be the same as the phenotype of the presence of the non-Mendelian genetic element. This is because either the presence of the prion or the mutation in the gene for the protein results in a deficiency of the normal protein.
The genetic criteria satisfied for yeast prions and, in two of three cases, by the Podospora prion are not yet satisfied by the transmissible spongiform encephalopathies. Thus, these new findings have provided a major foundation for the notion that there can be such a thing as an infectious protein.
[PSI] Satisfies the Genetic Criteria as a Prion Form of Sup35p
Sup35p is a subunit of the translation release factor that recognizes termination codons and releases the completed peptide from the last tRNA.
[PSI] is a non-Mendelian genetic element that, like sup35 mutations, increases the strength of weak suppressor tRNAs (20,21). We pointed out that [PSI], like [URE3], satisfies all three genetic criteria to be a prion form of Sup35p (1).
[PSI] may be cured by various agents (22,23) but from the cured strains may again be obtained clones in which [PSI] has arisen de novo (24). Overproduction of Sup35p increases the frequency with which [PSI] arises about 100-fold (25). Finally, the phenotype of sup35 mutants is similar to that of [PSI], and Sup35p is necessary for the propagation of [PSI] (26,27). Further discussion of [PSI] will be found in the minireview by Liebman and Derkatch (8).
Further Support for the Prion Model for [URE3]
If [URE3] is a prion form of Ure2p, it is expected that there will be a structural difference of some kind between Ure2p in wild-type and [URE3] strains. In fact, Ure2p is more protease-resistant in extracts of [URE3] strains than wild-type strains (28). The 40-kDa Ure2p is quickly and completely degraded, but Ure2p in extracts of [URE3] cells is first partially degraded to species of 30 and 32 kDa before eventually being completely degraded.
[URE3] Is Not a Stable Regulatory Circuit-Ure2p is a transcription regulator, and stable transcription circuits are known in some cases to produce inheritable ("epigenetic") traits (29). However, the frequency with which [URE3] arises is independent of the nitrogen-repressed or derepressed state of cells (30). Moreover, [URE3] can be propagated in cells that are either repressed or derepressed (30). Finally, as discussed below, the parts of the Ure2p molecule responsible for prion propagation and nitrogen regulation are completely distinct (28).
[URE3] Really Arises de Novo-[URE3] is isolated as an apparent mutant; it is not found in "wild-type" strains. Could [URE3] be a defective interfering derivative of a normal plasmid dependent on Ure2p for its propagation, analogous to a defective interfering virus or a "suppressive petite" deletion derivative of mitochondrial DNA? In that case, [URE3] would produce its phenotype by eliminating the normal version of the plasmid, thus explaining the dominance of [URE3]. The ure2 mutants would also produce their phenotype by resulting in loss of the normal plasmid, and so the identity of phenotypes of [URE3] and ure2 strains would be explained. However, replacing the URE2 gene in a ure2 mutant would then fail to correct its phenotypic defect, and this is not the case (31). This model would also predict that the [URE3] element could not be generated in a ure2 strain because the parent "normal" plasmid would be already lost. In fact, introduction of a URE2 plasmid into a ure2 strain makes it able to become [URE3] at some low frequency (30).
This de novo induction of [URE3] is, in fact, because of overproduction of the Ure2 protein, not the URE2 mRNA or the gene itself (30). Thus, [URE3] arises de novo, not as a derivative of a pre-existing nucleic acid replicon. Moreover, it is the Ure2 protein that leads to [URE3] arising.
Prion Domain and Nitrogen Regulation Domains of Ure2p
Deletion analysis of URE2 showed that residues 66-354 determine the nitrogen regulation function of Ure2p (28,31), whereas prion induction and propagation involve primarily residues 1-65 (Fig. 3) (28,30). Although overexpression of intact Ure2p increases the frequency with which [URE3] arises by 20-200-fold (1,28), the N-terminal 65 residues are sufficient to induce [URE3] at even higher efficiency (28). Thus, the two functions of Ure2p are attributed to separate parts of the molecule.
Deletions of parts of the C-terminal nitrogen regulation domain increase by 100-fold or more the frequency with which it converts to the prion form (28), suggesting that the C-terminal domain normally stabilizes the N-terminal domain and prevents it from converting to the prion form. Although they interact, the prion domain and nitrogen regulation domain can function completely separately in the cell when produced as separate molecules. The prion domain alone can propagate [URE3] and is not affected by the concurrent presence of separate molecules of the nitrogen regulation domain (30). Likewise, only when the nitrogen regulation domain has a covalently attached prion domain is the nitrogen regulation function affected by the presence of the [URE3] prion (30).
FIG. 1. Ure2p is a regulator of nitrogen catabolism that indirectly blocks uptake of ureidosuccinate, an intermediate in uracil biosynthesis.
Yeast growing on a rich nitrogen source, such as ammonia or glutamine, repress transcription of enzymes and transporters needed for utilization of poor nitrogen sources. Ure2p senses the availability of a rich nitrogen source and blocks the action of Gln3p, a positive transcription regulator of many genes whose products facilitate utilization of poor nitrogen sources (37,38). Among these genes is DAL5, encoding the allantoate importer (39). The incidental chemical similarity of ureidosuccinate to allantoate makes the former a substrate for the allantoate importer, Dal5p (40). USA (N-carbamyl aspartate) is an intermediate in uracil biosynthesis, the product of aspartate transcarbamylase. Cells blocked in this enzyme can take up ureidosuccinate to make uracil only if they are growing on a poor nitrogen source. Lacroute and co-workers (15,16) found that chromosomal ure2 mutants, selected for ability to take up USA on a rich nitrogen source (ammonia), were unable to repress enzymes for utilization of poor nitrogen sources.
[Het-s], a Prion of P. anserina, Carries Out a Normal Cellular Function
A fungal colony is a syncytium, so cells can exchange cytoplasm and even nuclei. When two colonies grow together, the cellular processes (hyphae) of one colony generally fuse (anastomose) with those of the other, so the colonies can share nutrients (32). This "hyphal anastomosis" has the disadvantage that viruses present in one colony are readily transmitted throughout the other. To limit this problem, the process is completed only by closely related colonies, likely to already carry the same viruses. Colonies of Podospora must be identical at 8 "het" loci to fuse. Colonies differing in alleles at one or more het loci will initially fuse hyphae, but the fused hyphae undergo a rapid degeneration process with formation of a barrier to further hyphal fusions. This is called "heterokaryon incompatibility" (Fig. 4).
Recently, Coustou et al. (2) showed that the product of one of these loci, het-s, only functions when it is in a prion form. As with [URE3] and [PSI], the Podospora prion was first detected as a non-Mendelian (non-chromosomal) genetic element. The het-s locus can have either of two alleles, called het-S and het-s. Whereas het-s cells fuse readily with other het-s cells and het-S cells fuse with het-S partners, a meeting of het-s and het-S cells results in the heterokaryon incompatibility reaction. Rizet (33) found that cells with genotype het-s could have either of two phenotypes. One, called [Het-s], shows this heterokaryon incompatibility when meeting a het-S strain. The other, called [Het-s*], is compatible with both het-s and het-S partners.
Coustou et al. (2) used two of the genetic criteria and the protease resistance of the Het-s protein in [Het-s] strains as evidence that [Het-s] is a prion form of the Het-s protein.
[Het-s] can be efficiently eliminated but will spontaneously return at low frequency. Moreover, overproduction of the Het-s protein results in an increase in the frequency with which [Het-s] arises de novo. The het-s gene is necessary for the propagation of [Het-s], as expected if [Het-s] is a prion form of its product.
These findings are important in showing that a prion need not be a cause of disease but rather may be the mediator of a normal cellular function. Heterokaryon incompatibility is a normal event in the life of a fungus, with a physiological purpose.
Cortical Inheritance in Ciliates and the Parallel with Prions
Sonneborn showed that the pattern of cilia on the surface (cortex) of Paramecium could be altered by microsurgery or accidents occurring during mating, events unlikely to alter the genome. These changes were transmitted to the offspring through mitosis and meiosis, in a kind of Lamarckian process (35). The implication of these studies is that the pattern of cilia on the cell surface acts as a template in the formation of the new cell. Similar phenomena have been demonstrated in other ciliates (36). This phenomenon has some formal similarity to the template function of prions believed to be the basis of prion propagation (36). Are other aspects of cell morphology heritable? Do other cellular structures act as templates in the generation of their offspring?
Comparison of Prion Systems
Whereas the prion domains of Ure2p and Sup35p are rich in asparagine and glutamine, there are no such regions in either PrP or the Het-s protein. This suggests that there will be differences in the detailed mechanisms of prion propagation, although there may yet be important similarities. Whereas [URE3] and [PSI] produce their phenotypes by inactivation of Ure2p and Sup35p, respectively, scrapie and [Het-s] are detectable because of positive activities of the altered forms of PrP and the Het-s protein, respectively.
Prospects for Future Work
The purification of Ure2p will allow detailed structural studies of differences between the normal and prion forms of the molecule. Further dissection of the prion domain and nitrogen regulation functions of Ure2p using molecular genetics is also under way. How widespread is the prion phenomenon? The Saccharomyces and Podospora systems should be adaptable to general screens for prion domains of any organism, and we have begun such an approach. Transmissible spongiform encephalopathies, [URE3], and [PSI] are diseases, but [Het-s] mediates a normal cellular function. We can anticipate that some other cellular functions (perhaps some aspects of differentiation and development) will be mediated by prions. The yeast prions determine inherited traits; could some inherited traits of mammals be determined by prions? | 3,596.6 | 1999-01-08T00:00:00.000 | [
"Biology"
] |
WD40 repeat proteins striatin and S/G(2) nuclear autoantigen are members of a novel family of calmodulin-binding proteins that associate with protein phosphatase 2A.
Protein phosphatase 2A (PP2A) is a multifunctional serine/threonine phosphatase that is critical to many cellular processes including development, neuronal signaling, cell cycle regulation, and viral transformation. PP2A has been implicated in Ca(2+)-dependent signaling pathways, but how PP2A is targeted to these pathways is not understood. We have identified two calmodulin (CaM)-binding proteins that form stable complexes with the PP2A A/C heterodimer and may represent a novel family of PP2A B-type subunits. These two proteins, striatin and S/G(2) nuclear autoantigen (SG2NA), are highly related WD40 repeat proteins of previously unknown function and distinct subcellular localizations. Striatin has been reported to associate with the post-synaptic densities of neurons, whereas SG2NA has been reported to be a nuclear protein expressed primarily during the S and G(2) phases of the cell cycle. We show that SG2NA, like striatin, binds to CaM in a Ca(2+)-dependent manner. In addition to CaM and PP2A, several unidentified proteins stably associate with the striatin-PP2A and SG2NA-PP2A complexes. Thus, one mechanism of targeting and organizing PP2A with components of Ca(2+)-dependent signaling pathways may be through the molecular scaffolding proteins striatin and SG2NA.
B-type subunits exist, including the B (or B55), B′ (or B56), and B″ (or PR72/130) classes (5)(6)(7)(8)(9). To enable utilization of this phosphatase for numerous substrates in different pathways, PP2A is regulated at multiple levels, including covalent modifications, interaction with inhibitory proteins and lipids, and association with the various B-type subunits. For example, B′ subunits were recently shown to target PP2A to the adenomatous polyposis coli tumor suppressor scaffolding protein, physically associating PP2A with specific substrates and thus regulating Wnt/β-catenin signaling (10).
PP2A has also been shown to form complexes with CaM-dependent kinase IV (CaMKIV) (11), suggesting a role for PP2A in Ca2+-dependent signaling. This possibility is further supported by patch clamp experiments with both neuronal (12) and smooth muscle cells (13) that have used both okadaic acid and recombinant PP2A C subunit to implicate PP2A in the regulation of calcium-activated potassium channels and L-type Ca2+ channels (14).
To better understand how PP2A is targeted to various microenvironments and signal transduction pathways within the cell, we have looked for additional PP2A targeting subunits. Here we report the identification of two PP2A-associated proteins that may represent a novel family of B-type subunits. These two proteins contain WD40 repeats and bind to CaM in a calcium-dependent manner. One member of this family, striatin, is localized to the post-synaptic densities of neuronal dendrites (15), whereas the other, SG2NA, has been reported to be localized to the nucleus (16). Striatin-PP2A and SG2NA-PP2A complexes contain several additional unidentified proteins, suggesting that striatin and SG2NA may function as scaffolding proteins involved in Ca2+-dependent signal transduction pathways.
EXPERIMENTAL PROCEDURES
Metabolic Labeling and Immunoprecipitations-For metabolic labeling of NIH3T3 cells with methionine, subconfluent dishes of cells were labeled for 4-6 h with 0.25 mCi/ml [35S]methionine in methionine-free Dulbecco's modified Eagle's medium supplemented with dialyzed 0.5% fetal calf serum. Cells were washed twice with phosphate-buffered saline and once with IP wash buffer (0.135 M NaCl, 1% glycerol, 20 mM Tris, pH 8.0) and then were lysed in 1 ml of IP lysis buffer (1% Nonidet P-40, 0.135 M NaCl, 1% glycerol, 20 mM Tris, pH 8.0, 0.03 units/ml aprotinin, and 1-2 mM phenylmethylsulfonyl fluoride) while rocking for 20 min at 4°C. Lysates were cleared at 13,000 × g for 10 min at 4°C and then incubated at 4°C for 90 min while rocking in 1.5-ml Eppendorf tubes with protein A-Sepharose and the appropriate antisera. Immune complexes were precipitated by centrifugation for 1 min at 700 × g, and the supernatants were removed. Immune complexes were washed twice with 1 ml of IP lysis buffer and three times with 1 ml of phosphate-buffered saline. Two-dimensional gel electrophoresis was performed as described previously (17), and proteins were transferred to nitrocellulose.
Preparative Immunopurification-Using affinity-purified AR-1 antisera, samples were immunopurified from 40 15-cm dishes of polyomavirus MT-transformed NIH3T3 cells as described (17) except that the IP lysis buffer contained 20% Triton and 1% glycerol. For preparative immunoprecipitations, PP2A complexes were immunoaffinity purified using the 1d6 anti-C subunit monoclonal antibody (mAb) chemically cross-linked to protein A-Sepharose as described previously (18) except that 10 15-cm dishes of cells were used for one batch purification and whole immune complexes were analyzed on two-dimensional (2D) gels (17). Control immunopurifications were performed with 7-34-1 mAb chemically cross-linked to protein A-Sepharose using five 15-cm dishes of cells. Proteins were visualized with Coomassie Brilliant Blue R250 (Bio-Rad).
Ion Trap Mass Spectroscopy-Specific spots at 93 and 110 kDa were subjected to in-gel reduction, carboxyamidomethylation, and tryptic digestion (Promega). Multiple peptide sequences were determined in a single run by microcapillary reverse-phase chromatography directly coupled to a Finnigan LCQ quadrupole ion trap mass spectrometer. The ion trap was programmed to acquire successive sets of three scan modes consisting of full scan MS over alternating ranges of 395-800 m/z or 800-1300 m/z, followed by two data-dependent scans on the most abundant ion in those full scans. These data-dependent scans allowed: 1) the automatic acquisition of a high resolution (zoom) scan to determine charge state and exact mass and 2) MS/MS spectra for peptide sequence information. MS/MS spectra were acquired with a relative collision energy of 30%, an isolation width of 2.5 daltons, and dynamic exclusion of ions from repeat analysis. Interpretation of the resulting MS/MS spectra of the peptides was facilitated by programs developed in the Harvard Microchemistry Facility and by database correlation with the algorithm SEQUEST (19,20).
Antibodies-Lasergene DNASTAR Protean software was utilized to identify highly hydrophilic and antigenic sequences for selection of peptide antigens. Rabbit polyclonal pan-B′ (AR-1), striatin, and SG2NA antisera were generated using keyhole limpet hemocyanin (KLH)-conjugated peptides as immunogens. Peptide DP47 (ELFDSEDPRERDFLKTC) corresponds to residues 194-209 of the human α-isoform of B′ (B56) (7) with an additional carboxyl-terminal cysteine for coupling to KLH. Peptide DP52 (GESPKQKGQEIKRSSGDC) corresponds to residues 227-243 of SG2NA with a carboxyl-terminal cysteine for coupling. Peptide DP53 (SVGSPSRPSSSRLPEC) corresponds to residues 373-387 of striatin with an added carboxyl-terminal cysteine for coupling. Peptides were conjugated to KLH using the Imject maleimide KLH conjugation kit (Pierce) according to the manufacturer's instructions. The methylation-sensitive PP2A C subunit mAb, 4b7, was generated to a 15-residue unmethylated carboxyl-terminal peptide with an additional amino-terminal cysteine added for coupling to KLH (18). Anti-B″ (PR72/130) polyclonal antibodies were provided by Brian Hemmings. A methylation-insensitive mAb to PP2A C subunit was obtained from Transduction Laboratories (Lexington, KY). The control mAb 7-34-1 (American Type Culture Collection) is directed against major histocompatibility complex class I swine leukocyte antigen.
Calmodulin-Sepharose Precipitations-NIH3T3 cells were treated for 30 min with 1 µM ionomycin, washed twice with 8 ml of TNC buffer (50 mM Tris, pH 7.5, 100 mM NaCl, 2 mM CaCl2) and once with 8 ml of IP wash buffer containing 2 mM CaCl2, and lysed with 1 ml of IP lysis buffer containing 2 mM CaCl2. Untreated cells were washed twice with 8 ml of TNE buffer (50 mM Tris, pH 7.5, 100 mM NaCl, 1 mM EGTA) and once with 8 ml of IP wash buffer containing 1 mM EGTA and then lysed with 1 ml of IP lysis buffer containing 1 mM EGTA. Calmodulin-Sepharose precipitations were performed as for the immunoprecipitations described above, except that complexes from ionomycin-treated cells were washed in IP lysis buffer with 2 mM CaCl2 and TNC, and those from untreated cells were washed in IP lysis buffer with 1 mM EGTA and TNE.
Cell Culture-All cells were cultured in Dulbecco's modified Eagle's medium with 10% bovine calf serum (Life Technologies, Inc.). The medium for NIH3T3 cells stably expressing hemagglutinin (HA)-tagged B subunit was supplemented with 100 µg/ml hygromycin. In the case of the GRE-only, 36wt, 301Stop, and T304A cell lines (22), the medium was supplemented with 100 µg/ml hygromycin and 25 µg/ml geneticin.
FIG. 1. Identification of 110- and 93-kDa PP2A-associated proteins as striatin and SG2NA. A, immunoblot of NIH3T3 lysate using affinity-purified AR-1 antisera (AR-1) and affinity-purified AR-1 antisera pre-blocked with 100 µg/ml of AR-1 peptide (AR-1 + peptide). Proteins that are recognized by the AR-1 antisera and their apparent molecular masses are indicated. Although only the top portion of the gel is shown, no lower molecular weight bands were visible. Although the 74-kDa protein is barely visible here, it has been observed repeatedly. The intensity of the 131- and 74-kDa bands can vary from cell line to cell line and blot to blot. B, autoradiograph of 2D gel analysis of anti-PP2A C subunit (1d6 mAb; PP2A IP panel) and control (7-34-1 mAb; Control IP panel) immunoprecipitates (IP) prepared from 35S-labeled NIH3T3 cells. The 93- and 110-kDa PP2A-associated proteins are indicated. C, immunoblot of SDS-PAGE analysis of 7-34-1 (Control IP) and 1d6 (PP2A IP) immunoprecipitates probed with AR-1 antisera (AR-1), anti-striatin antisera (Striatin), and anti-SG2NA antisera (SG2NA). Ab, antibody. D, protein sequence alignment of striatin and SG2NA. Tryptic peptide sequences determined by ion trap mass spectrometry of the 93- and 110-kDa proteins are underlined. Peptides used to generate polyclonal antisera to striatin and SG2NA are double underlined. Probable recognition sites for the AR-1 antisera are shown in italics.
RESULTS AND DISCUSSION
Identification of 110- and 93-kDa PP2A-associated Proteins as Striatin and SG2NA-In an effort to discover previously unknown members of the B′ family of PP2A subunits, a pan-B′ subunit antibody termed AR-1 was generated against a sequence highly conserved in all known B′ subunits (Figs. 1A and 2A). AR-1 recognized known B′ subunits that migrated at 56 and 74 kDa, as well as three unknown proteins that migrated at 93, 110, and 131 kDa (Fig. 1A). In a parallel attempt to identify PP2A-associated proteins, PP2A immunoprecipitations were prepared from [35S]methionine-labeled NIH3T3 cells using a monoclonal antibody (1d6) directed against the carboxyl terminus of the PP2A C subunit. Several specific spots were observed on 2D gels, including two at 93 and 110 kDa (Fig. 1B). To determine whether the 93- and 110-kDa proteins in 1d6 immunoprecipitates were the same proteins detected in AR-1 immunoblots of whole cell lysates (Fig. 1A), 1d6 immunoprecipitates were immunoblotted with AR-1 (Fig. 1C, lanes 1 and 2). The two proteins that migrated at 93 and 110 kDa were specifically recognized by the AR-1 antisera, suggesting that they might be novel B′-type subunits.
To identify the 110-kDa protein, a large scale direct immunoprecipitation using the AR-1 antisera was subjected to 2D gel electrophoresis, and peptide sequences were obtained by ion trap mass spectrometry (Fig. 1D). These sequences corresponded to a previously cloned gene of unknown function, striatin, which was originally purified from rat brain (15). Striatin has been reported to bind CaM with a 40 nM KD in a calcium-dependent manner at a half-maximal Ca2+ concentration of 0.5 µM (23). Moreover, striatin contains two polybasic domains that may facilitate association with the post-synaptic membrane (15). Immunolabeling has shown that striatin is excluded from neuronal axons but is found throughout dendrites and is abundant in the post-synaptic densities of neuronal dendritic spines (15). These data suggest that striatin targets PP2A to a cellular microenvironment in which it may play a role in the modulation of calcium-dependent neuronal signaling. Although striatin was originally described as a brain-specific protein, we have observed striatin protein in murine NIH3T3 fibroblasts and human Jurkat T lymphocytes. We have also detected striatin mRNA in human HeLa cervical cancer cells, and expressed sequence tags (ESTs) were found in the dbEST database that represent partial striatin cDNAs from human B lymphocytes, human heart, murine myotubules, and murine testis (data not shown). Thus, striatin is present in both dividing and nondividing cells and is much more widely expressed than previously thought.
To identify the 93-kDa protein and obtain further peptide sequence from the 110-kDa protein, large scale PP2A (1d6) immunoprecipitations were subjected to 2D gel electrophoresis. Mass spectrometric sequencing revealed multiple additional peptides from striatin for the 110-kDa protein and identified the 93-kDa protein to be a highly related protein, S/G2 nuclear autoantigen (SG2NA) (Fig. 1D). Little is known about SG2NA other than that it is localized to the nucleus, it contains WD40 repeats, and, as assayed by immunofluorescence, its expression appears to be cell cycle-regulated, peaking during the S and G2 phases (16). Striatin and SG2NA bear little homology to the B, B′, or B″ subunits, raising the possibility that they might comprise a new family of PP2A B-type subunits. At least one homolog of striatin and SG2NA exists in Caenorhabditis elegans (GenBank accession no. CAA94873) (24), suggesting that this form of PP2A may play an important role in all metazoans. Although there is no obvious homolog for either of these proteins in yeast, potential WD40-containing open reading frames of unknown function with some homology to striatin and SG2NA do exist in both Saccharomyces cerevisiae (GenBank accession no. CAA89144) and Schizosaccharomyces pombe (GenBank accession no. CAA21906).
Striatin and SG2NA Share a Conserved Epitope with B′ Subunits-Because striatin and SG2NA showed no obvious homology to PP2A B′ subunits, it was puzzling that they were recognized efficiently by the pan-B′ antiserum, AR-1. However, a careful comparison revealed that, whereas neither striatin nor SG2NA contains the precise consensus sequence used to generate the AR-1 antisera, they both contain sequences that share some homology with this motif at positions corresponding to striatin 277-294 and 551-566 (Fig. 2B). To determine whether the AR-1 antisera recognized either or both of these sequences, two peptides were synthesized corresponding to these two sequences as well as two control peptides corresponding to randomized sequences containing the same amino acids. A dot blot (Fig. 2C) demonstrated that the AR-1 antisera recognizes the STR277 peptide corresponding to striatin 277-294 but not the striatin 551-566 peptide or the control peptides.
The STR277 peptide was also tested to determine whether it could block immunoprecipitations with the AR-1 antisera (Fig. 2D). As expected, both the STR277 peptide and the AR-1 peptide used to generate the AR-1 antisera were effective at blocking immunoprecipitation of both striatin and SG2NA, whereas the randomized control peptide did not block AR-1 immunoprecipitations. The conservation of the sequence (D/E)X2D(S/T)X(D/E)X1-2(R/K)EX(D/E)(F/Y)LXT between most B′ subunits and striatin and SG2NA suggests that it may be involved in interactions between these proteins and the A/C heterodimer. Consistent with this hypothesis, AR-1 was able to immunoprecipitate striatin and SG2NA only in the presence of 20% Triton. Under these conditions, little or none of the PP2A A and C subunits are co-immunoprecipitated (data not shown), suggesting that 20% Triton may have dissociated striatin and SG2NA from the A/C heterodimer, revealing the AR-1-specific epitope.
FIG. 2. Sequence alignments showing potential interaction motif between A/C heterodimer and B subunits or striatin and SG2NA. A, alignment of the AR-1 peptide (DP47) with representative B′ sequences from different species. Identities are indicated with a dot, and similarities are underlined. B, alignment of the DP47 B′ peptide with striatin and SG2NA sequences recognized by the AR-1 antisera. For reference, the latter sequences are italicized in Fig. 1D. Identities are indicated with a dot, similarities are underlined, and gaps are shown with a hyphen. C, immunoblot of striatin peptides. Twenty µg each of four peptides, STR277, RDM277, STR551, and RDM551, were bound to nitrocellulose. The dot blot was then probed with the pan-B′ AR-1 antisera. STR277, EDRDTKEALKEFDFLVT (striatin amino acids 277-294); RDM277, TLDELETREVFFKAKDD (randomized STR277 sequence); STR551, DPYDSYDPSVLRGPLL (striatin amino acids 551-566); RDM551, LYPRYVPPLDGLSSDD (randomized STR551 sequence). D, AR-1 immunoprecipitates immunoblotted with striatin and SG2NA antisera. AR-1 immunoprecipitations (lanes 2-5) were carried out in 20% Triton either with no peptide added or preincubated with 100 µg/ml of the STR277, RDM277, or AR-1 peptides, respectively. Each set of lanes is from the same gel, but the lanes were not all originally adjacent.
To facilitate the investigation of striatin and SG2NA interactions with the A/C heterodimer, polyclonal antisera were raised to both proteins. The striatin antiserum was raised against a peptide antigen corresponding to amino acids 373-387 of human striatin. This sequence is located within a basic domain that may be important for striatin association with the cellular membrane (15). Although this region is 100% identical between mouse and human striatin, it is not found in SG2NA (Fig. 1D). The SG2NA antiserum was raised against a peptide antigen corresponding to amino acids 227-243 of human SG2NA in a region that has little homology with striatin sequences. 1d6 immunoprecipitations of PP2A were probed with the anti-striatin and anti-SG2NA antisera, confirming that these antisera recognize the 110- and 93-kDa proteins, respectively (Fig. 1C).
PP2A A and C Subunits Coimmunoprecipitate with Striatin and SG2NA-To confirm that striatin and SG2NA form stable complexes with PP2A, striatin and SG2NA complexes were immunoprecipitated with the anti-striatin and anti-SG2NA antisera, respectively, analyzed by SDS-PAGE, and immunoblotted with a commercially obtained mAb to PP2A C subunit (Fig. 3A). The C subunit was present in immunoprecipitations of both striatin and SG2NA but not in immunoprecipitations using preimmune sera from the same rabbits. The presence of A subunit was similarly detected (Fig. 3A).
Several lines of evidence support the observation that no other B-type subunits were present in striatin-A/C or SG2NA-A/C complexes. First, 2D analysis of striatin and SG2NA immune complexes labeled with [35S]methionine or stained with Coomassie Blue showed no spots that corresponded to the known migration positions of the B, B′, or B″ subunits (Fig. 4B and data not shown). Second, it has previously been shown (22) that the 1d6 mAb used to precipitate the striatin-A/C or SG2NA-A/C complexes does not immunoprecipitate any B subunit. Third, immunoblots of striatin and SG2NA immune complexes with the AR-1 antisera do not detect any other B′ subunits (Fig. 1C). Finally, immunoblots of NIH3T3 whole cell lysates did not detect the presence of any B″ subunits (data not shown).
SG2NA Binds to CaM in a Calcium-dependent Manner-Because striatin previously has been shown (23) to bind to CaM in a calcium-dependent manner in a region (amino acids 149-166) that is nearly identical with SG2NA, we hypothesized that SG2NA would also bind to CaM. Both striatin (data not shown) and SG2NA (Fig. 4A) were precipitated with CaM-Sepharose from NIH3T3 cells stimulated with the calcium ionophore ionomycin but not from untreated control cells, as expected. This is the first evidence that SG2NA binds to CaM in a calcium-dependent manner similar to striatin. However, immunoblots of striatin and SG2NA immunoprecipitations probed with anti-CaM antibodies failed to detect CaM in both treated and untreated cells (data not shown). One hypothesis for the failure to detect CaM in immunoblots of striatin-PP2A and SG2NA-PP2A complexes is that the immunoprecipitating antibodies to striatin and SG2NA may sterically interfere with CaM binding.
Striatin and SG2NA Complexes Contain Phosphatase Activity That Is Sensitive to Okadaic Acid and Does Not Require
Calcium-To test whether immunoprecipitated striatin/PP2A and SG2NA/PP2A complexes contained enzymatically active PP2A, phosphatase assays were performed using 32P-labeled phosphorylase a as a substrate, and sensitivity to the PP2A inhibitor okadaic acid was measured. Immunoprecipitations using antisera to striatin, SG2NA, and C subunit (4e1) all contained significant phosphatase activity (Fig. 3B). The measured activity was largely inhibited by 2 nM okadaic acid, as expected for PP2A. These assays were performed in the presence of 1 mM EGTA, indicating that calcium is not required for the activity of striatin-PP2A and SG2NA-PP2A complexes. The addition of 2 mM calcium to these phosphatase reactions had a variable but slightly inhibitory effect on PP2A activity (data not shown).
Striatin and SG2NA Activate A/C Heterodimers toward cdc2-phosphorylated Histone H1-Different B-type subunits have been shown to differentially activate the PP2A A/C heterodimer toward different substrates. For example, B subunit is the only known B-type subunit reported to activate A/C heterodimers toward cdc2-phosphorylated histone H1 substrate (50-100-fold) (25)(26)(27). To determine whether the presence of striatin and SG2NA in PP2A complexes modulates the activity of A/C heterodimers, radiolabeled cdc2-phosphorylated histone H1 substrate was used to compare the activity of striatin and SG2NA complexes with other forms of PP2A. Immunoprecipitations were prepared from NIH3T3 cells that stably express HA-tagged C subunit (36wt cells) using two different mAbs to the carboxyl terminus of C subunit (1d6 and 4e1) and one mAb to the amino-terminal epitope tag (12CA5). Approximately 10-30% of 12CA5 complexes prepared from 36wt cells contain B subunit, whereas approximately 5% of 1d6 complexes contain striatin and SG2NA (data not shown); 4e1 complexes contain C subunit alone and A/C heterodimers (18). Striatin and SG2NA complexes immunoprecipitated with anti-striatin and anti-SG2NA antisera, respectively, were highly activated toward histone H1 substrate compared with PP2A in 4e1 and 1d6 immune complexes (Fig. 3C). PP2A immunoprecipitated by 12CA5 had similar (within 2-fold) activity to striatin and SG2NA complexes. Based on our estimate of the percentage of 12CA5 complexes containing B subunit, we would estimate that striatin and SG2NA activate A/C heterodimers less than B subunit toward histone H1. Similar experiments using NIH3T3 cells stably expressing HA-tagged B subunit confirmed this hypothesis (Fig. 3D), with striatin and SG2NA complexes approximately 35 and 29% as active, respectively, as B subunit complexes against cdc2 phosphorylation sites. The high level of observed histone H1 phosphatase activity suggests that striatin and SG2NA are not bound to the A/C heterodimers via the catalytic site.
Chemiluminescence quantitation of immunoblots containing both lysates and immunoprecipitations indicated that approximately 0.1-0.5% of the total C subunit present in lysates was immunoprecipitated with the striatin and SG2NA antisera (data not shown). This finding contrasts with the observation that 5% of 1d6 immunoprecipitates contain striatin and SG2NA. Potential explanations for this difference are: 1) not all of the striatin and SG2NA present in lysates was immunoprecipitated; 2) the striatin and SG2NA antisera may partially destabilize complex formation with the A/C heterodimer; or 3) 1d6 may bind preferentially to striatin/PP2A and SG2NA/PP2A complexes.
Multiple Additional Proteins Are Present in Striatin and SG2NA Complexes-The observation that striatin and SG2NA bind to both CaM and PP2A and contain WD40 repeats suggested that these proteins might function as molecular scaffolds for PP2A signaling complexes. To test whether striatin and SG2NA interact with additional proteins, NIH3T3 cells were metabolically labeled with [ 35 S]methionine, and whole cell lysates were immunoprecipitated with both the anti-striatin and anti-SG2NA antiseras. Immunoprecipitated complexes were subjected to 2D gel electrophoresis and A subunit, C subunit, striatin, and SG2NA were detected in striatin and SG2NA immunoprecipitations, but not in pre-immune controls ( Fig. 4B and data not shown). Immunoblots of SG2NA immunoprecipitates with affinity-purified SG2NA antisera revealed multiple additional immune-specific bands, suggesting that other members of the SG2NA family may exist ( Fig. 4B and data not shown). Consistent with a role for striatin and SG2NA as molecular scaffolds, several unidentified proteins were coimmunoprecipitated with striatin and SG2NA (Fig. 4B). The Carboxyl Terminus of PP2A C Subunit Present in Striatin-PP2A and SG2NA-PP2A Complexes Is Highly Methylated-We have previously suggested that the methylation state of the C subunit might regulate the association of A/C heterodimers with B-type subunits (22). To determine the methylation state of the C subunit in striatin-PP2A and SG2NA-PP2A complexes, a portion of the C subunit in striatin and SG2NA immunoprecipitations was subjected to demethylation by base treatment. Both untreated and demethylated samples were analyzed by immunoblot with a methylation-sensitive FIG. 3. PP2A subunits and phosphatase activity in striatin and SG2NA immunoprecipitates. A, NIH3T3 whole cell lysate (Lysate) and striatin and SG2NA immunoprecipitates and preimmune controls were simultaneously probed with a mixture of monoclonal antibodies to C subunit (Transduction Laboratories) and A subunit (4 g7 mAb; Ref. 17). These lanes were from the same gel but were not all originally adjacent and have been cut vertically. Although the top and bottom portions of the blot are not shown, no other bands were visible. Whether PP2A C subunit migrates as a doublet or as a single band varies from gel to gel. This behavior on SDS-PAGE has been observed previously (22) and does not appear to be the result of degradation. B, normalized phosphorylase a phosphatase activity of immune complexes prepared with 4e1 anti-PP2A C subunit (PP2A), striatin, and SG2NA antisera. Immunoprecipitates were prepared from approximately 3 mg of total protein and used to measure phosphatase activity. Phosphorylase a phosphatase assays were carried out using the Protein Phosphatase Assay System (Life Technologies, Inc.) according to the manufacturer's instructions with the addition of 1 mM EGTA and the indicated concentrations of okadaic acid. Cpm released was first corrected by subtracting background activity obtained with pre-immune immunoprecipitations. Preimmune Cpm averaged 7 Ϯ 5% of PP2A activity, 34 Ϯ 13% of SG2NA activity, and 45 Ϯ 15% of striatin activity. The average immune-specific activity measured in the absence of okadaic acid was arbitrarily set to 100% for each immunoprecipitate. The effect of okadaic acid was measured by normalizing Cpm released in the presence of okadaic acid against Cpm released in the absence of okadaic acid. The averages and standard deviations of at least three independent experiments are shown. 
C, normalized histone H1 phosphatase activity of immunoprecipitations prepared from NIH3T3 cells stably expressing HA-tagged PP2A C subunit (HA-C subunit). Immunoprecipitations were performed using 12CA5 anti-HA-tag mAb; 1d6 and 4e1 anti-C subunit mAbs; and polyclonal antisera against striatin and SG2NA. The averages and standard deviations of at least three independent experiments are shown. Two-thirds of each immunoprecipitate prepared from approximately 3 mg of total protein was used to measure phosphatase activity as described previously (22), and one-third was analyzed by SDS-PAGE and immunoblotted with commercial anti-C subunit mAbs (Transduction Laboratories). The amount of C subunit present in immunoblots of each complex was quantitated using a chemiluminescence imager (Bio-Rad). Cpm released was first corrected by subtracting background activity obtained with pre-immune immunoprecipitations. Specific activity (immune-specific Cpm released/chemiluminescence counts) was then calculated, and the level of phosphatase specific activity was finally normalized relative to the amount of specific activity present in 12CA5 immunoprecipitations of HA-tagged PP2A C subunit to obtain the percent of HA-C subunit specific activity. D, normalized histone H1 phosphatase activity of immunoprecipitations prepared from NIH3T3 cells stably expressing HA-tagged PP2A B subunit (HA-B subunit). Immunoprecipitations were perfomed using 12CA5 anti-HA-tag mAb and striatin and SG2NA polyclonal antisera as described in C. The averages and standard deviations of at least three independent experiments are shown. Calculations were performed as described in C, except specific activity was computed as a percent of HA-tagged PP2A B subunit specific activity. (Fig. 5A) that recognizes only the demethylated C subunit. 2 The level of C subunit detected by the methylationsensitive antibody was substantially enhanced in base-treated samples relative to untreated controls, and chemiluminescence quantitation determined that more than 90% of the C subunits associated with striatin and SG2NA are methylated. This result indicates that C subunit methylation does not prevent striatin and SG2NA association with the A/C heterodimer, but it does not indicate whether methylation is needed for formation of striatin-PP2A and SG2NA-PP2A complexes.
Deletion of the Carboxyl Terminus of PP2A C Subunit Does Not Prevent Striatin-PP2A and SG2NA-PP2A Complex Formation-Although the C subunit carboxyl terminus is essential for formation of PP2A heterotrimers containing the cellular B subunit, it is not required for formation of heterotrimers containing the viral B-type subunit, polyoma virus middle tumor antigen (MT) (22). To investigate the importance of the C subunit carboxyl terminus for the formation of A/C/striatin and A/C/SG2NA heterotrimers, striatin and SG2NA immunoprecipitations were performed from cell lines expressing HA-tagged C subunit mutants (22). The final nine amino acids that contain sites of both phosphorylation (Tyr-307) (28) and methylation (Leu-309) (29) are deleted in one of these mutants (301Stop). The 301Stop mutant was co-immunoprecipitated with striatin and SG2NA at least as efficiently as the wt C subunit (Fig. 5B).
These data indicate that striatin and SG2NA interact with the core A/C heterodimer in a fundamentally different manner than the B subunit, behaving more like polyoma virus MT than B subunit in their binding to the A/C heterodimer. Thus, the association of striatin and SG2NA with the A/C heterodimer is probably not directly affected by the covalent modification of the C subunit carboxyl terminus. However, the association of striatin and SG2NA with the A/C heterodimer could be indirectly affected by modifications that influence competition for the A/C heterodimer by altering the affinity of other B-type subunits. Consistent with this possibility, the C subunit mutant, T304A, which has been shown to have an increased affinity for B subunits (22), was reduced 5-fold in its ability to form complexes with striatin and SG2NA compared with the wild-type HA-tagged C subunit (36wt, Fig. 5C). Taken together, the following observations all strongly suggest that striatin and SG2NA represent a novel family of B-type subunits: 1) striatin and SG2NA complex with the PP2A A/C heterodimer; 2) these complexes contain okadaic acid-sensitive, PP2A-like phosphatase activity; 3) striatin and SG2NA share a conserved epitope with B′ subunits; 4) no known B-type subunits can be observed in these complexes; 5) unlike B subunits, the association of striatin and SG2NA with the A/C heterodimer is independent of the C subunit carboxyl terminus, yet they can activate the A/C heterodimer toward cdc2-phosphorylated histone H1; and 6) their relative binding to wt and T304A C subunits is opposite that of B subunit (implying that B subunit may even compete with them for binding to the T304A mutant form of the A/C heterodimer). However, we cannot exclude the formal possibility that some other undiscovered B-type subunit that is not recognized by any of the B, B′, or B″ antisera could be present in these complexes. Our conclusion that striatin and SG2NA may be members of a new class of B-type subunits would be further strengthened by additional evidence of competition with other B-type subunits or demonstration of direct interaction between bacterially expressed recombinant striatin and SG2NA and recombinant PP2A A subunit or A/C heterodimer. Should further evidence conclusively demonstrate that striatin and SG2NA are members of a novel family of B-type subunits, we propose that they be designated the B‴ (or B93/110) family.
The finding that striatin and SG2NA form stable complexes with PP2A and might represent a novel (B‴) family of PP2A subunits is the first description of a function for these highly related WD40 repeat proteins. The fact that striatin is highly abundant in post-synaptic membranes, whereas SG2NA appears to be targeted to the nucleus, provides yet another mechanism for the localization of PP2A to different cellular microenvironments. The observation that SG2NA, as well as striatin, binds to CaM in a calcium-dependent manner indicates that these proteins probably link PP2A to calcium-dependent signaling pathways and cellular events. Although PP2A complexes with CaMKIV have been detected by crosslinking (11), it is not yet known whether this interaction is direct or requires a molecular scaffold. Although we were not able to detect CaMKII or CaMKIV in immunoblots of striatin or SG2NA immunoprecipitations, we have detected kinase activity in these immunoprecipitates (data not shown). Furthermore, the large number of stably associated proteins observed in SG2NA immunoprecipitations suggests that striatin and SG2NA function as molecular scaffolds for the interactions of PP2A with large signal transduction complexes. The identification of additional cellular components of these complexes will provide new insights into the cellular function of striatin, SG2NA, and PP2A.
FIG. 5. PP2A C subunits associated with striatin and SG2NA are highly methylated, but C subunit methylation is probably not required for formation of these complexes. A, methylation level of striatin- and SG2NA-associated PP2A C subunits. Immunoprecipitates prepared with preimmune and immune striatin antisera and preimmune and immune SG2NA antisera were each divided into two equal portions, and one of each pair (+) was demethylated by treatment with 0.2 N NaOH on ice for 5 min (21). Base-treated samples were neutralized, and preneutralized base solution was added to untreated control (−) samples. All samples were then analyzed by SDS-PAGE and immunoblotting. The membrane was probed first with 4b7 (Methylation Sensitive Ab panel), an anti-C subunit antibody that recognizes only unmethylated C subunits. Subsequently, the same membrane was probed with Transduction Laboratories anti-PP2A C subunit antibody (Methylation Insensitive Ab panel), which is insensitive to the methylation state of PP2A and therefore reveals the total C subunit in each lane. Ab, antibody. B, deletion of nine carboxyl-terminal residues from the PP2A C subunit does not affect binding to striatin and SG2NA. Cell lysates and striatin and SG2NA immunoprecipitates were prepared from polyomavirus MT-transformed NIH3T3 cell lines expressing empty vector (GRE only), HA-tagged wt C subunit (36wt), or HA-tagged C subunit truncation mutant lacking nine carboxyl-terminal amino acids (301STOP). Samples were analyzed by SDS-PAGE and immunoblotted with anti-HA-tag antibody (16b12, BAbCO, Richmond, CA). Each set of lanes is from the same gel, but the lanes were not all originally adjacent. PP2A C subunit has been previously observed to migrate sometimes as a doublet and sometimes as a singlet on SDS-PAGE (22). C, a single substitution of alanine for threonine 304 in the PP2A C subunit greatly reduces complex formation with striatin and SG2NA. Cell lysates and striatin and SG2NA immunoprecipitates (IP) were prepared from polyomavirus MT-transformed NIH3T3 cell lines expressing empty vector (GRE only), HA-tagged wt C subunit (36wt), or HA-tagged C subunit mutant (T304A). Samples were analyzed by SDS-PAGE and immunoblotted with anti-HA-tag antibody (16b12, BAbCO). Quantitation with a chemiluminescence imager system (Bio-Rad) indicated that T304A bound to striatin and SG2NA approximately 5-fold less efficiently than did 36wt. Each set of lanes is from the same gel, but the lanes were not all originally adjacent.
| 7,877.8 | 2000-02-25T00:00:00.000 | [
"Biology"
] |
Analyzing the impact of feature selection on the accuracy of heart disease prediction
Heart disease has become one of the most serious diseases with a significant impact on human life, and has emerged as one of the leading causes of mortality across the globe during the last decade. To prevent patients from further damage, a timely and accurate diagnosis of heart disease is essential. Recently we have seen the usage of non-invasive medical procedures, such as artificial intelligence-based techniques, in the medical field. In particular, machine learning employs several algorithms and techniques that are widely used and highly useful in accurately diagnosing heart disease in less time. However, the prediction of heart disease is not an easy task. The increasing size of medical datasets has made it a complicated task for practitioners to understand the complex feature relations and make disease predictions. Accordingly, the aim of this research is to identify the most important risk factors from a high-dimensional dataset, which helps in the accurate classification of heart disease with fewer complications. For a broader analysis, we have used two heart disease datasets with various medical features. The classification results of the benchmarked models proved that relevant features have a high impact on classification accuracy. Even with a reduced number of features, the performance of the classification models improved significantly, with a reduced training time compared with models trained on the full feature set.
Introduction
Heart disease is rapidly increasing across the globe. As per a research report published by the World Health Organization (WHO), in 2016 approximately 17.90 million people died from heart disease [1]. This accounts for approximately 30% of all deaths worldwide. Nearly 55% of heart patients die during the first 3 years, and the treatment costs for heart disease are around 4% of the annual healthcare expenditure [2]. Given these increasing statistics, accurate and timely detection and treatment of this serious illness is essential for disease prevention and effective utilization of medical resources.
Due to recent technological advancements, the field of medical sciences has seen remarkable improvement over time [3,4]. In particular, machine learning (ML) has been widely used in the field of cardiovascular medicine and has established itself as a promising area [5]. The basic framework of ML is built on models that take input data (such as text or images) and, through the use of statistical analysis and mathematical optimization, provide the desired prediction results (e.g., disease, no disease, neutral) [6]. ML models can be trained on large volumes of raw electronic medical data gathered from low-cost wearable devices to allow efficient heart disease diagnosis with fewer resources and improved accuracy [7].
During the training process, ML models require a large number of data samples to avoid overfitting [8]. However, the inclusion of a large number of data features is undesirable for reasons related to the curse of dimensionality [9,10]. Medical datasets often contain related as well as redundant features. Unnecessary features do not contribute any meaningful information to the prediction task and also create noise in the description of the target (output class), which leads to prediction errors [11]. Furthermore, such features increase the complexity of ML models and make the system run slowly due to increased training time. To overcome the curse of dimensionality, only those features which are closely related to the target should be selected from datasets and provided as inputs to ML models [12]. Relevant feature selection can aid in performance improvement by decreasing model complexity and increasing prediction accuracy, which is very important in medical diagnosis [13]. Because of the benefits outlined previously, feature selection techniques are being actively used in the area of heart disease and stroke [14,15,16].
The contributions of this research are listed as follows: • The study uses two datasets of heart disease patients from different sources to cover a broader study of medical features.
• To perform the correlation and interdependence study between different features in datasets with respect to heart disease.
• The identification of the most relevant medical features which aids in the prediction of heart disease using a filter-based feature selection technique.
• Different ML classification models, such as Logistic Regression (LR), Decision Tree (DT), Naive Bayes (NB), Random Forest (RF), Multi-Layer Perceptron (MLP), etc., are used on the datasets to identify suitable models for the problem.
• The classification models were tested on full as well as the reduced feature subset to observe the impact of feature selection on the performance of models.
• In the spirit of reproducible research, the code of this article is shared on GitHub. 1
Related Work
ML has appeared to be an effective technique for assisting in heart disease diagnosis; however, the high dimensionality of datasets is a fundamental issue for ML prediction models. Feature selection is a technique used to select from datasets only the most relevant features, i.e., those that influence the disease outcome most. The identification of the most important features from high-dimensional datasets is an important aspect that can improve the accuracy of prediction models and hence reduce the number of medical errors.
In [17], Zhang et al. developed an efficient feature selection technique called weighting- and ranking-based hybrid feature selection (WRHFS) to determine the risk of heart stroke. For the weighting and ranking of features, WRHFS used a variety of filter-based feature selection techniques such as Fisher score, information gain and standard deviation. The proposed technique selected 9 important input features out of 28 based on the knowledge provided for heart stroke prediction. In another research work [18], the authors worked on the extraction of relevant risk factors from a large feature space for efficient heart disease prediction. The features were selected based on their individual ranks.
The authors used the Infinite Latent Feature Selection (ILFS) method to rank the features, which is a probabilistic latent graph-based feature selection technique.
The results of the model were competitive using only half of the features from the set of 50. In [19], a feature selection model for detecting the risk of heart disease is proposed. The proposed model combined the glow-worm swarm optimization algorithm with the standard deviation of the features to extract quality features from an electronic health record (EHR) of a community hospital in Beijing. 6 features, including high blood pressure, Alkaline Phosphatase (ALP), age and Lactate Dehydrogenase (LDH), were indicated as important for detecting stroke, excluding family hereditary factors. The authors of [20] focused on finding the most relevant features from EHRs to predict the early-stage risk of death from heart disease. The authors used minimum redundancy maximum relevance (mRMR) and recursive feature elimination (RFE) feature selection approaches based on NB for the selection of features. Two medical features, i.e., Serum Creatinine and Ejection Fraction, were ranked higher by both feature selection techniques compared to the others. When provided to a prediction model as input, the selected features proved to be the most important, as an overall accuracy of 80% was achieved.
Singh et al. [21] proposed an efficient approach for stroke prediction. A genetic algorithm (GA) is used in [22] to select the most significant features to detect heart disease.
The proposed feature selection algorithm identifies 7 features out of 16 to detect heart disease from the Cleveland heart disease dataset. The resultant features were supplied to a support vector machine (SVM) for accuracy evaluation.
The classifier achieved 88.34% accuracy using the reduced feature set, whereas only 83.34% was achieved when using the whole set of dataset features. In terms of the ROC curve, the GA-SVM also performed well when compared with various existing feature selection algorithms. The study in [23] proposes a new heart disease prediction model by combining ML with deep learning techniques.
The least absolute shrinkage and selection operator (LASSO) penalty method based on LinearSVC was applied as the feature selection module to generate a feature subset closely related to the target. The 12 most relevant features were chosen from a dataset obtained from Kaggle and input to the MLP network.
As per the experimental results, the proposed model obtained an accuracy of 98.56% with 99.35% recall and 97.84% precision. In [5], an ML-based heart disease diagnosis system is proposed. Seven popular classifiers, LR, k-Nearest Neighbor (K-NN), MLP, SVM, NB, DT, and RF, were used for the classification of heart disease patients. Three feature selection algorithms, RelieF, mRMR, and LASSO, were used to select features highly correlated with the target class. It was observed that the classification performance of the models increased in terms of accuracy and computation time when using the feature selection techniques. The LR model showed the best accuracy of 89% when used with RelieF.
The main objective of the research in [24] was to predict heart disease using a minimal subset of features with adequate accuracy. To achieve this objective, the authors employed a two-stage feature subset retrieval technique. Three popular feature selection techniques (i.e., embedded, filter and wrapper) were used to extract a feature subset based on a boolean process with a common "True" condition. To select the suitable prediction model, RF, SVM, K-NN, NB, XGBoost and MLP models were trained on the data. The experimental results showed that the XGBoost classifier integrated with the wrapper technique provided the best prediction results for heart disease. A comparative analysis of different classifiers was performed in [25] for the classification of heart disease with minimal attributes. ML classifiers such as NB, LR, sequential minimal optimization (SMO), RF, etc., were trained for the accurate detection of heart disease. To obtain the optimal feature subset, RelieF, chi-squared and correlation-based feature subset evaluators were utilized. 10 features were selected from the set of 13 to train the classifiers. The SMO classifier achieved the highest accuracy of 86.468% when input with the optimal feature set obtained by the chi-squared feature selection technique.
Despite their relevance, one major drawback of existing works on heart disease prediction is the lack of systematic guidance when selecting the input features for the development of prediction models, which is an important aspect in terms of predictive performance. Previous research proposals chose features mostly in an ad hoc manner without incorporating the latest medical research findings. Mostly the focus is on the prediction models and their final prediction performance, while very little attention is paid to the correlation between different medical features and their individual importance in the prediction of heart disease. A few works present an analysis of medical features, but only for the purpose of heart disease detection. This research aims at addressing the ineffective feature selection in previous studies on heart disease prediction. Two heart disease patient datasets collected from different sources were utilized in this research to cover a broader study of features related to heart disease and to identify various medical procedures. To further analyze the role of each parameter in the prediction task, we obtain the interdependence and importance of the collected set of medical features. A detailed analysis of ML models trained on both the full and the selected feature set is provided to analyze the impact of feature selection techniques on prediction performance, as well as to identify suitable classifiers for the specified problem.
Proposed Methodology
This research paper highlights the importance of feature selection in the accurate classification of heart disease. Figure 1 demonstrates the workflow of the proposed methodology for heart disease prediction.
Datasets
In this research, two datasets, named cardiovascular disease (CVD) and Framingham, were utilized to study the impact of different features on the occurrence of heart disease and to develop an ML-based system for heart disease detection. The study uses two datasets to cover a broader study of medical features and the various clinical pathways used for the detection of heart stroke.
The datasets were collected from different sources. The datasets contain some key medical features like 'age', 'hypertension', 'glucose levels', 'blood pressure', 'cholesterol' etc., which are closely related to the occurrence of the disease and provide great flexibility for heart disease analysis. The datasets were chosen based on two criteria. The first criterion was the variance in the medical procedures, so as to study the different medical procedures and the role of each feature in the context of heart disease. Secondly, the datasets were chosen based on data availability. Datasets from different sources possess different amounts of data and collections of features, so we have chosen datasets which offer a good volume of data and have a level of similarity in terms of features.
CVD
The CVD dataset is controlled by McKinsey & Company and was part of their healthcare hackathon 2. The dataset is accessible from a free dataset repository 3. The collected dataset includes 29,072 patient observations with 12 data features. 11 of them are common clinical symptoms and are considered input features, whereas the 12th feature, 'stroke', is the target feature indicating whether a patient has had a stroke or not. The complete description of the data features for the CVD dataset is given in Table 1.
Framingham
The Framingham dataset was created during an ongoing cardiovascular study involving the residents of Framingham, Massachusetts, and is available on the Kaggle website 4. The dataset is mostly used in classification tasks to identify whether a patient has a chance of developing coronary heart disease (CHD) within 10 years. The dataset contains 4,240 patient records and 15 features, where each feature indicates a risk factor. 14 input features were used to predict the decisional feature, i.e., the 10-year risk of CHD. Table 2 shows the description of the data features in the Framingham dataset.
Pre-Processing
Data pre-processing is one of the important parts of the ML life cycle, as it makes data analysis easy and increases the accuracy and speed of ML algorithms [26]. We applied several pre-processing steps, as the collected datasets contained categorical features that had to be encoded as integers, e.g., smoking_status ("never smoked": 0, "formerly smoked": 1, "smokes": 2) and stroke ("yes": 1, "no": 0), as well as missing values. Missing entries can be handled with various data imputation methods [28,29]; however, only a deep knowledge of the specific disease will likely aid in the selection of suitable data imputation methods. As per the mentioned analysis, we dropped all the observations with null values from both datasets to avoid any accuracy biases.
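A minimal sketch of these pre-processing steps, assuming the data is loaded into a pandas DataFrame named df whose column names follow the mappings above (the exact pipeline used by the authors is not given in the paper):

```python
import pandas as pd

# Encode the categorical features with the integer mappings given above.
smoking_map = {"never smoked": 0, "formerly smoked": 1, "smokes": 2}
stroke_map = {"no": 0, "yes": 1}

df["smoking_status"] = df["smoking_status"].map(smoking_map)
df["stroke"] = df["stroke"].map(stroke_map)

# Instead of imputing, drop every observation that still contains a null value.
df = df.dropna().reset_index(drop=True)
```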
Furthermore, looking at the class distribution, both datasets were highly unbalanced. The unbalanced nature of the datasets leads to classification errors during the training of ML models [30]. As a result, we adopted a 'Random Down-Sampling' technique to mitigate the adverse effects caused by unbalanced data. We made two classes, referred to as the 'minority' and 'majority' classes. The patients with heart disease were included in the minority class, whereas the patients having no symptoms were included in the majority class. In the case of the CVD dataset, 548 observations were included in the minority class and the remaining 28,524 were considered the majority class. We created a balanced dataset of 1,096 observations by selecting all 548 observations from the minority class and 548 random observations from a total of 28,524 majority cases.
The same process was performed for the Framingham dataset, where the 557 minority observations were combined with 557 random observations drawn from the 3,101 majority cases, making a total of 1,114 observations in a balanced dataset. In this way, two balanced datasets were made to study feature importance and disease classification in an efficient manner.
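A sketch of this random down-sampling step (the function name and the assumption that the positive class is coded as 1 are ours, not from the paper):

```python
import pandas as pd

def random_downsample(df: pd.DataFrame, target: str, seed: int = 42) -> pd.DataFrame:
    """Balance a binary dataset by down-sampling the majority class."""
    minority = df[df[target] == 1]  # patients with heart disease
    majority = df[df[target] == 0]  # patients without the disease
    # Draw as many random majority observations as there are minority ones.
    majority_sample = majority.sample(n=len(minority), random_state=seed)
    balanced = pd.concat([minority, majority_sample])
    # Shuffle so the two classes are interleaved, then reset the index.
    return balanced.sample(frac=1, random_state=seed).reset_index(drop=True)

# CVD: 548 minority + 548 sampled majority rows -> 1,096 balanced observations
# balanced_cvd = random_downsample(cvd_df, target="stroke")
```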
Feature Correlation Analysis
Feature correlation is a method which helps in understanding the underlying relationships between the various data features present in a dataset. Feature correlation can be useful in many ways, such as determining the interdependencies between the data features and how each feature affects the output feature [31]. As per medical research findings, major changes can be observed in the heart and blood vessels with aging. For example, the heart cannot beat as fast during physical activity as it could at a younger age. These age-related changes may raise a person's risk of heart disease, according to the National Heart, Lung, and Blood Institute [32]. Hypertension is an established risk factor for stroke, ischemic heart disease and renal dysfunction [33]. Hypertension raises blood pressure above the normal range. Higher blood pressure levels make the arteries less elastic and decrease the flow of oxygen and blood towards the heart, which potentially leads to heart disease. Diabetic patients are more likely to develop heart disease at an earlier stage.
High blood glucose from diabetes affects the blood vessels and the nerves that control the heart, which can lead to heart disease [34].
Over time, this process can lead to a heart stroke.
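As a sketch of how such a correlation analysis can be carried out (assuming a pandas DataFrame df holding one of the balanced datasets; the paper does not specify the plotting code it used):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Pairwise Pearson correlation between all features (pandas' default method).
corr = df.corr()

# Heatmap of the correlation matrix; the row/column of the target feature
# shows how strongly each input feature is associated with the disease outcome.
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")
plt.title("Feature correlation matrix")
plt.tight_layout()
plt.show()
```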
Feature Selection
The main motivation of this research is to select the medical features that can improve the accuracy of heart disease prediction. Feature selection is the process of identifying, from the full feature space, the subset of features most relevant to the target. Here, a filter-based technique was used that scores each feature with the one-way ANOVA F-statistic,

F = [ Σᵢ nᵢ (K̄ᵢ − K̄)² / (S − 1) ] / [ Σᵢ Σₚ (Kᵢₚ − K̄ᵢ)² / (N − S) ],

where N is the overall sample size, S is the number of groups, nᵢ is the number of observations in the ith group, K̄ᵢ is the ith group sample mean, K̄ is the overall mean of the data, and Kᵢₚ is the pth observation in the ith out of S groups.
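A minimal sketch of this filter step using scikit-learn, whose f_classif scorer computes the per-feature ANOVA F-score described above (the variable names X, y and feature_names and the choice of k are placeholders, not values from the paper):

```python
from sklearn.feature_selection import SelectKBest, f_classif

def rank_features(X, y, feature_names, k=10):
    """Score every feature with the one-way ANOVA F-test and keep the top k."""
    selector = SelectKBest(score_func=f_classif, k=k)
    X_reduced = selector.fit_transform(X, y)
    scores = sorted(zip(feature_names, selector.scores_),
                    key=lambda pair: -pair[1])
    return X_reduced, scores

# X_reduced, scores = rank_features(X, y, feature_names, k=10)
# for name, f_value in scores:
#     print(f"{name}: F = {f_value:.2f}")
```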
Evaluation Metrics
We have used three popular performance evaluation metrics, i.e., Accuracy, F1-score and ROC, to evaluate the performance of the ML classification models [38]. A confusion matrix is a table that helps ML practitioners describe the performance of a classification model. The confusion matrix consists of four outcomes, true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), which are used to determine the performance metrics of a classifier and can be defined as:

TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
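These metrics can be computed directly with scikit-learn; a sketch (the helper name evaluate and the use of predicted probabilities for the ROC-AUC are our assumptions, not details from the paper):

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             roc_auc_score, confusion_matrix)

def evaluate(model, X_test, y_test):
    """Return the evaluation metrics used in this study for a fitted model."""
    y_pred = model.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "f1": f1_score(y_test, y_pred),
        "roc_auc": roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]),
        "tpr": tp / (tp + fn),  # true positive rate
        "fpr": fp / (fp + tn),  # false positive rate
    }
```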
Results and Discussions
In this section, we discuss the performance of the selected classification models from different perspectives. First, we checked the performance of the models individually for both datasets with full features to examine which models work well for each dataset. Secondly, we evaluated the performance of the models on the selected set of features to analyze the effect of the feature selection technique on the accuracy of the classifiers. The performance of the classifiers was checked using the Accuracy, F1-score and ROC evaluation metrics.
Classification results using full feature set
In this section, all the ML models were tested on both datasets using the full set of features to predict the binary disease outcome. We trained all the prediction models on the entire data, split into 80% training and 20% testing subsets. The overall computational speed during the training of the prediction models was 10.98 iterations per second (it/s) for the CVD dataset and 24.20 iterations per second (it/s) for the Framingham dataset. Tables 3 and 4 show the binary classification results of the ML models in predicting heart disease for both datasets.
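A sketch of this benchmarking loop, reusing the evaluate helper from the earlier sketch (the hyperparameters and the stratified split are our assumptions; the paper only states the 80/20 split and the list of classifiers):

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}

# 80% training / 20% testing split, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, evaluate(model, X_test, y_test))
```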
Looking at the classification results listed in Table 3, one might consider improving performance through further manipulation of the data [39,40]. However, any data manipulation strategy in medical studies may introduce significant biases, which is why we have kept all the feature values unchanged.
Classification results using reduced feature set
Given the goal of identifying potential bio-markers, we also analyzed the models on the reduced feature set. Feature selection not only reduces the size of the feature space, but it also improves the performance of ML models in various aspects.
Conclusion and Future Works
Heart disease is a highly fatal disease which is rapidly increasing and has become one of the leading causes of death around the world. In this work, experiments were performed with full as well as reduced feature sets to analyze the effect of the selected features on the prediction accuracy of various ML prediction models. Using the full feature set, the highest accuracy achieved was 0.73 for the CVD and 0.66 for the Framingham heart disease dataset. After using the reduced feature set, the accuracy increased to 0.75 and 0.71 for the respective datasets. The analysis showed that even after limiting the number of features, the ML models showed better performance compared to the models using the full feature set.
The experimental results reveal that by employing a feature selection technique, we may accurately classify heart disease even with a small number of features and in less time. We can conclude that using feature selection, only the most important features related to heart disease are selected, which reduces the computational complexity and improves the accuracy of prediction models. In future work, we will try to enhance the prediction accuracy by using a vast combination of ML and deep learning models [43] to obtain the best feasible model for heart disease diagnosis.
We will benchmark our analysis on additional datasets as part of our future work. We will also try to use more than one feature selection technique to obtain more feasible feature subsets that align more directly with medical studies.
| 4,753 | 2022-05-01T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Hubble’s Constant
It is difficult to imagine that barely 90 years ago, we were ignorant about what our universe looks like. In fact, it was believed that our universe was limited to the Milky Way. This did not change until American astronomer Edwin Hubble used his observations of Cepheids variable stars in spiral nebulae to calculate the distance to these objects. He did so by utilizing the relation between the period of the Cepheid (i.e., the time it takes for its brightness to oscillate) and its luminosity. Comparing the absolute luminosity of the Cepheids to the measured brightness, Hubble was able to obtain an estimate of the distance to these objects. The nebulae where the Cepheids were located were found to be well outside our galaxy. This finally settled the debate on the nature of these nebulae (which were initially named ‘island universes’) as it was agreed that they were galaxies just like the one we live in.
Despite this incredible achievement, Hubble's most famous contribution to cosmology had not come yet. He continued his study on distant galaxies, more specifically on the distance to them, by making use of the previously mentioned Cepheids method. In 1929 Hubble published one of the most iconic papers in the history of Astrophysics: "A relation between distance and radial velocity among extra-galactic nebulae". In said paper, he studied the link between the velocity at which the galaxies are moving away from (or towards) us and the distance that separates us. The results presented evidence for one of the greatest discoveries in science: the expansion of the universe. Hubble showed that most galaxies are moving away from us at a velocity that is proportional to the distance between us and the galaxy. A plot of these results can be found in Figure 1.
The discovery of the expansion of the universe is one of the greatest achievements of 20th century astrophysics, as it reveals a much deeper secret. One would expect that due to the gravitational force between galaxies, the expansion of the universe would slow down (and eventually reverse). However, in the late 1990s it was found that the cosmos is not just expanding, but it also does so at an accelerated rate. In other words, as the universe becomes larger, it grows faster. This led to the conclusion that 'something' had to be providing the energy to overcome the gravitational pull. Due to its unknown nature, this 'substance' was named dark energy.

Figure 1. Hubble's original plot of radial velocity against distance for extragalactic nebulae. "Radial velocities, corrected for solar motion, are plotted against distances estimated from involved stars and mean luminosities of nebulae in a cluster. The black discs and full line represent the solution for solar motion using the nebulae individually; the circles and broken line represent the solution combining the nebulae into groups; the cross represents the mean velocity corresponding to the mean distance of 22 nebulae whose distances could not be estimated individually" (Note: the velocity should be in kilometers per second).

Thanks to the discovery of dark energy, along with that of dark matter, it became very clear that our universe is made of many substances other than baryonic matter (i.e., the matter we are made of). In reality, regular matter only makes up about 5% of the cosmos, a number that rises to ~25% for dark matter, and ~70% for dark energy. The study of the composition of the universe led to the creation of the Standard Model of Cosmology, which explains our current understanding of the origin and evolution of the cosmos.
Theoretical Background:
In this section we shall discuss a variety of theoretical concepts (that are not necessarily related to each other) which will be of key importance to understand future discussions.
• Hubble's Law: In his 1929 paper, Edwin Hubble noticed a linear relation between the distance to galaxies (d) and their radial velocity (v) (see Figure 1). The general formula for this law is:

v = H(t) d,   (1)

where H(t) is the constant of proportionality, known as the Hubble constant. Despite its name, its value is not constant, but changes with time t. This is as expected from the expansion of the universe: if as time goes on the universe expands faster, we expect the value of the Hubble constant to increase. Normally, we use another form of Hubble's Law:

v = H₀ d,   (2)

where H₀ is the current value of H(t). This new formula provides an insight into the rate of expansion of the universe at our current time.

In terms of units, in equation (2) the velocity is usually measured in km s⁻¹ and the distance in megaparsecs (Mpc), so H₀ is expressed in km s⁻¹ Mpc⁻¹.
• Scientific Context
At the current time, there are two main methods to determine the value of the constant of proportionality H₀.
On the one hand, the first method relies on a phenomenon called redshift, which is a consequence of the Doppler effect. This effect is named after Christian Doppler, an Austrian mathematician who discovered that the frequency (and thus wavelength) of sound waves changes with the relative motion of the source with respect to the observer. This phenomenon can be extrapolated to any wave, including light. More specifically, when the source moves away from the observer, the wavelength of the wave increases, which in the case of light means that the spectrum is shifted towards reddish colors. The redshift, z, is defined like this:

z = Δλ/λ₀ = (λ − λ₀)/λ₀,   (5)

where Δλ is the difference between the wavelength of the observed light, λ, and that of the emitted light (λ₀). Additionally, for speeds much smaller than that of light (v ≪ c), we can define v as:

v = cz.   (6)

Using the concept of redshift, we can compare something whose appearance we know (like the emission spectrum of hydrogen, whose λ₀ we know) to what we measure (λ), to determine z. This way, we can get a decently accurate value for the speed at which a certain galaxy is moving away from us. To figure out the distance we can use a variety of methods. Among these, we can highlight the use of Cepheid Variables, Supernovae Ia (both of which will be discussed later), or any other technique from the distance ladder. Once both the distance and the speed have been worked out, it is possible to infer a value of the Hubble constant. The current consensus on such a value is about 73 km s⁻¹ Mpc⁻¹.

On the other hand, the second procedure of finding a value for the Hubble constant is built on a much more fundamental idea: our understanding of the Universe. Here comes into play the Lambda-CDM model (ΛCDM), which is used to describe what the Universe is made of and in what proportions. First, the Greek letter Λ stands for a cosmological constant associated with dark energy, a substance which we believe is intrinsic to space and makes its expansion accelerate. Secondly, CDM means Cold Dark Matter, a component of the Universe we cannot see but whose gravitational effects can be measured. Finally, ΛCDM also considers baryonic matter (i.e., ordinary matter). ΛCDM is a cosmological model that has been utilized to accurately predict many things, including the macroscopic structures of our Universe, or the amount of helium that was formed in the early Universe. Astrophysicists realized that, depending on how much of each substance the primordial plasma (i.e., the high-energy content of the early Universe) had before it started expanding, the final light this plasma would emit would be different. This light, which is essentially a snapshot of the early Universe, is called the Cosmic Microwave Background (CMB). By measuring the actual CMB, we can estimate the proportions of the main three components of the Universe: dark energy, dark matter and baryonic matter, whose abundances are about 70%, 25% and 5%, respectively. Using these values, we can figure out the rate of expansion of the Universe at our age, which yields a value of the Hubble constant of 67.4 ± 1.4 km s⁻¹ Mpc⁻¹.

Historically, when these two methods started to be used to measure H₀, they produced very high uncertainties (on the order of 5 km s⁻¹ Mpc⁻¹ or more), so the two values were compatible with each other; as the measurements have become more precise, a significant tension between them has emerged. Each side questions the other's measurement. ΛCDM supporters, for instance, argue that for the redshift method we have not measured a large enough number of galaxies, or that we are not taking into account possible gravitational interferences between galaxies, which would affect the value of the Hubble constant.
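As a small illustration of equations (5) and (6), a sketch in Python (the wavelengths in the usage comment are placeholders, not measurements from this work):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_obs: float, lambda_emit: float) -> float:
    """Recession velocity in km/s from observed vs. rest-frame wavelength.

    Valid for v much smaller than c, where v = c*z (equation 6).
    """
    z = (lambda_obs - lambda_emit) / lambda_emit  # redshift, equation (5)
    return C_KM_S * z

# e.g., an H-alpha line (rest wavelength 656.28 nm) observed at 660 nm:
# radial_velocity(660.0, 656.28) -> roughly 1.7e3 km/s
```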
• Distance Measuring -Cepheid Variables: Here we shall discuss the use of Cepheid variable stars to measure distances, as this method will become important later in the experiment. Cepheids are a type of variable star, i.e., a star whose luminosity oscillates periodically with time. As mentioned above, the period of oscillation of the brightness of a Cepheid is related to its average luminosity or absolute magnitude.
This period-brightness link is known as Leavitt's law, named after Henrietta Leavitt, its discoverer. For the visible part of the spectrum, this relation becomes:

M = 1.371 ± 0.095 − (2.986 ± 0.094) log₁₀(P),   (3)

where M is the absolute magnitude in visible light, and P is the period measured in days. If one measures the time difference between consecutive peaks (or troughs) in brightness, a value for the period can be obtained. Plugging said value into equation (3), the average absolute magnitude is found. Now all that must be done is to compare it to the apparent magnitude m, i.e., the magnitude of the object as measured from the Earth. To do so, we must use the following formula:

m − M = 5 log₁₀(d) − 5,   (4)

where d is the distance to the star in parsecs. Solving for d, an estimate of the distance to a Cepheid can be obtained. This is the method used by Edwin Hubble in the early 1900s to find the distance to nearby galaxies.
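A sketch of this calculation in Python (the example numbers in the usage comment are purely illustrative):

```python
import numpy as np

def cepheid_distance(period_days: float, apparent_mag: float) -> float:
    """Distance to a Cepheid in parsecs from its period and apparent magnitude."""
    # Leavitt's law (equation 3), central values only: absolute magnitude.
    M = 1.371 - 2.986 * np.log10(period_days)
    # Distance modulus (equation 4): m - M = 5*log10(d) - 5, solved for d.
    return 10.0 ** ((apparent_mag - M + 5.0) / 5.0)

# e.g., a Cepheid pulsing with a 10-day period and apparent magnitude 12:
# cepheid_distance(10.0, 12.0) -> about 5.3e3 parsecs
```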
• Distance Measuring -Type IA Supernovae: Supernovae are massive explosions of a star and are considered the biggest explosions that take place in space, as they can be even brighter than a galaxy. They occur at the latest stages of a star's life. Normally, most supernovae happen when the star runs out of fuel to feed the nuclear fusion reactions that take place at its core. These reactions exert an outward pressure that counters gravity. However, when there is no more fuel for these reactions, the lack of outward pressure leads to the collapse of the core of the star, which eventually results in the explosion of the star itself.
Nevertheless, this process is not always the cause of the explosion of stars.
There is a type of supernova that only takes place under very specific conditions: type Ia supernovae. These are thought to originate in binary systems consisting of a white dwarf and a moderately massive companion star (more massive than the white dwarf). If these two are too close, the tidal forces exerted by the white dwarf can become stronger than the gravitational force keeping the companion star together. If this happens, the former will rip material away from the latter, which will be accreted onto the white dwarf. However, if the mass of the dwarf exceeds the Chandrasekhar limit (i.e., the maximum mass a white dwarf can have before it becomes unstable, with gravity overcoming the outward electron degeneracy pressure), the star will go supernova. These explosions are the brightest of any kind of supernova, reaching an absolute magnitude of ~−19.5 at peak luminosity.
Methods:
• Data Collection: Galactic Surveys: The Center for Astrophysics (CfA) is an ongoing collaboration between the Smithsonian Astrophysical Observatory and Harvard College Observatory founded in 1973 in Cambridge, Massachusetts. This joint project had the objective of mapping the large-scale structure of the universe.
From 1977 to 1982, the first major galactic survey was made, aiming to measure the radial velocities of the brightest galaxies (those with apparent magnitudes below 14.5) in the nearby universe: "This survey produced the first large area and moderately deep maps of large-scale structures in the nearby universe, as well as the first crude but truly quantitative measurements of the 3-D clustering properties of galaxies". The procedure followed was to use the redshift of the observed light to calculate the radial velocity of the galaxies (equations 5 and 6) and link this to the distance to the galaxies using Hubble's law (equation 2). Thankfully for us, this survey initially looked at some nearby galaxies to find a value of the Hubble constant with which to work out the distances to the farthest galaxies (whose distance cannot be measured with conventional methods such as the use of Cepheid Variables or Type Ia supernovae).
This data was a list of observed galaxies with their respective radial velocities. For some of these galaxies, the distance value was included, providing all we needed to find H₀.
• Data Analysis:
In this section we will explore how the data from the survey was analyzed to find a value for the Hubble constant and obtain a plot of the galaxies' radial velocities versus their distance from us.
Firstly, because the data from the survey was incomplete, i.e., for some galaxies it was not specified how far away they are, we had to filter out those galaxies whose distance was unknown. From this new set of galaxies, we just had to extract their respective values of velocity and distance.

Figure 6. Plot of raw galactic data of distance and radial velocities. Note that there are several clear outliers, e.g., a few galaxies with negative velocities.
However, there was a problem with our data: there were several outliers. In other words, some galaxies had abnormal velocities that could not be explained with Hubble's law. Most of these were galaxies that were very close and had negative radial velocities (because the gravitational pull from the Milky Way was able to overcome Hubble's expansion). Nevertheless, there were some other galaxies that were very far away and still had abnormal velocities. This is most likely explained by gravitational interactions with neighboring galaxies. Because these datapoints were not useful for studying Hubble's law, they had to be discarded. We shall discuss how this was done in a moment.
With a set of velocities and distances, a fit could now be performed. To do this, we used the NumPy library in Python, more specifically the function numpy.polyfit(), which fits a polynomial of a specified degree through the data. The nth-degree fit function is thus:

f(x) = a₀ + a₁x + a₂x² + ... + aₙxⁿ.

In our case, since the relation between the velocity and distance is linear, a first-degree polynomial was expected to fit the data. So, in reality:

f(x) = a₀ + a₁x.

Therefore, the expected output of the fit is two parameters: the y-offset a₀ and the slope a₁ of the linear function, though we are only interested in the latter.
Numpy.polyfit() works by minimization of the squares (also known as a least-squares fit). Essentially, it minimizes the sum of the squared differences between each datapoint and the fit function's value at that point, through all datapoints:

S = Σᵢ (vᵢ − f(dᵢ))²,   (7)

where vᵢ represents each velocity value, and f(dᵢ) represents the polynomial evaluated at the corresponding distance value dᵢ. In other words, numpy.polyfit() tries a whole range of parameters a₀ and a₁ until it finds the ones that minimize equation (7). Additionally, this NumPy function can also return the corresponding uncertainty on said parameters.
Having now a fit through all the data, the outliers must be filtered out. To do this, all points where the absolute value of the difference between the velocity and the fit at that point was greater than 0.5 times the velocity were removed. In mathematical terms, all points where the following condition was met were deleted:

|vᵢ − f(dᵢ)| > 0.5 vᵢ.   (8)

With all outliers now removed, it is time to perform a second linear fit, with more accurate results for the slope and the error on the slope. This was done following the same procedure explained earlier.
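A sketch of this two-pass procedure (the array names d and v are placeholders for the distance and velocity columns extracted from the survey):

```python
import numpy as np

def hubble_fit(d: np.ndarray, v: np.ndarray):
    """First-degree least-squares fit v = a1*d + a0, with outlier rejection."""
    # First pass: fit through all datapoints.
    coeffs = np.polyfit(d, v, deg=1)  # returns [slope, intercept]
    residuals = v - np.polyval(coeffs, d)
    # Equation (8): discard points where |v - f(d)| > 0.5*v.
    keep = np.abs(residuals) <= 0.5 * v
    # Second pass: refit on the cleaned data and request the covariance matrix.
    coeffs, cov = np.polyfit(d[keep], v[keep], deg=1, cov=True)
    return coeffs, cov, keep

# coeffs, cov, keep = hubble_fit(d, v)
# The slope coeffs[0] is the estimate of H0 (in km/s per distance unit).
```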
• Error Estimation:
One of the main advantages of using the NumPy function polyfit() is that it can be ordered to return a covariance matrix for the fit parameters.
In the case of a linear fit, i.e., f(x) = a₀ + a₁x, the covariance matrix looks like this:

C = [[σ²(a₁), cov(a₁, a₀)], [cov(a₀, a₁), σ²(a₀)]]

(numpy.polyfit returns the parameters ordered from the highest degree down, so the slope a₁ comes first). However, we are only interested in the error on the slope. This is calculated the following way:

σ(a₁) = √C₀₀,

i.e., the square root of the diagonal entry of the covariance matrix that corresponds to the slope.
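Continuing the sketch above, the slope and its one-sigma uncertainty can be extracted from numpy.polyfit's output like this:

```python
import numpy as np

def slope_and_error(coeffs: np.ndarray, cov: np.ndarray):
    """Slope (our H0 estimate) and its 1-sigma uncertainty from a linear polyfit."""
    # numpy.polyfit orders coefficients from the highest degree down,
    # so the slope is coeffs[0] and its variance is the top-left matrix entry.
    return coeffs[0], np.sqrt(cov[0, 0])

# H0, H0_err = slope_and_error(coeffs, cov)
# print(f"H0 = {H0:.1f} +/- {H0_err:.1f}")
```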
Results:
As explained earlier, the results for the parameters of the fit were estimated by the minimization of the squared error (equation 7). In this case, we are interested in the parameter corresponding to the polynomial term of first order, i.e., the slope; one can better understand how this is done by looking at the corresponding figure. Having determined a reasonable value of the slope, a plot of the datapoints and the linear fit was made, resulting in the following figure:

Figure 8. Plot of the galaxies' data obtained from the survey (in black), along with the linear fit (in red) that minimizes the sum of the squared differences.
As one can see, in this plot the outliers are no longer present, and the remaining datapoints fall in the vicinity of the linear function very well. However, because of how the filter was constructed (see equation 8), at the left end of the plot there are many datapoints that are rather far away from the fit. This is due to the fact that in this region the velocity is very small, making it easy for some points to get past the initial filter. This is partially the reason why the bulk of the galaxies on the plot is in this region. We now must take a look at the residuals plot in Figure 9, which represents the difference between the velocity value and the fit at each point. This plot further supports the point made earlier that the filter is not very effective at low velocities, as one can see that this is where the highest residuals are actually located.

Figure 9. Plot of the residuals against distance for all galaxies.

Discussion:
Results and Procedure:
As outlined in the previous section, the results were quite satisfying, since the datapoints fall very precisely on our linear fit. However, the uncertainty obtained on our value of the Hubble constant is very small, which places our experimental value 5.97 standard deviations away from the accepted value. According to a normal distribution, the likelihood of getting a difference of 5 standard deviations or higher is of order 10⁻⁷. This clearly indicates that the uncertainty has been severely underestimated. Therefore, we concluded that the method described above to determine the error on the Hubble constant should be replaced by some other procedure that does not yield such an underestimate.
Additionally, it is worth mentioning that, as can be seen in Figure 9, the residuals (difference between measured velocity and fit velocity) are a lot bigger at the left end of the plot than they are at the right end. This is, as discussed earlier, most likely due to the way the outliers were filtered out. Because of the procedure that numpy.polyfit() uses to calculate the error on the slope, this may potentially have impacted the resulting uncertainty on the Hubble constant. For all these reasons, it would be convenient to utilize some other method to identify and remove the outliers if the experiment is to be repeated.
Potential Improvements:
There are several changes that can potentially be introduced to obtain better and more precise results. These include the already mentioned: utilizing a different method to find 0 and making use of a different filtering method for the outliers. However, there are several more ways we could improve our results.
The most obvious potential enhancement is utilizing more datasets. That is, obtaining more galaxies' data from different galactic surveys. An instance of this could be the Hubble Legacy Archive (HLA). This platform provides access to most observations made by the Hubble Space Telescope, in several types of file formats. Additionally, it also provides a tool to calculate the radial velocity of each galaxy based on the observed wavelength of the light coming from the galaxy. More information on the HLA can be found on: https://hla.stsci.edu/.
Additionally, another potential improvement would have been to calculate the distance to the galaxies ourselves, instead of relying on the distances provided by the CfA galactic survey. To do this we could have made use of a variety of methods, an instance of which is main sequence fitting. This procedure relies on the Hertzsprung-Russell (HR) diagram. Without going into much detail, this is a representation of several star types. An instance of the HR diagram is the following figure:

Figure 10. Example of the HR diagram.
This diagram plots the absolute magnitude of stars (or, in the case of Figure 10, their luminosity on a logarithmic scale, which is equivalent) against their spectral type. Evolutionary patterns have been shown to relate to the mass, age and composition of the star, which allows us to classify stars into several types. The principal group among these is the main sequence, formed by stars in their hydrogen-burning phase. If a star falls in this group and its spectral properties are measured, its absolute magnitude can be estimated, which can be compared to its measured apparent magnitude (using equation 4) to work out the distance to the star.
Importance of the Experiment:
This experiment was performed in the hope of finding a value of the Hubble constant that agrees with the theoretical value. And while the result was closer to the value predicted by the ΛCDM model than to the accepted experimental one, it is logical to think that if all the improvements explained above were implemented, we would have obtained a result within the range of the experimental redshift-based H₀.
Nevertheless, there is some hope, as new techniques of measuring 0 are being developed, such as the tip of the red-giant branch, megamasers or even gravitational waves. Hopefully, these alternative methods will yield new values of the Hubble constant which will help us determine which of the two current values is more accurate.
If the Standard Model of Cosmology (ΛCDM) turns out to provide the correct value of the Hubble constant, astrophysicists will probably have to improve and change the experimental procedure for determining H₀, or account for external phenomena (some of which we may not even know about). On the other hand, if the redshift method is the one that is right, this might mean that we must rethink our understanding of the Universe. This could involve changing some parameters of the distributions of dark energy, dark matter, and baryonic matter; or maybe finding new ways the universe expands; or even something we simply cannot imagine right now.
Even though this current situation might seem quite frustrating, it shows how little we know about the place we inhabit. Furthermore, we must remember that pretty much every single major scientific discovery has had a wave of confusion and disagreement as a precedent (such as the origin of species and evolution; the nature of atoms; quantum mechanics and relativity, etc.). Therefore, for all we know, this situation could very well be the precursor to another outstanding scientific revolution.
| 5,559.4 | 2022-01-01T00:00:00.000 | [
"Physics"
] |
The urban and regional impacts of plant closures: new methods and perspectives
ABSTRACT Work on large-scale plant closures has provided a rich vein of scholarship and academic debate. This paper articulates a new set of methods and concepts for understanding how large-scale redundancies associated with the closure of manufacturing plants affects society and the economy at the local, regional and national scales. It posits the need for a more comprehensive exercise in data collection and experimentation with previously unused methods, including the application of discrete-choice experiments in order to understand better the choice and decision-making frameworks adopted by affected workers. The paper argues there is a need to integrate community-wide policy responses into the core of the analyses.
INTRODUCTION
Plant closures, and the associated large-scale displacement of workers, remain an enduring feature of both developed and developing economies (Bailey, de Ruyter, Michie, & Tyler, 2010; Bailey, Kobayashi, & MacNeill, 2008; Pfeiffer & Chapman, 2010; Pike, 2005). The process of economic adjustment within the global economy (Martin, Tyler, Storper, Evenhuis, & Glasmeier, 2018) and nationally can see both individual businesses close and whole industries cease operation (Beer, 2018). Understanding the drivers for, and impacts of, plant closure has been an important theme in regional and urban research. Shutdowns generate important questions of public policy (Bailey & de Ruyter, 2015; Productivity Commission, 2014a), as well as more theoretically informed analyses as researchers seek to understand how communities respond and look to unpack the implications for the functioning of contemporary labour markets (Bailey, de Ruyter, & Chapain, 2012; MacKinnon, 2017; Weller, 2008). Other research has examined the impacts of public policy responses, especially the effectiveness of labour market assistance post-redundancy (Armstrong, Bailey, de Ruyter, Mahdon, & Thomas, 2008; Bailey, Bentley, de Ruyter, & Hall, 2014). This research has sought to understand the impact of government responses that are informed by a 'workfare' approach to social policy, or which are part of a broader neoliberal approach to policy.
Work on large-scale redundancies has provided a rich vein of scholarship and academic debate, including much-cited work by Pike (2005), Fagan and Webber (1994), Healey (1982), and Watts and Kirkham (1999). Researchers have often relied on cross-sectional or short-term longitudinal surveys to shed light on the employment outcomes for different subgroups of affected workers (Webber & Campbell, 1997). Other work has applied a range of econometric analyses to investigate the impact of particular conditions, such as the presence of industrial subsidies and unions, on the outcomes of plant closures (Productivity Commission, 2014a, 2014b) or the health status of workers (Zeirsch, Baum, Woodland, Newman, & Jolley, 2014). This paper articulates a new set of methods and concepts for understanding how large-scale redundancies associated with the closure of manufacturing plants affect society and the economy at the local, regional and national scales. It posits the need for a more comprehensive exercise in data collection and experimentation with previously unused methods, including the application of discrete-choice experiments, in order to better understand the decisions made by affected workers. The paper also argues the need to integrate community-wide responses into the core of the analyses. Too often, research has taken an atomistic approach to understanding how individual workers and their families are affected, in the process ignoring community dynamics and responses.
The closure of the Australian car-making industry is the lens through which these ideas, and the need for a new set of methods, will be explored. The paper is structured as follows. It next examines the changing nature of work, and how globalization, alongside the forecast impacts on employment of artificial intelligence (AI) and associated technologies, is reshaping labour market opportunities globally. It considers some of the forecasts for future industries and employment before moving on to examine the details of the closure of car manufacturing in Australia. It examines the unique circumstances that resulted in three major car manufacturers (General Motors Holden (GMH), Ford and Toyota) ceasing operations nationally within a 16-month window. The paper then turns to examine ways to develop more robust insights into large-scale changes in labour markets, drawing on key debates in the literature to argue for more comprehensive longitudinal data collections, the application of theoretically informed qualitative data collections, community-wide analyses of responses to change and a detailed focus on the choices, forced and voluntary, made by affected workers and their families. We argue that the increased complexity of contemporary working environments calls for a more integrated analytical approach, one which can assign agency to governments, firms and individuals.
A CHANGING WORLD OF WORK
Over the past decade, the ongoing drive to improve productivity has changed many parts of national economies and their constituent industries (Gilpin, 2018). Technological innovations in networked, automated AI and associated robotics have been profound, with many arguing they will transform work and employment over the coming decades. The impacts are expected to be far reaching with workers in a range of unskilled, semi-skilled and highly skilled occupations supplanted by new technologies, resulting in disruptions equivalent to those evident with the onset of the Industrial Revolution in the 18th century (WEF, 2016).
Technological change is now said to threaten entire professions and has been heralded by some as foreshadowing a 'jobless future'. For example, the Australian Industry Report (Australian Government, 2014) concluded 500,000 jobs in Australia could soon be automated, while Frey and Osborne (2017) predicted half of all employment in the UK could be replaced by robotics. However, there is some debate about the extent of these impacts. Chester (2018) has provided a more conservative estimate of the risks of employment loss associated with automation, suggesting 9% of jobs in Australia are at risk, and that while 'manual and routine cognitive jobs have fallen as a proportion of jobs from 50 per cent to 37 per cent; non-routine manual and non-routine jobs have increased from 42 per cent to 53%' (p. 5). Conversely, Deloitte (2014) suggests that one-third of the Australian economy faces impending digital disruption, a 'short fuse, big bang' scenario, with white-collar jobs (accountants, lawyers, bank tellers and supermarket staff) threatened by machine intelligence. On the other hand, technological change could revolutionize manufacturing in advanced economies, including Australia (Vecchi, 2017), as mass customization becomes the norm, and short production runs of high-quality, high-design and high-value goods enable the reshoring of manufacturing employment. However, if these new jobs are, as current indications suggest, mainly part-time, limited tenure and intermittent in nature, then the world of work will change, and households will need to adjust the ways in which they make their way in the world.
Governments and firms are increasingly challenged to develop better responses to changing economic conditions and labour markets. Businesses seek to ensure the well-being of their current and former employees, while governments recognize the need to find new strategies to assist individuals and places affected by industry transition (Productivity Commission, 2017). Globally, the need to adapt to the anticipated technology-led 'fourth industrial revolution' is challenging governments to investigate new forms of policy and innovative interventions in labour markets, training and education (WEF, 2016). Private sector organizations also need to find a way through this new economic and political landscape: firms are increasingly held to account for their social, environmental and community impacts, and this focus on corporate social responsibility challenges individual firms to answer for their decisions and actions. Technological innovation appears to be displacing globalization as a key driver of industrial restructuring and job displacement globally. In studies of value chains and production networks, for example, interest is shifting to lead firms' consolidation of production chains and the associated reorganization of work and technology (Bamber, Brun, Frederick, & Gereffi, 2017; Gereffi, forthcoming).
Over the past three decades, there has been a small number of investigations into the impacts of large-scale restructuring in Australia and the associated impacts on workers and their households. In large measure, these have been conventional analyses drawing upon the intellectual traditions of a small number of academic disciplines, including public health, economics and geography. Importantly, most have been undertaken in the context of one industry participant closing and being supplanted in the marketplace by an alternative local business. They have also taken place within a largely unchanging industry structure in which conventional employment options are potentially available for displaced workers. Of these major studies, the first was undertaken in the late 1990s by researchers examining closures within the textile, clothing and footwear (TCF) industry (Webber & Weller, 2001; Weller, 2000a, 2000b). A second study undertaken in the early 2000s examined employment and other outcomes for workers made redundant by the closure of a major airline (Weller, 2008, 2009, 2012). Finally, several papers were produced on the impacts of the closure of Mitsubishi's Lonsdale engine-making and components plant in southern Adelaide (Beer et al., 2006; Beer & Thomas, 2007; Verity & Jolley, 2008).
The investigation of the Mitsubishi Motors Australia Ltd (MMAL) closure found employment outcomes for retrenched workers were unfavourable and compared poorly with a similar closure in the UK (Bailey et al., 2010). After three years, one-third of workers from MMAL had left the workforce, one-third had found full-time work and one-third were either unemployed or underemployed. The majority of workers who secured employment post-redundancy reported lower incomes and that redundancy had adversely affected their health (Zeirsch et al., 2014). The study found few workers were willing or able to relocate to find employment, with most seeking jobs in the manufacturing or mining sectors. Moreover, many who gained a job were made redundant again within three years, but few undertook further education or training to increase their employability. Housing tenure had an impact, with tenants more likely to find work and outright owners prone to depart the workforce (Beer, 2008). Later research found that the community as a whole paid a price for the plant closure, through reduced incomes, lower levels of employment in the well-paid manufacturing industry and a greater reliance on lower paid industries (Beer, 2015). This outcome was consistent with earlier analysis that suggested there were significant deficiencies in how the Australian government responded to the closure (Beer & Thomas, 2007).
These three studies represent an important empirical and conceptual contribution to the understanding of industry change and the impacts of economic shocks on communities, workers and economic systems. In large measure, however, they examined a process of change that is more typical of the 20th century than of the 21st. Governments can no longer assume manufacturing workers will move to other manufacturing jobs; nor will full-time workers necessarily find continuing, permanent or full-time employment again. The technological changes emerging in the second decade of the 21st century present profound challenges for both policy and theory; for governments and for the conceptualizations of scholars. It is important to acknowledge that shifts in the world of work present a significant challenge to the economy as a whole, the affected communities and individual workers and their households. Recent work by Beer (2016) suggests automation and shifts in employment opportunities are leading displaced workers and their communities to doubt both their ability to find further employment and the economy's capacity to create meaningful work. Similarly, Turnbull and Wass (2000) argue a worker's experience of involuntary separation shapes their perception of the future labour market and job security. The experience of unemployment compounds this sense of unease and contributes to disengagement from the world of work.
Evidence in Australia of entrenched long-term and intergenerational welfare dependence has highlighted the cost of worklessness and the consequent erosion of mental health (Perales et al., 2014). Work on how the economies of places evolve over time (Boschma & Martin, 2007) has provided the intellectual foundation for recent writing on the 'branching' of workers into new forms of economic activity (MacKinnon, 2017), and on the ways in which post-retrenchment pathways are shaped by both local opportunities and previous decisions. This perspective acknowledges that workers face pressure to remain economically active to sustain themselves and their households, while also seeking outcomes that maximize their (existing or to-be-acquired) skills (MacKinnon, 2017).
THE RISE AND DECLINE OF THE AUTOMOTIVE INDUSTRY IN AUSTRALIA
Australia was home to an automotive manufacturing industry for 70 years, and its demise is indicative of globally evident challenges to car-making across the world. The ongoing, transnational challenge facing this industry was highlighted by General Motors' (GM) announcement in late 2018 that it would mothball five plants and terminate 14,000 jobs (Shih, 2018). A number of worrying trends for established car producers have been evident over the past two decades. There has been a long-established movement of car production to the Global South, where labour costs are lower, and these new plants in lower cost nations have increased competition within an already saturated market, which in turn has threatened established manufacturers. The automotive sector has also remained fragile since the economic shocks of 2008 and 2009, reducing the resilience of the major corporations and making them more inclined to terminate loss-making operations. Critically, the original equipment manufacturers (OEMs) are, perhaps for the first time, threatened by digital disruption, with software firms such as Google and Microsoft potentially winning the race to develop driverless vehicles and, along the way, relegating conventional car manufacturers to the role of mere suppliers in a larger 'mobility solution' for the next generation of consumers.
The Australian passenger vehicle industry was established in 1948 when the first car rolled off the GM assembly line. Other car manufacturers soon entered the Australian market, with local production a necessary device to overcome the substantial tariff barriers that protected virtually all segments of industrial and agricultural production. The major, predominantly US-headquartered, automotive producers established plants throughout Australia, with some simply assembling cars from imported components, while others designed and built cars in their totality. From 1973, tariff protections were reduced as the Whitlam government sought to create a more modern economy (Emmery, 1999). Under the Hawke governments of the 1980s, a state-auspiced rationalization programme encouraged car-makers to source similar components from local suppliers, effectively binding the lead firms to a common trajectory and industry structure. The OEMs were fierce competitors, but local plants relied on a limited pool of suppliers to provide the components they needed. This dependence on second- and third-tier providers made the industry vulnerable to change, and eventually resulted in cascading closures as OEMs and supplier firms ceased production one after the other.
By the end of the 20th century, protection for the car industry in Australia had fallen significantly, but tariff rates remained close to 20%. When tariffs across the economy were set at 5% by 2005, the impacts on automotive manufacturing were cushioned by alternative government supports such as the Automotive Industry Structural Adjustment Program (AISAP) and the Automotive Competitiveness and Investment Scheme (ACIS). The ACIS alone cost the Australian government A$7 billion over the period 2001-11. In the early 2000s, the Australian economy entered a period of unexpected prosperity as growth in China created a surge in demand for Australian commodities, especially iron ore and coal. The impact on Australia was profound, with labour shortages reported in many regions and industries, and it resulted in increased investment in fixed assets such as housing, and profound wage rises in booming sectors. One effect of this resources boom was the rising value of the Australian dollar, which reduced the local car industry's competitiveness in both domestic and export markets.
In 2004, MMAL closed its Lonsdale engine plant in southern Adelaide (Beer & Thomas, 2007), and four years later announced the shutdown of its Tonsley Park assembly line. In 2007 and 2008, a decision by Ford to close its Geelong operations was reversed after a federal government intervention. But this, of course, turned out to be a temporary reprieve, with an industry-wide shutdown evident just six years later. In early 2014, Toyota Australia announced it would cease the manufacture of vehicles in Australia. This news came on top of the announcement by GMH in December 2013 that all car production would end in 2017, while Ford Australia made public in May 2013 its intention to close its production facilities (Beer, 2018). In just 24 months, all remaining elements of the car manufacturing industry in Australia signalled their departure, putting in question tens of thousands of jobs. Widely cited estimates include the Productivity Commission's forecast of 40,000 direct and indirect job losses nationally from 2013 to 2018 (Productivity Commission, 2014a) and the Federal Chamber of Automotive Industries' (FCAI) (2013) forecast of 50,000 job losses, although some have made estimates as high as 100,000 job losses (National Economics, 2014). Employer interviews in Adelaide's northern suburbs indicated significant direct and indirect impacts were expected at the local level (Ranasinghe, Hordacre, & Spoehr, 2014). Importantly, the closures announced in 2013 and 2014 did not represent a radical change in trajectory: instead, they followed a well-established pattern of exits by major producers, with Mitsubishi closing in 2008 (Beer, 2014), Nissan ceasing production in 1992, Chrysler terminating local car building in 1981 and Leyland Australia closing in 1974. Effectively, seven decades of mass car production in Australia ended in 2017, resulting in large-scale labour market disruption, community uncertainty, shocks to regional economies and considerable challenges for government agencies at the national, state and local levels in seeking to manage this process of change.
Recent scholarship has sought to understand why the Australian car industry came to an end over such a short period of time (Beer, 2018). A number of explanations have been put forward: Nieuwenhuis and Wells (2015) argued that Australian policy, especially the withdrawal of subsidies, was the key factor behind the closures. The US government's support for the 'reshoring' of manufacturing (Vecchi, 2017) reinforced the low priority that US-based automotive manufacturers assigned to their Australian operations, while other nations, including Canada, provided capital subsidies designed to drive investment into their plants rather than elsewhere (Yates, Sweeney, & Mordue, 2017). Shifts in the market were also a factor: Australian producers remained wedded to the production of large sedans, while production in more promising segments, such as sports utility vehicles (SUVs), went to Korean or other Asian producers. The limited scale of car production in Australia was an additional factor in the decline of the sector. Australian car plants were simply too small to be viable: at its peak, the GMH plant at Elizabeth produced 160,000 vehicles per annum, well short of the 250,000 units per annum considered the industry minimum in the 21st century (Orsato & Wells, 2007). Other manufacturers in Australia had even lower production volumes: Ford produced fewer than 70,000 units per year and Mitsubishi 35,000 cars annually.
Finally, it is important to acknowledge that the Australian automotive industry has long occupied a position on the very margins of global production networks, and that Australian manufacturing contributed few technological advances to the global automotive sector (Beer, 2018). Key decisions on investment in, and the future of, the industry were made in New York or other global capitals, and the heavy reliance on US-based producers, who had become vulnerable in the aftermath of the 'great recession', further escalated the level of risk confronting the Australian industry.
TOWARDS A NEW UNDERSTANDING OF THE IMPACTS OF PLANT CLOSURE
Contemporary economies in the developed and developing worlds continue to experience profound change. In many nations the production of services, rather than goods or commodities, has become the most important sector of the economy. New phases of accumulation have been driven by digital disruption and are challenging existing industries in the transport, accommodation and professional services sectors. Manufacturing remains important for many economies, but the innovations of Industry 4.0 (WEF, 2016), driven by ongoing inputs of design, new technology and mass customization, are revolutionizing the production process. While many industries remain robust, others appear to be at risk, and even in prosperous sectors individual enterprises are vulnerable. These changes instigate knock-on effects that permeate throughout the labour market. Governments increasingly prioritize skills acquisition for retrenched workers to equip them to work in 'in-demand' services sectors, and there appear to be few alternative responses in an era where the remaining large-scale manufacturing plants are unlikely to be replaced. This social and economic transformation generates new questions for research: at a broad level, policy-makers and research scholars alike share a need to understand how new technologies will reshape labour markets and access to meaningful employment. At a more detailed level, better data are needed on how displaced workers navigate a labour market post-redundancy, a focus that recognizes the agency of workers and their households as well as the structural impact of broader economic circumstances.
For workers, large-scale redundancies associated with plant closures bring the uncertainties of future employment to the fore. Many workers struggle to comprehend how they will 'fit' into a labour market that is transforming under the weight of the rising incidence of 'precarious' work (Standing, 2011), labour-displacing innovations in digital and robotic technology (CEDA, 2015) and the rise of 'platform' and 'gig' economies (Flanagan, 2017). Reflecting on their experience, workers may be reluctant to take on training, doubting that it will enhance their long-term employment prospects, or they may not warm to new employment options in the care professions, such as those in the disability sector or aged care, despite strong immediate job prospects. Increasingly, displaced workers face futures of less secure work in poorer quality jobs. In addition, redundancy and the associated unemployment have an impact on mental and physical health (Bohle, Quinlan, McNamara, Pitts, & Willaby, 2015), household income (Beer et al., 2006) and the well-being of children (Newman & McDougall, 2009). Nonetheless, we still know too little about how workers leaving skilled and unskilled employment in manufacturing and related sectors find jobs, re-establish careers and sustain themselves and their families. In Australia, this challenge is especially acute as previous studies are more than a decade old (Beer et al., 2006; Webber & Weller, 2001) and much has changed in the labour market and economy over the intervening period. The international literature is no more advanced than Australian scholarship in addressing these questions (Beer, 2016; MacKinnon, 2017). Over the past five years there has been substantial growth in the use of administrative data sets, sometimes created as linked data sets from across a number of government portfolios (Rafi, 2017). These new forms of analysis have shed new light on the size and direction of the spillover effects associated with plant closure (Gathman, Helm, & Schonberg, 2014; Jofre-Monseny, Sanchez-Vidal, & Viladecans-Marsal, 2017), the efficiency of government expenditures in ameliorating the impacts of redundancy (Rafi, 2017), and the advantages and disadvantages associated with occupational mobility post-redundancy (Eriksson, Hane-Weijman, & Henning, 2018). However, as retrospective investigations they offer limited insights into the emerging world of work, or non-work, for displaced workers.
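To make the linkage step concrete, the sketch below shows, in Python with pandas, the kind of join on which such analyses typically rest: a redundancy register merged with quarterly earnings records on a common person identifier, then summarized into pre- and post-closure wage outcomes. All identifiers, field names and values are hypothetical illustrations, not drawn from any actual administrative collection.

```python
import pandas as pd

# Hypothetical extracts from two government portfolios; names are illustrative.
redundancies = pd.DataFrame({
    "person_id": [1, 2, 3],
    "closure_date": pd.to_datetime(["2017-10-20"] * 3),
    "plant": ["Elizabeth", "Elizabeth", "Altona"],
})
earnings = pd.DataFrame({
    "person_id": [1, 1, 2, 3, 3],
    "quarter": pd.to_datetime(["2017-09-30", "2018-03-31", "2018-03-31",
                               "2017-09-30", "2018-03-31"]),
    "wages": [18000, 9500, 0, 17500, 16800],
})

# Link the two portfolios on the common person identifier.
linked = redundancies.merge(earnings, on="person_id", how="left")

# Flag each earnings quarter as falling before or after the closure.
linked["period"] = (linked["quarter"] > linked["closure_date"]).map(
    {False: "pre", True: "post"})

# Compare average quarterly wages before and after the closure date.
outcome = linked.pivot_table(index="person_id", columns="period",
                             values="wages", aggfunc="mean")
outcome["wage_change"] = outcome["post"] - outcome["pre"]
print(outcome)
```

In practice, of course, such linkage is performed on de-identified records under strict data-custodian protocols, and the analytical payoff comes from following whole cohorts of displaced workers across earnings, benefits and health records over many years.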
Large-scale redundancies challenge regional resilience and call into question the ability of places to shape their own future. Gathman et al. (2014, 2017) have shown how mass layoffs have profound, and persistent, impacts regionally, although the impact on the national economy may be negligible. Importantly, the remaining businesses in the region often suffer the most (Gathman et al., 2017). These findings reinforce the importance of a regional focus for research and policy action although, as Bristow and Healy (2014) argue, human agency has been neglected in studies focusing on regional resilience and path development (Grillitsch & Sotarauta, 2018). Importantly, when bringing human agency into the debates on regional resilience, the focus is not solely on the formulation of better policies and their implementation, but also on the ways actors come together to pool dispersed resources, capabilities and powers. Collaboration across institutional and organizational partitions is notoriously difficult; it does not happen by itself. Therefore, we need to learn more about how actors are organized in complex, regional economies and how they act collectively (Bristow & Healy, 2014).
Place-based leadership studies have focused on the deliberative actions of key actors, coalitions of actors and organizations in both charting and implementing a new future for a city or region. Many researchers have focused on the ability of place leaders to influence others (Beer & Clower, 2014; Sotarauta, 2009; Sotarauta & Beer, 2017), and it is this focus on achieving change through horizontal and vertical persuasion that differentiates leadership in regions and communities from the formal authority structures of governments, large corporations and institutions. Place-based leadership is context dependent, and while there are similarities in how it is expressed across nations and regions, there are also profound differences. Critically, leadership needs to be forward facing, rather than focused on historical legacies (Safford, 2009), and able to gain access to resources (Bailey & Berkeley, 2014; Beer, 2014; Kurikka, Kolehmainen, & Sotarauta, 2017).
Policy-makers have increasingly looked to local leadership to address the challenges of economic change at the community scale. The OECD (2009) acknowledged the importance of place-based leaders in the revival of economies, largely through their capacity to shape culture and control land and other resources. Within Australia, the Productivity Commission, a central government advisory agency, has looked to place-based leadership to address the local consequences of the economic changes produced by the removal of industry subsidies and trade barriers (Productivity Commission, 2014a). The Productivity Commission was a key agent in the creation of a policy environment in which the closure of the Australian automotive industry was, arguably, inevitable (Beer, 2018). Only recently has it acknowledged that the resulting industry adjustment has had long-lasting impacts on affected communities (Productivity Commission, 2017). Consequently, it has looked to the development of leadership at the local or regional scale to repair damage to local economic structures. However, it has been unable, as yet, to specify the nature, shape and drivers of the community leadership it sees as a solution to this policy conundrum.
There is a pressing need to understand the processes, consequences and dynamics evident in the labour market and the community when an entire sector disappears. The removal of car manufacturing in Australia may presage future events for many other sectors and industries in developed and developing economies. Our knowledge of the outcomes associated with individual plant closures is unlikely to serve as a worthwhile model of events and outcomes when entire segments of the economy close. There is therefore a need to focus on all parts of an industry, including its supply chain, as the re-employment outcomes for workers formerly employed in large plants are unlikely to be reproduced amongst small suppliers. Simultaneously, there is a higher likelihood that small, relatively nimble, enterprises will be able to reshape their business to take up new opportunities. Other businesses, of course, will close and some of their staff will not find ready re-employment.
In the contemporary global economy, it is inevitable that the impacts of industrial change will be differentiated by location, with some places increasingly by-passed while others assume a more central position in economies. Understanding this geographical differentiation and its drivers must take centre stage in the further evolution of studies of plant and industry closures (Pike, 2005). Studies need to compare and contrast outcomes across locations and with reference to national, local and global economic and labour market conditions, as well as place-based, or community, leadership. In addition, future research needs to be undertaken at a greater scale than previously with respect to the number and diversity of respondents, as a more complex and differentiated labour market calls for nuanced and differentiated insights, and these findings should provide a long-term perspective.
The next generation of research into plant closures needs to shed light on precarious labour markets. Better information and a stronger evidence base are called for on the labour market precariousness associated with large-scale redundancies. There is a need to understand its impact on the functioning of labour markets and the decisions taken by displaced workers, as well as the policies needed to overcome a position at the margins of the labour market. Recent studies have shown that successful policies are likely to 'extend beyond merely providing "jobs" or "job opportunities"' and also to grapple with questions surrounding the quality of employment (Bailey & de Ruyter, 2015, p. 379). Standing (2011) provides a means to operationalize these issues by outlining seven forms of uncertainty in employment: labour market security (access to adequate paid work); employment security (protection against arbitrary dismissal); job security (opportunities for occupational/career progression and upward mobility); work security (protection against hazards and unsociable working hours); skill reproduction security (opportunities for skill deployment and attainment); income security (liveable wages); and representation security (collective voice at work). This framework provides a starting point for advancing our understanding of precariousness after retrenchment.
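As one illustration of how the framework might be operationalized in empirical work, the sketch below encodes the seven securities as a simple data structure and computes a crude summary index. The 0-4 scale, the scoring and the equal weighting are our illustrative assumptions, not Standing's own measurement scheme.

```python
from dataclasses import dataclass, fields

# A toy operationalization of Standing's (2011) seven labour securities.
# Each dimension is scored 0 (absent) to 4 (fully present); the scale,
# scoring and equal weighting are illustrative assumptions only.
@dataclass
class LabourSecurity:
    labour_market: int       # access to adequate paid work
    employment: int          # protection against arbitrary dismissal
    job: int                 # opportunities for progression and mobility
    work: int                # protection against hazards/unsociable hours
    skill_reproduction: int  # opportunities to deploy and acquire skills
    income: int              # liveable wages
    representation: int      # collective voice at work

    def precarity_index(self) -> float:
        """Mean insecurity across the seven dimensions (0 = secure, 4 = precarious)."""
        scores = [getattr(self, f.name) for f in fields(self)]
        return 4 - sum(scores) / len(scores)

# Example: a retrenched worker re-employed as a casual labour-hire driver.
worker = LabourSecurity(labour_market=2, employment=0, job=1, work=2,
                        skill_reproduction=1, income=1, representation=0)
print(round(worker.precarity_index(), 2))  # 3.0: highly precarious
```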
However, there are deficiencies in Standing's framework that need to be addressed before it can be applied to the understanding of closures. First, Standing underplays the embodied characteristics of individuals. There is a clear association between labour insecurity and factors related to gender, age and ethnicity (McDowell, 2008), which result in quite different pathways post-closure (Bailey et al., 2012; Weller, 2008). We need to keep in mind that frameworks based on precarious work are not the same as those based on precarious workers (Campbell & Price, 2016). Since Standing's framework is gender-blind, it overlooks the different labour market positions of men and women. Second, Standing ignores household and family responsibilities and does not accommodate insecurity at the household scale, yet in reality workers' labour market participation is shaped by their household responsibilities (Hanson & Pratt, 2003), especially post-retrenchment (Weller & Webber, 1999). The impacts of contemporary job loss are complicated by the increasing likelihood that spouses will be active in the workforce, which reduces the degree of financial hardship but also restricts the family's capacity to relocate to take up opportunities. Importantly, a working spouse disqualifies workers from unemployment assistance in Australia, which implies a large proportion of workers navigate the labour market without government support.
The third gap in Standing's framework is the absence of an appreciation of the spatial dimensions of labour market processes. Standing's aggregations obliterate the differences between places and regions, and therefore also the interplay between processes at different spatial scales, such as differences in occupational labour markets (Bailey & de Ruyter, 2015; Weller, 2008).
Fourth, Standing's framework is too static for the dynamic nature of labour market processes. Standing's checklist cannot capture the ways that different types of economic security interact and reinforce each other, or how people make trade-offs, for example, by forgoing 'job security' to improve 'labour market security' (Burgess & Campbell, 1998).
Finally, Standing's framework cannot integrate the community-wide and policy impacts and responses, or how these are mediated by the wider political economy.
METHODOLOGICAL CHALLENGES AND OPPORTUNITIES
Future research into plant closures must address these critical information gaps and provide a synthesis that informs policy development and implementation, while at the same time advancing knowledge. It needs to bring together insights into the functioning of economies and industries with a detailed understanding of changes in labour markets. One pathway forward is to build upon work on how the economies of places evolve over time (Boschma & Martin, 2007). Recent writing on the 'branching' of workers into new forms of economic activity is a potentially promising pathway, as it examines the ways in which those pathways are shaped by local opportunities, place-based leaders and the previous decisions of workers and policymakers. It acknowledges that workers face pressure to remain economically active to sustain themselves and their households, while also seeking outcomes that maximize their existing (or to-be-acquired) skills. As MacKinnon (2017) has argued, there is a pressing need to understand better the everyday practices deployed by workers affected by redundancy as they seek to move into related industries, relocate to more buoyant labour markets, adopt informal coping strategies, initiate new enterprises or reshape household dynamics to sustain their families. Scholarship needs to illuminate the ways in which labour markets change with industrial transformation.
Previous studies of large-scale plant closures have examined the employment position of individuals but have displayed a tendency to treat them as the recipients of outcomes, of local labour market conditions, government policies and so on, without acknowledging their agency in shaping their own future. Little is known about the decisions taken by retrenched workers: which factors determine the opportunities taken up or declined, and the influence in the decision-making process of household factors (spousal employment, family ties to the community and so on) relative to fiscal considerations such as the wages offered. In the 21st century some workers made redundant will experience chequered and interrupted careers while others pick up 'gigs' via labour hire agencies or internet platforms. Others engage in voluntary work, unremunerated work for friends and family, or underpaid work in the informal economy. There will be workers who choose to leave the labour market permanently; workers who want to work but become discouraged; and workers who settle into welfare dependence. Others will move to well-paid and productive careers in related occupations. These variegated outcomes are poorly understood because it was previously accepted that individual outcomes were a product of broader structural conditions. Future investigations into the impacts of industry restructuring and closure need to address this gap.
Qualitative data collection is especially valuable in understanding change in 21st-century labour markets because of its ability to apprehend a diversity of life experiences and circumstances. Contemporary society and workplaces are increasingly complex spaces (Beck, 2006), with significant differences between individuals on the basis of gender, age, ethnicity, education and prior work history. Members of this differentiated workforce are likely to experience very different combinations of outcomes, with complexity that defies capture through large-scale cross-sectional or longitudinal survey instruments. The integration of qualitative methods into the examination of plant and industry closure needs to be undertaken longitudinally, through repeat face-to-face discussions over time, in order to understand better the ways in which people understand the changes affecting them. There is a need to focus this attention on those likely to be most at risk: women workers, those from ethnic or other minorities, older workers and others likely to be vulnerable once retrenched (Bailey & de Ruyter, 2015), whose circumstances are often excluded as outliers in quantitative studies. Qualitative data collection should include the spouses and families of retrenched workers in order to understand better how career trajectories interact with household circumstances. Previous studies in Australia have rarely considered the role of spouses in structural labour market change (Gibson, 1992), although Newman and McDougall (2009) examined the impact of redundancy on the children and grandchildren of displaced auto workers.
In-depth qualitative data can be used to design and develop longitudinal discrete-choice experiments (DCEs). DCEs provide a robust statistical method for examining the choices that consumers, households, firms and other agents make (Train, 2009). Whilst their application in this context is new, DCEs have been successfully applied over the last two decades in health economics to assess the relative determinants of health, workers' employment choices, and the effects of policies designed to address human resources problems (Mandeville, Lagarde, & Hanson, 2014). DCEs have the capacity to build on in-depth qualitative data to further interrogate workers' decision-making processes and their weighting of options in relation to future training and employment. Such methodological innovation promises to illuminate why retrenched individuals make particular decisions about their futures. Applied longitudinally, DCEs provide a sequence of insights into the decision-making of individuals and the ways in which the choices taken soon after retrenchment shape subsequent decisions and future trajectories.
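To illustrate the mechanics, the sketch below simulates a stylized DCE in Python and recovers attribute weights using McFadden's conditional logit, the standard model for analysing such experiments (Train, 2009). The attributes, the 'true' weights and the sample size are invented purely for the simulation and carry no empirical claim about retrenched workers.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Each choice set offers three alternatives described by three attributes,
# e.g., wage (standardized), job security indicator, retraining requirement.
# The 'true' weights below exist only to generate synthetic choices.
true_beta = np.array([0.8, 1.2, -0.6])
n_sets, n_alts, n_attrs = 500, 3, 3
X = rng.normal(size=(n_sets, n_alts, n_attrs))

# Simulate each respondent's choice under multinomial logit probabilities.
util = X @ true_beta                      # utilities, shape (n_sets, n_alts)
prob = np.exp(util)
prob /= prob.sum(axis=1, keepdims=True)
choice = np.array([rng.choice(n_alts, p=p) for p in prob])

def neg_loglik(beta):
    """Negative log-likelihood of McFadden's conditional logit."""
    u = X @ beta
    u -= u.max(axis=1, keepdims=True)     # stabilize the log-sum-exp
    log_p = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_sets), choice].sum()

fit = minimize(neg_loglik, x0=np.zeros(n_attrs), method="BFGS")
print("estimated attribute weights:", fit.x.round(2))  # close to true_beta
```

In a longitudinal design, the same estimation would be repeated at each survey wave, allowing shifts in how workers weight wages, security and retraining to be tracked as their post-retrenchment trajectories unfold.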
There is a now-acknowledged need to understand the community impacts and dynamics associated with plant closure alongside the analysis of individual outcomes. This realization has emerged in both the academic literature (Anderton, 2017; Horlings, Collinge, & Gibney, 2017; Quinn, 2017; Rossiter & Smith, 2017) and amongst policy-makers. For example, in Australia the Productivity Commission (2017) has explicitly accepted that central governments lack the capacity to lead a process of economic and social change at the local scale, and that actors mobilized locally need to take charge of their own future (Bailey & Berkeley, 2014; Beer & Clower, 2014). Closures have the potential to give rise to both negative and positive social impacts, including long-term unemployment, lower regional incomes and the opportunity to reshape local economies. These consequences could include the flow-on effects for families whose businesses are affected by the closures as well as the capacity for improved community cohesion as people unite to rebuild their communities. Place-specific factors, including the strength of local government, local leadership, geography, resource endowments and industry structure, are likely to serve as important mediating factors. We need to know more about how, and under what circumstances, effective local responses arise, and whether broader community attitudes have a determining role in the emergence of leaders (Safford, 2009). We also need to understand better how local leaders can effectively interact with the formal responses of governments.
The realization that the impacts of plant closure do not terminate at the workplace gate represents a change in direction for research, with a need to incorporate community-wide surveys into future analyses as a way of ascertaining the extent of the indirect 'knock-on' impacts of plant closures for households. There is the opportunity to determine whether the uncertainties of the labour market give rise to feelings of helplessness at the community scale, and the degree to which individuals look to governments or businesses to identify solutions. There is also potential value in examining the relationship between structural adjustment programmes and political (dis)enfranchisement (Weller, 2017), and the degree to which individuals are aware of government assistance. Such investigations build upon existing analyses by researchers into community engagement with political governance and processes (Shelton & Garkovich, 2013), and shed light on questions of household demography, social capital, the perceived leadership of the community and broader community expectations.
CONCLUSIONS
In a world of ongoing economic change and rapid technological innovation there is an acute need to understand better the processes of industry decline and the fate of workers affected by the demise of their industry. The 'shock of the new' (Toffler, 1971) may well see large parts of our established economic structure disappear within the space of one or two decades, and if some commentators are to be believed (Deloitte, 2014; Frey & Osborne, 2017) the very institution of paid employment may well be brought into question for large sections of the population in developed and developing economies alike. These are critical challenges for cities and regions locally and globally. The impacts potentially touch on many areas of public policy (large-scale tertiary and higher education, income support arrangements, taxation, trade and social development) as well as culturally important institutions such as shared social values, family structures, parenting practices and the relationships between generations. In addressing these issues there is a pressing need for a stronger evidence base, one that is in tune with contemporary labour markets; that integrates explanations based on both structural conditions and the agency of individuals and groups; and that truly takes advantage of both quantitative and qualitative methodologies. We also need this evidence base to be applied to the development of better urban and regional policy and programmes, as the impacts find their clearest expression at this scale and are best addressed through stronger city policies.
DISCLOSURE STATEMENT
No potential conflict of interest was reported by the authors.